AWS Cloud Enterprise Strategy Blog

4 Reasons to Re-Consider Lift-and-Shift to the Cloud

“… Reform the environment and not man; being absolutely confident that if you give man the right environment, he will behave favorably.” — Buckminster Fuller

As an enterprise strategist at AWS, I often talk to customers about strategies that will help them move workloads to the cloud. Occasionally, I’ll hear from senior executives who don’t want to take any of their legacy workloads to the cloud; instead, they want to focus on developing net-new architectures using serverless services like AWS Lambda. Even with the potential cost savings, I’m sympathetic to why an organization wouldn’t want to carry its legacy technical debt into the cloud, avoiding the “mess-for-less” approach. Also, given that we live in an era of intense security scrutiny, organizations often apply higher standards in the cloud than in their on-premises data centers, which can force some applications to be refactored. But, based on my experience, and what I’ve seen from a wide range of customers, lift-and-shift should be one of your core migration paths for taking workloads to the AWS cloud.

Stephen Orban, Global Head of Enterprise Strategy for AWS, has made a very strong case for why organizations should look at lift-and-shift as part of their migration strategy. Some of the benefits Stephen cites include cost reduction and improved performance and resiliency. I’d like to dive a little deeper in this post, because I’ve found that real-life examples of how lift-and-shift helps organizations frequently lead customers to consider a more balanced and holistic migration approach.

Before I get into the reasons why lift-and-shift helps inject new life into applications, I’d like to introduce what may be a new mental model for thinking about software applications. I think of applications as organic in nature. They are born, evolve, morph, communicate, and interact with other organisms in the environment in which they exist. To extend the analogy, these applications communicate with other applications and live in an ecosystem: the data center. I believe the ability of these applications to perform and evolve depends just as much on their environment as on the code, or DNA, that governs their behavior. The argument I’d like to make here is that the AWS cloud provides a better environment, in terms of the size and diversity of its services, than most on-premises data centers can provide.

Reason 1 — SSDs Rule

AWS provides 13 families of compute instances, ranging from memory-optimized to storage-optimized to compute-optimized. Most organizations aren’t able to provide this breadth of choice, despite the flexibility that virtualization provides. That’s unfortunate, because one benefit of this variety is the boost in performance when solid-state drives (SSDs) are leveraged, particularly for storage I/O-intensive workloads like databases. The price of all storage types continues to fall, but it’s still relatively expensive for organizations to upgrade their fleets of physical hosts with SSDs. Having said that, though, SSDs are 2 to 5 times faster than spinning disk, so the performance gains can be substantial for certain classes of workloads. And, with AWS, organizations can be judicious in their use of SSD-backed instances. Or, like my previous organization, they can give everyone a performance bump by simply moving all databases to SSD-backed instances.

Reason 2 — Covering Application Sins

Most people think about the cloud’s elasticity from a scale-out perspective, but scale-up is just as valid. This can be done on-premises, but AWS provides an upper threshold that’s greater than most environments. For example, one of AWS’s largest instances in memory and CPU comes from the X1 family of virtual servers. The x1e.32xlarge has 128 vCPUs, 4 TB of memory, and SSD-backed storage with dedicated bandwidth to Amazon EBS (14,000 Mbps). This instance is typically used for workloads like SAP HANA.

One customer I know had an application in a critical period and realized that some bad queries were causing performance bottlenecks. Changing the code was too risky, so the database server was scaled up to an X1 instance and then scaled back down to a more reasonably sized instance once the critical period was over. Having come from the application development side of the IT house, I always appreciated when infrastructure had the ability to cover application sins. I’d rather catch application issues earlier in the development cycle, but it’s sure nice to know that AWS can provide help when you’re in a bind.

Reason 3 — Horses for Courses

The relational database management system (RDBMS) has been the de facto back end for applications over the past 40 years. While the RDBMS is great at many kinds of queries, there are some workloads it is simply not well suited for. Full-text search is a good example, which explains why Lucene-based technologies such as Apache Solr and Elasticsearch are so popular and so much better suited for this use case.

Another story from my professional past: one of the architecture principles I established is “Horses for Courses,” which helps teams make better technology decisions by choosing the best technology for a given use case rather than going with what they know or are comfortable with. As an example of this principle, I once worked with an application team trying to develop innovative ideas as the business was growing. Users were frustrated with the lack of innovation and development agility and complained loudly about in-application search. The idea we had was to stand up an Elasticsearch cluster alongside the application, replicate the application data into Elasticsearch, and then do a minor refactor of the front-end web application. (Elasticsearch provides some great REST-based APIs.) What I love about this story is that the team didn’t have to take a big risk refactoring the application, and by instantiating Amazon Elasticsearch Service or Amazon CloudSearch, the team also didn’t need to invest in specialized skills for provisioning and managing NoSQL clusters. The AWS cloud provides a stable of many service “horses” to help evolve applications.
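To make the search refactor concrete, here is a small sketch of the kind of query body the front end would send to Elasticsearch’s REST search API. The field names and parameters are illustrative assumptions, not taken from the team’s actual application; the point is that fuzzy, multi-field full-text search takes a few lines here, versus awkward `LIKE` clauses in an RDBMS.

```python
def build_search_query(term, fields):
    """Build an Elasticsearch multi_match query body for full-text search.

    Searches `term` across several document fields at once, with
    automatic fuzziness so minor typos still match.
    """
    return {
        "query": {
            "multi_match": {
                "query": term,
                "fields": fields,          # e.g. ["title", "description"]
                "fuzziness": "AUTO",       # tolerate small typos
            }
        },
        "size": 10,                        # top 10 hits
    }
```

The resulting dict would be POSTed as JSON to the index’s `_search` endpoint, so the web front end only needed a thin REST client rather than a rewrite of its data layer.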

Reason 4 — Evolving Monoliths

A lot has been written about microservices, but, distilled down, their primary benefits are independent deployment and scalability. And, if microservices are sized to the right granularity, or “bounded context,” then the blast radius of many types of risk (e.g., performance problems, code changes) can be contained.

Companies like Netflix and Amazon have implemented microservices architectures to help innovate and scale their applications. But an essential aspect of microservices that’s generally misunderstood is independence, or how solid the boundaries between microservices should actually be. One litmus test I apply with teams implementing microservices is pretty simple: if I whack your database, how many other teams or microservices will be impacted? The usual, and sheepish, response I get is that the back end is shared with a number of other teams. My belief is that, in order to be independently deployable and scalable, a microservice should be isolated all the way down the stack, from the code repository through the presentation layer and business logic to the persistence store.

If my definition of microservice isolation resonates with you, then the ramifications can be quite expensive from an infrastructure perspective. On-premises, spinning up a new code repository, web servers, application servers, and database servers can get really expensive (both to provision and to operate), and it will also likely slow the provisioning process. But spinning up these infrastructure components is fast and cheap in the cloud, especially if you leverage services like Amazon RDS or AWS Lambda.

One of the best examples of how to evolve a monolith is a presentation by gilt.com that has been delivered several times at re:Invent. What’s great about the presentation is that the evolution of the gilt.com application can be applied to many enterprise applications. In a nutshell, Gilt needed to evolve its ecommerce platform because of scalability and agility issues. So it started by standing up microservices alongside its application and kept doing so to the point where it had “a forest of microservices” grown up around the original core application. I could make a strong argument that it would have been really hard to stand up all those microservices in an on-premises environment, especially with diversity in the front- and back-end technologies.

If the “mess-for-less” approach has been holding you back, I hope this post has helped evolve your thinking and set the stage for lift-and-shift to become a pillar of your migration strategy.

Never stop innovating,

Joe
chung@amazon.com
@chunjx
http://thinkwithwp.com/enterprise/


Joe Chung

Joe joined AWS as Enterprise Strategist & Evangelist in November 2016. In this role, Joe works with enterprise technology executives to share experiences and strategies for how the cloud can help them increase speed and agility while devoting more of their resources to their customers. Joe earned his bachelor's degree in mechanical engineering from the University of Michigan at Ann Arbor. He also earned his master's in business administration from the Kellogg School of Management at Northwestern University.