We left off our last article with a look at the trend of microservices and how organizations are embracing this new approach. At this stage of the adoption curve, more companies are investigating this more modular way of deploying applications.
From what we are seeing, it’s clear that microservices adoption is progressing in the market below web-scale companies (e.g., Netflix, Google), reaching enterprises and even some mid-market companies. But how has this 11% been successful, while others remain hesitant to adopt microservices?
Finding the Greener Grass with Microservices
We’ve considered many of the benefits of microservices, but let’s also consider the strain this approach puts on underlying infrastructure and IT teams, strain that may be implicit in this data. These concerns include:
- Anticipated pressure on development teams
- Distributed transactions across multiple services that are complex to implement
- Increased coordination between teams during deployment and maintenance of services
- Rethinking application deployment strategy and its self-service delivery implications
- Real-time management of application demand once workloads are provisioned
- Infrastructure resource volatility and performance degradation
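The distributed-transactions concern above can be made concrete with a minimal, hypothetical sketch (not from the survey or any cited source): without a single database transaction spanning services, each step commits locally, and failures must be undone with compensating actions, a saga-style pattern.

```python
# Hypothetical saga-style sketch: each step commits locally; if a later
# step fails, previously completed steps are rolled back in reverse
# order via their compensating actions.
def run_saga(steps):
    """steps: list of (action, compensation) callable pairs."""
    completed = []
    for action, compensate in steps:
        try:
            action()
            completed.append(compensate)
        except Exception:
            # Undo everything that already committed, newest first.
            for comp in reversed(completed):
                comp()
            raise
```

Even this toy version hints at the coordination cost: every service now needs a well-tested compensation path, something a monolith gets for free from its database transaction.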
There are clearly other factors, including cost, that are not listed here, but let’s assume these are among the main concerns around adoption. If we agree that current microservice deployments in the market below web-scale are primarily VM-based, given that market’s slower adoption of containerization, then infrastructure challenges must be a significant perceived barrier preventing the 34% and 66% noted above from moving forward with adoption.
Knowing the Impact
Chris Richardson breaks this down nicely in his description of a VM-based microservices architecture: essentially, a microservices architecture “replaces N monolithic application instances with NxM services instances. If each service runs in its own JVM (or equivalent), which is usually necessary to isolate the instances, then there is the overhead of M times as many JVM runtimes. Moreover, if each service runs on its own VM (e.g. EC2 instance), as is the case at Netflix, the overhead is even higher.”
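Richardson’s arithmetic can be illustrated with a trivial sketch; the instance counts below are illustrative assumptions, not figures from the survey.

```python
# Illustration of Richardson's N x M runtime growth.
def runtime_count(n_app_instances: int, n_services: int) -> int:
    """Total JVM (or equivalent) runtimes when each service instance
    gets its own isolated runtime."""
    return n_app_instances * n_services

# A monolith scaled to 4 instances needs 4 runtimes; split the same
# application into 10 services and it needs 4 x 10 = 40 runtimes,
# each carrying its own memory footprint and startup cost.
monolith_runtimes = runtime_count(4, 1)        # 4
microservice_runtimes = runtime_count(4, 10)   # 40
```

And if each of those runtimes sits on its own VM, the overhead multiplies again at the virtualization layer.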
This means that administrators must not only deploy applications faster, but also understand how the demands of those services relate to the broader set of VM workloads on the underlying virtualization layer – the platform every IT shop must deliver 24x7. On the demand side of this equation, we have exponentially more moving parts and interdependencies to manage in order to meet QoS expectations: a big shift in IT operations for those used to managing a much simpler, monolithic architecture. Let’s also not forget the challenges during hand-off from development to real-time operations and architecture teams.
The Challenge of Making the Jump
As Martin Fowler quips: “When you use Microservices you have to work on automated deployment, monitoring, dealing with failure, eventual consistency, and other factors that a distributed system introduces. There are well-known ways to cope with all this, but it’s extra effort, and nobody I know in software development seems to have acres of free time.” [Fowler: http://martinfowler.com/bliki/MicroservicePremium.html]
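A minimal sketch of the kind of “dealing with failure” effort Fowler alludes to, retrying a remote call with exponential backoff, might look like this (the retry parameters are illustrative assumptions):

```python
import time

def call_with_retries(fn, attempts=3, base_delay=0.1):
    """Invoke fn(), retrying on connection failure with exponential
    backoff. Raises the last error once attempts are exhausted."""
    for attempt in range(attempts):
        try:
            return fn()
        except ConnectionError:
            if attempt == attempts - 1:
                raise  # out of retries: surface the failure
            # Back off: 0.1s, then 0.2s, then 0.4s, ...
            time.sleep(base_delay * (2 ** attempt))
```

Multiply this by every inter-service call path, plus monitoring, timeouts, and eventual-consistency handling, and the “extra effort” Fowler describes adds up quickly.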
Thus, the real underlying issue is the invisible web of dependencies across IT domains, and the corresponding latency between application services both horizontally and vertically, down into the virtualization layer, once microservices are deployed and in place.
Interestingly, among all participants in the survey, the majority focused on minimizing application latency across various IT domains (i.e., compute, storage, and network) (83.1%, x̄ = 3.49) as opposed to within their specific domain (71.8%, x̄ = 3.15).
When scoped to the 81 participants with a clear majority of latency-critical workloads, the mix shifted to 100% (x̄ = 4.00) and 76.2% (x̄ = 3.29), respectively.
This gap, and perhaps the biggest reason that organizations aren’t moving forward with microservices, may be related to how IT teams actually track and measure latency today – or rather, the lack thereof.
The results are simply staggering. From the survey:
- 32% of participants either don’t measure latency or don’t know if their organization measures latency
- 58% of respondents use infrastructure monitoring software and manual troubleshooting to mitigate application latency
- And lastly, 49% of respondents run virtualized workloads on dedicated clusters to mitigate application latency.
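For the teams in that first bucket who measure nothing at all, even a crude timing wrapper is a starting point. The sketch below is purely illustrative; `fn` stands in for any remote service invocation.

```python
import time

def timed(fn, *args, **kwargs):
    """Return (result, elapsed_seconds) for a single call, using a
    monotonic high-resolution clock suitable for latency measurement."""
    start = time.perf_counter()
    result = fn(*args, **kwargs)
    return result, time.perf_counter() - start
```

In practice this grows into percentile histograms and per-hop tracing across compute, storage, and network, which is exactly the cross-domain visibility the survey respondents prioritized.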
The trade-offs that organizations must make today to avoid application latency are daunting. One thing we do know is that ever since virtualization began, workload interference has become an increasingly complex battle in progressing along the shared-everything journey. Moreover, when it comes to microservices adoption, it is evident that many factors must be weighed to predict the success or failure of an implementation.
In our next blog, we will explore how splitting out components using SOA affects East-West communication across the datacenter. In later discussions, we will explore how cloud-like architecture and bursting to the public cloud affect when and how this should be done in the context of service-oriented architecture. Stay tuned….
Image Sources: https://turbonomic.com/resources/the-state-of-latency-containers-microservices-e-book/, Tristan Cobb
Survey source: https://github.com/vmturbo/VMTurboSurvey (raw data); full survey report: https://turbonomic.com/resources/the-state-of-latency-containers-microservices-e-book/
Reference Sources: http://martinfowler.com/microservices/, http://microservices.io/patterns/microservices.html, http://thenewstack.io/microservices-four-essential-checklists-getting-started/, http://radar.oreilly.com/2015/05/the-unwelcome-guest-why-vms-arent-the-solution-for-next-gen-applications.html, https://medium.com/@OReillyMedia/microservices-have-you-met-devops-8a5a432c5900