Turbonomic Blog

A Peek into the Turbonomic Journey to Microservices

Posted by Laci Long on Dec 7, 2016 9:38:23 AM

The software development landscape can be daunting: it’s crowded, and filled with myriad tools, techniques and approaches to building applications. Software application design naturally evolves alongside the web, and the landscape is only getting busier as the scale and speed of the web continue to grow at an incredible rate.

One of the most common challenges that enterprise software applications face is one of scale: how can we practically handle an exponentially growing amount of “work”? Traditionally, one would scale an application horizontally or vertically, but both approaches are costly and increase the complexity of management.

With a service-oriented architecture, we look at scaling in another dimension - through functional decomposition. Simply put, we take a large set of features and break it down into individual components, allowing small parts of the application to fail or be upgraded and serviced without bringing down the entire house. This is what creates resiliency and enables an application to scale on a massive level.

That’s not to say that a microservices architecture is “one size fits all”, or that new applications should always be built with a fully service-oriented architecture (often, these considerations are long lasting and expensive to change later), but in high-volume or large scale enterprise applications, the benefits often considerably outweigh the downsides.

Earlier this month, we hosted Boston’s Microservices Meetup Group, led by Ajit Umrani (@ajitumrani), formerly President of Assembla. Our very own Sylvia Isler, VP of Architecture at Turbonomic, was the featured guest speaker. In her presentation, Sylvia detailed our company’s efforts in introducing a microservice approach to our monolithic software application. Sylvia also highlighted a few valuable lessons learned along the way.

Watch the presentation or read a summary of Sylvia’s talk below:


Setting the Stage

Turbonomic is, by our standards, a legacy platform: it is a monolithic Java application that runs on Tomcat, with layers like abstraction and analytics running in a JVM container. We engineered our abstraction using the Eclipse Modeling Framework - which, at the time, was a good approach - but the web has evolved, and we want to take advantage of new technologies that have emerged.

Why not build a service-oriented architecture in the first place?

The technical advantages of adopting microservices are clear, but from a business perspective, committing too early can be costly. Building a service-oriented architecture from the ground up costs valuable development time, which is often hard to come by, especially when the needs of the business and its customers must be met.

Showing a proof of concept through a minimum viable product is crucial to building lasting software applications. When you’re developing a totally new system, you are partly constrained by the unknown - you don’t know the typical system workloads, behaviors or types of customer environments that you may encounter. By first showing proof of concept and value, and then developing the product (and maybe getting some customers) before any decomposition, you can test the validity of your product before you invest too much time in crafting the ideal architecture.

Why change at all?

So if the monolith works, and adopting microservices can be risky - why change at all? At the heart of it, it’s a matter of scale and inevitability.

Turbonomic’s customer base has expanded enormously, with six years of consecutive growth and a roster of over 1500 customers worldwide, some placing our software in environments with over 100,000 virtual machines. At this scale, the technology starts to see the strain of managing these large environments.

Additionally, there were a few other challenges that we wanted to address:

1. The Turbonomic engineering team is distributed across the world and over many time zones, so the tribal knowledge that’s required to maintain and develop a monolithic application can be hard to capture and communicate.

2. Our release cycle was simply too slow for the amount of growth and the speed at which some of our customers operate. With our monolithic architecture and distributed team structure, we ended up releasing every six months, without measuring agility, velocity or the quality of the team’s code. (Our engineering team has since adopted parts of the agile methodology, such as peer code review, feature roadmaps, and scheduled sprints, which enabled us to report on our goals and develop faster with more stable code.)

3. Turbonomic’s first generation analytics algorithm allows real time capacity management and capacity planning at scale, but we were reaching the upper bounds of what we could achieve. The monolithic architecture requires multiple instances of the product, which are then federated under an aggregation instance. This approach technically works, but it introduces system upgrade complexity and performance bottlenecks.

4. We wanted to be more resilient. The monolith does possess homegrown resilience, and the application has a few ways of doing self-repair, but this is an area that we think we can vastly improve with microservices.

5. Lastly, the interface was built with Flex, an aging framework with more than a few security issues. Some organizations refuse to deploy Flex into their environments at all because of these security issues.

Be Mindful of the Separation of Concerns (and document everything!)

The separation of concerns is the idea of keeping any particular aspect or functionality of the software mostly independent of other components (in our case, our concerns are our components and they only do one or two things). This allows a component to be developed, tested and updated quickly. For instance, when it’s time to deliver upgrades or replace a feature, you can update components individually, rather than taking down your application and updating the entire monolith.

For us, every component should be known through its interfaces. This leads to more robust testing and less confusion for developers. We made sure to package specific documentation with each component.
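The idea of a component being known only through its interface can be sketched in plain Java. This is an illustrative example, not Turbonomic’s actual API - the names and the capacity calculation are invented for the sketch:

```java
// Hypothetical sketch: callers depend only on the interface, never on the
// implementation, so the implementation can be tested, upgraded or swapped
// independently.

// The contract other components program against.
interface CapacityPlanner {
    int recommendedInstances(int currentLoad, int capacityPerInstance);
}

// One replaceable implementation behind the interface.
class SimplePlanner implements CapacityPlanner {
    @Override
    public int recommendedInstances(int currentLoad, int capacityPerInstance) {
        // Ceiling division: enough instances to cover the load.
        return (currentLoad + capacityPerInstance - 1) / capacityPerInstance;
    }
}

public class InterfaceSketch {
    public static void main(String[] args) {
        // The caller holds only the interface type.
        CapacityPlanner planner = new SimplePlanner();
        System.out.println(planner.recommendedInstances(250, 100)); // prints 3
    }
}
```

Because the caller never names `SimplePlanner` beyond construction, a new implementation can be dropped in without touching - or redeploying - any consuming component.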

Additionally, we were careful in documenting build requirements, annotations, code examples and detailed instructions on how to build any component of our application. Our engineering team now has a rich source of knowledge for any of our developers to learn and draw from, and quickly get set up to build on top of any part of the application.

Borrow Freely

The open source community is powerful, and leverages the shared contributions of many talented developers across the world.

Embracing open source, and not starting from scratch saves an enormous amount of precious engineering time and effort - allowing us to deliver the custom features and functionality that our customers need today, while also enabling us to contribute back to the community in the future.

We started to write our new framework from scratch, but quickly realized that using open source tools allowed our teams to work on custom components rather than building from the ground up. In our case, the architectural requirements that we defined already existed within the Spring community, so this allowed us to “buy” vs. “build” - saving a massive amount of internal resources. If we had decided to build our foundation from the ground up, it is likely that we would still be working on it to this day.

One small note - before you commit to using any open source frameworks, it’s important to have a good idea of the features that you’re after, what you’re getting, and any shortcomings or compromises that may exist. There are plenty of open source solutions, but being too hasty in adoption can come back to haunt you.

Open source tools are not the best choice for every situation, but in our case, it allowed us to develop our core product much more quickly.

Don’t Jump Ship Too Quickly

This is especially true if you have existing customers or users to support. Changing your application architecture is not a simple feat, and alienating users (if something goes wrong) is a very real risk.

We faced the truth and recognized that we had too much technical debt in the monolith to resolve before we could even begin thinking about microservices. We began conservatively, by creating an initial set of interfaces so that existing functionality could be preserved and extracted.

Taking a more cautious approach lets us continue development on the new generation while still supporting and delivering updates to existing users of the legacy application.

Security Should Not Be An Afterthought

Once you’ve started decomposing your application, you are inherently exposing more potential attack vectors to malicious forces. A good rule of thumb is to only run the services that you absolutely need to run, and secure inter-service communications when possible (our services transfer tokens amongst each other, much like Kerberos).
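To make the token idea concrete, here is a minimal sketch of signed service-to-service tokens using a shared-secret HMAC. This is only loosely in the spirit of the approach described above - it is not Kerberos, and the names and token format are invented for illustration:

```java
import javax.crypto.Mac;
import javax.crypto.spec.SecretKeySpec;
import java.nio.charset.StandardCharsets;
import java.util.Base64;

// Hypothetical sketch: a calling service presents a token naming itself,
// signed with a secret shared between services. The receiver recomputes
// the signature before trusting the caller.
public class ServiceToken {

    private static String sign(String payload, byte[] key) throws Exception {
        Mac mac = Mac.getInstance("HmacSHA256");
        mac.init(new SecretKeySpec(key, "HmacSHA256"));
        byte[] sig = mac.doFinal(payload.getBytes(StandardCharsets.UTF_8));
        return Base64.getUrlEncoder().withoutPadding().encodeToString(sig);
    }

    // Issue a token of the form "<service-name>.<signature>".
    public static String issue(String serviceName, byte[] key) throws Exception {
        return serviceName + "." + sign(serviceName, key);
    }

    // Verify by recomputing the signature over the claimed service name.
    public static boolean verify(String token, byte[] key) throws Exception {
        int dot = token.lastIndexOf('.');
        if (dot < 0) return false;
        String name = token.substring(0, dot);
        return token.substring(dot + 1).equals(sign(name, key));
    }

    public static void main(String[] args) throws Exception {
        byte[] secret = "shared-secret".getBytes(StandardCharsets.UTF_8);
        String token = issue("analytics-service", secret);
        System.out.println(verify(token, secret));       // prints true
        System.out.println(verify(token + "x", secret)); // prints false
    }
}
```

A production mechanism would add expiry timestamps, per-service keys and constant-time signature comparison; the point of the sketch is simply that each hop can authenticate its caller without a shared session.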

Especially when dealing with larger enterprise users, one of the first questions you’ll be asked will be about application security - and if you leave security as an afterthought, your application may no longer have a place in that environment.


The growing buzz around service-oriented architectures cannot be ignored, and certainly exists for good reason: the scale of enterprise software is increasing at an incredible rate, and microservices are one of the most practical ways to grow with it. For Turbonomic, we may take our decomposition to much greater lengths, but both our engineering team and our customers are already seeing the benefits of a service-oriented architecture.

Topics: Containers, Events, Applications
