
Microservices in the Real World: How We Used Red Hat’s Operator Framework to Automate Our Application Lifecycle

Posted by Endre Sara on Dec 20, 2019 8:34:04 AM

In this series, Microservices in the Real World, I’ve been sharing our journey to cloud native. First there was the challenge of automating memory management in our containerized Java application, then we had to secure our containers (thank you, Red Hat), and then we had to figure out how to run an application, which we started building before Kubernetes was a thing, on Kubernetes. Which puts us at “cloud native-ish.” The next question for us was: How do we fully automate our application lifecycle?



Problem: Helm is great for deploying and configuring applications, but we also needed a way to manage the application’s configuration and lifecycle on an ongoing basis.

As I mentioned previously, we used Helm charts as an easy way to deploy and configure an application. But this approach only handles the initial deployment; it does not manage the application’s configuration and lifecycle afterward. Our developers had to remember and script around the deployment options, for example the optional dependent sub-charts and the sub-chart configurations for vertical and horizontal scaling. We tried the Terraform Helm provider, but ran into challenges with dynamic variables, something that later improved in Terraform 0.12. Even then, Terraform does not track state changes as natively and continuously as we wanted.
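To make this concrete, here is a minimal sketch of the kind of values override a developer had to carry around for each deployment. The sub-chart names and keys below (metrics-collector, reporting, topology-processor, and so on) are hypothetical, for illustration only, and not the actual Turbonomic chart schema.

    # values-prod.yaml (hypothetical example)
    # Everything here had to be remembered or scripted per environment.
    global:
      repository: registry.example.com/turbonomic   # assumed registry path
      tag: "7.21"                                    # assumed version tag

    # Optional dependent sub-charts, toggled per deployment
    metrics-collector:
      enabled: true
    reporting:
      enabled: false

    # Per-service scaling knobs that differ between installs
    topology-processor:
      replicaCount: 1
      resources:
        limits:
          memory: 4Gi

Multiply this by every service and every environment, and the operational knowledge ends up in people’s heads and one-off scripts rather than in the cluster itself.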

Figure 1. The Turbonomic business application is made up of multiple services, each requiring different configurations for deployment. For a full list of Turbonomic integrations, visit our Integrations page.


Solution: Kubernetes Operators deliver the continuous lifecycle management our microservices application needs.

Then we came across Operators, courtesy of Red Hat and the Kubernetes community. The Operator Framework provides a Kubernetes-native way of managing an application’s configuration and lifecycle: the application-specific operational code is captured in the Operator, and the application configuration is expressed through custom resource definitions. The Operator SDK lets you develop native Go Operators, or convert existing Ansible playbooks or Helm charts into Operators. Since we already had a pretty good Helm chart, we decided to go with a Helm Operator. For reference, see t8c-install on GitHub.
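With a Helm-based Operator, the configuration that used to live in Helm values moves into a custom resource that the Operator watches and continuously reconciles against the chart. Below is a minimal sketch of what such a custom resource could look like; the API group, kind, and field names are assumptions for illustration, not the exact schema used in t8c-install.

    # Hypothetical custom resource managed by a Helm-based Operator.
    # The spec mirrors the chart's values; the Operator keeps the
    # deployed release in sync with this declared state.
    apiVersion: charts.example.com/v1alpha1
    kind: TurboDeployment
    metadata:
      name: xl-release
      namespace: turbonomic
    spec:
      global:
        repository: registry.example.com/turbonomic
        tag: "7.21"
      reporting:
        enabled: false
      topology-processor:
        replicaCount: 1

Editing this resource and re-applying it becomes the whole upgrade story; the Operator works out which releases and resources need to change.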

Results: We get to focus on building the right application for our customers. Hello, multicloud.

There’s a lot that goes into getting an application to run, especially a microservice application. You have to automate as much as you can to achieve scalability. The alternative for us was our DevOps engineers getting frustrated with having to remember all the Helm options for every update or deployment of the application. That doesn’t scale. 

Operators, on the other hand, capture more, encoding instructions like “you need to update the database first, then this, then that…” That is how we plan to build out the functionality of our Operator. They maintain the operational knowledge, remembering the options you gave them yesterday, the day before, or the week before that. Our DevOps team can focus on continuing to increase automation in our pipeline, our application can scale with the business, and we developers get to figure out how to make our application run across a multicloud estate with Istio. Yes, that’s my next post!

 

Topics: Containers
