Turbonomic Blog

Microservices in the Real World: Now We Run Our Pre-Kubernetes Application on Kubernetes. So Can You.

Posted by Endre Sara on Oct 8, 2019 10:45:00 AM

Today, it’s become almost sacrilegious to think about running microservices on anything but Kubernetes. But in 2016, when we first began our journey to microservices, Kubernetes didn’t have the level of maturity that it has now. Back then, we just wanted the benefits of breaking up our monolithic application into loosely coupled services. Like most organizations’, our monolith served us as well as it could. But when Docker made containers easy, a whole new world opened up. At the time, docker-compose was the easiest way to start a group of containerized application services, but the very next consideration was orchestrating those containers. We soon found that docker-compose does not allow the easy distribution and orchestration of components across cluster nodes. There were also challenges with multi-node networking and shared storage across the nodes. And then Kubernetes came along.
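For context, a pre-Kubernetes setup like the one described here might have been captured in a docker-compose file along these lines (the services and images are purely illustrative, not our actual components):

```yaml
# Illustrative docker-compose.yml -- hypothetical services, not the actual application
version: "3"
services:
  api:
    image: example/api:1.0.0          # image tag pinned in the file
    ports:
      - "8080:8080"
    depends_on:
      - db
  db:
    image: example/db:1.0.0
    volumes:
      - db-data:/var/lib/data         # local named volume; nothing shared across nodes
volumes:
  db-data:
```

A single docker-compose up starts everything on one host, which is exactly where the multi-node limitations above start to bite.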

Problem: How do we run an application on Kubernetes that we started building before Kubernetes was a thing?

The technology landscape changes rapidly, and on a multiyear application modernization journey you’re bound to have to adapt; it’s just what you have to do to leverage the best of what’s out there. Kubernetes provides multi-node container orchestration and an abstraction for shared networking and shared storage across the cluster. First, we used kompose to convert our docker-compose YAML files to Kubernetes deployments, but the static YAML files it created did not make it easy to deploy and configure the application. We wanted to be able to use different container limits and different container tags, and with static YAML files you have to edit each one individually for every change.
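To make the problem concrete, kompose convert produces a static manifest per service roughly along these lines (illustrative, not actual kompose output for our application), with the image tag and resource limits baked in:

```yaml
# Illustrative static Deployment in the style of kompose output: every value is
# hardcoded, so a different tag or memory limit means hand-editing each file.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: api
spec:
  replicas: 1
  selector:
    matchLabels:
      app: api
  template:
    metadata:
      labels:
        app: api
    spec:
      containers:
        - name: api
          image: example/api:1.0.0     # fixed image tag
          resources:
            limits:
              memory: "2Gi"            # fixed container limit
```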

Solution: Templatize your application deployments with Helm charts.

Helm charts provide an easy way to templatize application deployments. They make it much easier to tell Kubernetes how to set up networking and storage, how to create containers, what type of containers to create, and so on. And, once deployed, you let Kubernetes do its thing. So, we converted our application configuration into a main chart, which drives the global configuration, and, to manage required and optional dependencies, we converted each component into a dependent sub-chart.
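As a rough sketch of that structure (chart and component names here are illustrative, not the actual t8c-install layout), the main chart declares each component as a dependency that can be switched on or off with a condition. In Helm 3 form this lives in Chart.yaml; in Helm 2 the same list goes in requirements.yaml:

```yaml
# Chart.yaml of the main chart (illustrative): components as optional sub-chart dependencies
apiVersion: v2
name: xl
version: 1.0.0
dependencies:
  - name: api
    version: 1.0.0
    condition: api.enabled        # turned on or off per environment via values
  - name: probes
    version: 1.0.0
    condition: probes.enabled
```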

With Helm, the whole application can be easily deployed using the main Helm chart (helm install xl), with the dependencies enabled or disabled for a specific environment using values. The global values in the Helm chart can drive the versioning of the application, the source of the deployment, and other global configuration, such as enabling or disabling Java debug options in the subcomponents. The sub-charts can be configured individually for their resource allocations (check out my earlier post about using the cgroup memory limit for the max Java heap size) and for their horizontal scaling; a sketch of such a values file follows the list below. For reference, check out t8c-install on GitHub. The XL main chart has mandatory and optional sub-chart dependencies on:

  • Base components, providing monitoring, logging, persistence
  • Probes, providing mediation
  • Platform components, providing the common services for creating and maintaining a shared abstraction and analytics
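Putting that together, a values file along these lines (names and numbers are illustrative, not the actual t8c-install defaults) is what lets one helm install drive the whole stack, with global settings shared by every sub-chart and per-component overrides kept in their own sections:

```yaml
# Illustrative values for the main chart (not the actual t8c-install defaults):
# global settings drive every sub-chart; each component keeps its own section.
global:
  repository: example-registry/xl     # hypothetical image registry
  tag: "7.21.0"                       # one place to change the application version
  debug: false                        # e.g. toggle Java debug options in the subcomponents

api:
  enabled: true
  resources:
    limits:
      memory: 4Gi                     # per-component limit, which also caps the max Java heap

probes:
  enabled: false                      # optional dependency disabled in this environment
```

Switching environments then comes down to passing a different values file or overriding individual settings, for example with -f values-prod.yaml or --set global.tag=7.21.1 (the file and value names here are hypothetical).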

Result: Our templated application deployment quickly starts up our microservices on Kubernetes. Hello, scalability.

It’s one thing to re-architect an application into microservices. It’s another thing entirely to make it manageable. It’s not enough just to think about features and functionality; we have to build for scalability, and leveraging Helm charts for configuration and templating made our deployments far more seamless. And of course there’s security: as I discussed in a previous blog post, we use the Red Hat Universal Base Image to secure our containers.

Navigating the constantly changing cloud native landscape is a challenge, but a fun one. Now, if only we had a way to assure the performance of our microservices application running on Kubernetes all the time on any cloud or infrastructure. ;-)

 
