How Autonomic Control is Transforming IT
Software is eating the world, but humans control IT. Does this sound right to you?
Four decades of technology innovations: PCs, cell phones, smartphones, the internet, GPS, social media… IT enables all of these disruptive innovations. Yet when it comes to its own operations, nothing has changed. IT operates today the same way it operated four decades ago.
VMTurbo was founded in 2009 to transform IT operations from human-controlled to software-controlled operations. This transformation is a journey. It doesn’t happen overnight. The status quo needs to be challenged. Psychological and emotional objections must be overcome. Trust must be earned. Seven years and over 1,500 customers later, I am excited to see the acceleration of this transformation. Our customers are embracing an autonomic control platform for their IT operations. It’s being done in small steps, but these steps represent a huge leap in the journey to control.
Many forces are driving IT on this journey. Regardless of the reasons, the world is ready to embrace the transformation, and IT is ready to let it happen.
Since its inception, IT operations has evolved and created many silos along several dimensions:
- The Technology Dimension – Compute, Storage and Network silos
- The IT Stack Dimension – Virtualization, IaaS, CaaS, PaaS, Applications silos
- The Operational Dimension – Plan, Build, Run silos
- The Management Functions Dimension – Capacity, Performance, Fault, Configuration, Compliance, etc. silos
- The Organizational Dimension – An organization silo for each of the above silos
Each silo comes with its own set of tools, spawning entire industries of niche tools to satisfy niche demands and resulting in a management nightmare. IT operations is then challenged to scale to meet a continuously changing technology landscape. These silos must be torn down. The technology silos and the layers of the IT stack cannot be managed in isolation. The operational and management silos must be integrated into a unified autonomic control platform.
Our autonomic platform enables and drives this transformation through an economic engine that solves the Intelligent Workload Management (IWM) problem. We created a platform that is technology- and vendor-agnostic and controls any type of workload on any type of infrastructure, anywhere, anytime.
While companies and industries have formed around each and every silo—collecting more and more data, automating more scripts—we have focused on one transformative goal: to deliver a platform that controls any workload on any cloud or infrastructure, anytime, anywhere. Call it what you want: self-managing applications, real-time decision automation, autonomic performance…when software does what it does best, people can do what they do best.
So VMTurbo is Turbonomic and here’s why.
Assuring Application Performance is Bigger than “Performance Management”
The one and only goal of IT is to assure application performance. Everything we do as IT professionals is in service of that goal. By solving the IWM problem, our platform assures application performance. It keeps the environment in a desired state. It does not hand you metrics and analyses after performance has degraded; it prevents the degradation. Self-managing workloads determine exactly what resources they need to perform, and then allocate those resources when they need them.
The power of assuring application performance is often overlooked because too many vendors use the same or similar language to describe the traditional process of giving data to humans and expecting them to make the right workload placement, sizing, and provisioning decisions to assure performance. They cannot.
Today’s environments are too complex. The right workload placement, sizing, and provisioning are the result of evaluating N-dimensional tradeoffs in real-time. A human can’t do it. Attempting to do so only results in disruptive unplanned work, as many IT teams have experienced.
Furthermore, assuring application performance can’t be done in silos. We can’t manage compute, storage and network in isolation and hope that every application gets the amount of resources it needs across all of these functions. We can’t schedule resources in each of the layers of the stack in isolation and hope that somehow, magically, the applications at the top of the stack will get the resources they need at the bottom. We can’t plan and deploy workloads based on assumed consumptions without continuously analyzing the actual consumption and driving the placement and configuration of existing and new workloads based on both. These all have to come together within a single platform that brings all of these silos under unified control.
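To make the idea of an N-dimensional tradeoff concrete, here is a deliberately simplified sketch of scoring candidate hosts for a workload across compute, storage, and network at once. The names, weights, and quadratic scoring rule are illustrative assumptions of mine, not the engine's actual algorithm:

```python
# Illustrative sketch: pick a placement by scoring candidate hosts across
# several resource dimensions simultaneously. Names, weights, and the
# scoring rule are hypothetical, for illustration only.

def placement_score(host, demand, weights):
    """Lower is better: penalize hosts whose projected utilization is high
    in any dimension, so one saturated resource sinks the whole score."""
    score = 0.0
    for resource, needed in demand.items():
        projected = (host["used"][resource] + needed) / host["capacity"][resource]
        if projected > 1.0:
            return float("inf")  # cannot fit in this dimension at all
        # Quadratic penalty: congestion in any one dimension dominates.
        score += weights[resource] * projected ** 2
    return score

def best_host(hosts, demand, weights):
    return min(hosts, key=lambda h: placement_score(h, demand, weights))

hosts = [
    {"name": "host-a", "used": {"cpu": 70, "storage_iops": 200, "net_mbps": 400},
     "capacity": {"cpu": 100, "storage_iops": 1000, "net_mbps": 1000}},
    {"name": "host-b", "used": {"cpu": 30, "storage_iops": 800, "net_mbps": 100},
     "capacity": {"cpu": 100, "storage_iops": 1000, "net_mbps": 1000}},
]
demand = {"cpu": 10, "storage_iops": 100, "net_mbps": 100}
weights = {"cpu": 1.0, "storage_iops": 1.0, "net_mbps": 1.0}

print(best_host(hosts, demand, weights)["name"])  # → host-a
```

Note that host-b, despite having the most free CPU, loses: its congested storage dimension outweighs its CPU headroom. A compute-only silo would have picked it anyway, which is exactly the weakest-link failure described above.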
So what happens when you truly assure application performance? What is the impact of preventing problems before they happen? How do self-organizing workloads transform IT?
Why should you care about autonomic performance?
Turbonomic is for Any Workload on Any Infrastructure, Anytime, Anywhere
The IT technology landscape is transforming, and transforming rapidly
In 2009 our customers were ramping up virtualization initiatives, running their applications on virtual machines hosted predominantly on a VMware hypervisor. A few years later, OpenStack (with or without KVM) became the hottest new technology for cloud platforms, with organizations experimenting with it as an alternative IaaS platform. Just a couple of years ago, the new buzz was around Docker and containers, and enterprises embarked on containerizing their applications as micro-services. In no time this led to the emergence of “cloud OS” platforms, such as Kubernetes and Mesos, for running these micro-services and providing Containers as a Service (CaaS). In parallel, coming from the top, new PaaS platforms such as CloudFoundry and OpenShift emerged to enable rapid application development and deployment. And while all of these technologies were emerging, public cloud options became more and more viable and economically attractive. As the technology landscape evolved, so did our customers.
Today we are seeing more and more customers with applications running as micro-services across Docker containers, adopting open source technologies, and moving to the cloud. Environments are getting larger, more complex—and more heterogeneous.
When I founded our company in 2009, our pitch was this: when workloads are transparently mobile across the universe, the problem to solve is which workload to run where, and when, to maximize the ROI from any compute, storage, and network it may consume, on-demand. We built a platform to solve this problem and control any type of workload, on any type of infrastructure, anywhere, anytime. The technologies above are providing the building blocks for a world of globally mobile workloads. But without an autonomic control platform like the one we have built, this new world won’t be able to deliver the QoS that mission-critical applications demand. We will build it, but they won’t come!
As the landscape evolved, so did we. We first supported environments running VMs on hypervisors, such as VMware ESX and Microsoft Hyper-V, on a broad range of storage platforms. Then we added support for IaaS platforms such as vCAC, OpenStack, and CloudStack. When containers emerged, we added control of Docker containers to our platform. We plugged instrumentation into CloudFoundry, OpenShift, Mesos, and Kubernetes; we took advantage of the open source nature of these systems to contribute our instrumentation back to the community, in addition to building our own distribution of Kubernetes. As our customers started to look at the public cloud, we developed instrumentation to control workloads running on AWS, Azure, and SoftLayer. With a single platform, our customers control their on-prem and off-prem workloads. Each step we have taken has been another proof point in our journey to a unified, integrated autonomic platform that controls any type of workload on any type of infrastructure, anywhere, all the time. A vision we have been pursuing since Day One.
The beauty of the journey is that we are not alone. Our customers are with us, inspiring and driving every one of the steps we are taking.
When one of our biggest customers, a Global 2000 bank that controls more than 90,000 VMs on ESX with our platform, experimented with OpenStack, we partnered with them to extend our coverage to OpenStack. Since then, the bank has embarked on Docker, OpenShift, and Kubernetes, and we were there to provide the single autonomic control platform. In parallel, the bank introduced new storage (Pure) and network (Arista) platforms, and we were there. As their CTO likes to say, there are a lot of moving parts that keep changing, but the one constant is Turbonomic.
We partnered with Verizon, providing the foundation for Verizon’s Intelligent Cloud Control, which provides real-time automatable price, performance, and compliance-based workload placements, as well as sizing and configuration decisions to deploy and migrate workloads to and across cloud service providers.
These are just two customer stories among more than 1,500. Environments will always be a mix of different workloads (applications, containers, VMs); infrastructure (all-flash arrays, hyper-converged, and converged); clouds (AWS, Azure, IBM SoftLayer, Google Cloud, etc.); and software (vSphere, Hyper-V, KVM, Kubernetes, OpenStack, and so on). Multiple technologies and vendors serve different needs and different types of applications.
We are the only vendor that can provide unified, integrated autonomic control across all of these heterogeneous environments. Why? Because the core of our platform is a common abstraction that provides a common semantic representation of the entire IT environment across all of its layers. This common abstraction enables us to control any type of workload on any type of infrastructure, anywhere. And it not only lets us control today’s environments; it also enables our customers to easily adopt any future technology.
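One way to picture a common semantic abstraction like this is as a supply chain in which every entity buys resources from the layer below and sells them to the layer above, with the price of a resource rising sharply as it approaches saturation. The sketch below is a toy rendering of that idea; the class, pricing formula, and example chain are my own illustrative assumptions, not the platform's actual model:

```python
# Toy supply-chain abstraction: every entity, regardless of technology,
# is modeled the same way, and congestion anywhere in the chain surfaces
# as a high price. Names and the pricing formula are illustrative only.

class Entity:
    def __init__(self, name, capacity, used, provider=None):
        self.name, self.capacity, self.used = name, capacity, used
        self.provider = provider  # the entity this one buys resources from

    def price(self):
        """Price grows without bound as utilization approaches 1 (a common
        way to model congestion), so a nearly saturated entity gets costly."""
        utilization = self.used / self.capacity
        return 1.0 / (1.0 - utilization) ** 2

    def supply_chain_cost(self):
        """Cost of running here: this entity's price plus the price of every
        provider down the stack, so a bottleneck at any layer is visible."""
        cost = self.price()
        if self.provider:
            cost += self.provider.supply_chain_cost()
        return cost

# app -> VM -> host: the same traversal works for any stack depth, which is
# the point of a common abstraction across layers.
host = Entity("host-1", capacity=100, used=90)          # nearly saturated
vm = Entity("vm-1", capacity=8, used=2, provider=host)
app = Entity("app-1", capacity=4, used=1, provider=vm)

print(round(app.supply_chain_cost(), 1))  # the saturated host dominates the cost
```

Because every layer speaks the same "price" language, a new technology only needs a thin adapter mapping it into the abstraction; the decision logic above it stays unchanged.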
Turbonomic Understands All Layers of the Stack
Again, performance is only as good as the weakest link. Resource allocation decisions cannot be made in isolated layers of the stack. Applications need compute, storage, and network in order to perform. Scheduling and allocating resources at the PaaS, CaaS, and IaaS layers separately, without understanding and analyzing the interdependencies, runs the risk of interference and starvation that can lead to performance degradation and unhappy customers.
Moreover, as I mentioned earlier, resource allocation cannot be based only on the presumed need of the applications, without understanding their actual real-time consumption at any given point in time. Only by considering both can sound resource allocation decisions be made that prevent performance degradation while maximizing resource efficiency.
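As a toy illustration of weighing both signals, a sizing decision might grow an allocation to cover observed peak consumption plus headroom, while shrinking an over-asked allocation only conservatively. The rule, the 20% headroom, and the shrink limit are assumptions of mine for illustration, not the product's policy:

```python
# Illustrative sizing rule combining presumed need (what the application
# requested) with observed real-time consumption. The headroom and shrink
# limit are hypothetical values chosen for the example.

def right_size(requested_mb, observed_peak_mb, headroom=0.20, max_shrink=0.5):
    """Grow to cover actual peak usage plus headroom, but never shrink
    below half the original request in one step, so a quiet period does
    not starve the application later."""
    target = observed_peak_mb * (1 + headroom)
    floor = requested_mb * max_shrink
    return round(max(target, floor))

print(right_size(8192, 2500))  # over-asked: trimmed, but only to 4096
print(right_size(2048, 2400))  # under-asked: grown to cover peak, 2880
```

Sizing on the request alone would waste 5 GB in the first case; sizing on observed consumption alone would have starved the second workload at deployment time, before any consumption had been observed.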
Turbonomic is the only platform that understands all of the interdependencies across all of the layers of the stack, as well as the presumed need and the real-time consumption of each application along the entire resource supply chain.
Turbonomic is for All IT Functions
The trend toward rapid service delivery is driven by the now-ubiquitous digital economy. Applications are the lifeblood of today’s organizations. They must be architected, developed, and deployed, then continuously updated with new functionality or bug fixes and re-deployed. Agile development practices have increased the pace at which application teams develop and deploy new builds. Containers have further increased that pace, ushering in a new era of continuous integration and continuous deployment (CI/CD).
Self-service portals, distributed micro-service architectures, scale-out containerized applications, and elastic compute are making environments too complex, too dynamic to manage without autonomic IT. The days of manually spinning up a VM days or weeks after a request has been made are dead. Static, siloed capacity planning is dead. Monitoring is dead. Building scripts that automate threshold-based processes cannot scale with dynamic heterogeneous environments. Dedicate human resources to any of these activities and IT will only be a bottleneck. You can try, but you will fail. And you will waste valuable resources for your efforts.
Real-time autonomic performance brings these silos together. It enables Operations and Infrastructure to deliver the service they are tasked to deliver. It empowers Architects to think big and drive the adoption of new technologies. It enables managers to streamline processes and develop their people. It helps engineers scale applications without worrying about performance. It gives executives a path to lead with a vision, not just a bottom-line.
Turbonomic Assures Application Performance
An autonomic performance platform is the ONLY kind of software platform that can assure application performance by continuously placing and sizing workloads. And to truly assure application performance, our autonomic platform also delivers:
- Storage latency control
- Network latency control
- IO storm control
- Compliance control
- Licensing control
- Workload priority control
- QoS adherence
- Horizontal auto-scaling
- Vertical auto-scaling
- On-demand elastic infrastructure
- Dynamic affinity control
- Workload deployment
- Cost control
- Budget control
- Auto bursting
- Cross-layer orchestration
Only a platform that can understand all the interdependencies across the entire environment can deliver a unified control system, an autonomic platform.