Pivotal Cloud Foundry - Why is it Complex?
Containers and microservices are rapidly taking hold as they enable faster deployment of applications, services, and scaling on demand. The catch: more moving parts, greater density, and more layers to manage.
How do you assure performance while maintaining compliance and minimizing cost? That was complicated enough with monolithic applications running on VMs. With containers it is harder still, because shared environments create complex trade-offs:
- How do you manage a composite or hybrid application that runs some services on containers and other components on VMs, and how do you maintain a single source of truth for the whole architecture?
- How do you avoid “noisy neighbor” congestion due to containers peaking together?
- How do you avoid performance issues due to resource congestion in the underlying infrastructure?
- Do you provision for the worst-case scenario? That's expensive. Or do you provision for the average? That's risky.
Assuring performance, maintaining compliance, and minimizing cost—all at the same time—is a multi-dimensional problem that is not unique to containers. In fact, solving it is critical to managing the full stack of infrastructure with automation.
The Benefit of Pivotal Cloud Foundry as a Container Platform
Pivotal Cloud Foundry (PCF) is one of several container platforms that organizations are using to unleash developer productivity and streamline operations. Its more prescriptive approach to application deployment is particularly attractive to organizations that want to focus on application development rather than container management. The platform also provides high availability for its components and your services. But you still need to manage the full stack: the deployment and the virtualized compute, storage, and network beneath it.
One of the primary challenges IT faces is simply understanding the dependencies between the components of the PCF platform and the underlying infrastructure. As adoption matures from test/dev into production, these dependencies and their implications for performance, compliance, and cost matter. It's why our customers are thrilled to see how Turbonomic stitches it all together. But, as we remind them, that's not the end game: the end game is automation that frees them from a challenge beyond human scale.
Pivotal Cloud Foundry and Turbonomic - Real-Time Elasticity
Turbonomic provides full-stack visibility as it stitches the Pivotal Cloud Foundry PAS service (application) to the droplet (container) to the Diego Cell (VM) to the underlying providers, whether private cloud or public cloud. For the first time, you can have insight into the performance, utilization, policies, and risks throughout the stack in a single system. Because Turbonomic understands the relationships between entities at every layer of the stack, it can accurately analyze resource demand against availability and drive actions that assure the containers and the infrastructure have the right resources at the right time, managing the trade-offs of performance, efficiency, and compliance.
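To make the idea of "stitching" concrete, the sketch below models each layer as an entity that consumes resources from a provider one layer down, so the whole chain from application to host can be walked in one pass. All names and the data model here are invented for illustration; this is not Turbonomic's actual API or schema.

```python
# Hypothetical illustration of a full-stack "supply chain": each entity
# consumes resources from a provider one layer down. Names are invented.
from dataclasses import dataclass
from typing import Optional

@dataclass
class Entity:
    name: str
    layer: str                        # e.g. "application", "container", "vm", "host"
    provider: Optional["Entity"] = None
    cpu_used: float = 0.0             # normalized utilization, 0.0-1.0

    def stack(self) -> str:
        """Walk from this entity down to the underlying infrastructure."""
        entity, chain = self, []
        while entity is not None:
            chain.append(f"{entity.layer}:{entity.name}")
            entity = entity.provider
        return " -> ".join(chain)

# One stitched chain: PAS service -> droplet -> Diego Cell -> host.
host = Entity("esx-01", "host", cpu_used=0.70)
cell = Entity("diego-cell-3", "vm", provider=host, cpu_used=0.85)
droplet = Entity("checkout-droplet", "container", provider=cell, cpu_used=0.90)
service = Entity("checkout", "application", provider=droplet)

print(service.stack())
```

Because every entity knows its provider, utilization and risk at any layer can be traced to the layers beneath it, which is what makes full-stack analysis possible.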
The analytics use this full-stack understanding to determine the specific actions required, whether that is optimizing the container (vertical scaling), managing the Cloud Foundry infrastructure by moving VMs to better compute or storage, or scaling out your Diego Cells with knowledge of the underlying infrastructure, all based on your application demand. Turbonomic then leverages Pivotal Operations Manager and BOSH to orchestrate and execute these actions.
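A toy sketch of that kind of decision logic: given utilization at each layer, pick between resizing the container, scaling out Diego Cells, or moving a cell to a better host. The function name, inputs, and the 80% congestion threshold are all illustrative assumptions, not Turbonomic's actual analytics.

```python
# Hypothetical decision sketch: choose an action for a congested container
# based on headroom at each layer of the stack. Thresholds are illustrative.
def recommend_action(container_cpu: float, cell_cpu: float, host_cpu: float,
                     congested: float = 0.80) -> str:
    if container_cpu < congested:
        return "no action"                 # demand is already being met
    if cell_cpu < congested:
        return "resize container up"       # vertical scaling; the cell has headroom
    if host_cpu < congested:
        return "scale out Diego Cells"     # cell is hot, but the host can fit another
    return "move cell to another host"     # the whole stack is congested

print(recommend_action(0.90, 0.85, 0.70))  # scale out Diego Cells
print(recommend_action(0.90, 0.60, 0.95))  # resize container up
```

The point of the sketch is that the right action depends on headroom below the congested layer, which is why decisions made from container metrics alone can be wrong.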
That’s right, software (not you) makes decisions to drive real-time elasticity in the infrastructure.
Continuous analysis of these dependencies allows Turbonomic to make the multi-dimensional trade-offs that ensure its actions are trustworthy. Only trustworthy actions can be fully automated to achieve that self-managing state. Now that you understand how Turbonomic stitches everything together, and why, in my next post I'll cover the specific types of actions that come out of Turbonomic's full-stack analysis. Stay tuned!