Today is a fantastic day as we share the news of our next-generation Turbonomic platform. Turbonomic 8—now in Preview—is built to scale to even the largest and most complex multi-cloud and hybrid environments, as proven by our early access customers, each managing hundreds of thousands of objects with a single Turbonomic platform instance!
The most common complaint I hear from tech folks in the field is that they don't have enough time to experiment and learn new technologies. As a systems architect, I had to work hard throughout my career to stay on top of trends and new technologies with both research and hands-on usage, so I know your pain.
It's a bird! It's a plane! No, it's Super Clusters!
One of the most valuable capabilities that organizations enjoy with Turbonomic is the ability to create what we call "super clusters". A super cluster is a virtual resource pool composed of physical clusters in your environment.
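To make the idea concrete, here is a minimal sketch of pooling several physical clusters into one virtual capacity view. The cluster names and capacity figures are invented for illustration; this is not Turbonomic's actual data model.

```python
# Hypothetical sketch of a "super cluster": summing the capacity of
# several physical clusters into one virtual resource pool.
# Cluster names and figures are invented for illustration.

physical_clusters = [
    {"name": "cluster-a", "cpu_ghz": 480.0, "mem_gb": 4096},
    {"name": "cluster-b", "cpu_ghz": 320.0, "mem_gb": 2048},
    {"name": "cluster-c", "cpu_ghz": 160.0, "mem_gb": 1024},
]

def pool_capacity(clusters):
    """Aggregate physical clusters into one virtual resource pool."""
    return {
        "cpu_ghz": sum(c["cpu_ghz"] for c in clusters),
        "mem_gb": sum(c["mem_gb"] for c in clusters),
    }

print(pool_capacity(physical_clusters))  # {'cpu_ghz': 960.0, 'mem_gb': 7168}
```

Workloads placed against the pooled view can then land on whichever physical cluster has headroom, rather than being boxed into one cluster's limits.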
There is no shortage of confusion about how CPU queueing works and how it ultimately affects your application and environment performance. Virtualization gave the industry something wonderful by enabling sharing of physical hardware resources, but it also opened the door to hidden issues that IT ops and application developers still struggle with every day.
Let’s quickly review what CPU queueing is and how processor wait times can have a catastrophic effect further up the stack.
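As a quick concrete illustration, a hypervisor's CPU ready counter (time a vCPU spent waiting for a physical core) is usually read as a percentage of the sampling interval. The sketch below uses vSphere's default 20-second real-time interval; the millisecond figures are made up for illustration.

```python
# Sketch: converting a CPU ready summation (in milliseconds) into a
# ready percentage over a sampling interval, averaged per vCPU.
# Numbers are hypothetical; 20 s is vSphere's default real-time interval.

def cpu_ready_percent(ready_ms: float, interval_s: float = 20.0,
                      vcpus: int = 1) -> float:
    """Ready % = (ready ms / interval ms) * 100, averaged per vCPU."""
    return (ready_ms / (interval_s * 1000.0)) / vcpus * 100.0

# A VM that spent 1,000 ms of a 20 s sample waiting on a physical core
# was queued 5% of the time -- often enough to feel sluggish up the stack.
print(cpu_ready_percent(1000))           # 5.0
print(cpu_ready_percent(4000, vcpus=4))  # 5.0 per vCPU
```

The key point: the guest OS never sees this wait directly, so an application can look CPU-starved while in-guest utilization graphs look healthy.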
Now that we have our requirements and constraints defined from our first post, and our working single VM infrastructure-as-code built from our second post, it’s time to start the big build!
We already mapped out our desired multi-VM architecture during our initial discussions with the dev team. The bonus of this architecture is that we are also leaning into a services-style approach. That means we may be able to break out our SQL and eventual NoSQL clusters as shared services, and even port them to a PaaS in the cloud if desired. Everything we do should be done with an eye on the future desired state.
Our first post in the series introduced the scenario where our IT teams on the Cloud Rush application had an application that needed to make its way to production. This is often the case: the Ops team is handed the working version and asked to work backwards. It's important to understand that this happens because development teams often feel they have to do a lot on their own to get products built faster. This is our chance to bring those two teams together and use the power of good IT architecture and Infrastructure-as-Code to ensure both speed and consistency of outcome.
There are many questions in the air when it comes to architecting your application for a cloud or virtual environment. Designing with a systems thinking approach means that we want to look at our requirements, constraints, assumptions, and risks. The best way to do this is to look at a specific scenario played out, which is what we are going to do here in the first post of our Couch to Cloud Native (C2CN) series.
We've all seen the headline: "automation is taking away jobs". What we have to see beyond the headline is that automation can be a good thing. It doesn't take long to realize the benefits you can gain if you take a few moments each day to track the repetitive and relatively mundane tasks you do.
Automation is much more than just the mundane and repetitive stuff. The goal of automation is to increase the flow (and value) of your work. What makes automation successful are five key features:
The world of IT Operations has been changing in many ways over the past few years. The rise of public cloud created an urgency for traditional operations teams to adapt their own systems and processes to either embrace a public cloud infrastructure, or to create a more cloud-like experience with their internal infrastructure to keep pace.
Public cloud adoption is rising rapidly, but it will not unseat every data center regardless of the rate of adoption. Many data centers are moving towards managed colocation and on-demand providers, mostly because it opens the door to more programmatic approaches to building and managing infrastructure and removes the overhead of managing the physical environment (e.g. power and cooling).
Infrastructure-as-Code, or IaC, has become one of the dominant practices enabling accelerated and more consistent management of on-premises and public cloud infrastructure.
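The core idea behind IaC is declaring a desired state and letting tooling converge reality toward it. Here is a toy sketch of that model, not any particular tool's API; the resource names and fields are invented for illustration.

```python
# Toy sketch of the desired-state model behind IaC tooling:
# diff the declared resources against what currently exists and
# compute the actions a "plan" step would take. Purely illustrative.

def plan(desired: dict, current: dict) -> dict:
    """Return the create/update/delete actions needed to converge."""
    create = sorted(set(desired) - set(current))
    delete = sorted(set(current) - set(desired))
    update = sorted(k for k in desired.keys() & current.keys()
                    if desired[k] != current[k])
    return {"create": create, "update": update, "delete": delete}

desired = {"web-vm": {"cpus": 2}, "db-vm": {"cpus": 4}}
current = {"web-vm": {"cpus": 1}, "old-vm": {"cpus": 2}}

print(plan(desired, current))
# {'create': ['db-vm'], 'update': ['web-vm'], 'delete': ['old-vm']}
```

Because the plan is derived from declared state rather than a script of manual steps, running it repeatedly is idempotent: once reality matches the declaration, the plan is empty.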
If you’re like me, you probably had, or know someone who had a house with a room that nobody was allowed to go into except for “special occasions”. The furniture was often covered in plastic, and the carpet was impeccably clean, vacuumed weekly despite nobody walking on it except the person using the vacuum.