How many times do you get a call from the help desk about some application being slow or down? More than you would like, I’m sure. The more troubling part is that they report a business application (or several of them) being affected, and now you have to chase down the underlying root cause of the issue.
It may sound bold to think that a simple Linux utility can be life changing, but in a world of remote access and remote administration, this nifty little program just got a whole lot more valuable. How often have you wanted to run some long-running task on a remote server, only to have your connection fail because of a timeout, or because you simply forgot you had a remote session open? (Hint: it happens to me a lot.)
How many times have you found yourself bouncing between tools and products to figure out what is going on in your environment? We have seen incredible results with Turbonomic Application Resource Management (ARM) across thousands of environments, and a few very interesting use cases kept coming up among our customers.
Application performance and resource availability are critical, both in real time and over time. You already have the power of application response time in the real-time environment with Turbonomic, whether through our own APEX capabilities or through our APM partners, including Cisco AppDynamics, Dynatrace, New Relic, and AppInsights, as well as direct app targeting for SQL servers and web applications.
What if you could have that same insight into the relationship between resources and application and infrastructure performance and availability, with highly granular data, over a 13-month history?
As we talked about in the first blog, two new Application Performance Extensibility (APEX) features have been introduced for your Turbonomic 8 platform:
- Application Technology Definition (ATD) – brand new ability to define your business applications and services using a wide variety of dynamic criteria
- Data Ingestion Framework (DIF) – open-source declarative framework for creating customizable entities in Turbonomic ARM
The DIF is a very powerful and flexible framework which enables the ingestion of many diverse data, topology, and information sources to further DIFferentiate (see what I did there) the Turbonomic platform in what it can do for you.
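To make the declarative idea concrete, here is a minimal sketch of building a DIF-style JSON topology payload in Python. The field names (`version`, `updateTime`, `topology`, `uniqueId`, `type`, `matchIdentifiers`) follow the general shape of DIF entity JSON, but the exact schema depends on your Turbonomic version, so treat this as an illustration rather than a canonical payload.

```python
import json

def build_dif_payload(app_name, vm_ip):
    """Sketch of a DIF topology payload declaring a business application
    entity and stitching it to a VM that Turbonomic has already discovered.
    Field names are illustrative; check the DIF docs for your release."""
    return {
        "version": "v1",
        "updateTime": 0,  # epoch seconds of this snapshot; 0 is a placeholder
        "topology": [
            {
                "uniqueId": f"app-{app_name}",
                "type": "businessApplication",
                "name": app_name,
                # Match identifiers let Turbonomic stitch this custom
                # entity to infrastructure it already manages.
                "matchIdentifiers": {"ipAddress": vm_ip},
            }
        ],
    }

payload = build_dif_payload("web-store", "10.0.0.12")
print(json.dumps(payload, indent=2))
```

A payload like this would typically be served over HTTP so the Turbonomic DIF probe can poll it on a schedule.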
Imagine now that you get a call from the application operations team or the help desk and they tell you “customers are calling because the website is really slow”. Now what? Well, it’s actually not a problem if you have Turbonomic!
With Turbonomic 8, we have an amazing set of new capabilities that solve this problem in more than one way. Our new application-centric user experience has proven very popular: it automatically propagates risk visibility for all of your applications by showing you every dependent infrastructure and service resource.
Today is a fantastic day as we share the news of our next-generation Turbonomic platform. Turbonomic 8—now in Preview—is built to scale to even the largest and most complex multi-cloud and hybrid environments as proven by our current early access customers who are each running with hundreds of thousands of objects managed by a single Turbonomic platform instance!
The most common request I get from tech folks in the field is that they don’t have enough time to experiment and learn new technologies. As a systems architect in my career, I had to work really hard to stay on top of trends and new technologies with both research and hands-on usage, so I know your pain.
It's a bird! It's a plane! No, it's Super Clusters!
One of the most valuable capabilities that organizations enjoy with Turbonomic is the ability to create what we call “super clusters”. A super cluster is a virtual resource pool comprised of physical clusters in your environment.
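The idea can be illustrated with a toy sketch. Turbonomic’s actual super-cluster logic is internal to the product; this snippet only shows the concept of treating several physical clusters as one virtual resource pool whose free capacity is the sum of its members.

```python
# Illustration only, not Turbonomic's implementation: a "super cluster"
# viewed as one virtual pool of the free capacity across member clusters.
def super_cluster_capacity(clusters):
    """Sum free CPU (GHz) and memory (GB) across the member clusters."""
    return {
        "free_cpu_ghz": sum(c["free_cpu_ghz"] for c in clusters),
        "free_mem_gb": sum(c["free_mem_gb"] for c in clusters),
    }

clusters = [
    {"name": "cluster-a", "free_cpu_ghz": 40, "free_mem_gb": 512},
    {"name": "cluster-b", "free_cpu_ghz": 25, "free_mem_gb": 256},
]
print(super_cluster_capacity(clusters))
# {'free_cpu_ghz': 65, 'free_mem_gb': 768}
```

Placement decisions can then consider the pooled capacity instead of being boxed into a single physical cluster’s boundaries.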
There is no shortage of confusion about how CPU queueing works and how it ultimately affects your application and environment performance. Virtualization gave the industry something wonderful by enabling sharing of physical hardware resources, but it also opened the door to hidden issues that IT ops and application developers still struggle with every day.
Let’s quickly review what CPU queueing is and how processor wait times can have a catastrophic effect further up the stack.
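A quick worked example helps here. A widely used rule of thumb for vSphere converts the CPU ready counter (milliseconds of wait time summed over a sample interval) into a percentage of the VM’s scheduling opportunities spent waiting for a physical core. The 20-second interval below is the typical real-time sample interval; adjust it for other chart intervals.

```python
def cpu_ready_percent(ready_ms, interval_s=20, vcpus=1):
    """Convert a summed CPU-ready counter (ms over the sample interval)
    into a percentage. ready_ms is the total across all vCPUs, so we
    divide by the interval multiplied by the vCPU count."""
    return ready_ms / (interval_s * 1000 * vcpus) * 100

# A VM reporting 2,000 ms of ready time over a 20 s sample on 4 vCPUs
# spent 2.5% of the interval queued, waiting for a physical core.
print(round(cpu_ready_percent(2000, 20, 4), 1))  # 2.5
```

Even a few percent of ready time can translate into visible latency further up the stack, because every queued cycle delays whatever transaction the guest was trying to serve.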
Now that we have our requirements and constraints defined from our first post, and our working single VM infrastructure-as-code built from our second post, it’s time to start the big build!
We already mapped out our desired multi-VM architecture in our initial discussions with the dev team. The bonus of this architecture is that we are also leaning into a services-style approach. That means we may be able to break out our SQL and eventual NoSQL clusters as shared services, and even port them to a PaaS in the cloud if desired. Everything we do should be done with an eye on the future desired state.