We are thrilled to announce Amazon EBS gp3 as a supported storage tier in the Turbonomic 8 Next-Generation Cloud Volumes Optimization Engine.
Co-authored by Ying Wei, Kshitij Dholakia, and Rick Ochs
The Business Value of Cloud Storage Optimization
Storage is one of the most widely used cloud services globally. AWS and Azure offer many types of storage services, such as AWS object storage (Amazon S3), Azure Blobs, Azure Queues, and more. This blog will focus on block-level storage, which provides boot and data volumes for AWS EC2 instances and Azure Virtual Machine instances.
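To make the business value concrete, the sketch below compares the monthly cost of a gp2 volume with an equivalent gp3 volume. The prices are illustrative us-east-1 list prices (an assumption on our part, not figures from this post); always check current AWS pricing before acting on them.

```python
# Back-of-the-envelope gp2 vs. gp3 monthly cost comparison.
# Prices below are illustrative us-east-1 list prices (assumption);
# check current AWS pricing before relying on these numbers.

GP2_PER_GB = 0.10          # $/GB-month (IOPS bundled with capacity)
GP3_PER_GB = 0.08          # $/GB-month
GP3_BASE_IOPS = 3000       # IOPS included with every gp3 volume
GP3_BASE_THROUGHPUT = 125  # MB/s included with every gp3 volume
GP3_PER_IOPS = 0.005       # $/provisioned IOPS-month above the baseline
GP3_PER_MBPS = 0.04        # $/(MB/s)-month above the baseline

def gp2_monthly_cost(size_gb: float) -> float:
    """gp2 ties IOPS to capacity (3 IOPS/GB), so cost is capacity only."""
    return size_gb * GP2_PER_GB

def gp3_monthly_cost(size_gb: float, iops: int = 3000,
                     throughput_mbps: int = 125) -> float:
    """gp3 prices capacity, IOPS, and throughput independently."""
    extra_iops = max(0, iops - GP3_BASE_IOPS)
    extra_tput = max(0, throughput_mbps - GP3_BASE_THROUGHPUT)
    return (size_gb * GP3_PER_GB
            + extra_iops * GP3_PER_IOPS
            + extra_tput * GP3_PER_MBPS)

if __name__ == "__main__":
    size = 1000  # a hypothetical 1 TB data volume
    print(f"gp2: ${gp2_monthly_cost(size):.2f}/month")  # $100.00
    print(f"gp3: ${gp3_monthly_cost(size):.2f}/month")  # $80.00
```

At equal baseline performance, simply retiering the example volume from gp2 to gp3 saves about 20% per month, which is why gp3 support matters to an optimization engine.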
Over years of experience, I have encountered and handled performance issues at the application, server, and network layers, but the storage layer was often the hardest to address. Enterprise storage used to be monolithic: immovable, discouraging of change, and reluctant to be reconfigured or adapted to the changing needs of applications. Modern enterprise storage allows easier, more flexible management, with the ability to control both the application-facing and backend configuration. Managing storage in the enterprise, however, is still a full-time task, since more flexibility and more capabilities mean more options. Software-Defined Storage (SDS) has been abstracting away daily management decisions, but since no two environments are alike, storage admins should not rely on SDS alone to meet their environment's specific needs.
What to consider before investing in an all-flash array
There have been many studies on best practices for managing storage I/O and its complexity. Whether a SAN environment is small or large, controlling I/O while maintaining performance isn't easy. This begs the question: how can we optimize our environments to support large amounts of bandwidth while serving as many I/Os (transactions) as possible?
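The two goals in that question are linked by a simple relation: throughput is roughly IOPS multiplied by I/O size. A quick sketch illustrates why a workload that looks modest in IOPS can still be bandwidth-hungry (the workload figures are hypothetical, not from this post):

```python
# Throughput and IOPS are two views of the same workload:
#   throughput (MB/s) = IOPS * I/O size (MB)
# A volume sized for high IOPS at small blocks can still bottleneck
# on bandwidth once the block size grows.

def throughput_mbps(iops: float, io_size_kb: float) -> float:
    """Sustained throughput implied by an IOPS rate at a given I/O size."""
    return iops * io_size_kb / 1024  # KB -> MB

# Hypothetical workloads (assumptions for illustration only):
oltp = throughput_mbps(iops=16000, io_size_kb=8)        # small random I/O
analytics = throughput_mbps(iops=1000, io_size_kb=512)  # large sequential I/O

print(f"OLTP-style:      {oltp:.0f} MB/s at 16,000 IOPS")  # 125 MB/s
print(f"Analytics-style: {analytics:.0f} MB/s at 1,000 IOPS")  # 500 MB/s
```

The analytics workload drives four times the bandwidth at one-sixteenth the IOPS, which is why optimizing for transactions alone can leave a bandwidth bottleneck in place.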
Over the past year and a half, I’ve been fortunate to work in an alliance capacity with a handful of wonderfully disruptive technologies. The first among these was Pure Storage, which at the time offered a single product line, FlashArray, in three flavors: FA-405, FA-420, and FA-450. The FlashArray product line was uniquely positioned as the leader in scale-up, all-flash array (AFA) enterprise storage, differentiated as the most economical combination of performance and efficiency in its category. This past Monday at Pure’s inaugural Pure//Accelerate Conference, Pure Storage announced a product called FlashBlade. And this is where our story begins.
With constant budget pressure, how do you address steadily increasing demands on your IT service model? How will you ensure you are not leaving money on the table? There is a misconception in the IT industry about the relationship between hardware and application performance. It seems easy to respond to performance questions with more hardware. However, that answer is wasteful and eventually incurs higher costs for cooling, power, and licensing. Even worse, it can lead to the unintended consequence of decreased application performance. To put it more bluntly:
In today’s Internet age, we implement new technology and build new products to achieve one goal: to make applications run faster, more smoothly, and with less “lag.” However, even when we do have the capacity to power our applications, IT admins and developers often battle the elusive frustration of storage and network latency [for some reason compute gets a pass :-) ].
“I’m using NetApp as my SAN, how can I have VMTurbo see and manage my storage layer?” Because of how VMTurbo is often introduced, many immediately assume it is a platform exclusively for the compute and VM layers of the IT stack. Most likely this is because the industry has traditionally built products this way, focused either on the compute layer or on the storage layer. Or perhaps it’s because companies like NetApp, EMC, HP, Dell, and Pure have invested heavily in software products to manage their storage environments through auto-tiering, replication, and thin provisioning. Either way, we often lack an easy way to map the interdependencies of the entire IT stack in our environment, and to understand how to drive out risk in the relationship between the virtual environment and the SAN.