As consumers seek solutions that meet their needs and requirements (the “demand”), a market develops and matures around a set of product and service offerings to meet those needs (the “supply”). To simplify transactions in the marketplace, a set of product and service descriptions develops, including a standard system of weights and measures to ensure quick, simple, fair, and equitable exchange. Standard weights and measures establish a common, independent way to assess (measure) and assure that consumers receive what they are promised and are paying for. These definitions, measures, and assurances enable a robust marketplace in which multiple suppliers compete for the consumer’s business on cost, quality, or speed of delivery.
The economic model in technology’s evolution, beginning with the mainframe.
In early computing, the mainframe community developed a standard of weights and measures referred to as MIPS (Millions of Instructions Per Second). MIPS measures the execution of a set of standard instructions on the mainframe that represent a simulated transaction exercising all resources (computing, memory, and storage). MIPS thus serves as a proxy for potential throughput – the potential outcome measure for a generic application running on the mainframe host. With MIPS as the standard performance benchmark, pricing of mainframe services developed around the consumption of MIPS. A universal MIPS measure and a robust technical specification enabled vendors to enter the mainframe market over time, improving both the cost of delivery per MIPS and the quality of offerings in a competitive market. It is important to note that in addition to mainframe hardware, mainframe software packages became priced on the MIPS deployed – the assumption being a correlation between a more powerful underlying system and the value the software running on it will deliver. MIPS enabled the development of a mainframe ecosystem that billed and thrived on a robust product description, a universal set of service measures, and an associated economic pricing model aligned with application performance.
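To make the measure concrete, here is a minimal sketch in Python of how a MIPS figure and MIPS-based pricing could be derived. The benchmark numbers and price per MIPS are purely hypothetical illustrations, not taken from any vendor.

```python
# Illustrative sketch only: the benchmark instruction count, elapsed time,
# and price per MIPS below are hypothetical, not real vendor figures.

def mips(instructions_executed: float, elapsed_seconds: float) -> float:
    """Millions of instructions per second for a benchmark run."""
    return instructions_executed / elapsed_seconds / 1_000_000


def monthly_charge(capacity_mips: float, price_per_mips: float) -> float:
    """MIPS-based pricing: cost scales with the MIPS capacity deployed."""
    return capacity_mips * price_per_mips


# A benchmark run that executes 3 billion instructions in 2 seconds:
rate = mips(3_000_000_000, 2.0)    # 1500.0 MIPS
bill = monthly_charge(rate, 40.0)  # at a hypothetical $40 per MIPS: $60,000
```

The key point is that both the performance measure and the pricing model share the same unit, which is exactly the alignment the paragraph describes.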
Distributed systems evolution: SPEC’s component performance focus vs. an application performance focus.
As applications moved to distributed systems, there was once again a need to define a set of descriptions, measures, and assurances aligned to application performance. Applications operated on dedicated systems housing dedicated compute, memory, and storage. With Windows and Unix operating systems mature enough to run on hardware from multiple vendors (IBM, HP, Compaq, Dell, and more), all on Intel chip architecture, a set of standard performance benchmarks developed through a consortium of hardware suppliers, and SPEC (the Standard Performance Evaluation Corporation) was born. SPEC set benchmarks and a standard testing methodology to validate performance claims for the underlying server systems (including compute, storage, and virtualization). Under the hardware industry’s assumption that faster components enable faster application transactions, SPEC naturally shifted focus away from application performance and toward component performance. While a few SPEC tests did focus on application proxy measures (outcome measures), they were the exception rather than the rule. SPEC is where the industry broke away from performance measures directly tied to an application’s potential throughput (an outcome measure).
In today’s world, application performance has never been more important.
Today’s IT environment is far more complex, with applications deployed across an array of distributed technologies in the data center, in the cloud, and on serverless platforms. And this is just the beginning. This complexity has led to a set of terminology naturally developing across the industry to reference the application, its components, and their relationship to the infrastructure. This taxonomy enables discussions about the design, delivery, management, and optimization of application delivery. It is where the term ‘workload’ was first introduced as a description of the behavior of the application code. However, as in the early mainframe and early distributed-systems days, the meaning is fluid and interpreted by different groups to mean different things. Today there is no agreement on a standard definition and associated measures for ‘workload’.
Today’s IT complexity requires a standard definition of Workload Profile.
While general agreement exists that the workload is the application demand placed on the platform when delivering the desired outcome, this is also where agreement stops. Most define workload as the concurrent resource consumption of the application code on the underlying platform or infrastructure. This is the definition used in ‘workload migration to cloud’, where each workload represents a server or instance to be migrated. The workload is extremely dynamic, varying with the number of users, code releases, time of day, day of month, special events, resource constraints, and many other factors. More recently, Workload Profile is the term increasingly used to represent concurrent consumption, expanded to include load variability, service predictability requirements, service constraints, risk, and priority of consumption – and ultimately total cost. In other words, the Workload Profile can be represented by a set of variables that describe and define utilization patterns; regulatory, security, and service-prioritization requirements (all forms of risk management); and total cost of delivery. This robust definition of ‘workload profile’, together with a set of measures and the development of testing tools and methodology, will enable automated provisioning, testing, and quality validation to ensure the application has the resources available to deliver on the given workload profile at any given time: an application-focused outcome measure.
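As a sketch of what such a structured profile might contain, the following Python dataclass groups the variables listed above into one record. Every field name and value here is an illustrative assumption on our part; no industry-standard schema exists yet, which is precisely the gap being discussed.

```python
# A minimal, hypothetical sketch of a Workload Profile as a data structure.
# Field names and values are illustrative assumptions, not a standard schema.
from dataclasses import dataclass, field


@dataclass
class WorkloadProfile:
    name: str
    peak_cpu_cores: float          # utilization pattern: peak compute demand
    peak_memory_gb: float          # utilization pattern: peak memory demand
    load_variability: str          # e.g. "steady", "diurnal", "bursty"
    predictability_sla: float      # e.g. 0.999 = 99.9% of requests on time
    constraints: list = field(default_factory=list)  # regulatory, security
    priority: int = 3              # 1 = business critical ... 5 = best effort
    monthly_budget_usd: float = 0.0


# Example: a hypothetical payments service with strict constraints.
profile = WorkloadProfile(
    name="payments-api",
    peak_cpu_cores=16,
    peak_memory_gb=64,
    load_variability="diurnal",
    predictability_sla=0.999,
    constraints=["PCI-DSS", "data-residency-EU"],
    priority=1,
    monthly_budget_usd=12_000,
)
```

A machine-readable record like this is what would let provisioning and testing tools validate automatically that an application has the resources its profile demands.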
Turbonomic is excited to be working with leading organizations that are focused on ensuring quality application delivery for their customers, collaborating openly to build an open industry-standard definition of Workload Profiles.
Stay tuned for more developments!