As we enter a new decade, we decided to take a look back at the history of cloud computing and how the space has evolved from the early 2000s to today.
This article is the first part of a blog series; it covers the 2000s (2000-2009), the decade that marked the emergence of the cloud computing space.
The modern cloud computing space has enabled many of the innovative technologies and solutions we have seen over the last two decades.
Technically, concepts of cloud computing can be traced back to the 1960s - but to me, the origin story of modern cloud computing can be attributed to Salesforce.com, which was founded in 1999 and later launched one of the first successful public Software-as-a-Service (SaaS) offerings.
As cloud computing gained momentum during the mid-2000s, many organizations struggled to understand what exactly 'Cloud Computing' was. A memorable example is when Larry Ellison, the founder and CEO of Oracle Corporation at the time, shared his thoughts on Cloud Computing in 2008 (a must-listen). While Ellison's provocative comments highlighted his lack of understanding of cloud computing at the time, most people were in the same boat and did not yet fully realize its benefits either.
One of the contributors to the confusion was the common practice of 'cloudwashing,' where vendors took their legacy software solutions, made them accessible over the internet, and marketed them as cloud solutions.
So, what is Cloud Computing?
I prefer the definition by the National Institute of Standards and Technology (NIST). In 2011, they defined Cloud Computing as:
“Cloud computing is a model for enabling ubiquitous, convenient, on-demand network access to a shared pool of configurable computing resources (e.g., networks, servers, storage, applications, and services) that can be rapidly provisioned and released with minimal management effort or service provider interaction. This cloud model is composed of five essential characteristics, three service models, and four deployment models.”
The Five Essential Characteristics they listed in their paper are:
- On-Demand Self Service – ability to deploy, control and delete in a self-service manner (although not mentioned, I would also add ‘through APIs’)
- Broad Network Access – accessible over a network, not just over the Internet
- Resource Pooling – secure and isolated multi-tenant model on shared resources
- Rapid Elasticity – scale vertically and horizontally
- Measured Service – resource usage is monitored, measured and reported (and, although not mentioned, also 'billed' in the context of the public cloud)
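The 'Measured Service' characteristic is easy to make concrete: usage is sampled per resource, then rolled up into a report or a bill. A minimal sketch, using hypothetical usage samples and a made-up rate:

```python
def metered_bill(hourly_usage_gb, rate_per_gb_hour):
    """Minimal sketch of 'Measured Service': per-hour usage samples
    are rolled up and billed. Rates and samples are hypothetical."""
    return sum(gb * rate_per_gb_hour for gb in hourly_usage_gb)

# Three hourly storage samples (GB in use) at a made-up per-GB-hour rate:
bill = metered_bill([100, 120, 90], 0.0001)
print(round(bill, 3))  # 0.031
```

The same roll-up drives both the "reported" and the "billed" aspects; only the consumer of the number differs.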
The three Service Models are:
- Platform-as-a-Service (PaaS) – a model that enables users to consume and manage a set of software or development components, usually by leveraging programming languages, libraries or tools supported by the provider
- Infrastructure-as-a-Service (IaaS) – a model that allows users to deploy and manage servers with compute, network, and storage. Users manage everything from the operating system and up
- Software-as-a-Service (SaaS) – a model that provides users with access to software running on the provider’s infrastructure; users manage only the application’s settings and create their own content on top
And the four deployment models are:
- Public Cloud – for example, the primary public cloud offerings from Azure, AWS, Google, Salesforce, etc.
- Private Cloud – for example, OpenStack, Cloudstack and even VMware
- Community Cloud – a cloud built for specific use by specific users or organizations. For example: Azure Government, AWS GovCloud, Salesforce Government Cloud and to some extent, the AWS China region
- Hybrid Cloud – a mix of the above models, where applications can communicate and pass data between clouds
The NIST definition is certainly due for an update, given that it was written nine years ago. New service models and technologies have emerged since then, but its core definition still holds.
2000 – 2009: The Emergence of Cloud Computing
The 2000s decade started with a problem: the Y2K problem. Luckily for us, all the ominous predictions of a complete technology meltdown never materialized.
The modern Cloud Computing space was started by an online retailer called Amazon.
There are a lot of myths about how Amazon Web Services (AWS) came to be; one is that AWS started when Amazon wanted to rent out excess compute capacity left idle after the holiday shopping season (which folks from Amazon say is not true).
When AWS was launched in 2002, it offered only a few services and tools, mostly focused on helping partners integrate with Amazon's e-commerce platform. In 2004, Amazon introduced a beta of Amazon Simple Queue Service (SQS), an online queue service for developers. Later, in 2006, AWS launched several new services, including Amazon Simple Storage Service (S3) and an IaaS offering (in beta) called Amazon Elastic Compute Cloud (EC2). EC2 and S3 were launched with a pricing model that was unique at the time, called On-Demand, where users paid only for the capacity they used.
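The pattern SQS popularized is decoupling producers from consumers through a hosted queue, so neither side has to wait for, or even know about, the other. A minimal local sketch of that pattern, using only Python's standard library (not the AWS API; the SQS calls in the comments are the rough equivalents):

```python
from queue import Queue
from threading import Thread

# A local, in-memory stand-in for a hosted queue such as SQS:
tasks = Queue()

def producer():
    for order_id in range(5):
        tasks.put({"order_id": order_id})   # roughly: sqs.send_message(...)

def consumer(results):
    while True:
        msg = tasks.get()                   # roughly: sqs.receive_message(...)
        if msg is None:                     # sentinel: no more work
            break
        results.append(msg["order_id"])
        tasks.task_done()                   # roughly: sqs.delete_message(...)

results = []
worker = Thread(target=consumer, args=(results,))
worker.start()
producer()
tasks.put(None)                             # tell the worker to stop
worker.join()
print(results)
```

Each message is processed exactly once, and the producer never blocks on the consumer; that decoupling is what made queueing a natural first cloud service for developers.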
Amazon focused on developers with its new services - a strategy that would later lead to "shadow IT", where users utilized the cloud to bypass slow internal corporate IT workflows. This strategy proved hugely successful for Amazon.
|Fun fact: Did you know that some of the services announced by AWS in 2006 carried the name 'Alexa'? That was well before Alexa was introduced as Amazon’s virtual assistant in 2014.|
In 2009, AWS introduced many of the services we know today. Some noteworthy examples include:
- Amazon CloudWatch – introduced in May 2009, it offered a real-time monitoring service for EC2 instances using metrics gathered from the underlying Xen-based hypervisor. In 2017, AWS released a CloudWatch agent for guest OS-level and custom metrics, which supported servers running on-premises as well.
- Amazon Virtual Private Cloud (VPC) – announced in August 2009, allowing users to create logical, isolated networks on AWS
- Amazon Relational Database Service (RDS) – announced in October 2009, it is a cloud-based relational database service originally based on MySQL. Today the service offers various database engines, including Amazon Aurora, PostgreSQL, MariaDB, Oracle Database and SQL Server. Database-as-a-Service (DBaaS) is generally considered a type of PaaS.
- AWS Auto Scaling – unveiled in May 2009 along with ELB and CloudWatch, the service allowed users to scale their fleets of EC2 instances out and in horizontally by leveraging CloudWatch metrics and scaling policies.
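The Auto Scaling model described above, CloudWatch metrics feeding scaling policies, can be sketched as a toy policy function. The thresholds and bounds below are illustrative assumptions, not AWS defaults:

```python
def desired_capacity(current, cpu_avg, scale_out_at=70.0, scale_in_at=30.0,
                     minimum=1, maximum=10):
    """Toy scaling policy: add an instance when average CPU is high,
    remove one when it is low, and always respect the group bounds.
    Thresholds and bounds here are illustrative, not AWS defaults."""
    if cpu_avg > scale_out_at:
        return min(current + 1, maximum)   # scale out, capped at maximum
    if cpu_avg < scale_in_at:
        return max(current - 1, minimum)   # scale in, floored at minimum
    return current                         # steady band: no change

print(desired_capacity(4, 85.0))  # scale out under load -> 5
print(desired_capacity(4, 12.0))  # scale in when idle -> 3
print(desired_capacity(4, 50.0))  # steady band -> 4
```

A real scaling group evaluates such a policy on a schedule against aggregated metrics, with cooldown periods to avoid oscillating; the core decision is this simple comparison.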
In 2009, Amazon also unveiled two new pricing models for its EC2 service, which until then had offered only an On-Demand pricing model. The first was Amazon EC2 Reserved Instances (RIs for short), which allowed users to reserve capacity and reduce costs compared with on-demand rates. Reserved Instance commitments also assisted Amazon with capacity planning for its global infrastructure, especially in the early days of the business, when capital investments in infrastructure had to be made meticulously.
The second new pricing model was Amazon EC2 Spot Instances, which allowed users to "bid" on unused compute capacity at a fraction of the on-demand cost. Spot Instances are still considered the cheapest pricing model on AWS (savings of up to 90%); however, because they can be terminated with only a two-minute warning (when the market price rises above the bid price), they are mostly used with fault-tolerant and flexible applications.
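The original Spot bidding mechanics can be sketched in a few lines: the instance runs while the market price stays at or below the bid and is interrupted once it rises above it. The prices below are hypothetical, and real AWS delivers a two-minute interruption notice rather than stopping abruptly:

```python
def spot_lifecycle(bid, hourly_market_prices):
    """Toy model of the original Spot market: run while the market
    price stays at or below the bid, stop once it rises above it.
    All prices here are hypothetical."""
    hours_run, cost = 0, 0.0
    for price in hourly_market_prices:
        if price > bid:
            break              # interrupted: market price exceeded the bid
        hours_run += 1
        cost += price          # charged the market price, not the bid
    return hours_run, cost

# A $0.05 bid against a sequence of hypothetical hourly market prices:
hours, cost = spot_lifecycle(0.05, [0.02, 0.03, 0.04, 0.06, 0.02])
print(hours, round(cost, 2))   # ran 3 hours for $0.09, then interrupted
```

Note that the user pays the fluctuating market price, not the bid; the bid only sets the ceiling at which the workload is willing to keep running.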
In 2007, IBM, a company with a long legacy and history in virtualization and cloud dating all the way back to the 1950s, announced that it planned to build clouds for enterprises and provide additional services on top. IBM released several software and hardware solutions for the cloud (such as IBM CloudBurst in 2009) but officially launched its own cloud computing offering, IBM SmartCloud, in 2011.
In 2008, a new cloud challenger emerged: Google! The company introduced its first public PaaS offering, Google App Engine. Similar to AWS's approach, Google focused on the development and hosting of web applications on Google's infrastructure with Google App Engine.
The same year, another major player ascended: Microsoft! In October 2008, at the Professional Developers Conference, Microsoft announced Windows Azure (in technical preview), allowing users to host web applications in Microsoft data centers.
|Fun Fact: Did you know that the code name for Windows Azure at Microsoft was "Project Red-Dog"? The hypervisor was initially a fork of Windows code-named "Red-Dog OS."|
In 2009, yet another cloud provider was born at an e-commerce company, this time from China: Alibaba Group. Alibaba Cloud was founded in 2009 and opened its first data center the following year.
Another noteworthy event from that decade was the founding of VMTurbo, Inc., now Turbonomic. VMTurbo was incorporated in December 2008 and opened for business in 2009 after raising $7.5M in Series A funding.
When VMTurbo launched, it focused on optimizing virtualized environments running VMware and other hypervisors - but the company's founders had a clear vision of the future: to assure the performance of any workload regardless of where it runs, including in a multicloud deployment model.
The image below is from a deck the founders of VMTurbo used to raise funding for the company:
To be continued…
The ascension of cloud computing in the 2000s made waves in the IT industry, but it only scratched the surface. In the next article, we will turn our gaze to the 2010s and explore how the space has evolved to the present day.
The recap in this series is based on extensive research and my own experience working in the field. To keep the articles short, I had to make editorial decisions on what to include. If there is a notable milestone that you feel is missing, please let me know in the comments.
The series continues in part 2.
See what Application Resource Management can do for your cloud strategy.