Welcome to the fourth article in our "Mastering Cloud Cost Optimization" series. I’m joining you from ParkMyCloud, a Turbonomic company that focuses on cloud cost optimization – in particular, scheduled workload suspension.
Now that we have our requirements and constraints defined from our first post, and our working single VM infrastructure-as-code built from our second post, it’s time to start the big build!
From our initial discussions with the dev team, we already have a map of our desired multi-VM architecture. The bonus of this architecture is that we are also leaning into a services-style approach. That means we may be able to break out our SQL and eventual NoSQL clusters as shared services, and even port them to a PaaS in the cloud if desired. Everything we do should be done with an eye on the future desired state.
Codifying our Multi-VM Deployment
Let’s begin with the simplest layout of what our virtual machine infrastructure needs to look like. Our public code repository holds the real, complete version of what we are building. That matters because we will save some space in the blog by focusing on specific snippets; please refer to the repository for the full code as you test this out yourself.
In the Monolith to Multi-VM folder under Part 2, we built our single monolithic VM using a VMware vSphere template and Terraform to quickly spin up and tear down a clone from that template. That’s the easy bit.
This is where we turn our attention to Monolith to Multi-VM Part 3, which is the more robust version. For now, we are using a basic template, and we will worry about the application deployment in Part 4. We just want to take this on in steps that are repeatable and consistent, which is the goal of any automation: consistency of outcome that lets you safely increase the velocity of deployment.
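As a quick refresher, the Part 2 single-VM clone boils down to a resource roughly like the following. The datacenter, template, and other names here are placeholders for illustration, not necessarily what the repository uses:

```hcl
# Placeholder names for illustration; see the repository for the real values.
data "vsphere_datacenter" "dc" {
  name = "MyDatacenter"
}

data "vsphere_virtual_machine" "template" {
  name          = "base-linux-template"
  datacenter_id = data.vsphere_datacenter.dc.id
}

# Supporting data sources (resource pool, datastore, network) omitted for brevity.
resource "vsphere_virtual_machine" "vm" {
  name             = "vm"
  resource_pool_id = data.vsphere_resource_pool.pool.id
  datastore_id     = data.vsphere_datastore.datastore.id
  num_cpus         = 2
  memory           = 4096
  guest_id         = data.vsphere_virtual_machine.template.guest_id

  network_interface {
    network_id = data.vsphere_network.network.id
  }

  disk {
    label = "disk0"
    size  = data.vsphere_virtual_machine.template.disks.0.size
  }

  # Cloning from the template is what makes spin-up and tear-down fast.
  clone {
    template_uuid = data.vsphere_virtual_machine.template.id
  }
}
```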
Start with Bad Ideas which Become Good Practices
You’re quite literally getting some bad practices in this code set, because we want to show the progression from doing things manually to making the jump to efficient, codified infrastructure-as-code.
What this repository does that is not the best idea is to literally map out the code for each VM clone as standalone code blocks. Part 4 of the series will then show you how to DRY (Don’t Repeat Yourself) up that code. What I always dread as a blog reader is when new methods are introduced so rapidly that you never get a chance to see how the machine is made. That’s where I hope you’re finding this progressive blog series style helpful.
Any file you create in the folder with a .tf extension will be evaluated by Terraform as part of the provisioning. You can create these as a large single file, or separate logical files depending on your preference. I prefer to do a separate file for logical groupings of infrastructure. We won’t get into Modules yet…save that for the next blog series!
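Using that logical-grouping approach, the working folder for this part ends up looking something like the listing below. The three cluster files are the ones named in this article; any other files shown are assumptions about the repository layout:

```
monolith-to-multi-vm-part-3/
├── app-vms.tf       # three application VMs
├── sql-3-node.tf    # SQL cluster VMs
├── mongo-3-node.tf  # NoSQL cluster VMs
└── outputs.tf       # one output stanza per VM
```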
Building the Three Application VMs
You will be using a single basic template across all of these clustered nodes, which makes it easy. Then we can move the goalposts a bit further and add in the app deployments as we close out the series.
Let’s use a single file to describe all three of our application VMs. The file we will reference is app-vms.tf, located in the GitHub repository.
Our new code will include three resources, each coded with the name of the VM to make them unique. The uniqueness is needed, or else Terraform will complain of duplicate resources. In our first blog we only used the name “vm”, whereas this will use “vm-app-1”, “vm-app-2”, and “vm-app-3” in both the app-vms.tf file and the outputs.tf file.
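Condensed, the deliberate repetition in app-vms.tf looks like this (clone settings trimmed for space; the full resource blocks are in the repository):

```hcl
resource "vsphere_virtual_machine" "vm-app-1" {
  name = "vm-app-1"
  # ...same resource pool, datastore, and clone settings as the single-VM build...
}

resource "vsphere_virtual_machine" "vm-app-2" {
  name = "vm-app-2"
  # ...identical settings repeated verbatim...
}

resource "vsphere_virtual_machine" "vm-app-3" {
  name = "vm-app-3"
  # ...identical settings repeated verbatim...
}
```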
The outputs generated by the Terraform configuration in our outputs.tf are also explicitly named after the machines. There is repetitive code here that we will remove in Part 4 of the blog series. Typing terraform output displays the IPv4 and IPv6 addresses that the VMs pick up as they launch.
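Each VM gets its own explicitly named stanza in outputs.tf, along these lines. The output names and the use of default_ip_address are illustrative; check the repository for the exact attributes used:

```hcl
output "vm-app-1-ip" {
  value = vsphere_virtual_machine.vm-app-1.default_ip_address
}

output "vm-app-2-ip" {
  value = vsphere_virtual_machine.vm-app-2.default_ip_address
}

# ...and so on for vm-app-3 and, later, the six database nodes.
```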
Next up are the two database clusters, each made up of three virtual machines.
Building the SQL 3-Node Cluster VMs
The file we will reference for our SQL cluster is sql-3-node.tf, located in the GitHub repository. We are using the same simple template just to illustrate the cloning process. The next blog will show how to run inline and remote commands to complete some install scripts for the applications.
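sql-3-node.tf mirrors app-vms.tf with only the names changed. The node names below are hypothetical; see the repository for the actual values:

```hcl
resource "vsphere_virtual_machine" "vm-sql-1" {
  name = "vm-sql-1"
  # ...same template-clone settings as the app VMs...
}

# vm-sql-2 and vm-sql-3 follow the same repeated pattern.
```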
That takes care of three app servers and three SQL servers to prepare for the MariaDB cluster. The last step is our future requirement of a NoSQL environment. This is an advantage of doing IaC: we can do a lot of heavy lifting easily in code. Yay!
Building the MongoDB 3-Node Cluster VMs
The final part of our multi-VM deployment is our MongoDB servers. The file we will reference for the NoSQL cluster configuration is the mongo-3-node.tf file located here in the GitHub repository.
The same goes for our actual MongoDB install, which will be done with some easy scripts in the next blog.
Just like that, we now have nine servers that can be torn down and redeployed simply by typing terraform destroy and then terraform apply again to clone a new set.
You can also see the final outputs file with all nine output stanzas together here.
What Have We Learned and What’s Next?
Now you know how to expand your deployment with multiple virtual machines using the Terraform configuration files. What you’ve learned here is that:
- Every file ending with .tf will be evaluated and provisioned by Terraform
- You can use one or more .tf files – multiple files can be easier to organize by groups of VMs
- Repetitive code creates work but seems simple at first – welcome to technical debt
- Codifying at the outset means we can easily add components on the fly
The next step is to bring in some simple loops to remove repetitive code and to tighten up our scripts a bit. This won’t change the output at all but will simplify how much code we have. This will prove to be helpful when we go beyond just 3 machines in the cluster. Every step makes your IaC chops more versatile!
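As a teaser for Part 4, a count-based loop can collapse the three per-VM blocks into one. This is a hypothetical sketch of the technique, not the repository's final code:

```hcl
resource "vsphere_virtual_machine" "app" {
  # One block provisions all three app VMs; change count to scale the cluster.
  count = 3
  name  = "vm-app-${count.index + 1}"
  # ...shared clone settings written once instead of three times...
}

output "app-ips" {
  # A single splat expression replaces three near-identical output stanzas.
  value = vsphere_virtual_machine.app[*].default_ip_address
}
```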