Today, Turbonomic was named to Fast Company’s Best Workplaces for Innovators List! Joined by 100 other companies like Dell, Google, Etsy, and Nestle, Turbonomic came in at #57 for its ability to create a culture that empowers employees at all levels to improve processes, create new products, or invent new ways of doing business. This designation is awarded to organizations that demonstrate a serious and established commitment to building a culture of innovation that yields tangible results.
It's a bird! It's a plane! No, it's Super Clusters!
One of the most valuable capabilities that organizations enjoy with Turbonomic is the ability to create what we call “super clusters”. A super cluster is a virtual resource pool composed of physical clusters in your environment.
As a part of Turbonomic’s continued effort to support diversity in leadership, we are excited to sign on as founding members of the ParityPledge in Support of People of Color. This pledge asks companies to commit to interview at least one qualified person of color for every open leadership role, VP and higher, including the C-suite and Board of Directors.
Turbonomic is thrilled to announce that we have been named an EMA Vendor to Watch! This designation is awarded to companies that deliver unique customer value or provide value in innovative ways. It honors vendors that dare to go off the beaten path and have defined their own market niches – Turbonomic’s market niche being Application Resource Management (ARM). This designation recognizes Turbonomic’s ARM as a leading solution in the AIOps market for its valuable combination of analytics and automation.
There is no shortage of confusion about how CPU queueing works and how it ultimately affects your application and environment performance. Virtualization gave the industry something wonderful by enabling the sharing of physical hardware resources, but it also opened the door to hidden issues that IT ops and application developers still struggle with every day.
Let’s quickly review what CPU queueing is and how processor wait times can have a catastrophic effect further up the stack.
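As a quick illustration of why those wait times matter: hypervisors such as vSphere report CPU ready time as a summation in milliseconds per sampling interval, and converting it to a percentage makes the queueing visible. The helper below is an illustrative sketch, assuming the commonly cited vSphere conversion and its default 20-second real-time sampling interval:

```python
def cpu_ready_percent(ready_summation_ms: float, interval_seconds: float = 20.0) -> float:
    """Convert a CPU ready summation (ms per sampling interval) to a percentage.

    Assumes the commonly cited vSphere conversion:
    ready % = (summation_ms / (interval_seconds * 1000)) * 100
    """
    return ready_summation_ms / (interval_seconds * 1000.0) * 100.0

# A vCPU that spent 1,000 ms waiting in a 20-second sample was queued 5% of the time.
print(cpu_ready_percent(1000))  # -> 5.0
```

Even a few percent of ready time per vCPU can translate into noticeable latency further up the stack.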
Welcome to the fourth article in our "Mastering Cloud Cost Optimization" series. I’m joining you from ParkMyCloud, a Turbonomic company that focuses on cloud cost optimization – in particular, scheduled workload suspension, which is what we’ll focus on in this article.
Now that we have our requirements and constraints defined from our first post, and our working single VM infrastructure-as-code built from our second post, it’s time to start the big build!
We already mapped out our desired multi-VM architecture in our initial discussions with the dev team. The bonus of this architecture is that we are also leaning into a services-style approach. That means we may be able to break out our SQL and eventual NoSQL clusters as shared services and even port them to a PaaS in the cloud if desired. Everything we do should be done with an eye on the future desired state.
Codifying our Multi-VM Deployment
Let’s begin with the simplest layout of what our virtual machine infrastructure needs to look like. Our public code repository shows the real version of what we are building. That matters because we will save some space in the blog by focusing on specific snippets. Please refer to the repository for the full code as you test this out yourself.
In the Monolith to Multi-VM folder under Part 2, we built our single monolithic VM using a VMware vSphere template and Terraform to quickly spin up and tear down a clone from that template. That’s the easy bit.
This is where we turn our attention to Monolith to Multi-VM Part 3, the more robust version. For now, we are using a basic template; we will worry about the application deployment in part 4. We just want to take this on in steps that are repeatable and consistent, which is the goal of any automation: consistency of outcome that lets you safely speed up the velocity of deployment.
Start with Bad Ideas which Become Good Practices
You’re quite literally getting some bad practices in this code set, because we want to show the progression from doing things manually to making the jump to efficient, codified infrastructure-as-code.
What this repository does that is not the best idea is literally map out the code for each VM clone as a standalone code block. Part 4 of the series will then show you how to DRY (Don’t Repeat Yourself) up that code. What I always dread as a blog reader is when new methods are introduced so rapidly that you never get a chance to see how the machine is made. That’s where I hope you’re finding this progressive blog series style helpful.
Any file you create in the folder with a .tf extension will be evaluated by Terraform as part of the provisioning. You can create these as a large single file, or separate logical files depending on your preference. I prefer to do a separate file for logical groupings of infrastructure. We won’t get into Modules yet…save that for the next blog series!
Building the Three Application VMs
You will be using a single basic template across all of these clustered nodes, which makes it easy. Then we can move the goalposts a bit further and add in the app deployments as we close out the series.
Let’s use a single file to describe all three of our application VMs. The file we will reference is the app-vms.tf file, located here in the GitHub repository.
Our new code will include three resources, each coded with the name of the VM to make them unique. The uniqueness is needed or else Terraform will complain of duplicate resources. In our first blog we only used the name “vm” whereas this will use “vm-app-1”, “vm-app-2”, and “vm-app-3” for both the app-vms.tf file and the outputs.tf file.
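To make the pattern concrete, here is a hedged sketch of one of the three resource blocks in app-vms.tf. The data source names and sizing values below are placeholders, not the repository’s exact code, so refer to the repo for the real configuration:

```hcl
# Illustrative sketch of one of the three app VM clones.
# Data sources (pool, datastore, network, template) are assumed to be
# defined elsewhere in the configuration.
resource "vsphere_virtual_machine" "vm-app-1" {
  name             = "vm-app-1"
  resource_pool_id = data.vsphere_resource_pool.pool.id
  datastore_id     = data.vsphere_datastore.datastore.id
  num_cpus         = 2
  memory           = 4096
  guest_id         = data.vsphere_virtual_machine.template.guest_id

  network_interface {
    network_id = data.vsphere_network.network.id
  }

  disk {
    label = "disk0"
    size  = data.vsphere_virtual_machine.template.disks.0.size
  }

  clone {
    template_uuid = data.vsphere_virtual_machine.template.id
  }
}
```

The vm-app-2 and vm-app-3 resources repeat this block with only the names changed, which is exactly the repetition we will clean up later.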
The outputs generated by the Terraform configuration in our outputs.tf are also explicitly set by the machine names.
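As a sketch of what that looks like, the outputs.tf stanzas mirror the resource names one-for-one (attribute names here follow the vSphere provider; see the repository for the exact file):

```hcl
# One output stanza per VM, explicitly named after each machine.
output "vm-app-1-ip" {
  value = vsphere_virtual_machine.vm-app-1.default_ip_address
}

output "vm-app-2-ip" {
  value = vsphere_virtual_machine.vm-app-2.default_ip_address
}

output "vm-app-3-ip" {
  value = vsphere_virtual_machine.vm-app-3.default_ip_address
}
```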
There is repetitive code that we will remove in part 4 of the blog series. Typing terraform output renders the IPv4 and IPv6 addresses that are picked up by the VMs as they launch.
Next up are the two database clusters, each composed of three virtual machines.
Building the SQL 3-Node Cluster VMs
The file we will reference for our SQL cluster is the sql-3-node.tf file, located here in the GitHub repository. We are using the same simple template just to illustrate the cloning process. The next blog will show how to run inline and remote commands to complete some install scripts for the applications.
That takes care of three app servers and three SQL servers to prepare for the MariaDB cluster. The last step is our future requirement of a NoSQL environment. This is an advantage of doing IaC: we can do a lot of heavy lifting easily in code. Yay!
Building the MongoDB 3-Node Cluster VMs
The final part of our multi-VM deployment is our MongoDB servers. The file we will reference for the NoSQL cluster configuration is the mongo-3-node.tf file located here in the GitHub repository.
The same will go for our actual MongoDB install, which will be done with some easy scripts in the next blog.
Just like that, we now have nine servers that can be torn down and redeployed just by typing terraform destroy and then terraform apply again to clone a new set.
You can also see the final outputs file with all 9 output stanzas together here.
What Have We Learned and What’s Next?
Now you know how to expand your deployment with multiple virtual machines using the Terraform configuration files. What you’ve learned here is that:
- Every file ending with .tf will be evaluated and provisioned by Terraform
- You can use one or more .tf files – multiple files can be easier to organize by groups of VMs
- Repetitive code creates work but seems simple at first – welcome to technical debt
- Codifying at the outset means we can easily add components on the fly
The next step is to bring in some simple loops to remove repetitive code and to tighten up our scripts a bit. This won’t change the output at all but will simplify how much code we have. This will prove to be helpful when we go beyond just 3 machines in the cluster. Every step makes your IaC chops more versatile!
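As a hedged preview of where part 4 is headed, a single resource block with for_each can replace the three near-identical app VM blocks. The variable name and the exact arguments below are assumptions for illustration, not the repository’s final code:

```hcl
# One set of VM names drives one resource block instead of three copies.
variable "app_vms" {
  type    = set(string)
  default = ["vm-app-1", "vm-app-2", "vm-app-3"]
}

resource "vsphere_virtual_machine" "app" {
  for_each = var.app_vms

  name             = each.key
  resource_pool_id = data.vsphere_resource_pool.pool.id
  datastore_id     = data.vsphere_datastore.datastore.id

  clone {
    template_uuid = data.vsphere_virtual_machine.template.id
  }
  # ...remaining arguments identical to the standalone version...
}

# A single output can then collect every address in one map.
output "app_vm_ips" {
  value = { for k, vm in vsphere_virtual_machine.app : k => vm.default_ip_address }
}
```

Adding a tenth machine then becomes a one-line change to the variable rather than a new copy-pasted block.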
Welcome to the third article in our "Mastering Cloud Cost Optimization" series. This series was designed to help cloud users maximize the value of the cloud by sharing best practices and expert knowledge based on our experience. In this article, we will focus on leveraging the right cost model for your cloud workloads.
Turbonomic is thrilled to announce that we have taken the ParityPledge. This is a public commitment that for every open position Vice President and above (including C-suite and Board), we will interview at least one woman for the job. While Turbonomic has always made a conscious effort to incorporate this practice into our hiring, we believe that publicly committing to the ParityPledge will promote better accountability and urge other companies to do the same.