Turbonomic Blog

IaC Powering a Monolith to Multi-VM Architecture – Part 3

Posted by Eric Wright on Jun 25, 2020 2:43:32 PM

Now that we have our requirements and constraints defined from our first post, and our working single VM infrastructure-as-code built from our second post, it’s time to start the big build!

We already mapped out our desired multi-VM architecture during our initial discussions with the dev team. The bonus of this architecture is that we are also leaning into a services-style approach. That means we may be able to break out our SQL and eventual NoSQL clusters as shared services, and even port them to a PaaS in the cloud if desired. Everything we do should be done with an eye on the future desired state.


[Figure: C2CN CloudRush desired multi-VM physical architecture]

Codifying our Multi-VM Deployment

Let’s begin with the simplest layout of what our virtual machine infrastructure needs to look like. Our public code repository holds the real, working version of what we are building; in the blog we will save some space by focusing on specific snippets. Please refer to the repository for the full code as you test this out yourself.

In the Monolith to Multi-VM folder under Part 2, we built our single monolithic VM using a VMware vSphere template and Terraform to quickly spin up and tear down a clone from that template. That’s the easy bit.

This is where we turn our attention to Monolith to Multi-VM Part 3, which is the more robust version. For now, we are using a basic template and will worry about the application deployment in part 4. We just want to take this on in steps that are repeatable and consistent, which is the goal of any automation: consistency of outcome that lets you safely increase the speed of deployment.

Start with Bad Ideas which Become Good Practices

You’re quite literally getting some bad practices in this code set, because we want to show the progression from doing things manually to making the jump to efficient, codified infrastructure-as-code.

What this repository does that is not the best idea is to literally map out the code for each VM clone as a standalone code block. Part 4 of the series will then show you how to DRY (Don’t Repeat Yourself) up that code. What I always dread as a blog reader is when new methods are introduced so rapidly that you never get a chance to see how the machine is made. That’s why I hope you’re finding this progressive blog series style helpful.

Any file you create in the folder with a .tf extension will be evaluated by Terraform as part of the provisioning. You can create these as one large file or as separate logical files, depending on your preference. I prefer a separate file for each logical grouping of infrastructure. We won’t get into Modules yet…save that for the next blog series!

Building the Three Application VMs

You will be using a single basic template across all of these clustered nodes, which keeps things easy. Then we can move the goalposts a bit further and add in the app deployments as we close out the series.

Let’s use a single file to describe all three of our application VMs. The file we will reference is the app-vms.tf file, located here in the GitHub repository.

Our new code will include three resources, each named after its VM to keep them unique. The uniqueness is needed or else Terraform will complain about duplicate resources. In our previous post we only used the name “vm”, whereas this will use “vm-app-1”, “vm-app-2”, and “vm-app-3” in both the app-vms.tf file and the outputs.tf file.

Example app-vms.tf:

[Screenshot: app-vms.tf example]
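
Since the repository is the source of truth, here is just a minimal sketch of what one of the three resource blocks in app-vms.tf might look like, assuming the vSphere data sources (datacenter, datastore, resource pool, network, and template) already defined in the repository’s main configuration; the names and sizing values below are placeholders, so check the repository for the exact code:

  # One of three nearly identical clone resources; vm-app-2 and vm-app-3
  # repeat this block with only the resource and VM names changed.
  resource "vsphere_virtual_machine" "vm-app-1" {
    name             = "vm-app-1"
    resource_pool_id = data.vsphere_resource_pool.pool.id
    datastore_id     = data.vsphere_datastore.datastore.id

    num_cpus = 2    # placeholder sizing
    memory   = 4096 # placeholder sizing, in MB
    guest_id = data.vsphere_virtual_machine.template.guest_id

    network_interface {
      network_id   = data.vsphere_network.network.id
      adapter_type = data.vsphere_virtual_machine.template.network_interface_types[0]
    }

    disk {
      label            = "disk0"
      size             = data.vsphere_virtual_machine.template.disks[0].size
      thin_provisioned = data.vsphere_virtual_machine.template.disks[0].thin_provisioned
    }

    # Clone from the basic VM template used in part 2
    clone {
      template_uuid = data.vsphere_virtual_machine.template.id
    }
  }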

The outputs generated by the Terraform configuration in our outputs.tf file are also explicitly defined per machine name.

Example outputs.tf:

[Screenshot: outputs.tf example]
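
As a rough sketch of the pattern (the output names here are placeholders, but default_ip_address and guest_ip_addresses are standard attributes of the vsphere_virtual_machine resource, reported back by VMware Tools on the guest):

  # Repeated for vm-app-2, vm-app-3, and later the database VMs
  output "vm-app-1-default-ip" {
    value = vsphere_virtual_machine.vm-app-1.default_ip_address
  }

  output "vm-app-1-all-ips" {
    value = vsphere_virtual_machine.vm-app-1.guest_ip_addresses
  }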

There is repetitive code here that we will remove in part 4 of the blog series. Typing terraform output displays the IPv4 and IPv6 addresses that the VMs pick up as they launch.

[Screenshot: terraform output results for the three app VMs]

Next up are the two database clusters, each made up of three virtual machines.

Building the SQL 3-Node Cluster VMs

The file we will reference for our SQL cluster is the sql-3-node.tf file, located here in the GitHub repository. We are using the same simple template just to illustrate the cloning process. The next blog will show how to run inline and remote commands to complete some install scripts for the applications.

[Screenshot: sql-3-node.tf example]
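
The resource blocks follow the exact same shape as the app VMs, just with database-flavored names; a sketch of the first of the three (names are again placeholders, with the full code in the repository):

  # vm-db-2 and vm-db-3 repeat this block with only the names changed
  resource "vsphere_virtual_machine" "vm-db-1" {
    name             = "vm-db-1"
    resource_pool_id = data.vsphere_resource_pool.pool.id
    datastore_id     = data.vsphere_datastore.datastore.id

    num_cpus = 2
    memory   = 4096
    guest_id = data.vsphere_virtual_machine.template.guest_id

    network_interface {
      network_id = data.vsphere_network.network.id
    }

    disk {
      label = "disk0"
      size  = data.vsphere_virtual_machine.template.disks[0].size
    }

    clone {
      template_uuid = data.vsphere_virtual_machine.template.id
    }
  }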

That takes care of three app servers and three SQL servers in preparation for the MariaDB cluster. The last step is our future requirement of a NoSQL environment. This is an advantage of doing IaC: we can do a lot of heavy lifting easily in code. Yay!

Building the MongoDB 3-Node Cluster VMs

The final part of our multi-VM deployment is our MongoDB servers. The file we will reference for the NoSQL cluster configuration is the mongo-3-node.tf file located here in the GitHub repository.

The same will go for our actual MongoDB install, which will be done with some simple scripts in the next blog.

[Screenshot: mongo-3-node.tf example]
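
Once more, the same clone pattern with MongoDB-specific names; an abbreviated sketch of the first node (CPU and memory sizing lines are omitted here for brevity, and the names remain placeholders for the real code in the repository):

  # The body matches the app and SQL nodes; only the names change.
  # vm-mongo-2 and vm-mongo-3 repeat the same block.
  resource "vsphere_virtual_machine" "vm-mongo-1" {
    name             = "vm-mongo-1"
    resource_pool_id = data.vsphere_resource_pool.pool.id
    datastore_id     = data.vsphere_datastore.datastore.id
    guest_id         = data.vsphere_virtual_machine.template.guest_id

    network_interface {
      network_id = data.vsphere_network.network.id
    }

    disk {
      label = "disk0"
      size  = data.vsphere_virtual_machine.template.disks[0].size
    }

    clone {
      template_uuid = data.vsphere_virtual_machine.template.id
    }
  }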

Just like that we now have nine servers that can be torn down and redeployed simply by typing terraform destroy and then terraform apply again to clone a new set.

You can also see the final outputs file with all 9 output stanzas together here.

What Have We Learned and What’s Next?

Now you know how to expand your deployment to multiple virtual machines using Terraform configuration files. What you’ve learned here is that:

  1. Every file ending in .tf will be evaluated by Terraform as part of provisioning
  2. You can use one or more .tf files – multiple files can be easier to organize by groups of VMs
  3. Repetitive code seems simple at first but creates work later – welcome to technical debt
  4. Codifying at the outset means we can easily add components on the fly

The next step is to bring in some simple loops to remove the repetitive code and tighten up our configuration a bit. This won’t change the resulting infrastructure at all, but it will cut down how much code we maintain, which will prove helpful when we go beyond just three machines in a cluster. Every step makes your IaC chops more versatile!
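
As a small preview of that next step, and assuming the same data sources as above, a count-based loop could collapse the three app VM blocks into one; treat this as a sketch of the idea rather than the final code from part 4:

  # One resource block stamps out vm-app-1 through vm-app-3
  resource "vsphere_virtual_machine" "vm-app" {
    count            = 3
    name             = "vm-app-${count.index + 1}"
    resource_pool_id = data.vsphere_resource_pool.pool.id
    datastore_id     = data.vsphere_datastore.datastore.id
    guest_id         = data.vsphere_virtual_machine.template.guest_id

    network_interface {
      network_id = data.vsphere_network.network.id
    }

    disk {
      label = "disk0"
      size  = data.vsphere_virtual_machine.template.disks[0].size
    }

    clone {
      template_uuid = data.vsphere_virtual_machine.template.id
    }
  }

More on that, plus the matching changes to the outputs, in part 4.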
