How to Create Infrastructure as Code with Packer and Terraform on GCP: Your second step towards DevOps automation
Everyone wants to adopt Infrastructure as Code to configure and automate their application's infrastructure, since it embraces DevOps best practices. Figuring out ways to automate your CI/CD pipeline is at the heart of this transition, and is a huge interest of mine.
What we did last time
My last blog post was called ‘How to Deploy Consul in GCP using Terraform: Your first step towards DevOps automation.’
In my last post, I showed you how to fully automate the process of spinning up an instance in GCP. It demonstrated how to use Hashicorp's Terraform and Ansible to install a vanilla version of Consul, a Hashicorp tool that discovers and configures services in your infrastructure, within your instance. This presented you with a blank canvas to configure however you want.
What we’ll do this time
We'll now bring that process to an actual use case and make it production-ready. While the last post gave you the context for automating instances in GCP, this post will show you how to automate the deployment of multiple instances. This will be done with a managed instance group that can auto-scale and auto-heal your environment. In turn, this will allow you to run these deployments in the cloud and, in particular, fully leverage the auto-scaling and auto-healing abilities of GCP as a cloud provider.
Deploying these instances in the cloud allows you to both scale your application according to demand and have a multi-zone, or even multi-region, deployment. You could deploy your servers in, for example, both the 'northamerica-northeast1' and 'us-central1' regions, or within one region using all the zones available. This distributes your infrastructure and means there's still a deployment serving traffic if one of your servers goes down. In this setup, you deploy a minimum of three instances, and auto-scaling can take you up to five. By having a managed instance group automate the deployment of your instances, you can fully leverage the power of the Google Cloud Platform.
How we’ll do it
Instead of talking about Consul, we'll focus on Apache Web Server this time. We'll again use Terraform but, most importantly, we'll be baking Apache into an Ubuntu image taken from Google. By baking, I mean putting your own spin on a standard image to create a template. This template will save you time whenever you spin up an Apache server instance. The automation lets you skip preparatory steps, such as installing Apache into every VM or instance.
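To make "baking" more concrete, here is a minimal sketch of the kind of provisioning a Packer build runs inside its temporary build VM. The real steps live in apache.json in the repo; the package names below are assumptions based on an apt-based Ubuntu image.
#!/bin/bash
# Hypothetical provisioning sketch: what baking Apache into the image amounts to.
set -e
sudo apt-get update
sudo apt-get install -y apache2 stress   # stress is used later to test auto-scaling
sudo systemctl enable apache2            # make sure Apache starts on every boot of the baked image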
Note. GCP updates their images very frequently (even more frequently than Ubuntu Cloud images), so it's important to include image baking in your CI/CD pipeline if you hope to have a consistently up-to-date infrastructure. Your automated processes should always be running freshly-baked images, and your deployments should be based on a current OS.
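If you're curious which base image Packer will pick up, you can ask GCP for the latest image in the public Ubuntu 16.04 family. The family and project names below are the standard public ones; verify them against the source image configured in apache.json.
gcloud compute images describe-from-family ubuntu-1604-lts --project ubuntu-os-cloud --format="value(name,creationTimestamp)"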
Preparation
Step 1 – Get your service account information by following the steps in my first blog post (Steps 2 and 3).
Make sure your service account has the following roles: Compute Admin, Service Account User, and Storage Admin.
Step 2 – Get the code from my repo https://github.com/sveronneau/gcp-mig-lb.
Step 3 – Update the apache.json and variables.tf files with your GCP account information.
Step 4 – Install Packer and Terraform in the gcp-mig-lb folder to make things simple.
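Once both binaries are in place, a quick sanity check from the folder confirms they are callable; these subcommands just print the installed versions.
packer version
terraform version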
Workflow
Step 1 – Use Packer to bake Apache Web Server into a standard GCP Ubuntu 16.04 LTS image, making it your own custom version.
packer validate apache.json
packer build apache.json
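Once the build finishes, the custom image is stored under your project's Compute Engine images. If you'd rather not open the console, this listing should show it (the --no-standard-images flag hides the public base images):
gcloud compute images list --no-standard-images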
Step 2 – Use Terraform to build our infrastructure.
terraform init
terraform plan
terraform apply
What does the Terraform script do?
From the custom image, a template will be created, and from that, we can create a managed instance group. This will deploy three identical instances based on our baked image. It will also inject metadata that creates a static web page containing the server's name and its internal and external IPs.
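As a rough idea of how that metadata works, here is a hypothetical sketch of the kind of startup script it could inject. The real script lives in the repo's Terraform code; the metadata-server URLs below are the standard GCP endpoints for an instance's internal and external IPs.
#!/bin/bash
# Hypothetical startup-script sketch: build a static page with the server's name and IPs.
NAME=$(hostname)
INTERNAL_IP=$(curl -s -H "Metadata-Flavor: Google" http://metadata.google.internal/computeMetadata/v1/instance/network-interfaces/0/ip)
EXTERNAL_IP=$(curl -s -H "Metadata-Flavor: Google" http://metadata.google.internal/computeMetadata/v1/instance/network-interfaces/0/access-configs/0/external-ip)
cat <<EOF > /var/www/html/index.html
<h1>$NAME</h1>
<p>Internal IP: $INTERNAL_IP</p>
<p>External IP: $EXTERNAL_IP</p>
EOF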
The Terraform script also creates a firewall rule that allows HTTP traffic to reach that group, so web requests can actually get to our servers.
Finally, it creates the front-end and back-end pieces of the HTTP load balancer (a backend service, URL map, and forwarding rules) that sit in front of the group.
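Once 'terraform apply' completes, you can confirm these pieces exist without opening the console. The commands below are standard gcloud listings; the resource names you'll see come from the repo's Terraform code (for example, apache-rmig for the instance group).
gcloud compute instance-templates list
gcloud compute instance-groups managed list
gcloud compute forwarding-rules list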
Step 3 – Step back and appreciate what you’ve done.
Wait a bit, then open a browser with the IP of your GCP load balancer. That IP can be found in (Network Services / Load balancing). Click on http-lb-url-map and look in the Frontend section, protocol HTTP. You'll see your public IP there.
Open http://frontend_public_ip in your browser.
Hit refresh a few times. You'll see that you land on your Apache servers at random.
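If you prefer the command line, a quick curl loop shows the same behaviour. frontend_public_ip is the same placeholder as above; the exact HTML you get back depends on the startup script that built the page.
for i in $(seq 1 10); do
  curl -s http://frontend_public_ip/
done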
Cleanup – When you are ready to destroy what was created by Terraform, run:
terraform destroy
The golden image created by Packer will not be deleted. You'll need to go into (Compute Engine / Images) to delete it.
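You can also delete it from the command line. The image name below is a placeholder; 'gcloud compute images list --no-standard-images' will show you the actual name Packer generated.
gcloud compute images delete YOUR_APACHE_IMAGE_NAME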
There you have it!
You have successfully created an auto-scaling, auto-healing managed instance group that is fronted by a Google Cloud load balancer. While the entire deployment shown above could be done manually, we've automated it with Terraform. This allows you to deploy it at scale, in the cloud, and across multiple zones or regions. The result is an immutable infrastructure from beginning to end, a true Infrastructure as Code, DevOps approach.
Testing Auto-Healing: You can easily see this in action by going into (Compute Engine / Instance groups / apache-rmig), selecting one of the instances, and deleting it. You'll see a new one take its place automatically.
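The same test can be run from the command line. The instance name and region below are placeholders; use --region for a regional group like this one, or --zone for a zonal group.
gcloud compute instance-groups managed delete-instances apache-rmig --instances=INSTANCE_NAME --region=YOUR_REGION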
Testing Auto-Scaling: The stress tool has been installed in the golden image we've baked. To stress an instance and trigger auto-scaling, go into (Compute Engine / Instance groups / apache-rmig) and click SSH under the Connect option of the instance of your choosing. Once you are inside the instance, just run (stress -c 4) and CPU utilization will spike to 100% on that instance, triggering auto-scaling after a minute. When you terminate the stress tool, the scale-down process can take up to 10 minutes.
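If you'd rather not have to remember to kill the stress process yourself, the tool accepts a timeout. The duration below is an arbitrary example.
stress -c 4 --timeout 300   # peg 4 CPU workers for 5 minutes, then exit on its own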
More on Managed Instance Groups (MIG): MIGs allow you to perform rolling upgrades in a canary fashion when you want to push an updated version of your template. We've only touched the surface of MIGs in this post. To know more, just follow this link.
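As a hint of what that looks like, here is a minimal sketch of a canary-style rolling update using gcloud. The template names, target size, and region are hypothetical; they assume you've baked a new image and created a second instance template from it.
gcloud compute instance-groups managed rolling-action start-update apache-rmig --version=template=apache-template-v1 --canary-version=template=apache-template-v2,target-size=1 --region=YOUR_REGION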
What next?
Finally, in an all-in approach, we could redo all this and more by using Google Cloud's own Deployment Manager instead of Terraform. Built by Google, Deployment Manager is their own version of Terraform, specifically designed for their cloud platform. It includes beta and alpha features that customers want to use but that are not usually available in Terraform until they become generally available.
This will be the topic of my third post in this series on automation and GCP. So keep an eye out for more!
Now go forth and Automate All The Things!
CloudOps offers DevOps solutions with a wide range of expertise. Check out our hands-on workshops on Infrastructure as Code, and contact us to learn more about our expertise and what we can do for your organization.
Stacy Véronneau
A Senior Cloud Architect at CloudOps, Stacy Véronneau also works closely with Google Cloud Platform (GCP) and OpenStack. He's currently working with Google to help customers migrate to GCP and fully leverage its power. Additionally, he is an official OpenStack Ambassador, has spoken at OpenStack Summits, and runs meetups throughout Canada.