Challenge:
One of the key benefits of the public cloud is scalability, and an organization's success in the cloud depends largely on how well it automates application deployment to take advantage of the cloud's full capabilities.
Immutable Infrastructure Paradigm: A process or framework for deploying and managing cloud resources. In this paradigm, once a resource is deployed it cannot be modified externally (manual or UI changes are not accepted). If an update is required, a new resource is created to replace the existing one, or the code is updated to deploy a new version of the resource.
Benefits of Immutable Infrastructure:
1. Elimination of configuration drift
2. Defining images as code
3. Easy software rollbacks in case of failure
4. Zero downtime upgrades
5. Predictable environments
Immutability: The term immutability applies to software development, database theory and infrastructure.
In software development – an immutable object is one whose state cannot be modified after it is created. If the object needs to be updated, it is destroyed and replaced with a new object that has the required attributes.
In database management – an immutable database is one where, once a row is created, it cannot be updated. If a change is required, the existing row is either deleted or made obsolete by incrementing a version number, and a new row is created with the updated data.
The same principle applies to infrastructure.
In immutable infrastructure, when a resource is deployed, it cannot be updated. If an update is required, a new resource is created and the existing one is deleted. For example, after a Virtual Machine (VM) is deployed in a cloud environment where immutable infrastructure is being enforced, one cannot log in to the VM to update a configuration file, apply an operating system patch, update an application, etc. To update any of these items, a new VM with the updated items would be created to replace the existing VM.
Benefits and Use Cases of Immutable Infrastructure
Eliminate Configuration Drift
Summary: The overall state of your infrastructure is known at all times, helping you avoid manual errors or having to deal with failure points that come with using configuration management tools.
In the mutable infrastructure paradigm, updates to a server are made either manually or using a configuration management tool such as Ansible, Puppet or Chef. However, both of these approaches introduce many points of failure, such as forgetting to update a file on one server vs. another or connectivity issues between an agent and its control server.
The immutable infrastructure approach avoids these pitfalls by introducing the idea of a single image. This greatly simplifies the deployment process, increases reliability and enforces consistency.
Codified definition of images
Summary: Images deployed in the Immutable Infrastructure paradigm are codified using templates and are stored in version control.
Tools such as HashiCorp Packer allow us to create text-based templates that clearly define the image format, operating system, software and configuration of the server that we want to deploy. This means the templates can be added to version control systems such as Git and reap all the benefits that version control brings to the table (e.g., audits, merge/pull requests for approval, etc.). Some organizations also store the built deployment images in an artifact repository such as Artifactory, which adds another layer of versioning.
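For example, checking a Packer template into Git gives the image definition the same audit trail as application code. A minimal sketch (the file, tag and commit names are hypothetical):

```shell
# Put a Packer template under version control (names are examples).
cd "$(mktemp -d)"
git init -q
git config user.email "demo@example.com"   # identity needed for the commit below
git config user.name "Demo User"
printf '{ "builders": [] }\n' > nginx-image.json   # placeholder template content
git add nginx-image.json
git commit -q -m "Add initial nginx image template"
git tag v1.0.0                    # tag this image definition for later rollback
git log --oneline                 # audit trail of every image change
```

From here, changes to the image go through the same review gates (pull requests, tags) as any other code.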
Easy rollbacks
Summary: Images are created from templates stored in version control, or the images themselves are kept in a version-controlled artifact repository, so reverting to an older version of your infrastructure is as easy as building and/or deploying a previous version of the template or image.
In the mutable infrastructure paradigm, servers are deployed and then updated either manually or via configuration management. If an update results in unexpected behavior and needs to be reverted, things can become cumbersome and error-prone depending on the state of each server and which part of the update failed.
In the immutable infrastructure paradigm, reverting an update simply involves rebuilding and deploying a previous version of the infrastructure or simply deploying the previously built image.
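With the template in version control, the rollback amounts to restoring the previous revision of the template and rebuilding. A minimal local sketch (file names and contents are examples; the actual `packer build` step is left as a comment):

```shell
# Roll back a template to its previous committed version (names are examples).
cd "$(mktemp -d)"
git init -q
git config user.email "demo@example.com"
git config user.name "Demo User"
printf 'image v1\n' > nginx-image.json
git add nginx-image.json && git commit -q -m "v1 of image template"
printf 'image v2 (bad update)\n' > nginx-image.json
git add nginx-image.json && git commit -q -m "v2 of image template"
# Revert: restore the template from the previous commit, then rebuild the image.
git checkout -q HEAD~1 -- nginx-image.json
cat nginx-image.json              # prints "image v1"; next: packer build nginx-image.json
```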
Zero-downtime upgrades
Summary: Existing infrastructure stays intact during the update process, so there are no "planned outages".
In the mutable infrastructure paradigm, if an application is deployed on multiple servers behind a load balancer, updating or reverting a given server requires removing that server from the pool. This increases the load on the remaining servers or requires planned downtime.
In the immutable infrastructure paradigm, to revert an update in this scenario, simply add servers built from the older image to the pool and phase out servers running the newer image, repeating until all of the newly imaged servers have been replaced.
Predictable environments
Summary: All instances of an image are based on the same template so it is easier to maintain consistency from a testing perspective.
Each server in every environment is created from the same image, so it is easy to create identical test, stage and production environments. This eliminates variances between environments, makes troubleshooting easier and gives greater confidence when promoting updates to production.
Software and Tools Used in This Test Deployment: I will be using the platforms and tools below to demonstrate a live environment setup.
Microsoft Azure
HashiCorp Packer
Environment Setup: Preparing Microsoft Azure and HashiCorp Packer for Image Management
Prerequisites: An Azure subscription and an Azure resource group
Creating a Service Principal: A service principal is the local representation, or application instance, of a global application object in a single tenant or directory.
A service principal is a concrete instance created from the application object and inherits certain properties from that application object.
Practically, a service principal is often used as a service account. Packer uses a service principal to authenticate with Azure.
Azure CLI
# az ad sp create-for-rbac --query "{ client_id: appId, client_secret: password, tenant_id: tenant }"
We now have the required authentication and authorization pieces that Packer needs to create our custom VM image and store it in Azure.
Use export to set the client_id, client_secret, tenant_id and subscription_id values as environment variables so that Packer can use these credentials.
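For example (the values below are placeholders, not real credentials; the subscription ID comes from your own Azure account):

```shell
# Export the service principal credentials so the template's {{env `ARM_*`}}
# lookups can find them (placeholder values shown).
export ARM_CLIENT_ID="00000000-0000-0000-0000-000000000000"       # client_id from the sp output
export ARM_CLIENT_SECRET="<client_secret from the sp output>"
export ARM_TENANT_ID="00000000-0000-0000-0000-000000000000"       # tenant_id from the sp output
export ARM_SUBSCRIPTION_ID="00000000-0000-0000-0000-000000000000" # from: az account show --query id -o tsv
```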
Packer Template:
A Packer template is a file written in JSON that tells Packer how to create a machine image.
Here is the Packer template, which takes a base CentOS image and uses provisioners to install and enable nginx:
{
  "variables": {
    "my_client_id": "{{env `ARM_CLIENT_ID`}}",
    "my_client_secret": "{{env `ARM_CLIENT_SECRET`}}",
    "my_tenant_id": "{{env `ARM_TENANT_ID`}}",
    "azure_subscription_id": "{{env `ARM_SUBSCRIPTION_ID`}}",
    "my_id": "{{env `ID`}}"
  },
  "sensitive-variables": [
    "my_client_id",
    "my_client_secret",
    "my_tenant_id",
    "azure_subscription_id"
  ],
  "builders": [
    {
      "type": "azure-arm",
      "client_id": "{{user `my_client_id`}}",
      "client_secret": "{{user `my_client_secret`}}",
      "tenant_id": "{{user `my_tenant_id`}}",
      "subscription_id": "{{user `azure_subscription_id`}}",
      "custom_managed_image_resource_group_name": "sandeep-cus-rg",
      "custom_managed_image_name": "centos-developer-image",
      "managed_image_resource_group_name": "sandeep-cus-rg-{{user `my_id`}}",
      "managed_image_name": "myPackerImage-{{user `my_id`}}",
      "os_type": "Linux",
      "location": "Central US",
      "vm_size": "Standard_A1_v2",
      "plan_info": {
        "plan_name": "centos8minimal",
        "plan_product": "centos8",
        "plan_publisher": "tunnelbiz"
      }
    }
  ],
  "provisioners": [
    {
      "type": "shell",
      "execute_command": "chmod +x {{ .Path }}; {{ .Vars }} sudo -E sh '{{ .Path }}'",
      "inline_shebang": "/bin/sh -x",
      "inline": [
        "dnf update -y",
        "dnf install -y python3 nginx",
        "dnf autoremove -y",
        "systemctl enable nginx",
        "systemctl start nginx",
        "/usr/sbin/waagent -force -deprovision && export HISTSIZE=0 && sync"
      ]
    }
  ]
}
Packer Template Blocks:
Variables – The variables block defines what variables we will be using in the Packer template.
Sensitive Variables - The sensitive-variables block lets us define which variables to exclude from logs that Packer produces.
Builders - A builder defines what kind of machine image Packer should produce. A template's builders block can contain multiple builders, which is a major feature of Packer: a single template can build machine images for different platforms, such as Amazon Web Services, Microsoft Azure, Google Cloud Platform or VMware vSphere.
Provisioners - Packer uses provisioners to install and configure software on running machines before converting them into machine images.
Now that Packer has all that it needs to build a VM image, let's invoke the command to actually create and store the image in Azure:
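Assuming the template above is saved as nginx-image.json (the filename is an example) and the ARM_* and ID environment variables are exported, the build is invoked with:

```shell
packer validate nginx-image.json   # optional sanity check of the template
packer build nginx-image.json      # creates the VM, provisions it, captures the image
```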
During the process, Packer will:
- Authenticate with the provided credentials
- Create a temporary resource group
- Validate the Packer template
- Create a VM from the selected base image
- Execute the provisioning steps outlined in the Packer template
- Power off the VM
- Capture an image of the VM
- Store the image in the resource group defined in the Packer template
- Delete the temporary resource group created in the earlier step
Once the image is created and available, we can use it to launch VM instances.
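For example, with the Azure CLI (the VM name and admin user below are hypothetical; substitute the managed image name and resource group produced by the build):

```shell
# Launch a VM from the custom image built by Packer.
az vm create \
  --resource-group "sandeep-cus-rg-<id>" \
  --name nginx-vm-01 \
  --image "myPackerImage-<id>" \
  --admin-username azureuser \
  --generate-ssh-keys
```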
Immutable Infrastructure Workflow
In the immutable infrastructure paradigm, the update is deployed the same way as the initial VM:
- Update the required file (locally or in a version control system)
- Execute Packer to generate an image with the updated file in it
- Deploy the updated image
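The three steps above can be sketched as the following cycle (file and resource names are examples):

```shell
# Immutable update cycle (sketch)
git commit -am "Update nginx config"   # 1. record the change in version control
packer build nginx-image.json          # 2. bake a new image containing the change
# 3. deploy VMs from the new image and phase out VMs running the old one
```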