To understand Infrastructure as Code (IaC), first let’s look at how infrastructure has traditionally been managed.
Historically, a consumer of infrastructure would manually file a ticket, and on the other side of the ticketing queue, someone else would log into a management portal and provision that piece of infrastructure. This could involve mounting servers in racks, installing operating systems, and connecting and configuring networks.
Once hardware had been requisitioned and installed, the money was spent; if the need receded, it couldn’t be reallocated. So while demand typically varied, an owned-and-operated data center historically had no elasticity.
The next step was virtual machines (VMs). This meant buying fewer, but bigger, machines and allocating portions of their resources. It didn’t solve the underlying problem of elasticity, but it did reduce the amount of provisioning required, since software installation could be baked into a standard image.
This manual provisioning process worked well enough when infrastructure demands were relatively low and churn was limited, which was indeed the case for many private data centers.
Adapting to the Cloud
As businesses increasingly migrate to the cloud and require greater on-demand scalability, the traditional infrastructure management processes aren’t sustainable.
At the same time, applications have undergone a shift in their architectural patterns with the introduction of microservices and containers. Hence, engineering teams now need to support not just the management of the infrastructure, but also the design of the services running on it, in order to take advantage of cloud provider services. This is one reason behind the widespread adoption of Kubernetes (k8s), which, along with cloud resources, offers the scalability that modern engineering teams demand.
Armon Dadgar, HashiCorp CTO, pinpoints two main reasons why infrastructure provisioning had to change: (i) the move to a cloud environment, which is predominantly API-driven; and (ii) a much higher elasticity of infrastructure than previously. We also need to take into account the broader shift within IT towards DevOps and agile practices, which has dramatically shortened software development cycles.
Supporting DevOps Practices
In a cloud/edge environment, development cycles are typically shorter than in a traditional data center setting, and releases are more frequent. This means features appear more often, and progress can be seen more frequently. Production feedback also arrives sooner, lessening the risk that what has been built won’t work for end users.
Because IaC promotes higher repeatability through declarative languages that can be version controlled, humans no longer have to set up every piece of infrastructure by hand, leading to a reduction in human trial and error.
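The declarative model at the heart of this can be illustrated with a minimal sketch (plain Python, not any particular IaC tool, and all resource names are made up): the code describes the desired end state as data, and a reconcile step works out which changes are needed, rather than a human issuing one-off commands.

```python
# Illustrative sketch of declarative reconciliation (not a real IaC
# engine): desired state is data, and the engine computes the actions
# needed to converge the actual state toward it.

def reconcile(desired: dict, actual: dict) -> list:
    """Return the actions needed to make `actual` match `desired`."""
    actions = []
    for name, spec in desired.items():
        if name not in actual:
            actions.append(f"create {name} ({spec})")
        elif actual[name] != spec:
            actions.append(f"update {name} -> {spec}")
    for name in actual:
        if name not in desired:
            actions.append(f"delete {name}")
    return actions

# Hypothetical resources purely for illustration.
desired = {"web": {"instances": 3}, "db": {"instances": 1}}
actual = {"web": {"instances": 2}, "cache": {"instances": 1}}

print(reconcile(desired, actual))
```

Because the desired state is just text, it can be stored in version control, reviewed, and re-applied: running reconcile against a state that already matches produces no actions at all.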
The demand for cloud/edge infrastructure is typically cyclical and, in order to optimize efficiency, should be scaled up or down depending on load at a given time. It makes sense to only use the infrastructure you need and spin it up or down to meet real-time demands. IaC makes this possible.
“Most outages are self-inflicted”
This was a point made in a recent talk on IaC by Brendan Burns, Kubernetes co-founder and Distinguished Engineer at Microsoft Azure. As he highlighted, it is rare that anyone intends to break a system and cause a multi-hour or multi-day outage, but accidents happen and lead to performance problems. Burns says, “The primary reason that these accidents happen is these snowflakes.” By snowflakes, he means the one-off, imperative changes developers make to a system: configurations that will never exist in exactly that form again. When you then have to roll back, you need to recreate that state, but you can’t.
Burns argues that the fix is to move to a more declarative, cloud-native mode of operations:
“And so we have to move from this world where we have a bunch of snowflakes to this world where we are automating everything that we do, where we are securing and addressing the compliance and really achieving all of the goals that we set out to do in moving from the imperative world of scripting to a more declarative, more reproducible future.” Brendan Burns, Kubernetes co-founder and Distinguished Engineer at Microsoft Azure
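The rollback point can be made concrete with a small sketch (illustrative Python only, with made-up resource names): when every applied desired state is kept as a versioned artifact, rolling back just means applying an earlier version again, rather than trying to remember what a snowflake looked like.

```python
# Sketch of why declarative definitions make rollbacks possible: each
# applied desired state is recorded, and "rolling back" simply means
# applying an earlier version again. Purely illustrative.

history = []  # every applied desired state, in order

def apply(desired: dict) -> dict:
    """Record the desired state and return it as the new live state."""
    history.append(dict(desired))
    return dict(desired)

live = apply({"web": 3, "db": 1})   # v1
live = apply({"web": 5, "db": 1})   # v2: scale up for a launch

# Rollback: re-apply v1. There is no manual state to reconstruct.
live = apply(history[0])
print(live)
```

In an imperative world, the v1 state would exist only as the side effect of whatever commands someone once ran; here it is data that can be re-applied at any time.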
We can go one step further with the emerging category of FinOps, where automation leads to the ability to set up and tear down infrastructure in an automated fashion to best optimize costs – for example, to match a day/night cycle or seasonal trends. Of course, this creates challenges around how to store, view, and investigate a much larger volume of metrics when hunting for bottlenecks.
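A day/night scaling policy of this kind can be sketched in a few lines. The thresholds and instance counts below are entirely made up for illustration; a real policy would be driven by observed load and cost data.

```python
# Hypothetical FinOps-style schedule: derive a desired instance count
# from the hour of day, so off-peak hours run less infrastructure.
# All numbers here are invented for illustration.

def desired_instances(hour: int) -> int:
    """Return how many instances to run for a given hour (0-23)."""
    if not 0 <= hour <= 23:
        raise ValueError("hour must be in 0..23")
    if 8 <= hour < 20:                    # business hours: full capacity
        return 10
    if 6 <= hour < 8 or 20 <= hour < 22:  # shoulder periods: ramping
        return 4
    return 2                              # overnight: minimal footprint

print(desired_instances(12), desired_instances(3))
```

Fed into a declarative IaC definition, a schedule like this turns cost optimization into just another version-controlled piece of the desired state.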
Underpinning the cloud-native approach to application delivery is the need to reduce operational complexity and maintain freedom of choice. DevOps teams thrive on flexibility and control over their workloads, particularly at the edge. Wikipedia’s definition of DevOps includes “establishing a culture and environment where building, testing, and releasing software can happen rapidly, frequently, and more reliably.” IaC is a critical part of this.
The benefits of IaC
Some of the benefits of using IaC include:
- Speed: IaC allows you to create and configure infrastructure elements in seconds simply by running a script, and you can do it for every environment, from dev through staging and QA to prod. This helps make the whole software development cycle more efficient.
- Consistency: This approach codifies the provisioning management process so that every time it’s performed, it can be automated, meaning one-off configurations are avoided. Human error is much less likely to creep in when the infrastructure management process is operationalized instead of performed manually.
- Reusability: Since an IaC system is deployed from source code, it is easy to reuse over time simply by executing the same code again. Organizations can redeploy the same system whenever necessary, and can refine the source code over time, along with supporting practices like testing, project structure, and monitoring.
- Version control: As the Git documentation defines it, version control is “a system that records changes to a file or set of files over time so that you can recall specific versions later”. In simpler terms, version control lets you see who changed what, and when. This transparency allows you to look back at every version of a file, easily revert a file or the entire project to a previous state, compare changes over time, and see how a modification may have caused a problem. It provides the visibility lacking in a traditional point-and-click environment.
- Flexibility: There is real benefit to not being locked into an infrastructure layer, much as there is in adopting a multi-cloud strategy. DevOps teams leverage a range of different tools for IaC, e.g. Terraform, bash, Go, CLIs, or simply scripted API calls. This approach enables teams to employ a common deployment process across multiple providers. It also typically lowers the costs of infrastructure management, and allows DevOps teams to focus on what matters most instead of performing manual, slow, error-prone tasks.
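The version-control benefit in particular needs nothing beyond everyday git. The sketch below (file names and contents are hypothetical) treats an infrastructure definition like any other source file: every change is recorded, and reverting to an earlier state is a single checkout.

```shell
# Illustrative workflow: infrastructure definitions live in git like
# any other source code. Paths, file names, and contents are made up.
set -e
repo=$(mktemp -d)
cd "$repo"
git init -q
git config user.email "demo@example.com"
git config user.name "Demo"

echo 'instances = 2' > web.cfg            # v1 of the desired state
git add web.cfg && git commit -qm "web: 2 instances"

echo 'instances = 5' > web.cfg            # scale up for a traffic spike
git commit -qam "web: scale to 5"

git log --oneline                          # who changed what, and when
git checkout -q HEAD~1 -- web.cfg          # roll back to the old version
cat web.cfg
```

Re-applying the reverted definition with whatever IaC tool is in use then converges the live infrastructure back to the earlier state, with the whole history preserved for audit.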
Infrastructure as Code enables DevOps teams to establish more repeatable processes through automation, which results in benefits that span organizational functions. IaC is a key technology underpinning Section’s flexible deployment options across an expansive global edge network and is driving innovation around more performant and efficient edge computing.