Orchestrating Kubernetes with Terraform: A Step-by-Step Guide to Building Your Container Empire
As the world of infrastructure management continues to evolve, the adoption of declarative practices has become paramount. This approach empowers operators to precisely define their infrastructure’s desired state, ensuring consistency and reliability. The alternative is the imperative approach, which instead prescribes the explicit steps required to reach that state.
By employing Terraform to provision Kubernetes resources, we not only harness the power of declarative practices but also gain access to Terraform’s robust state management capabilities. This combination sets the stage for highly reproducible infrastructures and portable applications.
Datadog’s 2022 container report gives Kubernetes a commanding 83% market share among container orchestration tools. With Kubernetes firmly established as the leading container orchestrator, pairing it with Terraform is a natural fit: together, they make it straightforward to build highly portable, repeatable infrastructure.
How do Terraform and Kubernetes work together?
Terraform’s strength lies in its ability to interact with various APIs of cloud providers, services, and platforms through what it calls ‘providers.’ A provider is essentially a plugin that allows Terraform to communicate with a specific service or technology. It removes the intricacies of API calls, authentication, and resource management, providing a uniform interface to interact with different environments.
Kubernetes has its own Terraform provider, meaning that Terraform can natively interact with Kubernetes clusters. The Terraform provider for Kubernetes acts as a bridge, allowing Terraform to manage resources within a Kubernetes cluster using the Kubernetes API.
When Terraform and Kubernetes work in tandem, Terraform can define and configure resources within a Kubernetes cluster using its declarative configuration language, encompassing various elements like pods, deployments, services, and other Kubernetes objects. Using Terraform helps mitigate the challenges of maintaining all these different components, including simplifying the complexity of working with Kubernetes Secrets. Terraform’s state management ensures that the desired state specified in the configuration files aligns with the actual state of the cluster.
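For example, a Secret can be declared right alongside the rest of your configuration and tracked in Terraform state. Here is a minimal, illustrative sketch; the names and values are placeholders:

resource "kubernetes_secret" "db_credentials" {
  metadata {
    name = "db-credentials"
  }

  # Values supplied via "data" are base64-encoded by the provider for you
  data = {
    username = "admin"
    password = "change-me"
  }

  type = "kubernetes.io/basic-auth"
}

Keep in mind that values managed this way end up in Terraform state, so the state file itself should be stored securely.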
Why should you orchestrate Kubernetes with Terraform?
Before diving into a hands-on demo, let’s understand the benefits of orchestrating Kubernetes using Terraform.
Robust state management capabilities
You can extend Terraform’s robust state management capabilities to Kubernetes clusters, ensuring precise tracking of infrastructure changes. This feature means you can trust that your clusters remain in the state you intended, eliminating the headache of manual corrections. Moreover, the dreaded indentation errors that sometimes plague YAML-based configuration files are a thing of the past with HashiCorp Configuration Language (HCL). Its structured syntax ensures a clean and error-free codebase.
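For example, once your cluster resources are under Terraform’s management, a couple of commands let you inspect what is being tracked and spot drift between your configuration and the live cluster:

# List the resources recorded in Terraform state
terraform state list

# Compare the desired configuration with what actually exists
terraform plan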
Integration with existing Terraform deployment
If you’re already provisioning your infrastructure with Terraform, integrating Kubernetes is a natural next step, giving you a unified approach to managing both your underlying infrastructure and your containerized workloads. For those who appreciate thorough testing, tools like Terratest can validate your setups, adding an extra layer of confidence in your infrastructure deployments.
Modularity
Terraform’s modularity is another game-changer. Its concept of modules allows you to break down your configuration into manageable, reusable pieces. This feature promotes code cleanliness and greatly simplifies working on larger projects. Coupled with the strength of workspaces, Terraform can effectively alleviate the headache of setting up distinct environments for development, production, and more. With workspaces, each team member can have their own dedicated workspace, akin to having their own sandbox.
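As a hedged sketch, suppose you factored the whoami resources from the walkthrough below into a local module at ./modules/whoami (a hypothetical path with hypothetical namespace and replicas inputs); calling it per environment could look like this:

# Reuse one module definition with environment-specific inputs
module "whoami_dev" {
  source    = "./modules/whoami" # hypothetical local module path
  namespace = "whoami-dev"       # hypothetical module input
  replicas  = 1                  # hypothetical module input
}

Workspaces then keep a separate state per environment: running terraform workspace new dev creates and switches to a dedicated dev workspace, and terraform workspace select dev returns to it later.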
Orchestrating Kubernetes with Terraform: A step-by-step guide to building your container empire
Let’s look at how to successfully orchestrate Kubernetes with Terraform.
1. Prerequisites
This guide assumes a basic understanding of Kubernetes and Terraform. In addition, you need the following installed to follow along:
- A Kubernetes cluster. For this guide, we’ll be using KinD. However, Minikube or any other Kubernetes cluster of your choosing should work.
- Kubectl
- Terraform CLI
2. Initializing Terraform
Let’s set up the necessary providers for communication with external Kubernetes APIs. In a folder of your choice, create a file named main.tf and populate it with the following code:
terraform {
  required_providers {
    kubernetes = {
      source  = "hashicorp/kubernetes"
      version = "2.23.0"
    }
  }
}

provider "kubernetes" {
  config_path = "~/.kube/config"
}
In the code block above, we specify the Kubernetes provider. By indicating the config_path, we inform Terraform about the location of your kubeconfig file.
Verify that your kube-context is accurately configured and aligned with your target Kubernetes cluster. This guide assumes your kubeconfig is correctly set up.
If you are unsure of your kube-context or how to use it, check out this guide that explains how to configure your kube-context.
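A quick way to check from your terminal (with KinD, a cluster created with default settings gets a context named kind-kind):

# Show the context Terraform will pick up from ~/.kube/config
kubectl config current-context

# List every available context and switch to the right one if needed
kubectl config get-contexts
kubectl config use-context kind-kind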
Next, let’s initialize Terraform. Open your terminal or command prompt, navigate to the directory containing your configuration file (main.tf), and run:
terraform init
3. Creating a Namespace
To keep your resources organized and isolated, creating a dedicated namespace within your Kubernetes cluster is good practice. This helps prevent naming conflicts and allows for better management of your resources.
In your main.tf file, add the following code:
resource "kubernetes_namespace" "whoami" { metadata { name = "whoami" } }
In the above code, we’re using Terraform to define a Kubernetes namespace resource. This resource will create a new namespace named “whoami” within your cluster.
To apply this configuration, run:
terraform apply
This command will prompt you to confirm the changes. Once confirmed, Terraform will create the namespace in your Kubernetes cluster, as in the image below.
Next, verify that the namespace was successfully created by running the following command in your terminal or command prompt:
kubectl get namespaces
You should see the “whoami” namespace in the output as in the image below.
4. Creating a Deployment
Now that we have a dedicated namespace, let’s deploy the Traefik Whoami service using a Kubernetes Deployment.
In your main.tf file, add the following code:
resource "kubernetes_deployment" "whoami_deployment" { metadata { name = "whoami-deployment" namespace = kubernetes_namespace.whoami.metadata[0].name } spec { replicas = 2 selector { match_labels = { app = "whoami" } } template { metadata { labels = { app = "whoami" } } spec { container { image = "traefik/whoami" name = "whoami" } } } } }
In this Terraform configuration, we define a Kubernetes Deployment named “whoami-deployment” within the “whoami” namespace. This Deployment will manage two replicas of the Traefik Whoami service.
In this case, we’re using interpolation to ensure that the kubernetes_deployment resource knows which namespace it should belong to. Specifically, we’re using the kubernetes_namespace.whoami.metadata[0].name expression.
Here’s a breakdown:
- kubernetes_namespace.whoami: This refers to the kubernetes_namespace resource we defined earlier for creating the “whoami” namespace. The format is <resource_type>.<resource_name>.
- .metadata[0].name: This accesses the metadata block of the namespace resource. In Kubernetes, metadata holds information like a resource’s name and labels. Because the provider exposes metadata as a list of blocks, we use [0] to select the first (and in this case, only) element, and then access its name attribute.
So, by using this interpolation, we’re telling Terraform to use the name of the namespace it created as the deployment’s namespace attribute. This ensures the deployment lands in the namespace Terraform manages and also gives Terraform an implicit dependency between the two resources, so the namespace is always created before the deployment that relies on it.
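If you want to see this referencing pattern in another context, you could optionally expose the namespace name as a Terraform output (not required for the demo):

output "whoami_namespace" {
  # Reuses the same attribute reference as the deployment above
  value = kubernetes_namespace.whoami.metadata[0].name
}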
5. Creating a Service
To enable access within the cluster, we’ll create a Kubernetes Service of type ClusterIP.
In your main.tf file, add the following code:
resource "kubernetes_service" "whoami_service" { metadata { name = "whoami-service" namespace = kubernetes_namespace.whoami.metadata[0].name } spec { selector = { app = "whoami" } port { protocol = "TCP" port = 80 target_port = 80 } type = "ClusterIP" } }
In this Terraform configuration, we define a Kubernetes Service named “whoami-service” within the “whoami” namespace. This service will provide a stable internal IP address for accessing the Traefik Whoami deployment.
For the purpose of this demo, we’ve chosen ClusterIP as the service type, which works reliably across a wide range of Kubernetes clusters. In a production environment, you would likely use an [Ingress](https://registry.terraform.io/providers/hashicorp/kubernetes/latest/docs/resources/ingress) resource for external access (a brief sketch appears after the demo).
To apply this configuration, run:
terraform apply
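Once the apply completes, you can optionally confirm that the Deployment’s pods and the Service exist before forwarding any traffic:

kubectl -n whoami get deployments,pods,services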
After applying, let’s use port forwarding to access the service. Run the following command:
kubectl -n whoami port-forward svc/whoami-service 8080:80
This command sets up port forwarding from your local machine’s port 8080 to the service’s port 80.
6. Testing the Terraform deployment
Now, you can access the Whoami service by navigating to http://localhost:8080 in your web browser.
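You can also test from the command line; the traefik/whoami container replies with plain text that includes the serving pod’s hostname and the request headers:

curl http://localhost:8080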
Success!
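As mentioned in step 5, a production setup would typically expose the service through an Ingress instead of port forwarding. Below is a minimal, hedged sketch using the provider’s kubernetes_ingress_v1 resource; it assumes an ingress controller (such as Traefik or NGINX) is already running in the cluster, and whoami.example.com is only a placeholder hostname:

resource "kubernetes_ingress_v1" "whoami_ingress" {
  metadata {
    name      = "whoami-ingress"
    namespace = kubernetes_namespace.whoami.metadata[0].name
  }

  spec {
    rule {
      host = "whoami.example.com" # placeholder hostname

      http {
        path {
          path      = "/"
          path_type = "Prefix"

          backend {
            service {
              name = kubernetes_service.whoami_service.metadata[0].name

              port {
                number = 80
              }
            }
          }
        }
      }
    }
  }
}

Depending on your controller, you may also need to set the ingress_class_name argument or controller-specific annotations.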
Embracing Uniformity In Your Cloud Infrastructure
The shift towards automation streamlines operations and significantly reduces the margin for human error. Tools like Terraform empower you to build a foundation you can trust implicitly.
However, scaling your container empire takes much more than Terraform. It stretches far beyond deployment, encompassing everything from high availability and low latency to security and compliance at scale.
Without dedicated training and expertise, it’s challenging to maintain all the complex moving parts of container orchestration platforms like Kubernetes and associated tools like Terraform. This complexity in container and infrastructure orchestration grows immensely when attempting to meet stringent security or compliance requirements. To help you manage your container empire at scale, Control Plane offers a complete multi-cloud management solution through a Terraform plugin that makes Control Plane easy to integrate with any CI/CD pipeline setup. Experience everything on one Internal Developer Platform (IDP) that enables seamless developer self-service, resulting in greater productivity and freedom to innovate.
Schedule a demo or get in touch with us today for more info. We’re here to help – anytime!