How to use a ConfigMap in a Kubernetes Deployment
To recap quickly from last week, Kubernetes is an open-source container orchestration platform that is widely used in cloud-native application development and deployment. It provides a powerful set of tools for managing and scaling containerized applications, enabling developers to focus on writing code and delivering features rather than worrying about the underlying infrastructure. With Kubernetes, organizations can easily manage complex containerized applications, automate deployment and scaling, and achieve high levels of resilience and availability.
Many companies use Kubernetes to solve a variety of problems related to deploying and managing containerized applications at scale. Let’s look at an example of a real-world use case at Spotify:
Spotify, the popular music streaming service, uses Kubernetes to manage their vast infrastructure and deliver a seamless user experience to millions of listeners worldwide. Prior to Kubernetes, Spotify was using a custom container orchestration platform called Helios, which was difficult to maintain and didn’t scale well.
By adopting Kubernetes, Spotify was able to simplify their infrastructure management and improve their deployment speed and reliability. They use Kubernetes to manage over 1,500 microservices, running on tens of thousands of containers across multiple data centers. Kubernetes allows them to automatically scale up and down their infrastructure based on demand, ensuring that they can handle spikes in traffic without downtime.
Spotify also uses Kubernetes to enable continuous delivery and deployment of their software. They use a GitOps approach, where all changes to their infrastructure are managed through Git repositories, making it easy to version control, review, and roll back changes as needed.
Overall, Kubernetes has helped Spotify to improve their development and deployment processes, reduce downtime, and deliver a better user experience to their listeners. It has also enabled them to scale their infrastructure efficiently and handle high levels of traffic with ease.
This week, we’re diving in deeper with Kubernetes and I have a fantastic project to step through using ConfigMaps. What are ConfigMaps you might ask? A ConfigMap is an object that is used to store configuration data in key-value pairs. This configuration data can be used to configure different aspects of your application or system, such as environment variables, command-line arguments, or configuration files.
A ConfigMap is typically used to store configuration data that is independent of the application code, allowing you to modify the configuration without having to modify the code itself. This makes it easy to update the configuration of your application without having to rebuild and redeploy the entire application.
For example, you can create a ConfigMap that contains environment variables for your application and inject them into your container, or you can mount the ConfigMap as a volume in your application’s container and read configuration files from it. Either way, you can modify the configuration without having to modify the container image or the deployment configuration.
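As a quick sketch of the environment-variable approach (the names here are hypothetical and not part of this week’s project), a ConfigMap’s entries can be injected into a container with envFrom:

```yaml
# Hypothetical example: injecting ConfigMap entries as environment variables.
apiVersion: v1
kind: ConfigMap
metadata:
  name: app-config          # hypothetical name, not used in this project
data:
  LOG_LEVEL: "debug"
---
apiVersion: v1
kind: Pod
metadata:
  name: app-pod
spec:
  containers:
    - name: app
      image: nginx
      envFrom:
        - configMapRef:
            name: app-config   # every key in the ConfigMap becomes an env var
```

Updating the ConfigMap and restarting the Pod picks up the new values without rebuilding the image.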
Let’s have a look out our project requirements for this week and dive into the project!
Project Requirements:
- Spin up two deployments. One deployment contains 2 pods running the nginx image. Include a ConfigMap that points to a custom index.html page that contains the line “This is Deployment One”. The other deployment contains 2 pods running the nginx image. Include a ConfigMap that points to a custom index.html page that contains the line “This is Deployment Two”.
- Create a service that points to both deployments. You should be able to reach both deployments using the same IP address and port number.
- Use the curl command (curl service-IP-address:service-port) to validate that you eventually see the index.html pages from both Deployment 1 and Deployment 2.
Step 1: Spin up two deployments. One deployment contains 2 pods running the nginx image. Include a ConfigMap that points to a custom index.html page that contains the line “This is Deployment One”. The other deployment contains 2 pods running the nginx image. Include a ConfigMap that points to a custom index.html page that contains the line “This is Deployment Two”.
There’s a lot in this first step, so I’m going to break it down methodically in a logical order of operations. Before we get these deployments spun up, I’m going to create the ConfigMaps from which our deployments will be built. Each custom index.html file will be served by the NGINX installation, and later, when we run the curl command, we should see a title that says “Matt’s LUIT Week 18 Project.” and body text that says “This is Deployment One” or “This is Deployment Two”, for Deployment One and Deployment Two, respectively.
apiVersion: v1
kind: ConfigMap
metadata:
  name: deployment-one
data:
  index.html: |
    <html>
    <head>
    <title>Matt's LUIT Week 18 Project.</title>
    </head>
    <body>
    <h1>This is Deployment One</h1>
    </body>
    </html>
---
apiVersion: v1
kind: ConfigMap
metadata:
  name: deployment-two
data:
  index.html: |
    <html>
    <head>
    <title>Matt's LUIT Week 18 Project.</title>
    </head>
    <body>
    <h1>This is Deployment Two</h1>
    </body>
    </html>
I’ve used Visual Studio Code as my IDE and written my ConfigMaps there. I’ve saved the file as configmap_v1.yaml.
In the Terminal, we’ll run the following command to apply the ConfigMap:
kubectl apply -f configmap_v1.yaml
Applying a ConfigMap in Kubernetes creates or updates a named configuration object that holds key-value pairs of configuration data. The ConfigMap can be used by other Kubernetes objects such as Pods, Deployments, or Services to access this configuration data. We can see that applying the ConfigMap has created a ConfigMap for Deployment One and another for Deployment Two.
Let’s verify that the ConfigMaps are present. I will run the following command to check.
kubectl get configmap
Now we are ready to create the deployment file. The deployment is a resource object that provides a declarative way to manage a set of identical Pods. A Deployment file specifies the desired state of the deployment, including the desired number of replicas, which in our case, will be two, the container image to use, and how to roll out updates to the deployment.
When a deployment is created, Kubernetes creates a ReplicaSet that manages the creation and scaling of the Pods based on the desired state specified in the Deployment. The ReplicaSet ensures that the actual state of the Pods matches the desired state specified in the Deployment, and it automatically replaces any Pods that are deleted or fail.
Once a deployment is created or updated, Kubernetes creates or updates the associated ReplicaSet and Pods to match the desired state specified in the Deployment file. The Deployment file serves as a declarative specification for the desired state of the deployment, making it easy to manage and scale applications in Kubernetes.
Here’s my deployment file. I’ve named my deployment file matts-deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: matts-deployment-one
spec:
  replicas: 2
  selector:
    matchLabels:
      app: nginx
      deployment: deployment-one
  template:
    metadata:
      labels:
        app: nginx
        deployment: deployment-one
    spec:
      containers:
        - name: nginx
          image: nginx
          ports:
            - containerPort: 80
          volumeMounts:
            - name: matts-config-volume-one
              mountPath: /usr/share/nginx/html
      volumes:
        - name: matts-config-volume-one
          configMap:
            name: deployment-one
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: matts-deployment-two
spec:
  replicas: 2
  selector:
    matchLabels:
      app: nginx
      deployment: deployment-two
  template:
    metadata:
      labels:
        app: nginx
        deployment: deployment-two
    spec:
      containers:
        - name: nginx
          image: nginx
          ports:
            - containerPort: 80
          volumeMounts:
            - name: matts-config-volume2
              mountPath: /usr/share/nginx/html
      volumes:
        - name: matts-config-volume2
          configMap:
            name: deployment-two
I’m using Visual Studio Code, as I’ve come to really like it as a development environment.
I will go ahead and apply my deployment file. Deployment files are used to define the desired state of a set of pods, and to manage the process of updating and scaling those pods.
When you create a deployment file, you define the desired number of replicas (pods) that should be running, as well as the image and configuration details for those pods. You can also define rolling updates and rollbacks, which allow you to safely and efficiently update the deployment without downtime.
Once you apply the deployment file using the kubectl apply command, Kubernetes will create the necessary resources (such as pods, services, and replica sets) to match the desired state described in the deployment file. Kubernetes will also continuously monitor the actual state of the deployment and make any necessary changes to ensure that the actual state matches the desired state.
kubectl apply -f matts-deployment.yaml
We can see here that the deployment has been applied. Note that I ran mine twice accidentally, so it says “unchanged” after both deployments.
It’s always good to make mistakes as long as you learn from them. I had fat-fingered my deployment file so I have a couple of extra pods running here, but we did get the pods successfully created and they have obtained IP addresses.
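If you want to check the pods and their IP addresses yourself, a couple of read-only commands look like this (a sketch: the pod name is a placeholder, and the output will of course depend on your own cluster):

```shell
# List the pods created by both Deployments, with their IPs and nodes
kubectl get pods -o wide

# Inspect one pod to confirm index.html is mounted from the ConfigMap
kubectl describe pod <pod-name>   # replace <pod-name> with a name from the list above
```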
Step 2: Create a service that points to both deployments. You should be able to see both deployments using the same IP address and port number.
Next, we’re going to create a service that will point to both deployments using the same IP address and port number. In my service code below, the selector matches the app: nginx label of both deployments, which means that the service will load-balance traffic across both deployments. The ports section defines the port that the service listens on (port) and the port that the deployment containers listen on (targetPort). Setting the type to NodePort exposes the service on a static port on each node in our cluster.
apiVersion: v1
kind: Service
metadata:
  name: matts-service
spec:
  selector:
    app: nginx
  ports:
    - name: http
      port: 80
      targetPort: 80
      nodePort: 30007
  type: NodePort
I will now apply the service using the following command:
kubectl apply -f matts-service.yaml
Now we’ll verify that the application of the service was successful using the following command:
kubectl get service -o wide
The application of the service was successful and now we’ll see if we can connect to the deployments by hitting the webpages from our browser. Let’s enter localhost:30007 into our web browser and see what we get. It looks like Deployment One came up.
Let’s refresh our browser and see if we can get Deployment Two to come up successfully.
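The project requirements also call for validating with curl. Repeating the request a few times should eventually return both index.html pages, since the service load-balances across all the pods behind it. A minimal sketch, assuming the NodePort service above is running locally on port 30007:

```shell
# Hit the service several times; the <h1> line should alternate
# between "This is Deployment One" and "This is Deployment Two"
for i in 1 2 3 4 5 6; do
  curl -s localhost:30007 | grep '<h1>'
done
```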
Both deployments came up successfully, and that will wrap up this week’s project. In conclusion, ConfigMaps are a powerful tool for managing configuration data in Kubernetes deployments. By separating configuration data from application code, ConfigMaps keep configuration changes independent of code changes, which simplifies the maintenance and deployment of Kubernetes applications. In this article, we have gone through the process of creating ConfigMaps, mounting them in Deployments, and exposing those Deployments through a single Service. I hope this guide has been useful in helping you understand how to use ConfigMaps in your Kubernetes deployments, and that you can now incorporate them into your own deployments to streamline your application configuration management. Thanks again for tuning in and stay tuned for my next project!
Feel free to connect with me on LinkedIn.