Create a custom NGINX image and deploy it with Kubernetes

Matt Dixon
6 min read · Mar 29, 2023


Last week, in Level Up In Tech, we introduced the concept of containers, and today we’ll continue diving into Kubernetes. To recap from last week, containers are a way of packaging and isolating applications and their dependencies so that they can run consistently and reliably across different computing environments.

Containers allow for the deployment of microservices. In a microservices architecture, each service is responsible for a single task or feature and can be developed and deployed independently of the other services. This allows for greater flexibility and agility in the development process, as developers can make changes to one service without affecting the others.

Microservices also enable better scalability and fault tolerance, as each service can be scaled independently based on its specific usage patterns. Additionally, if one service fails, it does not affect the entire application, as other services can continue to function normally.

What gave rise to Kubernetes?

I’m glad you asked! To cover Kubernetes properly, it’s important to understand how applications were traditionally deployed on the underlying server infrastructure and how we got from there to containers.

Traditional application deployments involve installing the application on a physical server or multiple servers. This approach requires manual configuration, maintenance, and updates, which can be time-consuming and prone to errors. Moreover, traditional deployments are inflexible and cannot easily scale up or down to meet changing demand, leading to inefficiencies and wasted resources.

To address these challenges, virtualization emerged as a solution. Virtualization allows multiple virtual machines (VMs) to run on a single physical server, each with its own operating system and applications. This approach enables greater flexibility, scalability, and resource utilization, as well as simplified management and deployment.

Source: Kubernetes.io

However, virtualization also has its drawbacks. It requires a hypervisor to manage the virtual machines, which can introduce performance overhead and complexity. Additionally, each virtual machine requires its own operating system, which can lead to significant storage and memory usage.

To overcome these issues, containerization has gained popularity in recent years. Containerization allows applications to be packaged into lightweight, portable containers that can be run on any platform. Unlike virtual machines, containers share the host operating system, resulting in much lower resource usage and improved performance.

Containerized application deployments also offer many benefits over traditional deployments. They can be easily scaled up or down to meet changing demand, and updates can be deployed quickly and consistently across multiple environments. Containers also provide a level of isolation and security, preventing applications from interfering with each other.
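The title of this article mentions building a custom NGINX image. As a minimal sketch of what that looks like in practice (the index.html file and the choice of base tag are assumptions for illustration), a Dockerfile might contain:

```dockerfile
# Start from the official NGINX image on Docker Hub
FROM nginx:latest

# Replace the default welcome page with our own static content
# (index.html is assumed to exist next to this Dockerfile)
COPY index.html /usr/share/nginx/html/index.html

# NGINX listens on port 80 by default
EXPOSE 80
```

You would build and publish such an image with docker build -t <your-dockerhub-user>/custom-nginx . followed by docker push, after which Kubernetes can pull it like any other image.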

So what is Kubernetes?

Kubernetes, also known as K8s, is an open-source container orchestration platform that automates the deployment, scaling, and management of containerized applications. It was originally developed by Google and is now maintained by the Cloud Native Computing Foundation (CNCF).

In simple terms, Kubernetes provides a framework for deploying and managing containerized applications at scale. It abstracts the underlying infrastructure and provides a unified API to manage containers, regardless of the underlying platform. Kubernetes allows you to define, deploy, and manage applications and their associated components, including networking, storage, and security.

Here are a few key benefits of Kubernetes:

  1. Scalability: Kubernetes allows you to scale your applications up or down based on demand. It automatically provisions resources and distributes workloads across multiple nodes, ensuring that your applications are always available and performant.
  2. Flexibility: Kubernetes supports multiple container runtimes and can run on any infrastructure, whether it’s on-premises or in the cloud. This gives you the flexibility to choose the best platform for your needs.
  3. Automation: Kubernetes automates many of the tasks involved in managing containerized applications, including deployment, scaling, and load balancing. This reduces the amount of manual intervention required and improves overall efficiency.
  4. Resilience: Kubernetes is designed to be highly resilient and fault-tolerant. It automatically detects and replaces failed nodes or containers, ensuring that your applications remain available even in the event of hardware or software failures.
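As one concrete illustration of the scalability point above, Kubernetes can adjust replica counts automatically through a HorizontalPodAutoscaler. The manifest below is a hedged sketch: the target deployment name nginx-deploy matches the one created later in this article, but the CPU threshold and replica bounds are assumptions.

```yaml
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: nginx-hpa
spec:
  # Which workload to scale
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: nginx-deploy
  minReplicas: 2
  maxReplicas: 10
  # Add or remove replicas to keep average CPU near 70%
  metrics:
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization
          averageUtilization: 70
```

Applying this with kubectl apply -f lets the cluster handle demand spikes without manual intervention.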

Some of the problems that Kubernetes solves include:

  1. Application deployment: Kubernetes simplifies the process of deploying containerized applications by providing a single platform for managing all of the components, including containers, networking, storage, and security.
  2. Scaling: Kubernetes makes it easy to scale your applications up or down based on demand, without manually reconfiguring the underlying infrastructure each time.
  3. Resource management: Kubernetes helps you manage resources efficiently by automating resource allocation and balancing workloads across multiple nodes. This reduces the risk of overloading any single node and improves overall performance.
  4. Fault tolerance: failed nodes or containers are detected and replaced automatically, so your applications remain available despite hardware or software failures.

Kubernetes is a powerful platform for deploying and managing containerized applications at scale. It provides a unified API to manage containers, abstracts the underlying infrastructure, and automates many of the tasks involved in managing containerized applications. Kubernetes solves many of the problems associated with deploying and managing containerized applications, including application deployment, scaling, resource management, and fault tolerance.

Now that we’ve covered the technology, let’s dive into this week’s project. We’ve been asked to complete the following:

FOUNDATIONAL — Part 1

Using the command line:

  1. Create a deployment that runs the nginx image.
  2. Display the details of this deployment.
  3. Check the event logs from the deployment.
  4. Delete the deployment.

ADVANCED — Part 2

  1. Create the same deployment using a YAML file.
  2. Display the details of this deployment via the command line.
  3. Update the YAML file to scale the deployment to 4 nginx containers.
  4. Verify the change via the command line.
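Part 2 isn’t walked through step by step in this article, but as a sketch of what that YAML file might look like (the labels and file name are assumptions; replicas: 4 matches the scaling step), a minimal Deployment manifest could be:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deploy
  labels:
    app: nginx
spec:
  # Scale to 4 nginx containers, per the Part 2 requirement
  replicas: 4
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
        - name: nginx
          image: nginx
          ports:
            - containerPort: 80
```

You would apply it with kubectl apply -f nginx-deploy.yaml and verify with kubectl get deployment.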

Prerequisites:

Docker installed on your machine

DockerHub account

Kubernetes enabled on Docker Desktop

Linux and terminal knowledge

Step 1: Using the command-line, create a deployment that runs the NGINX image.

This assumes that you already have Docker installed and Kubernetes enabled on Docker Desktop, so I won’t be covering those portions. Let’s dive in.

We’ll create the deployment that runs the NGINX image with the following command:

kubectl create deployment nginx-deploy --image=nginx --replicas=4

NGINX Deployment successful

We can see that the deployment was successful!

Step 2: Display the details of this deployment

Let’s take a closer look at the details of this deployment with the following commands:

kubectl get deployment
kubectl describe deployment nginx-deploy

We can see that the deployment has been successful and is ready to go.

Deployment in ready state

Step 3: Describe the deployment

We can have Kubernetes describe the deployment with the above-referenced command kubectl describe deployment nginx-deploy, and it provides a wealth of additional information.

Describe the deployment

Step 4: Check the event logs from the deployment

In order to check the logs, we’ll run the following command:

kubectl logs deployment/nginx-deploy

Step 5: Delete the deployment

In the last step of Part 1, we’ll delete this deployment using the following command:

kubectl delete deployment nginx-deploy

We can see that the deployment was deleted successfully.

Thanks for tuning in to check out my article and project. I’ll be revisiting this project shortly to create a deployment with a YAML file.

In summary, while traditional application deployments required manual configuration and were inflexible, virtualization provided greater flexibility but introduced complexity and overhead. Containerization provides a lightweight, portable alternative that is highly scalable, efficient, and secure. Kubernetes functions as an orchestrator platform that automates the deployment, scaling, and management of containerized applications.
