How to Automate Jenkins CI/CD Setup with Terraform on AWS

Matt Dixon
13 min read · Apr 8, 2023


Today, we’re going to discuss automating your Jenkins CI/CD Pipeline with Terraform on AWS! Amazon Web Services (AWS) has revolutionized how companies run their workloads. When I think back to running workloads on-premises, it seems so archaic, and that was not that long ago. Running workloads on-prem required a team of people to set up, configure, install, maintain, upgrade, and patch hardware and software, take systems down to service failed hardware, and handle many other ancillary tasks.

AWS made all of that a thing of the past, except the part of needing knowledgeable, qualified people to know when, how, and where to utilize and implement the wide range of services that AWS offers. There is a lot of foundational and underlying knowledge associated with curating an AWS environment.

In today’s fast-paced software development environment, Continuous Integration and Continuous Deployment (CI/CD) have become critical components for delivering high-quality, reliable, and scalable applications. Jenkins, an open-source automation server, has emerged as a popular choice for implementing CI/CD pipelines due to its flexibility, extensive plugin ecosystem, and robust community support.

To keep up with the demands of modern software development, we’ll explore how to automate the setup of a Jenkins server on AWS using Terraform, a powerful Infrastructure-as-Code (IaC) tool. By leveraging Terraform, we can create easily maintainable and reproducible infrastructure that ensures consistency across different environments, making our CI/CD processes more efficient and reliable.

In this guide, we will walk you through the process of automating the deployment of a Jenkins server on an AWS EC2 instance using Terraform. I’ll step through creating the necessary AWS resources, such as VPCs, subnets, and security groups, to ensure secure and reliable access to the Jenkins server. Additionally, we’ll demonstrate how to create an S3 bucket for storing Jenkins artifacts, ensuring that it remains private and accessible only from within your VPC.

By the end of this tutorial, you’ll have a solid understanding of how to combine the power of Jenkins and Terraform to create a fully automated CI/CD infrastructure on AWS. This will empower you to focus on writing code, improving product quality, and delivering new features faster than ever before. So, let’s get started on our journey to automate Jenkins CI/CD setup with Terraform on AWS!

Let’s take a look at our project requirements:

Your team would like to start using Jenkins as their CI/CD tool to create pipelines for DevOps projects. They need you to create the Jenkins server using Terraform so that it can be used in other environments and so that changes to the environment are better tracked. For the Foundational project you are allowed to have all your code in a single main.tf file (known as a monolith) with hardcoded data.

  1. Deploy 1 EC2 Instance in your Default VPC.
  2. Bootstrap the EC2 instance with a script that will install and start Jenkins. Review the official Jenkins Documentation for more information: https://www.jenkins.io/doc/book/installing/linux/
  3. Create a Security Group and assign it to the Jenkins server that allows traffic on port 22 from your IP and allows traffic on port 8080.
  4. Create an S3 bucket for your Jenkins Artifacts that is not open to the public.

Prerequisites:

Let’s get started! I’ll be using Visual Studio Code on my Mac. I am assuming that you have already installed the AWS CLI and Terraform. If you need a quick primer on how to install AWS CLI, see the following article here. You can reference the Terraform documentation for how to perform that installation here.

I will write my Terraform code starting from the outer-most AWS elements and work my way inward on my AWS diagram, because that makes logical sense to me. However, for those that write code as they think of the elements to be provisioned, worry not, because Terraform has your back!

Terraform is designed to handle dependencies and manage the order of resource creation automatically, regardless of the order in which the resources are defined in your code. This is achieved through Terraform’s built-in dependency graph. That’s pretty slick!

When you run terraform apply, Terraform builds a dependency graph based on the relationships between resources. These relationships can be established implicitly through resource attributes or explicitly by using the depends_on keyword. Once the dependency graph is constructed, Terraform determines the correct order of resource creation, modification, or destruction based on these relationships.

This feature allows you to write Terraform code in any order without worrying about the order of execution. Terraform will automatically figure out the optimal sequence of actions to create, modify, or destroy resources based on their dependencies, ensuring that your infrastructure is provisioned correctly and efficiently.
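
To make that concrete, here is a small illustrative sketch (the "example" resource names and placeholder AMI ID are mine, not part of this project's code). The subnet picks up an implicit dependency on the VPC by referencing its id, while depends_on declares an ordering Terraform can't infer from attributes alone:

# Implicit dependency: the subnet references the VPC's id, so Terraform
# knows to create the VPC before the subnet.
resource "aws_vpc" "example" {
  cidr_block = "10.0.0.0/16"
}

resource "aws_internet_gateway" "example" {
  vpc_id = aws_vpc.example.id
}

resource "aws_subnet" "example" {
  vpc_id     = aws_vpc.example.id
  cidr_block = "10.0.1.0/24"
}

# Explicit dependency: nothing in this instance references the internet
# gateway, so depends_on tells Terraform to create the gateway first
# (useful when the instance needs outbound access at boot time).
resource "aws_instance" "example" {
  ami           = "ami-xxxxxxxxxxxxxxxxx" # placeholder AMI ID
  instance_type = "t2.micro"
  subnet_id     = aws_subnet.example.id

  depends_on = [aws_internet_gateway.example]
}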

So even though Terraform has my back, working logically and methodically helps me keep a mental note of where I’ve progressed in provisioning the infrastructure with code. This is what’s known as IaC, or Infrastructure as Code. Infrastructure as Code affords us a wealth of benefits that are too good to ignore.

Infrastructure as Code is a practice where you manage and provision infrastructure resources using code, rather than through the standard manual processes or bespoke configurations and scripts that we’re accustomed to. IaC brings several benefits to the table, making it an essential part of modern software development and DevOps practices:

  1. Version control: IaC allows you to store your infrastructure configurations in version control systems like Git. This provides a history of changes, enables collaboration among your team members, and allows for easy rollbacks to previous versions in case something goes wrong.
  2. Consistency and repeatability: IaC ensures that infrastructure provisioning is consistent across your different environments (e.g., development, staging, production). This helps minimize inconsistencies or discrepancies that may arise from manual or ad-hoc provisioning.
  3. Automation: IaC enables you to automate the process of creating, updating, and destroying infrastructure resources. This reduces human error (maybe you didn’t have enough coffee) and speeds up the provisioning process, allowing for faster deployment of new features and bug fixes.
  4. Documentation: IaC serves as a form of living documentation, if you will, for your infrastructure, making it easier to understand, maintain, and share with your team. This is particularly useful for onboarding new team members or when transferring knowledge between teams. Finally, the days of “there’s no documentation” are coming to an end.
  5. Cost savings: By automating the provisioning and management of infrastructure, IaC reduces the time and effort required for manual tasks, trimming your OPEX. Additionally, IaC can help optimize resource usage and reduce waste by ensuring that you’re only using the resources you need, and that’s critical in an age where financial stewardship is more important than ever.
  6. Scalability and flexibility: IaC makes it easier to scale your infrastructure to meet changing business demands. You can quickly spin up or tear down resources as needed, and you can adapt your infrastructure to support new technologies and services as they arise.
  7. Improved security and compliance: IaC allows you to enforce security best practices and compliance requirements across your infrastructure. By codifying security policies and configurations, you can minimize the risk of fat-fingered configurations and ensure that your infrastructure meets the necessary compliance standards.
  8. Faster recovery: In case of infrastructure failures or disasters, IaC enables you to quickly recreate your infrastructure in a new environment or region, reducing downtime and minimizing the impact on your business.

As you can see, Infrastructure as Code offers numerous benefits that can improve your organization’s agility, consistency, and efficiency. By adopting IaC practices, you can streamline your infrastructure management processes, reduce fat-fingered configuration errors, and enable your team to focus on other tasks.

Now, back to Visual Studio Code!

Terraform main.tf file

Again, I’ve gone through and written my code starting from the outer elements of my AWS diagram to the inner elements to methodically keep track of where I am.

terraform {
  required_version = ">= 0.14"

  required_providers {
    aws = {
      source  = "hashicorp/aws"
      version = "~> 4.0"
    }
  }
}

provider "aws" {
  region = "us-east-1"
}

resource "aws_vpc" "default" {
cidr_block = "10.0.0.0/16"
}

resource "aws_security_group" "jenkins_sg" {
name = "jenkins"
description = "Security group for Jenkins"
vpc_id = aws_vpc.default.id
}

resource "aws_subnet" "jenkins_public" {
vpc_id = aws_vpc.default.id
cidr_block = "10.0.1.0/24"
map_public_ip_on_launch = true
}

resource "aws_instance" "jenkins-server" {
ami = "ami-0747e613a2a1ff483"
instance_type = "t2.micro"
subnet_id = aws_subnet.public.id
key_name = "matts-aws-jenkins-key-pair" # Replace with your own key pair name-- it's helpful to use an existing key pair

vpc_security_group_ids = [aws_security_group.jenkins.id]

user_data = <<-EOF
#!/bin/bash
sudo yum update -y
sudo yum install -y java-1.8.0-openjdk-devel
sudo wget -O /etc/yum.repos.d/jenkins.repo https://pkg.jenkins.io/redhat/jenkins.repo
sudo rpm --import https://pkg.jenkins.io/redhat/jenkins.io.key
sudo yum install -y jenkins
sudo systemctl start jenkins
sudo systemctl enable jenkins
EOF

tags = {
Name = "jenkins_server"
}
}

resource "aws_security_group_rule" "jenkins_ssh_in" {
security_group_id = aws_security_group.jenkins.id

type = "ingress"
from_port = 22
to_port = 22
protocol = "tcp"
cidr_blocks = ["xxx.xxx.xxx.xxx/32"] # Replace my "xxx" octets with your own public IP address
}

resource "aws_security_group_rule" "jenkins_http_in" {
security_group_id = aws_security_group.jenkins.id

type = "ingress"
from_port = 8080
to_port = 8080
protocol = "tcp"
cidr_blocks = ["xxx.xxx.xxx.xxx/32"] # Replace my "xxx" octets with your own public IP address
}

resource "aws_subnet" "jenkins_private" {
vpc_id = aws_vpc.default.id
cidr_block = "10.0.2.0/24"
}

resource "aws_s3_bucket" "jenkins_artifacts" {
bucket = "matts-aws-jenkins-private-bucket-707" # Replace with your unique AWS S3 bucket name
acl = "private"
}

resource "aws_vpc_endpoint" "jenkins_s3_bucket" {
vpc_id = aws_vpc.default.id
service_name = "com.amazonaws.us-east-1.s3"
}

resource "aws_route_table_association" "jenkins_private_s3_endpoint" {
subnet_id = aws_subnet.private.id
route_table_id = aws_vpc.default.main_route_table_id
}

resource "aws_internet_gateway" "jenkins_igw" {
vpc_id = aws_vpc.default.id
}

resource "aws_route_table" "jenkins_rt_public" {
vpc_id = aws_vpc.default.id
}

resource "aws_route" "jenkins_public_igw" {
route_table_id = aws_route_table.public.id
destination_cidr_block = "0.0.0.0/0"
gateway_id = aws_internet_gateway.igw.id
}

I’ll explain what each code section does here as it’s rather long.

  1. terraform: This code block specifies the required Terraform version and the required AWS provider version. It’s a best practice to include this in your code.
  2. provider "aws": This code block sets the AWS region for our resources that will be created.
  3. resource "aws_vpc" "default": This code block creates a Virtual Private Cloud (VPC) with a CIDR block of 10.0.0.0/16. I’m using my default VPC.
  4. resource "aws_security_group" "jenkins_sg": This code block creates a security group named "jenkins" within the VPC created earlier.
  5. resource "aws_subnet" "jenkins_public": This code block creates a public subnet with a CIDR block of 10.0.1.0/24 within our VPC.
  6. resource "aws_instance" "jenkins-server": This code block creates our EC2 instance, launches it in the public subnet, and bootstraps it with a script to install and configure Jenkins.
  7. resource "aws_security_group_rule" "jenkins_ssh_in" and resource "aws_security_group_rule" "jenkins_http_in": These code blocks create the ingress rules for the Jenkins security group, allowing traffic on ports 22 (SSH) and 8080 (HTTP) from my public IP address which is not posted.
  8. resource "aws_subnet" "jenkins_private": This block creates our private subnet with a CIDR block of 10.0.2.0/24 within the VPC, where the S3 bucket will reside.
  9. resource "aws_s3_bucket" "jenkins_artifacts": This code block creates our S3 bucket with a private ACL for storing Jenkins artifacts.
  10. resource "aws_vpc_endpoint" "jenkins_s3_bucket": This code block creates a VPC endpoint for the Amazon S3 service within our VPC, enabling access to the S3 bucket from our private subnet.
  11. resource "aws_route_table_association" "jenkins_private_s3_endpoint": This code block associates our private subnet with the main route table of our VPC, allowing it to use our VPC endpoint for S3 access.
  12. resource "aws_internet_gateway" "jenkins_igw": This code block creates an Internet Gateway and attaches it to our VPC, providing internet access for resources within the VPC.
  13. resource "aws_route_table" "jenkins_public": This code block creates a new route table for our public subnet within our VPC.
  14. resource "aws_route" "jenkins_public_igw": Lastly, this code block adds a route to the public route table, routing all internet-bound traffic through the Internet Gateway.
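
One note on that last pair of resources: as written, the public subnet is never explicitly associated with the new public route table, so it falls back to the VPC's main route table, which has no route to the Internet Gateway. If the instance turns out to be unreachable from the internet, that association is the first thing I'd check. A minimal sketch of it (the resource name is my own choice):

resource "aws_route_table_association" "jenkins_public_assoc" {
  subnet_id      = aws_subnet.jenkins_public.id
  route_table_id = aws_route_table.jenkins_rt_public.id
}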

Now, I’ve saved the code to my Terraform directory. I’ll run the terraform init command, which initializes the working directory and downloads the required provider plugins. From the terminal, we’ll need to navigate to the same directory that our ‘main.tf’ file is in, and init must be run before any further commands.

Terraform init successfully initialized

Now for the moment of truth! I’ll run terraform validate to make sure my code is valid and ready to be applied. Fingers crossed!

Terraform validate with a warning, but config is valid

Looks like my configuration is valid and will still work, but I’ll need to go back in and investigate this later, as I’m going to need to catch some shut-eye tonight.
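
For what it's worth, that warning is most likely about the acl argument on aws_s3_bucket, which version 4 of the AWS provider deprecates in favor of dedicated resources. A sketch of the provider-4 style, assuming you drop the acl line from the bucket resource and manage the ACL and public access settings separately (resource names here are my own):

resource "aws_s3_bucket_acl" "jenkins_artifacts_acl" {
  bucket = aws_s3_bucket.jenkins_artifacts.id
  acl    = "private"
}

# Belt and suspenders: explicitly block all public access to the bucket.
resource "aws_s3_bucket_public_access_block" "jenkins_artifacts_block" {
  bucket = aws_s3_bucket.jenkins_artifacts.id

  block_public_acls       = true
  block_public_policy     = true
  ignore_public_acls      = true
  restrict_public_buckets = true
}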

Next, we’ll run terraform plan to see what elements we’ll be provisioning.

I’ve got 13 items planned to add through this code. Now will be the ultimate moment of truth in running the terraform apply command to execute this code and provision all of the infrastructure.

And…

Ok, so my first “terraform apply” failed. But let me tell you why. I’m embarrassed to say that it took me a hot minute to figure it out. I am provisioning all of my infrastructure in US-East-1. Unbeknownst to me, I neglected to verify which AWS Region I was logged into to get the AMI-ID.

In AWS, an Amazon Machine Image (AMI) ID is a unique identifier for a specific machine image. An AMI is a virtual server template that contains the software configuration (operating system, application server, and applications) required to launch a virtual machine (known as an EC2 instance) in the AWS cloud.

When you create an EC2 instance, you need to specify an AMI ID, which defines the base operating system and software stack that will be used to launch the instance. AWS provides a variety of pre-built AMIs for popular operating systems, such as Amazon Linux, macOS, Ubuntu, Windows, and others.

One thing that’s very easy to overlook and just plain forget is that AMI IDs are unique for each region in AWS, meaning that an AMI ID for an Amazon Linux image in the us-east-1 region will be different from the same image in the us-west-2 region. In order to launch an instance in a specific region, you MUST use the AMI ID associated with that region.

I was actually logged into US-West-2 in the AWS Console. SO…I can’t use an AMI ID from US-West-2 and expect it to work in US-East-1, which is the region I chose for this project. Doh! Usually these types of things are painful, requiring more time than they should, and that’s what makes them great learning and self-teaching moments! Let’s fix that and try this again.
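
One way to sidestep this class of mistake entirely is to let Terraform look up the AMI for whichever region the provider points at, instead of hardcoding the ID. Here's a sketch using the aws_ami data source (the name filter assumes you want the latest Amazon Linux 2 x86_64 image; adjust it for your distribution of choice):

data "aws_ami" "amazon_linux_2" {
  most_recent = true
  owners      = ["amazon"]

  filter {
    name   = "name"
    values = ["amzn2-ami-hvm-*-x86_64-gp2"]
  }
}

# Then reference it from the instance instead of a hardcoded AMI ID:
# ami = data.aws_ami.amazon_linux_2.id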

Heh. Now it failed because my Key Pair was also created in US-West-2. I went in and deleted it and re-created it in US-East-1, so now we should be good.
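
Key pairs are region-scoped too, which is why re-creating it in US-East-1 was necessary. If you'd rather not manage that through the console at all, Terraform can register an existing local public key as the key pair for you; a minimal sketch, assuming your public key lives at the usual default path:

resource "aws_key_pair" "jenkins" {
  key_name   = "matts-aws-jenkins-key-pair"
  public_key = file("~/.ssh/id_rsa.pub") # assumed path to your local SSH public key
}

The instance's key_name argument can then reference aws_key_pair.jenkins.key_name instead of a hardcoded string.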

And…good to go now! My terraform output is quite verbose because I have trace and debugging enabled.

Success!

That was pretty cool and definitely a lot like magic! Now I’ll head to AWS and get the public IP of my newly provisioned EC2 Instance and see if we can connect to it.

Jenkins server running

Let’s look up the public IP address and connect to it.

Public ip for Jenkins server
Jenkins is running

Outstanding! To recap, in our Terraform project, we automated the setup of a Jenkins CI/CD server on AWS. We started by specifying the required Terraform version and AWS provider, followed by creating a VPC with CIDR block 10.0.0.0/16. We then set up a Jenkins security group and two subnets: one public (10.0.1.0/24) and one private (10.0.2.0/24). We launched an EC2 instance within the public subnet, bootstrapped it with a script to install and start Jenkins, and associated it with the Jenkins security group. We configured the security group to allow SSH (port 22) and HTTP (port 8080) traffic from my IP address. Finally, we created a private S3 bucket for Jenkins artifacts, reachable from the private subnet through a VPC endpoint, along with an Internet gateway and a route table to enable external access to the Jenkins server.

Now, it’s time to burn it all down with one command, the command that you don’t want to run unless you’re absolutely certain you want to decommission your infrastructure: terraform destroy. So without further ado, let’s do it!

Terraform destroy complete

Now let’s check AWS real quick, just to be sure.

Nuked from orbit by terraform destroy

That was a fun project! As you can see, you can wield some pretty incredible capabilities with Terraform. I can tell that Terraform is going to become one of my favorite tools. I’ll push my code to GitHub tomorrow and will update the code link. And now, I’m officially going to get some shut-eye.

Feel free to connect with me on LinkedIn here. And if you liked my article, give a couple of claps as I’m sure that helps it get visibility to help other folks.
