
Creating AWS EKS Cluster using Terraform
5 September, 2023
Introduction:
Amazon Elastic Kubernetes Service (EKS) simplifies the process of deploying, managing, and scaling containerized applications using Kubernetes. When combined with Terraform, an infrastructure-as-code tool, you can automate the provisioning of an EKS cluster and related resources, making it easy to maintain and reproduce infrastructure configurations. In this blog, we'll guide you through the process of creating an Amazon EKS cluster using Terraform.
Prerequisites:
- An AWS account with appropriate permissions.
- AWS CLI installed and configured.
- Terraform installed on your local machine.
- Basic familiarity with Kubernetes and Terraform concepts.
Step-by-step guide:
Set Up Your Terraform Configuration:
Create a directory structure for your Terraform configuration. The directory should look like the following:
P1-GITOPS-TERRAFROM/
├── modules/
│   ├── EKS/
│   │   ├── main.tf
│   │   ├── variables.tf
│   │   └── outputs.tf
│   ├── EKS-node-groups/
│   │   ├── main.tf
│   │   └── variables.tf
│   ├── IAM/
│   │   ├── main.tf
│   │   └── outputs.tf
│   ├── nat-gateways/
│   │   ├── main.tf
│   │   └── variables.tf
│   └── vpc/
│       ├── main.tf
│       ├── variables.tf
│       └── outputs.tf
└── WebApp-EKS/
    ├── main.tf
    ├── provider.tf
    ├── variables.tf
    └── terraform.tfvars
Breakdown of the Directory Structure:
Inside the WebApp-EKS/ Directory:
WebApp-EKS/
├── main.tf
├── provider.tf
├── variables.tf
└── terraform.tfvars
main.tf
: The primary Terraform configuration file. It wires together the modules that deploy our web application's infrastructure on the EKS cluster, defining the core resources and settings.
# Creating a VPC using the "vpc" module
module "vpc" {
  source       = "../modules/vpc" # Using the "vpc" module from the specified path
  REGION       = var.REGION
  PROJECT_NAME = var.PROJECT_NAME
}
# Managing IAM roles and policies using the "iam" module
module "iam" {
  source = "../modules/IAM"
}
# Creating NAT gateways using the "nat-gateways" module
module "nat_gateways" {
  source               = "../modules/nat-gateways"
  VPC_ID               = module.vpc.VPC_ID
  internet_gateway_id  = module.vpc.internet_gateway_id
  PUBLIC_SUBNET_1A_ID  = module.vpc.PUBLIC_SUBNET_1A_ID
  PUBLIC_SUBNET_2B_ID  = module.vpc.PUBLIC_SUBNET_2B_ID
  PRIVATE_SUBNET_3A_ID = module.vpc.PRIVATE_SUBNET_3A_ID
  PRIVATE_SUBNET_4B_ID = module.vpc.PRIVATE_SUBNET_4B_ID
}
# Creating an Amazon EKS cluster using the "EKS" module
module "EKS" {
  source               = "../modules/EKS"
  EKS_CLUSTER_ROLE_ARN = module.iam.EKS_CLUSTER_ROLE_ARN
  PUBLIC_SUBNET_1A_ID  = module.vpc.PUBLIC_SUBNET_1A_ID
  PUBLIC_SUBNET_2B_ID  = module.vpc.PUBLIC_SUBNET_2B_ID
  PRIVATE_SUBNET_3A_ID = module.vpc.PRIVATE_SUBNET_3A_ID
  PRIVATE_SUBNET_4B_ID = module.vpc.PRIVATE_SUBNET_4B_ID
}
# Creating EKS node groups using the "EKS-node-groups" module
module "EKS-Node-groups" {
  source               = "../modules/EKS-node-groups"
  EKS_CLUSTER_NAME     = module.EKS.EKS_CLUSTER_NAME
  NODE_GROUP_ROLE_ARN  = module.iam.NODE_GROUP_ROLE_ARN
  PRIVATE_SUBNET_3A_ID = module.vpc.PRIVATE_SUBNET_3A_ID
  PRIVATE_SUBNET_4B_ID = module.vpc.PRIVATE_SUBNET_4B_ID
}
variables.tf
: Holds input variables, enabling customization of module behavior.
variable "PROJECT_NAME" {}
variable "REGION" {}
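The empty-body declarations above work, but Terraform also lets each variable carry a type and a description, which makes the root module self-documenting and catches wrong value types at plan time. A sketch (an optional refinement, not part of the original code):

```hcl
variable "PROJECT_NAME" {
  type        = string
  description = "Name prefix applied to the VPC and related resources"
}

variable "REGION" {
  type        = string
  description = "AWS region to deploy into, e.g. us-east-1"
}
```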
provider.tf
: Specifies provider configurations for external services, ensuring proper interaction.
provider "aws" {
  region = var.REGION
}
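It is also common practice to pin the Terraform and AWS provider versions alongside the provider block, so a later provider release cannot silently change behavior. A sketch (the version numbers here are illustrative, not from the original project):

```hcl
terraform {
  required_version = ">= 1.3"
  required_providers {
    aws = {
      source  = "hashicorp/aws"
      version = "~> 4.0" # illustrative pin; match the version the project was tested with
    }
  }
}
```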
terraform.tfvars
: Provides specific values for module variables, facilitating easy configuration changes.
PROJECT_NAME = "Web-App-EKS"
REGION = "us-east-1"
Inside the modules/ Directory:
P1-GITOPS-TERRAFROM/
└── modules/
    ├── EKS/
    │   ├── main.tf
    │   ├── variables.tf
    │   └── outputs.tf
    ├── EKS-node-groups/
    │   ├── main.tf
    │   └── variables.tf
    ├── IAM/
    │   ├── main.tf
    │   └── outputs.tf
    ├── nat-gateways/
    │   ├── main.tf
    │   └── variables.tf
    └── vpc/
        ├── main.tf
        ├── variables.tf
        └── outputs.tf
EKS module:
main.tf
: This file orchestrates the creation of an Amazon EKS cluster. It sets up crucial parameters like the cluster's name, version, and networking configuration. By defining the IAM role and subnet placement, it shapes the core foundation of the EKS environment.
# Create the EKS cluster
resource "aws_eks_cluster" "eks" {
  # Name of the cluster
  name = "eks"
  # ARN of the IAM role that EKS will assume
  role_arn = var.EKS_CLUSTER_ROLE_ARN
  # Version of EKS to use
  version = "1.27"
  # VPC configuration
  vpc_config {
    # Whether private access to the EKS API server is enabled
    endpoint_private_access = false
    # Whether public access to the EKS API server is enabled
    endpoint_public_access = true
    # List of subnet IDs where EKS resources will be placed
    subnet_ids = [
      var.PUBLIC_SUBNET_1A_ID,
      var.PUBLIC_SUBNET_2B_ID,
      var.PRIVATE_SUBNET_3A_ID,
      var.PRIVATE_SUBNET_4B_ID
    ]
  }
}
variables.tf
: Holds adjustable variables for EKS module behavior.
variable "EKS_CLUSTER_ROLE_ARN" {}
variable "PUBLIC_SUBNET_1A_ID" {}
variable "PUBLIC_SUBNET_2B_ID" {}
variable "PRIVATE_SUBNET_3A_ID" {}
variable "PRIVATE_SUBNET_4B_ID" {}
outputs.tf
: This file exposes essential data generated by the EKS module, facilitating integration with other parts of the project.
output "EKS_CLUSTER_NAME" {
  value = aws_eks_cluster.eks.name
}
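If other modules or tooling later need to reach the cluster directly, the module could also export the API endpoint and certificate data. These extra outputs are an optional addition, not part of the original module:

```hcl
# Optional extra outputs for consumers that build their own kubeconfig
output "EKS_CLUSTER_ENDPOINT" {
  value = aws_eks_cluster.eks.endpoint
}
output "EKS_CLUSTER_CA" {
  value = aws_eks_cluster.eks.certificate_authority[0].data
}
```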
EKS-node-groups module:
main.tf
: This file manages the setup of an EKS node group. It specifies crucial attributes such as the group's name, associated IAM role, scaling details, and instance specifics, establishing a flexible and scalable worker-node environment within EKS.
# Create the EKS managed node group
resource "aws_eks_node_group" "nodes_general" {
  # Cluster to attach the node group to
  cluster_name = var.EKS_CLUSTER_NAME
  # Name of the node group
  node_group_name = "nodes-general"
  # ARN of the IAM role the worker nodes will assume
  node_role_arn = var.NODE_GROUP_ROLE_ARN
  # Place the worker nodes in the private subnets
  subnet_ids = [
    var.PRIVATE_SUBNET_3A_ID,
    var.PRIVATE_SUBNET_4B_ID
  ]
  # Scaling configuration (example values; tune for your workload)
  scaling_config {
    desired_size = 2
    max_size     = 3
    min_size     = 1
  }
  # Instance details (example values; tune for your workload)
  capacity_type  = "ON_DEMAND"
  instance_types = ["t3.medium"]
}
variables.tf
: Holds adjustable variables for EKS-node-groups module behavior.
variable "EKS_CLUSTER_NAME" {}
variable "NODE_GROUP_ROLE_ARN" {}
variable "PRIVATE_SUBNET_3A_ID" {}
variable "PRIVATE_SUBNET_4B_ID" {}
IAM module:
main.tf
: This file creates IAM roles for the Amazon EKS cluster and its worker nodes, enabling secure access and permissions. It attaches policies for EKS cluster management, CNI networking, read-only ECR access, and worker-node interactions.
# Create an IAM role for the EKS cluster to assume
resource "aws_iam_role" "eks_cluster" {
  name = "eks-cluster"
  # Define the policy that allows the EKS service to assume this role
  assume_role_policy = jsonencode({
    Version = "2012-10-17",
    Statement = [
      {
        Effect = "Allow",
        Principal = {
          Service = "eks.amazonaws.com"
        },
        Action = "sts:AssumeRole"
      }
    ]
  })
}
# Attach the AmazonEKSClusterPolicy to the IAM role
resource "aws_iam_role_policy_attachment" "amazon_eks_cluster_policy" {
  policy_arn = "arn:aws:iam::aws:policy/AmazonEKSClusterPolicy"
  role       = aws_iam_role.eks_cluster.name
}
# Create an IAM role for general EKS worker nodes
resource "aws_iam_role" "nodes_group_role" {
  name = "eks-node-group-general_role"
  # Define the policy that allows EC2 instances to assume this role
  assume_role_policy = jsonencode({
    Version = "2012-10-17",
    Statement = [
      {
        Effect = "Allow",
        Principal = {
          Service = "ec2.amazonaws.com"
        },
        Action = "sts:AssumeRole"
      }
    ]
  })
}
# Attach AmazonEKS_CNI_Policy to the IAM role for CNI networking
resource "aws_iam_role_policy_attachment" "amazon_eks_cni_policy_general" {
  policy_arn = "arn:aws:iam::aws:policy/AmazonEKS_CNI_Policy"
  role       = aws_iam_role.nodes_group_role.name
}
# Attach AmazonEC2ContainerRegistryReadOnly policy to the IAM role for read-only ECR access
resource "aws_iam_role_policy_attachment" "amazon_ec2_container_registry_read_only" {
  policy_arn = "arn:aws:iam::aws:policy/AmazonEC2ContainerRegistryReadOnly"
  role       = aws_iam_role.nodes_group_role.name
}
# Attach AmazonEKSWorkerNodePolicy to the IAM role for general EKS worker nodes
resource "aws_iam_role_policy_attachment" "amazon_eks_worker_node_policy_general" {
  policy_arn = "arn:aws:iam::aws:policy/AmazonEKSWorkerNodePolicy"
  role       = aws_iam_role.nodes_group_role.name
}
outputs.tf
: Exposes the two role ARNs for use by the EKS and EKS-node-groups modules.
output "EKS_CLUSTER_ROLE_ARN" {
  value = aws_iam_role.eks_cluster.arn
}
output "NODE_GROUP_ROLE_ARN" {
  value = aws_iam_role.nodes_group_role.arn
}
nat-gateways module:
main.tf
: This file sets up NAT Gateways in the VPC. It creates Elastic IPs, associates them with the public subnets, and configures route tables for public and private access. The NAT Gateways give resources in the private subnets secure outbound internet access.
# Create Elastic IP resources for the NAT Gateways
resource "aws_eip" "nat1" {
  # These Elastic IPs depend on the Internet Gateway being created first
  depends_on = [var.internet_gateway_id]
}
resource "aws_eip" "nat2" {
  depends_on = [var.internet_gateway_id]
}
# Create NAT Gateway resources
resource "aws_nat_gateway" "gw1" {
  # Allocate the Elastic IP created earlier
  allocation_id = aws_eip.nat1.id
  # Associate this NAT Gateway with the public subnet (public_subnet_1a)
  subnet_id = var.PUBLIC_SUBNET_1A_ID
  # Assign a tag for easy identification
  tags = {
    Name = "NAT 1"
  }
}
resource "aws_nat_gateway" "gw2" {
  allocation_id = aws_eip.nat2.id
  subnet_id     = var.PUBLIC_SUBNET_2B_ID
  tags = {
    Name = "NAT 2"
  }
}
# Create the public route table
resource "aws_route_table" "public" {
  # Associate this route table with the VPC created earlier (P1_vpc)
  vpc_id = var.VPC_ID
  # Create a default route through the Internet Gateway
  route {
    cidr_block = "0.0.0.0/0"
    gateway_id = var.internet_gateway_id
  }
  # Assign a tag for easy identification
  tags = {
    Name = "public"
  }
}
# Create private route tables with NAT Gateway routes
resource "aws_route_table" "private1" {
  vpc_id = var.VPC_ID
  # Create a default route through NAT Gateway gw1
  route {
    cidr_block     = "0.0.0.0/0"
    nat_gateway_id = aws_nat_gateway.gw1.id
  }
  tags = {
    Name = "private"
  }
}
resource "aws_route_table" "private2" {
  vpc_id = var.VPC_ID
  # Create a default route through NAT Gateway gw2
  route {
    cidr_block     = "0.0.0.0/0"
    nat_gateway_id = aws_nat_gateway.gw2.id
  }
  tags = {
    Name = "private"
  }
}
# Associate public and private route tables with their respective subnets
resource "aws_route_table_association" "public1" {
  # Associate the public route table with public_subnet_1a
  subnet_id      = var.PUBLIC_SUBNET_1A_ID
  route_table_id = aws_route_table.public.id
}
resource "aws_route_table_association" "public2" {
  # Associate the public route table with public_subnet_2b
  subnet_id      = var.PUBLIC_SUBNET_2B_ID
  route_table_id = aws_route_table.public.id
}
resource "aws_route_table_association" "private1" {
  # Associate the first private route table with private_subnet_3a
  subnet_id      = var.PRIVATE_SUBNET_3A_ID
  route_table_id = aws_route_table.private1.id
}
resource "aws_route_table_association" "private2" {
  # Associate the second private route table with private_subnet_4b
  subnet_id      = var.PRIVATE_SUBNET_4B_ID
  route_table_id = aws_route_table.private2.id
}
variables.tf
: Declares the inputs the nat-gateways module expects.
variable "VPC_ID" {}
variable "internet_gateway_id" {}
variable "PUBLIC_SUBNET_1A_ID" {}
variable "PUBLIC_SUBNET_2B_ID" {}
variable "PRIVATE_SUBNET_3A_ID" {}
variable "PRIVATE_SUBNET_4B_ID" {}
VPC module:
main.tf
: In this file, a Virtual Private Cloud (VPC) is created in AWS. It includes an internet gateway for external access and defines public and private subnets across two availability zones. Tags indicate each subnet's role within the Kubernetes cluster.
# VPC total: 65,536 IPs; each /24 subnet has 256 IPs
# Create the VPC and Internet Gateway
resource "aws_vpc" "P1_vpc" {
  cidr_block           = "10.0.0.0/16"
  enable_dns_hostnames = true
  tags = {
    Name        = "${var.PROJECT_NAME}-vpc"
    Environment = "prod"
  }
}
resource "aws_internet_gateway" "internet_gateway" {
  vpc_id = aws_vpc.P1_vpc.id
  tags = {
    Name        = "${var.PROJECT_NAME}-igw"
    Environment = "prod"
  }
}
# Fetch availability zones
data "aws_availability_zones" "available_zones" {}
# Create public subnets
resource "aws_subnet" "public_subnet_1a" {
  vpc_id                  = aws_vpc.P1_vpc.id
  cidr_block              = "10.0.0.0/24"
  availability_zone       = data.aws_availability_zones.available_zones.names[0]
  map_public_ip_on_launch = true
  tags = {
    Name                        = "public_subnet_1a"
    "kubernetes.io/cluster/eks" = "shared"
    "kubernetes.io/role/elb"    = 1
  }
}
resource "aws_subnet" "public_subnet_2b" {
  vpc_id                  = aws_vpc.P1_vpc.id
  cidr_block              = "10.0.1.0/24"
  availability_zone       = data.aws_availability_zones.available_zones.names[1]
  map_public_ip_on_launch = true
  tags = {
    Name                        = "public_subnet_2b"
    "kubernetes.io/cluster/eks" = "shared"
    "kubernetes.io/role/elb"    = 1
  }
}
# Create private subnets
resource "aws_subnet" "private_subnet_3a" {
  vpc_id            = aws_vpc.P1_vpc.id
  cidr_block        = "10.0.2.0/24"
  availability_zone = data.aws_availability_zones.available_zones.names[0]
  tags = {
    Name                              = "private_subnet_3a"
    "kubernetes.io/cluster/eks"       = "shared"
    "kubernetes.io/role/internal-elb" = 1
  }
}
resource "aws_subnet" "private_subnet_4b" {
  vpc_id            = aws_vpc.P1_vpc.id
  cidr_block        = "10.0.3.0/24"
  availability_zone = data.aws_availability_zones.available_zones.names[1]
  tags = {
    Name                              = "private_subnet_4b"
    "kubernetes.io/cluster/eks"       = "shared"
    "kubernetes.io/role/internal-elb" = 1
  }
}
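The CIDR arithmetic noted at the top of this file is easy to double-check with Python's ipaddress module; the four /24 blocks below are exactly the two public and two private subnets the module carves out of 10.0.0.0/16:

```python
import ipaddress

# The VPC's address block
vpc = ipaddress.ip_network("10.0.0.0/16")
print(vpc.num_addresses)  # 65536 addresses in the VPC

# Size of one /24 subnet
subnet = ipaddress.ip_network("10.0.0.0/24")
print(subnet.num_addresses)  # 256 addresses per /24 subnet

# First four /24 blocks inside the VPC: the two public and two private subnets
subnets = [str(s) for s in list(vpc.subnets(new_prefix=24))[:4]]
print(subnets)  # ['10.0.0.0/24', '10.0.1.0/24', '10.0.2.0/24', '10.0.3.0/24']
```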
variables.tf
: Declares the inputs the vpc module expects.
variable "PROJECT_NAME" {}
variable "REGION" {}
outputs.tf
: Exposes the VPC, internet gateway, and subnet IDs for the other modules.
# Output for VPC ID
output "VPC_ID" {
  value = aws_vpc.P1_vpc.id
}
# Output for Internet Gateway ID
output "internet_gateway_id" {
  value = aws_internet_gateway.internet_gateway.id
}
# Output for Public Subnet 1A ID
output "PUBLIC_SUBNET_1A_ID" {
  value = aws_subnet.public_subnet_1a.id
}
# Output for Public Subnet 2B ID
output "PUBLIC_SUBNET_2B_ID" {
  value = aws_subnet.public_subnet_2b.id
}
# Output for Private Subnet 3A ID
output "PRIVATE_SUBNET_3A_ID" {
  value = aws_subnet.private_subnet_3a.id
}
# Output for Private Subnet 4B ID
output "PRIVATE_SUBNET_4B_ID" {
  value = aws_subnet.private_subnet_4b.id
}
Creating the infrastructure
Initialize the Configuration:
Run the following command to initialize the Terraform configuration:
terraform init
Review Planned Changes:
Run the following command to see what changes Terraform will make:
terraform plan
Apply Changes:
To create the resources, run the following command:
terraform apply
Terraform will prompt you to confirm the changes. Type yes and press Enter.
Verifying Resources
Run the following command to update your kubeconfig and log in to the EKS cluster:
aws eks --region us-east-1 update-kubeconfig --name eks
List the nodes to verify that you can run commands with kubectl:
kubectl get nodes
Conclusion:
Congratulations π! You have successfully created an AWS EKS cluster using Terraform. Now, proceed to blog 2 to continue with the remaining tasks for this project. Thank you for reading this blog.
Next read:
- Creating AWS EKS Cluster using Terraform (current blog)
- Setting Up ArgoCD with AWS EKS for GitOps Implementation
- Setting Up Jenkins CI/CD as part of GitOps Implementation