Setting Up a Local Kubernetes Development Environment with VirtualBox and Vagrant for AWS EKS

Jimin
7 min read · Apr 20, 2024


Introduction

Kubernetes has revolutionized the way we deploy and manage containerized applications. However, setting up a local Kubernetes development environment can be daunting. In this blog post, we’ll guide you through setting up a local Kubernetes development environment using VirtualBox managed by Vagrant.

Prerequisites

Before we begin, make sure you have the following prerequisites installed on your machine:

  • VirtualBox: A free and open-source virtualization platform for running virtual machines.
  • Vagrant: A tool for building and managing development environments.

For more information about VirtualBox and Vagrant, you can refer to my previous blog post on the topic.

Step 1: Installing VirtualBox and Vagrant

  1. Download and install VirtualBox from the official website.
  2. Download and install Vagrant from the official website.

Step 2: Setting Up the Virtual Machine

  1. Open a terminal and create a directory for your Vagrant project.
  2. Initialize a new Vagrant environment, which generates a Vagrantfile:
mkdir kubernetes-dev
cd kubernetes-dev
vagrant init ubuntu/focal64

Step 3: Provisioning the Virtual Machine and Installing Kubernetes Tools

In this step, we’re configuring the Vagrantfile to automatically provision the virtual machine with the necessary tools and dependencies for Kubernetes development. The script will update package lists, install prerequisites, download and install kubectl, eksctl, AWS CLI, and Helm along with their dependencies.

  • Open the Vagrantfile with your preferred text editor (e.g., nano, vi, or Visual Studio Code):
vi Vagrantfile
  • Replace the contents of the Vagrantfile with the provided code snippet:
# -*- mode: ruby -*-
# vi: set ft=ruby :

Vagrant.configure("2") do |config|
  # Define VM configuration
  config.vm.box = "ubuntu/focal64"
  config.vm.network "private_network", ip: "192.168.56.7"
  config.vm.provider "virtualbox" do |vb|
    vb.memory = "2048"
  end

  # Provisioning script to install Kubernetes tools and dependencies
  config.vm.provision "shell", inline: <<-SHELL
    # Update package lists
    sudo apt-get update

    # Install prerequisites (unzip is needed later for the AWS CLI archive)
    sudo apt-get install -y apt-transport-https ca-certificates curl unzip

    # Download the kubectl binary
    curl -LO "https://dl.k8s.io/release/$(curl -L -s https://dl.k8s.io/release/stable.txt)/bin/linux/amd64/kubectl"

    # Validate the downloaded binary (optional)
    # You can skip this step if you don't want to validate the checksum
    curl -LO "https://dl.k8s.io/release/$(curl -L -s https://dl.k8s.io/release/stable.txt)/bin/linux/amd64/kubectl.sha256"
    echo "$(cat kubectl.sha256) kubectl" | sha256sum --check

    # Install kubectl and check that the installation succeeded
    # (the check must come right after the install; if cleanup ran first,
    # $? would reflect the rm command instead)
    sudo install -o root -g root -m 0755 kubectl /usr/local/bin/kubectl
    if [ $? -eq 0 ]; then
      echo "kubectl installation completed successfully."
    else
      echo "Error: kubectl installation failed. Please check the logs and try again."
      exit 1
    fi

    # Clean up downloaded files
    rm kubectl kubectl.sha256

    # Download and install eksctl
    curl --silent --location "https://github.com/weaveworks/eksctl/releases/latest/download/eksctl_$(uname -s)_amd64.tar.gz" | tar xz -C /tmp
    sudo mv /tmp/eksctl /usr/local/bin

    # Download and install AWS CLI version 2
    curl "https://awscli.amazonaws.com/awscli-exe-linux-x86_64.zip" -o "awscliv2.zip"
    unzip awscliv2.zip
    sudo ./aws/install

    # Check if AWS CLI installation was successful
    if [ $? -eq 0 ]; then
      echo "AWS CLI version 2 installation completed successfully."
    else
      echo "Error: AWS CLI version 2 installation failed. Please check the logs and try again."
      exit 1
    fi

    # Install Helm prerequisites and add the Helm apt repository
    sudo apt-get install -y apt-transport-https gnupg software-properties-common
    curl -fsSL https://baltocdn.com/helm/signing.asc | sudo apt-key add -
    sudo add-apt-repository "deb [arch=amd64] https://baltocdn.com/helm/stable/debian/ all main"

    # Update package lists again and install Helm
    sudo apt-get update
    sudo apt-get install -y helm

    # Check if the final installation was successful
    if [ $? -eq 0 ]; then
      echo "All installations completed successfully."
    else
      echo "Error: One or more installations failed. Please check the logs and try again."
      exit 1
    fi
  SHELL
end

Save the file and exit the text editor.

  • Now that we have configured the Vagrantfile, let’s bring up the virtual machine by running the following command in your terminal within the directory where your Vagrantfile is located:
vagrant up
  • Once the virtual machine is up and running, SSH into it using the following command:
vagrant ssh
  • Now that you’re logged into the virtual machine, let’s verify that all the necessary tools and dependencies for Kubernetes development are installed. You can do this by running the following commands:
kubectl version --client
eksctl version
aws --version
helm version --short

Each command should return the respective version information without any errors. If you encounter any errors or if the commands are not recognized, double-check the provisioning script in your Vagrantfile to ensure that all installations were successful.
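If you prefer a single scripted check over running the four commands by hand, a small helper can report which tools are missing. This is a minimal sketch; `check_tools` is our own name, and inside the VM you would call it as `check_tools kubectl eksctl aws helm`:

```shell
#!/bin/sh
# check_tools: verify that each named command is on PATH; print a line
# per tool and return non-zero if any are missing.
check_tools() {
  missing=0
  for tool in "$@"; do
    if command -v "$tool" >/dev/null 2>&1; then
      echo "ok: $tool"
    else
      echo "missing: $tool"
      missing=1
    fi
  done
  return $missing
}

# Inside the VM you would run:
# check_tools kubectl eksctl aws helm
```

A non-zero exit status makes the helper usable in provisioning scripts as well, e.g. `check_tools kubectl || exit 1`.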

Step 4: Configuring AWS CLI

Using the root user’s credentials for everyday tasks is generally discouraged for security reasons, but to keep this demonstration simple we’ll proceed with them here. In a real-world scenario, root credentials should be avoided in favor of a dedicated IAM user.

Create Access Key

  • In the AWS Management Console, click your account name at the top right and select Security credentials.
  • In Access keys, click Create access key.
  • You will see a warning, Continue to create access key?, asking you to acknowledge that creating a root access key is not a best practice. If you still want to proceed, check the acknowledgment box and click Create access key.
  • Note down the Access key and Secret access key.

Configure AWS CLI on Virtual Machine

Now that we have obtained the access key and secret access key, let’s configure the AWS CLI on your virtual machine.

  • If you are not in the virtual machine, SSH into it using the following command:
vagrant ssh
  • Run the following command to start the configuration process:
aws configure
  • You will be prompted to enter the following information:
    - AWS Access Key ID: Enter the Access Key ID provided earlier.
    - AWS Secret Access Key: Enter the Secret Access Key provided earlier.
    - Default region name: Enter the AWS region you want to use (e.g., us-west-2).
    - Default output format: You can leave this blank or enter a default output format (e.g., json).

After entering the required information, the AWS CLI will be configured on your virtual machine.
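If you would rather configure the CLI non-interactively (for example, from a provisioning script), you can write the two files that `aws configure` manages directly. This is a sketch for the default profile; the key values and the region are placeholders to substitute with your own:

```shell
#!/bin/sh
# Non-interactive alternative to `aws configure`: write the credentials
# and config files the AWS CLI reads. AWS_DIR defaults to ~/.aws.
AWS_DIR="${AWS_DIR:-$HOME/.aws}"
mkdir -p "$AWS_DIR"

# Placeholder key values -- replace with your actual access key pair
cat > "$AWS_DIR/credentials" <<'EOF'
[default]
aws_access_key_id = YOUR_ACCESS_KEY_ID
aws_secret_access_key = YOUR_SECRET_ACCESS_KEY
EOF

cat > "$AWS_DIR/config" <<'EOF'
[default]
region = us-west-2
output = json
EOF

# Credentials should not be world-readable
chmod 600 "$AWS_DIR/credentials"
```

A quick sanity check afterwards is `aws sts get-caller-identity`, which prints the account ID and ARN that the configured credentials resolve to.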

Testing AWS CLI Configuration

  • To ensure that AWS CLI is configured correctly, you can run a simple command to list the available AWS regions:
aws ec2 describe-regions

If the configuration is correct, you should see a list of AWS regions displayed in the output.

Step 5: Creating an Amazon EKS Cluster

Now that you’ve set up your local development environment and configured the AWS CLI, you’re ready to create an Amazon EKS cluster. Follow these steps to create your EKS cluster:

Define Cluster Configuration

Decide on the configuration for your EKS cluster, including the cluster name, AWS region, node instance type, and desired number of nodes. You can customize these parameters based on your requirements.

Create Cluster

Use the eksctl command-line tool to create your EKS cluster. Open a terminal and run the following command. If you are not in the virtual machine, don’t forget to SSH into it by typing vagrant ssh:

eksctl create cluster --name my-eks-cluster --region us-east-2 --node-type t2.medium --nodes 3

Replace my-eks-cluster with your desired cluster name and us-east-2 with your preferred AWS region. You can also adjust the node type and number of nodes as needed.
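The same parameters can also be captured in an eksctl config file, which is easier to keep under version control than a long flag list. A minimal sketch, using the same illustrative names and sizes as the command above:

```yaml
# cluster.yaml: equivalent of the flag-based command above
# (cluster name, region, instance type, and node count are illustrative)
apiVersion: eksctl.io/v1alpha5
kind: ClusterConfig

metadata:
  name: my-eks-cluster
  region: us-east-2

nodeGroups:
  - name: workers
    instanceType: t2.medium
    desiredCapacity: 3
```

You would then run `eksctl create cluster -f cluster.yaml`, and later `eksctl delete cluster -f cluster.yaml` to tear everything down with the same file.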

Note: Remember that EKS clusters and their associated EC2 instances incur costs, so it’s essential to remove these resources promptly after completing your demonstration to avoid unnecessary charges.

Step 6: Exploring Helm

Now that you have set up your local Kubernetes development environment and configured AWS CLI, you can explore Helm, a package manager for Kubernetes, which helps you manage Kubernetes applications.

Understanding Helm

Helm uses charts, which are packages of pre-configured Kubernetes resources, to deploy applications. These charts can be customized to suit your application’s requirements.

Adding Helm Repositories

Helm charts are stored in repositories. You can add repositories to Helm using the helm repo add command. For example:

helm repo add stable https://charts.helm.sh/stable
helm repo add bitnami https://charts.bitnami.com/bitnami
helm repo list # List all added Helm repositories
helm repo update # Update the Helm repositories to ensure you have the latest charts

Searching for Charts

You can search for available charts using the helm search repo command. For instance:

helm search repo nginx

Installing Charts

Once you’ve found a chart you want to use, you can install it using the helm install command. For example:

helm install my-nginx bitnami/nginx
helm ls # List all installed Helm releases
helm status my-nginx # Get the status of the Helm release named "my-nginx"

After installation, you can check the resources created by Helm using:

kubectl get all

To ensure Nginx is running, you can check the pods:

kubectl get pods # Check if Nginx pods are running
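Beyond eyeballing the STATUS column, you can script this check. The helper below counts pods reported as Running; `count_running` is our own name, and it reads `kubectl get pods` output from stdin so it can also be exercised with sample text:

```shell
#!/bin/sh
# count_running: count pods whose STATUS column is "Running".
# Reads `kubectl get pods` output on stdin and skips the header row.
count_running() {
  awk 'NR > 1 && $3 == "Running" { n++ } END { print n + 0 }'
}

# Inside the VM you would pipe the live pod list into it:
# kubectl get pods | count_running
```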

Step 7: Cleaning Up

Uninstalling Helm

If you want to uninstall a Helm release, you can use the helm uninstall command followed by the release name:

helm uninstall my-nginx

Removing the EKS Cluster

To remove the EKS cluster and associated resources, you can use the eksctl delete cluster command. Make sure to replace my-eks-cluster with the name of your cluster:

eksctl delete cluster --name my-eks-cluster

Written by Jimin

DevOps engineer and tech enthusiast. Sharing tech insights to simplify the complex. Let's connect on LinkedIn! https://www.linkedin.com/in/byun-jimin/
