Creating a Kubernetes Cluster in AWS using Kops

Kubernetes is an open source container orchestration and management tool originally created by Google. It is very useful given the rise of containers and the need for continuous development, deployment, and maintenance of software.

Kops (Kubernetes Operations) is an easy-to-use, open source CLI tool for creating, destroying, and upgrading Kubernetes clusters, along with their underlying infrastructure, in a production environment. It is supported on AWS. There are other ways to deploy a Kubernetes cluster, but Kops manages the entire process, infrastructure included, and is reliable and easy to work with.

One of the benefits of Kubernetes is that it is self-healing: your cluster can recover from the failure of any of its components, such as a node. You can create your cluster in an existing or new VPC with either a public or private topology. You'll need to configure IAM permissions and an S3 bucket for the KOPS_STATE_STORE. The KOPS_STATE_STORE is an S3 bucket that stores your cluster configuration and state. It is the source of truth for your Kops-managed clusters; if anything happens to this bucket, you will be unable to manage your cluster using Kops. The IAM permissions allow Kops to make API calls and create your infrastructure for you.

Kops supports a variety of DNS configurations, including an internal option called a gossip-based cluster. With a gossip-based cluster, there is no need to configure DNS under an existing or new domain; instead, your cluster is created under a domain ending in k8s.local.

In this example, I will be using a pre-existing VPC with 3 private and 3 public subnets. The gossip-based cluster will be created with private topology without the use of a bastion.

To deploy a cluster in AWS, you will:

  • Create an EC2 Instance Role to be used by the Kops Instance.

  • Provision an EC2 Instance with the previous role to run Kops - this instance will be used to manage the cluster externally.

  • Use Kops CLI commands to deploy and manage a Kubernetes cluster.

The IAM permissions needed by Kops to function properly are:

  • AmazonEC2FullAccess

  • AmazonRoute53FullAccess

  • AmazonS3FullAccess

  • IAMFullAccess

  • AmazonVPCFullAccess

Create EC2 Instance Role

You can create the role either via the console or using the AWS CLI.

Steps for creating the IAM role using AWS CLI

Create a trust policy file in JSON format; the create-role command in the next step expects it to be saved as kops-trust-policy.json.

{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Principal": { "Service": "ec2.amazonaws.com" },
            "Action": "sts:AssumeRole"
        }
    ]
}

Create the role using the trust policy from the last step.

aws iam create-role --role-name KopsRole --assume-role-policy-document file://kops-trust-policy.json
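
You can confirm the role was created and inspect its trust policy with:

aws iam get-role --role-name KopsRole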

Attach the required policies to the role.

aws iam attach-role-policy --policy-arn arn:aws:iam::aws:policy/AmazonEC2FullAccess --role-name KopsRole
aws iam attach-role-policy --policy-arn arn:aws:iam::aws:policy/IAMFullAccess --role-name KopsRole
aws iam attach-role-policy --policy-arn arn:aws:iam::aws:policy/AmazonS3FullAccess --role-name KopsRole
aws iam attach-role-policy --policy-arn arn:aws:iam::aws:policy/AmazonRoute53FullAccess --role-name KopsRole
aws iam attach-role-policy --policy-arn arn:aws:iam::aws:policy/AmazonVPCFullAccess --role-name KopsRole
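
If you prefer, the same five attachments can be done in a short loop:

for policy in AmazonEC2FullAccess AmazonRoute53FullAccess AmazonS3FullAccess IAMFullAccess AmazonVPCFullAccess; do
    aws iam attach-role-policy --policy-arn arn:aws:iam::aws:policy/$policy --role-name KopsRole
done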

Create an instance profile.

aws iam create-instance-profile --instance-profile-name kops-profile

Add the role to the instance profile.

aws iam add-role-to-instance-profile --instance-profile-name kops-profile --role-name KopsRole
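
To verify the role and profile are wired up correctly before launching the instance:

aws iam get-instance-profile --instance-profile-name kops-profile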

Provision Kops EC2 Instance

Create an EC2 Instance

Be sure to attach the kops-profile instance profile created in the previous step to the instance.
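
If you prefer the CLI over the console, the launch might look like this sketch; the AMI ID, instance type, key pair, and subnet below are placeholders for your own values:

aws ec2 run-instances \
    --image-id <ami-id> \
    --instance-type t2.micro \
    --key-name <key-pair-name> \
    --subnet-id <subnet-id> \
    --iam-instance-profile Name=kops-profile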

Install and Configure Kops

curl -LO https://github.com/kubernetes/kops/releases/download/$(curl -s https://api.github.com/repos/kubernetes/kops/releases/latest | grep tag_name | cut -d '"' -f 4)/kops-linux-amd64
chmod +x kops-linux-amd64
sudo mv kops-linux-amd64 /usr/local/bin/kops
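
Confirm the binary is on your path:

kops version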

Install kubectl

kubectl is the command-line tool used to run commands against Kubernetes clusters.

curl -LO https://storage.googleapis.com/kubernetes-release/release/$(curl -s https://storage.googleapis.com/kubernetes-release/release/stable.txt)/bin/linux/amd64/kubectl
chmod +x kubectl
sudo mv kubectl /usr/local/bin/kubectl
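
As with kops, confirm the install:

kubectl version --client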

Create an ssh key

The SSH key is required; the create command will fail without it. The key is used to access the master and worker nodes from the Kops instance.

ssh-keygen -q -t rsa -N '' -f ~/.ssh/id_rsa
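
kops looks for the public half of this key at ~/.ssh/id_rsa.pub by default. If your key lives elsewhere, point the create command at it with this flag (the path is a placeholder):

--ssh-public-key <path-to-public-key>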

Create a Kubernetes Cluster

Now that all the tools needed to create and test the cluster are installed, we can continue with our cluster creation. Kubernetes can be deployed with high availability in mind, meaning you can deploy a cluster with multiple masters and worker nodes that are self-healing. In this example, I am creating a cluster with 1 master and 1 worker node.

Create an S3 Bucket

aws s3api create-bucket --bucket kops-bucket-k8s --region us-east-1
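
Because this bucket is the source of truth for your cluster state, it is worth enabling versioning on it (the kops documentation recommends this) so a bad state can be rolled back:

aws s3api put-bucket-versioning --bucket kops-bucket-k8s --versioning-configuration Status=Enabled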

Export the AWS Region

export AWS_REGION=us-east-1

Export the S3 Bucket

export KOPS_STATE_STORE=s3://<bucket-name>

Create your gossip-based cluster config

In this example, I would like to use my existing subnets rather than letting kops create new ones during deployment. To have kops use them, I need to specify which subnets I would like it to use. Once the cluster is created, kops will add the following tags to the subnets, which it uses for its configuration and management.

#Public Subnets:
"kubernetes.io/cluster/<cluster-name>" = "shared"
"kubernetes.io/role/elb" = "1"
"SubnetType" = "Utility"

#Private Subnets:
"kubernetes.io/cluster/<cluster-name>" = "shared"
"kubernetes.io/role/internal-elb" = "1"
"SubnetType" = "Private"

I am also following AWS best practices by using an instance profile instead of adding programmatic keys to the instance.

Preview your cluster creation configuration:

export AWS_REGION=us-east-1
export NODE_SIZE=${NODE_SIZE:-t2.medium}
export MASTER_SIZE=${MASTER_SIZE:-t2.medium}
export ZONES=${ZONES:-"us-east-1a,us-east-1b,us-east-1c"}
export MASTER_ZONES=${MASTER_ZONES:-"us-east-1a"}
export KOPS_STATE_STORE="s3://kops-bucket-k8s"
export MASTER_COUNT=${MASTER_COUNT:-"1"}
export NODE_COUNT=${NODE_COUNT:-"1"}
export VPCID=${VPCID:-"vpc-27b0b0a6e4skrb115"}
export TOPOLOGY=private
export PROVIDER=aws
export ELB=${ELB:-"internal"}
export LABELS=${LABELS:-"owner=PAnong,Project=K8S_Blog_Post"}
export SUBNET_IDS=${SUBNET_IDS:-"subnet-27bfkste542fdf82f,subnet-0bc9b753kwy6a535,subnet-020743vjhd8063837"}
export UTILITY_SUBNETS=${UTILITY_SUBNETS:-"subnet-00fd28n547487a1,subnet-0d8750cs64jeb2260,subnet-007eca82h7ea36e56"}
export UTILITY_CIDRS=${UTILITY_CIDRS:-"10.0.1.0/24,10.0.3.0/24,10.0.5.0/24"}
export NETWORK_CIDR=10.0.0.0/16
export SUBNET_CIDR=${SUBNET_CIDR:-"10.0.0.0/24,10.0.2.0/24,10.0.4.0/24"}
export NAME=panong.k8s.local
kops create cluster \
--cloud $PROVIDER \
--master-count=$MASTER_COUNT \
--node-count=$NODE_COUNT \
--dns $TOPOLOGY \
--zones $ZONES \
--api-loadbalancer-type $ELB \
--topology $TOPOLOGY \
--networking weave \
--network-cidr $NETWORK_CIDR \
--vpc ${VPCID} \
--node-size $NODE_SIZE \
--master-size $MASTER_SIZE \
--master-zones $MASTER_ZONES \
--cloud-labels "$LABELS" \
--authorization AlwaysAllow \
--subnets $SUBNET_IDS \
--utility-subnets $UTILITY_SUBNETS \
--name ${NAME}

Note: The number of zones listed in MASTER_ZONES must match the MASTER_COUNT parameter.

The cluster will be created on a Debian AMI unless you specify a different OS image by adding the following flag to your create command:

--image=<ami-id>

You can specify a kubernetes version to install by adding the following flag:

--kubernetes-version=<version>

Note: Kops only supports Kubernetes up to the equivalent minor release number. A minor version is the second digit in the release number; e.g. kops version 1.9.0 has a minor version of 9. However, kops is backward compatible, so even with kops version 1.9.0 you can create a Kubernetes cluster running version 1.8.0 or older.

To automate a cluster deployment, or to back up your configuration beyond the KOPS_STATE_STORE, you can export the Cluster and InstanceGroup configuration to a YAML file:

kops get $NAME -o yaml > <file_name.yaml>

I created this cluster with

--authorization AlwaysAllow

which is not the most secure option for a production environment. It is worth taking a look at RBAC instead, setting up the necessary permissions, and restricting access to the cluster via roles.
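
With kops, switching is a single flag on the create command:

--authorization RBAC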

If the configuration looks correct, you can proceed with cluster creation by running:

kops update cluster panong.k8s.local --yes

Running the kops update command provisions the AWS infrastructure to support your cluster, including a load balancer, master and node instances, storage, security groups, a key pair for master-node communication, IAM roles for the masters and nodes, and an autoscaling group.

It also tags the subnets and VPC you provided as shared, which prevents kops from deleting them, since they are more than likely shared with other resources.

If you want to make changes before creating the cluster, you can edit the configuration by running:

kops edit cluster <cluster-name>
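
Instance groups can be edited the same way; for example, to change the node count or instance type, edit the nodes group and apply with another update:

kops edit ig nodes --name <cluster-name>
kops update cluster <cluster-name> --yes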

After a few minutes, ensure the cluster is up and running:

kops validate cluster --name <cluster-name>

You should see your master and nodes, along with a message saying your cluster is ready:

Using cluster from kubectl context: panong.k8s.local

Validating cluster panong.k8s.local

INSTANCE GROUPS
NAME                    ROLE    MACHINETYPE     MIN     MAX     SUBNETS
master-us-east-1a       Master  t2.medium       1       1       us-east-1a
nodes                   Node    t2.medium       1       1       us-east-1a,us-east-1b,us-east-1c

NODE STATUS
NAME                            ROLE    READY
ip-10-0-0-72.ec2.internal       master  True
ip-10-0-4-24.ec2.internal       node    True

Your cluster panong.k8s.local is ready

You can further verify by running a kubectl command

kubectl get nodes
# You should see your master and nodes in a "Ready" state as well as the version of Kubernetes installed. E.g.
NAME                        STATUS    ROLES     AGE       VERSION
ip-10-0-0-72.ec2.internal   Ready     master    15m       v1.9.8
ip-10-0-4-24.ec2.internal   Ready     node      14m       v1.9.8
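
As an optional smoke test (the name and image here are just examples), schedule a pod and watch it start:

kubectl run nginx --image=nginx
kubectl get pods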

To tear down the cluster

kops delete cluster --name <cluster-name> --yes

With your cluster up and running, the real fun can begin!

 
