Ansible Role to Configure K8S Multi-Node Cluster over AWS Cloud

Krithika Sharma
Apr 26, 2021


In this article, we will:

🔅 Create an Ansible playbook to launch 3 AWS EC2 instances
🔅 Create an Ansible playbook to configure Docker on those instances
🔅 Create playbooks to configure the K8S master and K8S worker nodes on the above-created EC2 instances using kubeadm
🔅 Convert the playbooks into roles and upload those roles to Ansible Galaxy
🔅 Also, upload all the YAML code to your GitHub repository
🔅 Create a README.md document using Markdown, describing the task in a creative manner

Let’s start!!!

Ansible’s architecture is declarative: we only specify what we want done, and Ansible already knows how to do it.

Ansible can be used in 2 ways for configuration management:
1. Ad-hoc
2. PlayBook

For this, we require:
1. Python 3
2. Ansible (2.10.3)
3. boto

  1. You can use yum to install Python (for example, yum install python3).

2. Before we begin our task, let’s see how Ansible is installed:
We can install Ansible using pip, because Ansible is distributed as a Python package.
#pip3 install ansible

And install the dependent software sshpass:

#yum install sshpass

3. Boto is the Amazon Web Services (AWS) SDK for Python. It enables Python developers to create, configure, and manage AWS services, such as EC2 and S3. Boto provides an easy-to-use, object-oriented API, as well as low-level access to AWS services.

Let’s now, understand the basic terms:

· Controller Node: The system in which Ansible is installed is known as the controller node.

· Managed Node/ Target Node: The system on which configuration is done is known as Target node or Managed node.

· Inventory: The inventory is a file on the controller node that Ansible uses to connect to the target nodes. It contains the IP addresses of the target nodes along with connection details such as the protocol, username, and password or key.
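For example, a minimal static inventory file might look like the sketch below (the group name mirrors the one used later in this article; the IP is a placeholder):

[aws_instance_master]
<master_public_ip> ansible_user=ec2-user ansible_ssh_private_key_file=awskey.pem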

Step 1: Install Ansible on the controller node.
Create an IAM user in AWS to generate the access key and secret key.
The private key (.pem) file should be on the controller node.

  • Change the mode of the key (give only read permission):
chmod 400 awskey.pem
  • Download ec2.py and ec2.ini from GitHub using ‘wget <raw_file_url>’ into a separate folder, and give this folder’s path in ansible.cfg:
wget https://raw.githubusercontent.com/ansible/ansible/stable-2.9/contrib/inventory/ec2.py
wget https://github.com/ansible/ansible/blob/stable-2.9/contrib/inventory/ec2.ini
  • Either export the keys as environment variables so that the ec2.py file can access them, or mention the access key and secret key in the ec2.ini file at line nos. 217 and 218 respectively.

Note: In ec2.py, update the first line (the shebang) with your Python version. Since mine is Python 3, I’m updating it to ‘#!/usr/bin/python3’, and comment out line no. 172.

aws_access_key_id = ***********************
aws_secret_access_key = *******************
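Alternatively, a sketch of exporting the keys as environment variables (substitute your own values; these are the standard variable names boto reads):

#export AWS_ACCESS_KEY_ID='***********************'
#export AWS_SECRET_ACCESS_KEY='*******************'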

Step 2: Creating playbook to launch 3 instances

Here is the playbook, run on localhost, to launch the 3 EC2 instances.

  • Secure your access key and secret key using ansible-vault (a sketch follows).
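A minimal sketch of protecting the credentials with Ansible Vault — the file name ec2_vault.yml and the variable names access_key and secret_key are the ones referenced by the playbook below:

#ansible-vault create ec2_vault.yml
(inside the editor, before encryption)
access_key: <your_access_key>
secret_key: <your_secret_key>

The launch playbook itself: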
- hosts: localhost
  vars_files:
    - ec2_details.yml
    - ec2_vault.yml
  tasks:
    - name: installing boto
      pip:
        name: boto
    - name: Launching EC2 instance for master
      ec2_instance:
        image_id: "{{ image_id_master }}"
        aws_access_key: "{{ access_key }}"
        aws_secret_key: "{{ secret_key }}"
        key_name: "{{ key }}"
        network:
          assign_public_ip: true
        instance_type: "{{ instance_type }}"
        wait: yes
        region: "{{ region }}"
        security_group: "{{ sg_master }}"
        vpc_subnet_id: "{{ vpc_master }}"
        state: started
        name: "kube-master"
      register: master_details
    - name: Launching EC2 instance for slave1
      ec2_instance:
        image_id: "{{ image_id_slave1 }}"
        aws_access_key: "{{ access_key }}"
        aws_secret_key: "{{ secret_key }}"
        key_name: "{{ key }}"
        network:
          assign_public_ip: true
        instance_type: "{{ instance_type }}"
        wait: yes
        region: "{{ region }}"
        security_group: "{{ sg_slave }}"
        vpc_subnet_id: "{{ vpc_slave1 }}"
        state: started
        name: "kube-slave1"
      register: slave1_details
    - name: Launching EC2 instance for slave2
      ec2_instance:
        image_id: "{{ image_id_slave2 }}"
        aws_access_key: "{{ access_key }}"
        aws_secret_key: "{{ secret_key }}"
        key_name: "{{ key }}"
        network:
          assign_public_ip: true
        instance_type: "{{ instance_type }}"
        wait: yes
        region: "{{ region }}"
        security_group: "{{ sg_slave }}"
        vpc_subnet_id: "{{ vpc_slave2 }}"
        state: started
        name: "kube-slave2"
      register: slave2_details
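To run it, something like the following works (a sketch; the playbook file name is an assumption):

#ansible-playbook ec2_launch.yml --ask-vault-pass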
3 instances launched using the above ansible-playbook

Step 3A: We need to update the inventory using the registered facts for further tasks.

- name: updating the inventory
  blockinfile:
    dest: /root/ansible_ws/task19/ip.txt
    marker: ""
    block: |
      [aws_instance_master]
      {{ master_details["instances"][0]["public_ip_address"] }} ansible_user=ec2-user ansible_ssh_private_key_file=awskey.pem

In the same way, we can update the inventory for the slaves.

Provide the --ask-vault-pass option while running the playbook if you encrypted the credentials using Ansible Vault.
Dynamically updated inventory

To update the slaves’ inventory:

- name: updating the inventory
  blockinfile:
    dest: /root/ansible_ws/task19/ip.txt
    marker: ""
    block: |
      [aws_instance_slaves]
      {{ slave1_details["instances"][0]["public_ip_address"] }} ansible_user=ec2-user ansible_ssh_private_key_file=awskey.pem
      {{ slave2_details["instances"][0]["public_ip_address"] }} ansible_user=ec2-user ansible_ssh_private_key_file=awskey.pem

Inventory updated for slaves:

Step 3B: Updating the inventory using the ec2.py and ec2.ini files from GitHub.

You can use this to make the inventory handling more robust. For the playbooks I used Step 3A for the dynamic inventory, but that approach has a loophole: it keeps launching the instances again and again, and it doesn’t work if the instances are restarted, because the public IP changes every time an instance is restarted.

Run the ec2.py file with the following command to know the host (group) names:

#python3 ec2.py
(or)
#./ec2.py    # since the file is made executable
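For example, to dump the discovered inventory as JSON, or to target a tag-based group directly (a sketch; the group name follows the tag_Name_* pattern used later in this article):

#./ec2.py --list
#ansible -i ec2.py tag_Name_kube_master -m ping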

Step 4: Installing and starting Docker services on the above instances

Set the “ansible.cfg” file (a sample sketch follows the list below):

  • Most EC2 instances allow us to log in as the “ec2-user” user, which is why we have to set remote_user to “ec2-user”.
  • EC2 instances allow key-based authentication, hence we must mention the path of the private key.
  • The most important part is privilege escalation. “root” powers are required if we want to configure anything on the instance, but “ec2-user” is a general user with limited powers. Privilege escalation is used to give “sudo” powers to a general user.
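A minimal ansible.cfg sketch based on the points above (the inventory and key paths are assumptions, matching the paths used elsewhere in this article):

[defaults]
inventory = /root/ansible_ws/task19/ip.txt
remote_user = ec2-user
private_key_file = /root/ansible_ws/task19/awskey.pem
host_key_checking = False

[privilege_escalation]
become = true
become_method = sudo
become_user = root
become_ask_pass = false

With this in place, the following playbook installs and starts Docker on all the instances: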
- hosts: all
  gather_facts: false
  tasks:
    - name: installing docker
      yum:
        name: docker
        # state is "present" by default, so no need to mention it
    - name: starting docker services
      service:
        name: docker
        enabled: yes
        state: started

Step 5: Let’s understand the steps required to configure the MasterNode

  1. Both kinds of nodes (master and slave) require Docker. The master node needs containers for running its different services, while the worker nodes need them to launch the pods requested by the client.
  2. The next step is to install the software for the K8s setup, which is kubeadm. By default, kubeadm is not provided by the repos configured in yum, so we need to configure a yum repository first before installing it.
  3. Install kubelet, kubectl, kubeadm, and iproute-tc, and start the kubelet service. All the services of the master node are available as containers, and “kubeadm” can be used to pull the required images.
  4. Docker by default uses a cgroup driver called “cgroupfs”, but Kubernetes doesn’t support this driver; it expects the “systemd” driver. Hence, we must change it and restart the Docker service.
  5. Copying k8s.conf file
  6. Start the kubeadm service by providing all the required parameters.
  7. We usually have a separate client who will use the kubectl command on the master, but just for testing, we can make the master the client/user. Now if you run the “kubectl” command, it will fail (we already have kubectl software in the system). The reason for the above issue is that the client doesn’t know where the master is, so the client should know the port number of API, and username and password of the master, so to use this cluster as a normal user, we can copy some files in the HOME location, the files contain all the credentials of the master node.
  8. Install Add-ons

Step 6: Creating the playbook (YAML file) to set up the Kubernetes master.

- hosts: aws_instance_master
  vars:
    cidr: 10.240.0.0/16
  tasks:
    - name: Configuring yum repository for kubernetes
      yum_repository:
        name: k8s
        description: kubernetes
        baseurl: https://packages.cloud.google.com/yum/repos/kubernetes-el7-$basearch
        enabled: true
        gpgcheck: true
        repo_gpgcheck: true
        gpgkey: >-
          https://packages.cloud.google.com/yum/doc/yum-key.gpg
          https://packages.cloud.google.com/yum/doc/rpm-package-key.gpg
        file: kubernetes
    - name: installing kubelet, kubeadm, kubectl, and iproute-tc
      package:
        name: "{{ item }}"
      loop:
        - kubelet
        - kubeadm
        - kubectl
        - iproute-tc
    - name: enabling kubelet service
      service:
        name: kubelet
        state: started
        enabled: yes
    - name: pulling the images
      command: "kubeadm config images pull"
    - name: changing the cgroup driver of docker to systemd
      copy:
        dest: "/etc/docker/daemon.json"
        src: "daemon.json"
    - name: restarting docker
      service:
        name: docker
        state: restarted
        enabled: yes
    - name: enabling bridge-nf-call-iptables
      shell: "echo '1' > /proc/sys/net/bridge/bridge-nf-call-iptables"
    - name: clearing caches
      shell: "echo '3' > /proc/sys/vm/drop_caches"
    - name: copying configuration file of k8s
      copy:
        dest: "/etc/sysctl.d/k8s.conf"
        src: "k8s.conf"
    - name: modifying kernel parameters
      shell: "sysctl --system"
    - name: examining service of kubectl
      shell: "kubectl get nodes"
      register: shell_status
      ignore_errors: yes
    - name: initializing cluster
      shell: "kubeadm init --pod-network-cidr={{ cidr }} --ignore-preflight-errors=NumCPU --ignore-preflight-errors=Mem"
      when: shell_status.rc == 1
    - name: creating .kube directory
      file:
        path: "$HOME/.kube"
        state: directory
    - name: copying config file
      shell: "echo Y | cp -i /etc/kubernetes/admin.conf $HOME/.kube/config"
    - name: changing ownership of config file
      shell: "chown $(id -u):$(id -g) $HOME/.kube/config"
    - name: applying flannel
      shell: "kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml"
    - name: generating token
      shell: "kubeadm token create --print-join-command"
      register: token
    - name: storing token in token.yml
      blockinfile:
        path: /root/token.yml
        create: yes
        block: |
          join_command: {{ token.stdout }}
    - name: copying the token.yml file from master to local
      fetch:
        src: /root/token.yml
        dest: /root/ansible_ws/task19/token.yml
        flat: yes

daemon.json

{
"exec-opts": ["native.cgroupdriver=systemd"]
}

k8s.conf

net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1

Step 7: Running the playbook
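A sketch of the command (the playbook file name is an assumption):

#ansible-playbook kube_master.yml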

Step 8: Verifying the ec2 Kubernetes master instance:
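On the master itself, a quick sanity check looks like this (a sketch):

#kubectl get nodes
#kubectl get pods -n kube-system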

Step 9: Creating the playbook (YAML file) to set up the Kubernetes slaves.

- hosts: aws_instance_slaves
  vars_files:
    - token.yml
  tasks:
    - name: Configuring yum repository for kubernetes
      yum_repository:
        name: k8s
        description: kubernetes
        baseurl: https://packages.cloud.google.com/yum/repos/kubernetes-el7-$basearch
        enabled: true
        gpgcheck: true
        repo_gpgcheck: true
        gpgkey: >-
          https://packages.cloud.google.com/yum/doc/yum-key.gpg
          https://packages.cloud.google.com/yum/doc/rpm-package-key.gpg
        file: kubernetes
    - name: installing kubelet, kubeadm, kubectl, and iproute-tc
      package:
        name: "{{ item }}"
      loop:
        - kubelet
        - kubeadm
        - kubectl
        - iproute-tc
    - name: enabling kubelet service
      service:
        name: kubelet
        state: started
        enabled: yes
    - name: pulling the images
      command: "kubeadm config images pull"
    - name: changing the cgroup driver of docker to systemd
      copy:
        dest: "/etc/docker/daemon.json"
        src: "daemon.json"
    - name: restarting docker
      service:
        name: docker
        state: restarted
        enabled: yes
    - name: enabling bridge-nf-call-iptables
      shell: "echo '1' > /proc/sys/net/bridge/bridge-nf-call-iptables"
    - name: clearing caches
      shell: "echo '3' > /proc/sys/vm/drop_caches"
    - name: copying configuration file of k8s
      copy:
        dest: "/etc/sysctl.d/k8s.conf"
        src: "k8s.conf"
    - name: modifying kernel parameters
      command: "sysctl --system"
    - name: joining the node to the master
      shell: "{{ join_command }}"

Step 10: Checking on the master whether the nodes have joined
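On the master, the joined worker nodes should now show up (a quick sketch):

#kubectl get nodes -o wide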

Note: The IPs change because AWS assigns a new public IP each time an EC2 instance is stopped and started. To avoid this, use an Elastic IP.

The task is done! Now we need to convert these playbooks into roles.

Step 11:

Set the roles path and the inventory in ansible.cfg.

Use the below command to create a role:

ansible-galaxy init <role_name>

Create 3 roles: one for launching the instances, one for the Kubernetes master, and one for the Kubernetes slave. Then copy the content of the above playbooks into their respective task files.
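For reference, ansible-galaxy init generates a standard skeleton for each role; the tasks go into tasks/main.yml, and files referenced by the copy module (daemon.json, k8s.conf) go into the role’s files/ directory:

Kubernetes_master/
├── defaults/
├── files/
├── handlers/
├── meta/
├── tasks/
├── templates/
├── tests/
├── vars/
└── README.md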

  • Inserting these roles in the main playbook. The inventory is refreshed right after the launch play so that the tag-based groups of the new instances are picked up by the later plays.
- hosts: localhost
  roles:
    - aws_instances_launch
  tasks:
    - name: refreshing the inventory
      meta: refresh_inventory
- hosts: tag_Name_kube_master
  roles:
    - Kubernetes_master
- hosts: tag_Name_kube_slave1
  roles:
    - Kubernetes_slave
- hosts: tag_Name_kube_slave2
  roles:
    - Kubernetes_slave

Let’s now run the main_playbook.yml
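A sketch of the run (--ask-vault-pass is needed here on the assumption that the launch role reads the Vault-encrypted credentials):

#ansible-playbook main_playbook.yml --ask-vault-pass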

  • Let’s now verify the Kubernetes master node:

Here is the link to the GitHub repository.

Task Done!!

Thank you! keep learning! keep growing! keep sharing!

Krithika Sharma
If you enjoyed this, follow me on Medium for more
Let’s connect on LinkedIn
