Overview of AWX k9s Dashboard on Kubernetes
This guide details the installation and configuration of the AWX k9s Dashboard on Kubernetes using a Debian 12 cluster. It covers deploying AWX for Ansible automation, k9s for terminal-based Kubernetes management, and the Kubernetes Dashboard for a web-based cluster interface.
1. Kubernetes Cluster
Prerequisites for each node
- Minimal Debian 12 installation
- 8 vCPU
- 8 GB RAM
- 50 GB free disk space
- A sudo user with admin rights
- Stable internet connectivity
- Ensure that each node can communicate with the others via a reliable network connection.
- Ensure AppArmor and SELinux are disabled, set to permissive, or uninstalled
- Ensure there is no firewall active (example checks for the last two items follow this list)
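If you want to verify those last two items before starting, the following commands should work on a default Debian 12 install (assuming AppArmor is the active LSM and ufw or nftables is the only firewall front end in play):
sudo systemctl disable --now apparmor   # stop and disable AppArmor; a reboot clears any already-loaded profiles
sudo ufw status                         # only if ufw is installed; it should report "inactive"
sudo systemctl status nftables          # the nftables service, if present, should be inactive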
 
For the tutorial, I am using two Debian 12 systems.
- Master Node (k8s-master) – 192.168.1.23
- Worker Node 1 (k8s-worker) – 192.168.1.24
 
Official AWX Operator Documentation
Detailed guide on Automation
Set Host Name and Update Hosts File
Log in to each node (master & worker nodes) and set its hostname using the hostnamectl command.
sudo hostnamectl set-hostname "k8s-master.local"  // Run on master node
sudo hostnamectl set-hostname "k8s-worker.local"  // Run on worker nodeFurthermore, add the following entries in /etc/hosts file on all the nodes,
192.168.1.23 k8s-master.local k8s-master
192.168.1.24 k8s-worker.local k8s-worker
Disable Swap on All Nodes
For kubelet to work smoothly, it is recommended to disable swap. Run the following commands on the master and worker nodes to turn off swap.
sudo swapoff -a
sudo sed -i '/ swap / s/^\(.*\)$/#\1/g' /etc/fstab
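To confirm that swap is really off, either of the following should do (swapon --show prints nothing when no swap is active):
swapon --show
free -h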
Install Containerd Runtime on All Nodes
Containerd is the industry-standard container runtime supported by Kubernetes, so install containerd on all master and worker nodes.
Before installing containerd, set the following kernel parameters on all the nodes.
cat <<EOF | sudo tee /etc/modules-load.d/containerd.conf 
overlay 
br_netfilter
EOF
sudo modprobe overlay 
sudo modprobe br_netfilter
cat <<EOF | sudo tee /etc/sysctl.d/99-kubernetes-k8s.conf
net.bridge.bridge-nf-call-iptables = 1
net.ipv4.ip_forward = 1 
net.bridge.bridge-nf-call-ip6tables = 1 
EOF
To apply the above changes, run
sudo sysctl --system
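Optionally, you can confirm the parameters are now in effect (each value should be reported as 1):
sysctl net.ipv4.ip_forward net.bridge.bridge-nf-call-iptables net.bridge.bridge-nf-call-ip6tables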
Next, install containerd on all the nodes.
sudo apt update
sudo apt -y install containerd
Then configure containerd so that it works with Kubernetes; run the following on all nodes
containerd config default | sudo tee /etc/containerd/config.toml >/dev/null 2>&1
Set cgroupdriver to systemd on all the nodes.
Edit the file /etc/containerd/config.toml, look for the section [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options], and change SystemdCgroup = false to SystemdCgroup = true.
Save and exit the file.
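If you prefer to make the change non-interactively, a sed one-liner against the freshly generated default config should also work; verify the result afterwards:
sudo sed -i 's/SystemdCgroup = false/SystemdCgroup = true/' /etc/containerd/config.toml
grep SystemdCgroup /etc/containerd/config.toml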
Finally, restart and enable the containerd service on all the nodes,
sudo systemctl restart containerd
sudo systemctl enable containerd
Add Kubernetes Apt Repository
In Debian 12, Kubernetes-related packages are not available in the default package repositories, so we have to add the Kubernetes apt repository on all nodes.
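Note: the commands below assume the /etc/apt/keyrings directory exists, which it does on a default Debian 12 install. If it is missing, create it first:
sudo mkdir -p -m 755 /etc/apt/keyrings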
echo "deb [signed-by=/etc/apt/keyrings/kubernetes-apt-keyring.gpg] https://pkgs.k8s.io/core:/stable:/v1.28/deb/ /" | sudo tee /etc/apt/sources.list.d/kubernetes.list
curl -fsSL https://pkgs.k8s.io/core:/stable:/v1.28/deb/Release.key | sudo gpg --dearmor -o /etc/apt/keyrings/kubernetes-apt-keyring.gpgInstall Kubernetes Tools
Additionally, install the Kubernetes tools kubeadm, kubelet, and kubectl on all nodes.
sudo apt update
sudo apt install kubelet kubeadm kubectl -y
sudo apt-mark hold kubelet kubeadm kubectl
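To confirm the tools are installed and pinned, you can check their versions; they should match the repository version added above:
kubeadm version
kubectl version --client
kubelet --version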
Install Kubernetes Cluster with Kubeadm
Passing kubelet settings as command-line options is deprecated, so I suggest creating a configuration file, say kubelet.yaml, with the following content.
apiVersion: kubeadm.k8s.io/v1beta3
kind: InitConfiguration
---
apiVersion: kubeadm.k8s.io/v1beta3
kind: ClusterConfiguration
kubernetesVersion: "1.31.0" # Replace with your desired version (it must be available in the apt repository added above)
controlPlaneEndpoint: "k8s-master"
---
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
Now, we are all set to initialize the Kubernetes cluster, so run the following command only on the master node
sudo kubeadm init --config kubelet.yaml
To start interacting with the cluster, run the following commands on the master node as a regular user
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
Alternatively, you can run the next command as root; in that case, I recommend running everything else as root as well.
export KUBECONFIG=/etc/kubernetes/admin.conf
Run the following kubectl commands to get node and cluster information
kubectl get nodes
kubectl cluster-info
On your worker nodes, join them to the cluster by running the kubeadm join command that was displayed when you initialized the master node.
Note: copy the exact command from the output of the kubeadm init command. In my case it is
sudo kubeadm join k8s-master:6443 --token 21nm87.x1lgd4jf0lqiiiau \
--discovery-token-ca-cert-hash sha256:28b503f1f2a2592678724c482776f04b445c5f99d76915552f14e68a24b78009
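If you no longer have that output at hand, the join command can be regenerated on the master node at any time; it prints a fresh token together with the CA cert hash:
sudo kubeadm token create --print-join-command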
Once the join completes, check the nodes' status by running the following command on the master node
kubectl get nodes
Finally, to bring the nodes to the Ready status, we must install a Pod network add-on like Calico.
Setup Pod Network Using Calico
On the master node, run the command below to install Calico
kubectl apply -f https://raw.githubusercontent.com/projectcalico/calico/v3.29.2/manifests/calico.yaml
Verify the status of the Calico pods and the status of the nodes again
kubectl get pods -n kube-system
kubectl get nodes
2. k9s
Download the latest .deb package from K9s Releases:
wget https://github.com/derailed/k9s/releases/latest/download/k9s_Linux_amd64.deb
Install the package:
sudo dpkg -i k9s_Linux_amd64.deb
Run K9s:
k9s
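K9s uses the same ~/.kube/config as kubectl, so it should connect to the cluster without any extra configuration. If it does not start, you can at least confirm the installation with:
k9s version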
3. Ansible AWX Tower
Install helm
curl -fsSL -o get_helm.sh https://raw.githubusercontent.com/helm/helm/main/scripts/get-helm-3
chmod +x get_helm.sh
./get_helm.sh
helm version
Install the AWX helm repo
The easiest way to install AWX on Kubernetes is by using the AWX Helm repo. So, to install AWX via helm, first add its repository using
helm repo add awx-operator https://ansible.github.io/awx-operator/
"awx-operator" has been added to your repositories
helm repo update
Then, to install awx-operator via helm and create the awx namespace, run
helm install ansible-awx-operator awx-operator/awx-operator -n awx --create-namespace
Verify AWX operator installation
After the successful installation, you can verify AWX operator status by running
sudo kubectl get pods -n awx
Create the storage on the worker node and set permissions
Run the following commands on the worker node as root.
mkdir /mnt/storage
mkdir /mnt/storage/data
chmod -R 700 /mnt/storage/data
chown -R 26:0 /mnt/storage/data
ls -la /mnt/storage/data/
Create PV, PVC and deploy AWX yaml file
AWX requires a persistent volume for the Postgres pod, so let's first create a storage class for the local volume
nano local-storage-class.yaml
Copy the following into the file, then save and exit
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: local-storage
  namespace: awx
provisioner: kubernetes.io/no-provisioner
volumeBindingMode: WaitForFirstConsumer
Once you have saved and exited the yaml file, run
kubectl create -f local-storage-class.yaml
kubectl get sc -n awx
Create a persistent volume (pv) using the following pv.yaml file
apiVersion: v1
kind: PersistentVolume
metadata:
  name: postgres-pv
  namespace: awx
spec:
  capacity:
    storage: 10Gi
  volumeMode: Filesystem
  accessModes:
  - ReadWriteOnce
  persistentVolumeReclaimPolicy: Delete
  storageClassName: local-storage
  local:
    path: /mnt/storage
  nodeAffinity:
    required:
      nodeSelectorTerms:
      - matchExpressions:
        - key: kubernetes.io/hostname
          operator: In
          values:
          - k8s-worker
Save & exit the file and run
kubectl create -f pv.yaml
Once the pv is created successfully, create a persistentvolumeclaim using pvc.yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: postgres-13-ansible-awx-postgres-13-0
  namespace: awx
spec:
  storageClassName: local-storage
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 10Gi
Save & exit the file and run
kubectl create -f pvc.yaml
Verify the status of the pv and pvc using the following command
kubectl get pv,pvc -n awx
Now we are all set to deploy the AWX instance. Create an ansible-awx.yaml file with the following content
---
apiVersion: awx.ansible.com/v1beta1
kind: AWX
metadata:
  name: ansible-awx
  namespace: awx
spec:
  service_type: nodeport
  postgres_storage_class: local-storage
Save & exit the file and run
kubectl create -f ansible-awx.yaml
Open k9s and watch the deployment in real time, using
k9s -n awx
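If you prefer plain kubectl over k9s, watching the pods in the awx namespace gives the same picture:
kubectl get pods -n awx -w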
Deployment can take up to 10 minutes. Once everything shows a Completed or Running status, you can move on to the next step.
Access AWX Web Interface
To access the AWX web interface, you need to create a service that exposes the awx-web deployment:
kubectl expose deployment ansible-awx-web --name ansible-awx-web-svc --type NodePort -n awx
This command creates a NodePort service that maps the AWX web container's port to a port on the Kubernetes node. You can find the port number by running
kubectl get svc ansible-awx-web-svc -n awx
The output will be similar to
NAME                 TYPE       CLUSTER-IP      EXTERNAL-IP   PORT(S)          AGE
ansible-awx-web-svc   NodePort   10.99.83.248   <none>        8052:32254/TCP   82s
By default, the admin user for the web interface is admin, and the password is stored in a Kubernetes secret
kubectl get secrets -n awx | grep -i admin-password
It will output something like
ansible-awx-admin-password        Opaque               1      109m
Now let's get the admin password
kubectl get secret ansible-awx-admin-password -o jsonpath="{.data.password}" -n awx | base64 --decode ; echo
That will output something similar to
l9mWcIOXQhSKnzZQyQQ9LZf3awDV0YMJ
You can now access the AWX web interface by opening a web browser and navigating to http://<node-ip>:<nodeport> (in this example, http://192.168.1.23:32254).
4. Kubernetes Dashboard
The easiest way to install the Kubernetes Dashboard for your cluster is via its Helm repo. The latest Kubernetes Dashboard now has dependencies on cert-manager and nginx-ingress-controller; fortunately, these can be installed automatically using the Helm repo.
Add Kubernetes Dashboard Helm Repository
helm repo add kubernetes-dashboard https://kubernetes.github.io/dashboard/
helm repo list
Install Kubernetes Dashboard Using Helm
To install Kubernetes dashboard using helm, run the following command
helm upgrade --install kubernetes-dashboard kubernetes-dashboard/kubernetes-dashboard --create-namespace --namespace kubernetes-dashboard
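Before moving on, it is worth confirming that the dashboard pods came up; everything in the namespace should eventually reach the Running state:
kubectl get pods -n kubernetes-dashboard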
Accessing the dashboard
List the services running in the kubernetes-dashboard namespace using the command below, and look for a service called kubernetes-dashboard-kong-proxy
kubectl get svc -n kubernetes-dashboard
1. In the TYPE column, it says ClusterIP. We want to change that to NodePort, because a NodePort exposes the service on an external port on each node and forwards that traffic to the dashboard's internal port.
This is a critical step because the dashboard can be accessed ONLY over HTTPS, and the Kong proxy is what serves it. To convert the service from ClusterIP to NodePort, run
kubectl patch svc kubernetes-dashboard-kong-proxy -n kubernetes-dashboard -p '{"spec": {"type": "NodePort"}}'
2. If you relist the services, you'll see something similar to
NAME                                   TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)         AGE
kubernetes-dashboard-api               ClusterIP   10.102.3.238     <none>        8000/TCP        12h
kubernetes-dashboard-auth              ClusterIP   10.106.93.8      <none>        8000/TCP        12h
kubernetes-dashboard-kong-proxy        NodePort    10.96.244.188    <none>        443:32393/TCP   12h
kubernetes-dashboard-metrics-scraper   ClusterIP   10.103.129.180   <none>        8000/TCP        12h
kubernetes-dashboard-web               ClusterIP   10.100.215.221   <none>        8000/TCP        12h
You need to remember the NodePort for Kong, shown on the right side of the colon next to /TCP. In this case, it is 32393.
3. You may now open a browser and access the dashboard using https://master-ip:port. For example, in this case you would access https://192.168.1.23:32393, but you'll see that it requires a token.
In order to get a token, you can run one of two commands:
- kubectl create token, which issues a short-lived token
- kubectl get secret, which reads the token stored in a service-account token Secret

4. The difference is that the first command issues a token valid for one hour, while the second discloses the token associated with a service account, which does not expire.
So, to create a service account and get the token for it, run the following
kubectl create serviceaccount dashboard-admin -n kubernetes-dashboard
kubectl create clusterrolebinding dashboard-admin-binding   --clusterrole=cluster-admin   --serviceaccount=kubernetes-dashboard:dashboard-admin
kubectl -n kubernetes-dashboard create token dashboard-admin
cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: Secret
metadata:
  name: dashboard-admin-secret
  namespace: kubernetes-dashboard
  annotations:
    kubernetes.io/service-account.name: dashboard-admin
type: kubernetes.io/service-account-token
EOF
kubectl get secret dashboard-admin-secret -n kubernetes-dashboard -o jsonpath="{.data.token}" | base64 --decode
The output of the last command will be a very long string of characters that looks like a hash. That's the token you need to copy and paste into the token field of the dashboard web interface in order to log in.
Conclusion on AWX k9s Dashboard on Kubernetes
Setting up the AWX k9s Dashboard on Kubernetes enhances cluster management and automation. With AWX for Ansible playbooks, k9s for efficient navigation, and the Kubernetes Dashboard for a web interface, this configuration ensures streamlined operations on Debian 12.


