Bare Metal/VM Installation - kubeadm¶
This walkthrough will show how to install Kinetica DB. For this example the Kubernetes cluster will be running on 3 VMs with Ubuntu Linux 22.04 (ARM64).
This solution is equivalent to a production bare metal installation and does not use Docker, Podman or QEMU.
The Kubernetes cluster requires 3 VMs consisting of one Master node k8smaster1 and two Worker nodes k8snode1 & k8snode2.
Kubernetes Node Installation¶
Setup the Kubernetes Nodes¶
Edit /etc/hosts¶
SSH into each of the nodes and run the following, where x.x.x.x is the IP Address of the corresponding node: -
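A minimal sketch, assuming the three host names used in this walkthrough; substitute each x.x.x.x with that node's actual IP address: -
cat << EOF | sudo tee -a /etc/hosts
x.x.x.x k8smaster1
x.x.x.x k8snode1
x.x.x.x k8snode2
EOF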
Disable Linux Swap¶
Next we need to disable Swap on Linux: comment out the swap entry in /etc/fstab on each node.
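One way to do this, assuming the /etc/fstab swap entry is on its own line, is to turn swap off immediately and comment the entry out in place: -
# turn off all active swap devices now
sudo swapoff -a
# comment out any fstab line that references swap
sudo sed -i '/\sswap\s/ s/^/#/' /etc/fstab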
Linux System Configuration Changes¶
We are using containerd as the container runtime, but in order to do so we need to make some system-level changes on Linux.
cat << EOF | sudo tee /etc/modules-load.d/containerd.conf
overlay
br_netfilter
EOF
sudo modprobe overlay
sudo modprobe br_netfilter
cat << EOF | sudo tee /etc/sysctl.d/99-kubernetes-cri.conf
net.bridge.bridge-nf-call-iptables = 1
net.ipv4.ip_forward = 1
net.bridge.bridge-nf-call-ip6tables = 1
EOF
sudo sysctl --system
Container Runtime Installation¶
Run on all nodes (VMs)
Run the following commands, until advised not to, on all of the VMs you created.
Install containerd¶
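One straightforward approach, assuming the containerd package from the standard Ubuntu 22.04 repositories is acceptable for your environment: -
sudo apt update
sudo apt install -y containerd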
Create a Default containerd Config¶
sudo mkdir -p /etc/containerd
sudo containerd config default | sudo tee /etc/containerd/config.toml
Enable System CGroup¶
Change the SystemdCgroup value to true in the containerd configuration file and restart the service
sudo sed -i 's/SystemdCgroup \= false/SystemdCgroup \= true/g' /etc/containerd/config.toml
sudo systemctl restart containerd
sudo systemctl enable containerd
Install Pre-requisite/Utility packages¶
sudo apt update
sudo apt install -y apt-transport-https ca-certificates curl gpg git
Download the Kubernetes public signing key¶
curl -fsSL https://pkgs.k8s.io/core:/stable:/v1.29/deb/Release.key | sudo gpg --dearmor -o /etc/apt/keyrings/kubernetes-apt-keyring.gpg
Add the Kubernetes Package Repository¶
echo 'deb [signed-by=/etc/apt/keyrings/kubernetes-apt-keyring.gpg] https://pkgs.k8s.io/core:/stable:/v1.29/deb/ /' | sudo tee /etc/apt/sources.list.d/kubernetes.list
Install the Kubernetes Installation and Management Tools¶
sudo apt update
sudo apt install -y kubeadm=1.29.0-1.1 kubelet=1.29.0-1.1 kubectl=1.29.0-1.1
sudo apt-mark hold kubeadm kubelet kubectl
Initialize the Kubernetes Cluster¶
Initialize the Kubernetes Cluster by using kubeadm on the k8smaster1 control plane node.
Note
You will need an IP Address range for the Kubernetes Pods. This range is provided to kubeadm as part of the initialization. For our cluster of three nodes, given the default number of pods supported by a node (110), we need a CIDR of at least 330 distinct IP Addresses. Therefore, for this example we will use a --pod-network-cidr of 10.1.1.0/22, which allows for 1022 usable IPs. The reason for this is that each node will get a /24 of the /22 total.
The apiserver-advertise-address should be the IP Address of the k8smaster1 VM.
sudo kubeadm init --pod-network-cidr 10.1.1.0/22 --apiserver-advertise-address 192.168.2.180 --kubernetes-version 1.29.2
You should now deploy a pod network to the cluster. Run kubectl apply -f [podnetwork].yaml with one of the options listed at: Cluster Administration Addons.
Make a note of the portion of the shell output which gives the join command, which we will need to add our worker nodes to the master.
Copy the kubeadm join command
Then you can join any number of worker nodes by running the following on each as root:
Setup kubeconfig¶
Before we add the worker nodes we can set up the kubeconfig so we will be able to use kubectl going forward.
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
Connect & List the Kubernetes Cluster Nodes¶
We can now run kubectl to connect to the Kubernetes API Server and display the nodes in the newly created Kubernetes Cluster.
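For example: -
kubectl get nodes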
STATUS = NotReady
From the kubectl output the status of the k8smaster1 node is showing as NotReady, as we have yet to install the Kubernetes Network to the cluster. We will be installing cilium as that provider in a future step.
Warning
At this point we should complete the installation of the worker nodes up to this same point before continuing.
Join the Worker Nodes to the Cluster¶
Once installed, we run the join on the worker nodes. Note that the command which was output from kubeadm init needs to be run with sudo: -
sudo kubeadm join 192.168.2.180:6443 --token wonuiv.v93rkizr6wvxwe6l \
--discovery-token-ca-cert-hash sha256:046ffa6303e6b281285a636e856b8e9e51d8c755248d9d013e15ae5c5f6bb127
Now we can again run kubectl get nodes and see that all the nodes are present in the Kubernetes Cluster.
Run on Head Node only
From now on, the following commands need to be run on the Master Node only.
Install Kubernetes Networking¶
We now need to install a Kubernetes CNI (Container Network Interface) to enable the pod network.
We will use Cilium as the CNI for our cluster.
curl -LO https://github.com/cilium/cilium-cli/releases/latest/download/cilium-linux-arm64.tar.gz
sudo tar xzvfC cilium-linux-arm64.tar.gz /usr/local/bin
rm cilium-linux-arm64.tar.gz
Install cilium¶
You can now install Cilium with the following command:
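A minimal invocation of the Cilium CLI downloaded above; it detects the kubeadm cluster from your kubeconfig and installs with default settings (a --version flag can be added if you need to pin a specific Cilium release): -
cilium install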
If cilium status shows errors, you may need to wait until the Cilium pods have started. You can check progress with the command below.
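One option, assuming Cilium is running in its default kube-system namespace: -
kubectl -n kube-system get pods --watch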
Check cilium Status¶
Once the Cilium pods are running we can check the status of Cilium again by using: -
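cilium status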
We can now recheck the Kubernetes Cluster Nodes and they should have Status Ready.
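As before: -
kubectl get nodes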
Kubernetes Node Preparation¶
Label Kubernetes Nodes¶
Now we go ahead and label the nodes. Kinetica uses node labels in production clusters where separate 'node groups' are configured, so that the Kinetica infrastructure pods are deployed on a smaller VM type and the DB itself is deployed on larger or GPU-enabled nodes.
If we were using a Cloud Provider Kubernetes, these are synonymous with EKS Node Groups or AKS VMSS, which would be created with the same two labels on two node groups.
kubectl label node k8snode1 app.kinetica.com/pool=infra
kubectl label node k8snode2 app.kinetica.com/pool=compute
Additionally, in our case, as we have created a new cluster, the 'role' of the worker nodes is not set, so we can also set that. In many cases the role is already set to worker, but here we have some latitude.
kubectl label node k8snode1 kubernetes.io/role=kinetica-infra
kubectl label node k8snode2 kubernetes.io/role=kinetica-compute
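You can confirm the labels were applied with: -
kubectl get nodes --show-labels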
Install Storage Class¶
Install a local path provisioner storage class. In this case we are using the Rancher Local Path Provisioner.
kubectl apply -f https://raw.githubusercontent.com/rancher/local-path-provisioner/v0.0.26/deploy/local-path-storage.yaml
Set Default Storage Class¶
kubectl patch storageclass local-path -p '{"metadata": {"annotations":{"storageclass.kubernetes.io/is-default-class":"true"}}}'
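To confirm that local-path is now marked as the default storage class: -
kubectl get storageclass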
Kubernetes Cluster Provision Complete
Your base Kubernetes Cluster is now complete and ready to have the Kinetica DB installed on it using the Helm Chart.
Install Kinetica for Kubernetes using Helm¶
Add the Helm Repository¶
helm repo add kinetica-operators https://kineticadb.github.io/charts
helm repo update
Download a Starter Helm values.yaml¶
Now we need to obtain a starter values.yaml file to pass to our Helm install. We can download one from the github.com/kineticadb/charts repo.
wget https://raw.githubusercontent.com/kineticadb/charts/master/kinetica-operators/values.onPrem.k8s.yaml
Obtain a Kinetica License Key
A product license key will be required for install. Please contact Kinetica Support to request a trial key.
Helm Install Kinetica¶
helm -n kinetica-system upgrade -i \
kinetica-operators kinetica-operators/kinetica-operators \
--create-namespace \
--values values.onPrem.k8s.yaml \
--set db.gpudbCluster.license="LICENSE-KEY" \
--set dbAdminUser.password="PASSWORD" \
--set global.defaultStorageClass="local-path"
Monitor Kinetica Startup¶
After a few moments, follow the progression of the main database pod startup with:
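One way to watch this, assuming the database pods are deployed into the gpudb namespace (the chart default): -
kubectl -n gpudb get pods --watch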
Kinetica DB Provision Complete
Once you see gpudb-0 3/3 Running, the database is up and running.
Software LoadBalancer
If you require a software-based LoadBalancer to allocate IP addresses to the Ingress Controller or exposed Kubernetes Services then see here.
This is usually apparent if your Ingress or other Kubernetes Services with the type LoadBalancer are stuck in the Pending state.