
Preparation & Prerequisites

Checks & steps to ensure a smooth installation.

Obtain a Kinetica License Key

A product license key will be required for install. Please contact Kinetica Support to request a trial key.

Failing to provide a license key at installation time will prevent the DB from starting.


Free Resources

Your Kubernetes cluster version should be 1.22.x or higher, with a minimum of 8 CPU cores, 8 GB RAM, and
SSD or SATA 7200 RPM hard drive(s) with 4x the memory capacity.
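
As a quick sanity check before installing, you can confirm the cluster version and the allocatable CPU and memory per node. This is a minimal sketch using standard kubectl output; the column expressions are just one way to surface the figures:

```bash
# Check the Kubernetes server version against the 1.22.x minimum
kubectl version

# List allocatable CPU and memory for each node
kubectl get nodes -o custom-columns=NAME:.metadata.name,CPU:.status.allocatable.cpu,MEMORY:.status.allocatable.memory
```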

GPU Support

For GPU enabled clusters the cards below have been tested in large-scale production environments and provide the best performance for the database.

| GPU | Driver |
| --- | --- |
| P4/P40/P100 | 525.X (or higher) |
| V100 | 525.X (or higher) |
| T4 | 525.X (or higher) |
| A10/A40/A100 | 525.X (or higher) |
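
If you already have shell access to the GPU nodes (or a pod with the NVIDIA runtime), you can confirm that the installed driver meets the 525.X minimum; this assumes nvidia-smi is available in that environment:

```bash
# Report each GPU model and the installed NVIDIA driver version
nvidia-smi --query-gpu=name,driver_version --format=csv
```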

Kubernetes Cluster Connectivity

Installation requires Helm 3, the Kubernetes CLI kubectl, and access to an on-prem or CSP-managed Kubernetes cluster.

The context for the desired target cluster must be selected from your ~/.kube/config file and set via the KUBECONFIG environment variable or kubectl ctx (if installed). Check that you have the correct context with:

```bash
# Show the current Kubernetes context
kubectl config current-context
```

and verify that you can access this cluster with:

```bash
# List the Kubernetes cluster nodes
kubectl get nodes
```

If you do not see a list of nodes for your K8s cluster, the Helm installation will not work. Please check your Kubernetes installation or access credentials (kubeconfig).
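
If the wrong cluster is selected, point kubectl at the correct kubeconfig and context before proceeding. The path and context name below are placeholders for your own environment:

```bash
# Use the kubeconfig for the target cluster (path is an example)
export KUBECONFIG=~/.kube/config

# List the available contexts and switch to the desired one (name is an example)
kubectl config get-contexts
kubectl config use-context my-target-cluster
```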

Kinetica Images for an Air-Gapped Environment

If you are installing Kinetica with Helm in an air-gapped environment you will either need a Registry Proxy to pass the requests through or to download the images and push them to your internal Registry.

For information on ways to transfer the files into an air-gapped environment, see here.

Required Container Images

docker.io (Required Kinetica Images for All Installations)

  • docker.io/kinetica/kinetica-k8s-operator:v7.2.0-3.rc-3
    • docker.io/kinetica/kinetica-k8s-cpu:v7.2.0-3.rc-3 or
    • docker.io/kinetica/kinetica-k8s-cpu-avx512:v7.2.0-3.rc-3 or
    • docker.io/kinetica/kinetica-k8s-gpu:v7.2.0-3.rc-3
  • docker.io/kinetica/workbench-operator:v7.2.0-3.rc-3
  • docker.io/kinetica/workbench:v7.2.0-3.rc-3
  • docker.io/kinetica/kinetica-k8s-monitor:v7.2.0-3.rc-3
  • docker.io/kinetica/busybox:v7.2.0-3.rc-3
  • docker.io/kinetica/fluent-bit:v7.2.0-3.rc-3
  • docker.io/kinetica/kagent:7.1.9.15.20230823123615.ga

nvcr.io (Required Images for GPU Installations using kinetica-k8s-gpu)

  • nvcr.io/nvidia/gpu-operator:v23.9.1

registry.k8s.io (Required Images for GPU Installations using kinetica-k8s-gpu)

  • registry.k8s.io/nfd/node-feature-discovery:v0.14.2

docker.io (Required Supporting Images)

  • docker.io/bitnami/openldap:2.6.7
  • docker.io/alpine/openssl:latest (used by bitnami/openldap)
  • docker.io/otel/opentelemetry-collector-contrib:0.95.0

quay.io (Required Supporting Images)

  • quay.io/brancz/kube-rbac-proxy:v0.14.2
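
For an air-gapped installation without a registry proxy, a common approach is to pull each required image on a connected host, retag it for your internal registry, and push it from there. The sketch below assumes Docker is available and uses a placeholder registry hostname; extend the list to cover every image in the tables above:

```bash
#!/usr/bin/env bash
# Mirror the required images into an internal registry
# (registry.example.internal is a placeholder for your own registry)
INTERNAL_REGISTRY="registry.example.internal"

IMAGES=(
  "docker.io/kinetica/kinetica-k8s-operator:v7.2.0-3.rc-3"
  "docker.io/kinetica/kinetica-k8s-cpu:v7.2.0-3.rc-3"
  "docker.io/kinetica/workbench-operator:v7.2.0-3.rc-3"
  # ...add the remaining images from the lists above
)

for image in "${IMAGES[@]}"; do
  docker pull "${image}"
  # Re-tag under the internal registry, keeping the repository path and tag
  target="${INTERNAL_REGISTRY}/${image#*/}"
  docker tag "${image}" "${target}"
  docker push "${target}"
done
```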

Optional Container Images

These images are only required if certain features are enabled as part of the Helm installation:

  • CertManager
  • ingress-nginx

quay.io (Optional Supporting Images)

  • quay.io/jetstack/cert-manager-cainjector:v1.13.3 (if optionally installing CertManager via Kinetica Helm Chart)
  • quay.io/jetstack/cert-manager-controller:v1.13.3 (if optionally installing CertManager via Kinetica Helm Chart)
  • quay.io/jetstack/cert-manager-webhook:v1.13.3 (if optionally installing CertManager via Kinetica Helm Chart)

registry.k8s.io (Optional Supporting Images)

  • registry.k8s.io/ingress-nginx/controller:v1.9.4 (if optionally installing Ingress nGinx via Kinetica Helm Chart)
  • registry.k8s.io/ingress-nginx/controller:v1.9.6@sha256:1405cc613bd95b2c6edd8b2a152510ae91c7e62aea4698500d23b2145960ab9c

Which Kinetica Core Image do I use?

| Container Image | Intel (AMD64) | Intel (AMD64 AVX512) | AMD (AMD64) | Graviton (aarch64) | Apple Silicon (aarch64) |
| --- | --- | --- | --- | --- | --- |
| kinetica-k8s-cpu | ✓ | ✓ (1) | ✓ | | |
| kinetica-k8s-cpu-avx512 | | ✓ | | | |
| kinetica-k8s-gpu | ✓ (2) | ✓ (2) | ✓ (2) | | |

  1. It is preferable on an Intel AVX512 enabled CPU to use the kinetica-k8s-cpu-avx512 container image
  2. With a supported nVidia GPU.
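
To decide between kinetica-k8s-cpu and kinetica-k8s-cpu-avx512, check whether the CPUs on your intended compute nodes expose AVX-512. A minimal sketch, run either directly on a node or via node-feature-discovery labels if it is installed (the exact label name can vary by NFD version):

```bash
# On the node itself: a match means the CPU advertises AVX-512 foundation instructions
grep -m1 -o 'avx512f' /proc/cpuinfo

# Or, with node-feature-discovery running, show the corresponding node label
kubectl get nodes -L feature.node.kubernetes.io/cpu-cpuid.AVX512F
```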

Label the Kubernetes Nodes

Kinetica requires some of the Kubernetes Nodes to be labeled as it splits some of the components into different deployment 'pools'. This enables different physical node types to be present in the Kubernetes Cluster allowing us to target which Kinetica components go where.

e.g. for a GPU installation some nodes in the cluster will have GPUs and others are CPU only. We can put the DB on the GPU nodes and our infrastructure components on CPU only nodes.

The Kubernetes cluster nodes selected to host the Kinetica infrastructure pods i.e. non-DB Pods require the following label app.kinetica.com/pool=infra.

CPU Node Labels

```bash
# Label the infrastructure nodes
kubectl label node k8snode1 app.kinetica.com/pool=infra
```

whilst the Kubernetes cluster nodes selected to host the Kinetica DB Pods require the following label app.kinetica.com/pool=compute.

```bash
# Label the database nodes
kubectl label node k8snode2 app.kinetica.com/pool=compute
```

The Kubernetes cluster nodes selected to host the Kinetica infrastructure pods i.e. non-DB Pods require the following label app.kinetica.com/pool=infra.

GPU Node Labels

```bash
# Label the infrastructure nodes
kubectl label node k8snode1 app.kinetica.com/pool=infra
```

whilst the Kubernetes cluster nodes selected to host the Kinetica DB Pods require the following label app.kinetica.com/pool=compute-gpu.

```bash
# Label the database nodes
kubectl label node k8snode2 app.kinetica.com/pool=compute-gpu
```
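
After labeling, it is worth confirming that every node carries the pool you expect:

```bash
# Show each node together with its Kinetica pool assignment
kubectl get nodes -L app.kinetica.com/pool
```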

On-Prem Kinetica SQLAssistant - Node Groups, GPU Counts & VRAM Memory

Running the Kinetica SQLAssistant locally requires additional GPUs to be available in a separate Node Group labeled app.kinetica.com/pool=compute-llm. The On-Prem Kinetica LLM requires 40GB of GPU VRAM to run, so the number of GPUs automatically allocated to the SQLAssistant pod will ensure that 40GB of VRAM is available, e.g. 1x A100 GPU or 2x A10G GPUs.

```bash
# Label the Kubernetes nodes for the LLM
kubectl label node k8snode3 app.kinetica.com/pool=compute-llm
```
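
You can check that the compute-llm nodes actually advertise GPUs to the scheduler; this assumes the NVIDIA device plugin or GPU operator is already exposing the nvidia.com/gpu resource:

```bash
# Show the allocatable resources (including nvidia.com/gpu) on the LLM node pool
kubectl describe nodes -l app.kinetica.com/pool=compute-llm | grep -A 8 'Allocatable:'
```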

Pods Not Scheduling

If the Kubernetes nodes are not labeled, you may find that Kinetica pods do not schedule and instead sit in a 'Pending' state.
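
If pods do sit in Pending, the scheduler's reason is usually visible in the pod events; the namespace and pod name below are placeholders:

```bash
# List any pods stuck in Pending across the cluster
kubectl get pods --all-namespaces --field-selector=status.phase=Pending

# Inspect the scheduling events for a specific pod (replace the placeholders)
kubectl describe pod -n <namespace> <pod-name>
```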

Install the kinetica-operators chart

This chart will install the Kinetica K8s operators together with a default configured database and workbench UI.

Add the Kinetica chart repository

Add the repo locally as kinetica-operators:

```bash
# Add the Kinetica Helm repository
helm repo add kinetica-operators https://kineticadb.github.io/charts/latest
```
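
After adding the repository, refresh the local chart index and confirm the chart is visible:

```bash
# Refresh the local Helm repository cache and list the available Kinetica charts
helm repo update
helm search repo kinetica-operators
```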

Obtain the default Helm values file

For the generic Kubernetes install use the following values file without modification. Advanced users with specific requirements may need to adjust parameters in this file.

```bash
# Download the default Helm values file
wget https://raw.githubusercontent.com/kineticadb/charts/72.0.11/kinetica-operators/values.onPrem.k8s.yaml
```

Determine the following prior to the chart install

Default Admin User

The default admin user in the Helm chart is kadmin, but this is configurable. Non-ASCII characters and typographical symbols in the password must be escaped with a "\". For example, --set dbAdminUser.password="MyPassword\!"

  1. Obtain a LICENSE-KEY as described in the introduction above.
  2. Choose a PASSWORD for the initial administrator user
  3. As the storage class name varies between K8s flavors, and there can be more than one, it must be prescribed in the chart installation. Obtain the DEFAULT-STORAGE-CLASS name with the command:


```bash
# Find the default storage class
kubectl get sc -o name
```

Use the name found after the "/". For example, for storageclass.storage.k8s.io/local-path, use "local-path" as the parameter.
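
If your cluster has several storage classes, the one annotated as the default can be picked out directly; this convenience sketch simply filters on the standard is-default-class annotation:

```bash
# Print the name of the storage class marked as the cluster default
kubectl get sc -o jsonpath='{.items[?(@.metadata.annotations.storageclass\.kubernetes\.io/is-default-class=="true")].metadata.name}'
```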

Amazon EKS

If installing on Amazon EKS, see here.

Planning access to your Kinetica Cluster

Existing Ingress Controller?

If you have an existing Ingress Controller in your Kubernetes cluster and do not want Kinetica to install an ingress-nginx to expose its endpoints, you can disable the ingress-nginx installation by editing the values.yaml file and setting install: true to install: false:

```yaml
nodeSelector: {}
tolerations: []
affinity: {}

ingressNginx:
    install: false
```
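
With the LICENSE-KEY, PASSWORD, and DEFAULT-STORAGE-CLASS in hand, the chart install itself typically passes them as --set overrides alongside the downloaded values file. The release name, namespace, and value paths below are illustrative assumptions, so verify them against the chart's values.yaml before running:

```bash
# Sketch of the chart installation; value paths are assumptions, check them against values.yaml
helm install kinetica-operators kinetica-operators/kinetica-operators \
  --namespace kinetica-system --create-namespace \
  --values values.onPrem.k8s.yaml \
  --set db.gpudbCluster.license="LICENSE-KEY" \
  --set dbAdminUser.password="PASSWORD" \
  --set global.defaultStorageClass="DEFAULT-STORAGE-CLASS"
```

If you are keeping an existing ingress controller, add --set ingressNginx.install=false (matching the values.yaml change above) to the same command.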