
Air-Gapped Environments

Obtaining the Kinetica Images

Kinetica Images for an Air-Gapped Environment

If you are installing Kinetica with Helm in an air-gapped environment, you will either need a Registry Proxy to pass the requests through, or you will need to download the images and push them to your internal Registry.

For information on ways to transfer the files into an air-gapped environment, see here.

Required Container Images

docker.io (Required Kinetica Images for All Installations)

  • docker.io/kinetica/kinetica-k8s-operator:{{kinetica_full_version}}
    • docker.io/kinetica/kinetica-k8s-cpu:{{kinetica_full_version}} or
    • docker.io/kinetica/kinetica-k8s-cpu-avx512:{{kinetica_full_version}} or
    • docker.io/kinetica/kinetica-k8s-gpu:{{kinetica_full_version}}
  • docker.io/kinetica/workbench-operator:{{kinetica_full_version}}
  • docker.io/kinetica/workbench:{{kinetica_full_version}}
  • docker.io/kinetica/kinetica-k8s-monitor:{{kinetica_full_version}}
  • docker.io/kinetica/busybox:{{kinetica_full_version}}
  • docker.io/kinetica/fluent-bit:{{kinetica_full_version}}
  • docker.io/kinetica/kagent:7.1.9.15.20230823123615.ga

nvcr.io (Required Kinetica Images for GPU Installations using kinetica-k8s-gpu)

  • nvcr.io/nvidia/gpu-operator:v23.9.1

registry.k8s.io (Required Kinetica Images for GPU Installations using kinetica-k8s-gpu)

  • registry.k8s.io/nfd/node-feature-discovery:v0.14.2

docker.io (Required Supporting Images)

  • docker.io/bitnami/openldap:2.6.7
  • docker.io/alpine/openssl:latest (used by bitnami/openldap)
  • docker.io/otel/opentelemetry-collector-contrib:0.95.0

quay.io (Required Supporting Images)

  • quay.io/brancz/kube-rbac-proxy:v0.14.2
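When mirroring the images above into an internal registry, the per-image steps are always pull, re-tag, push. A minimal sketch of that loop (the registry address, the `mirror_target` helper, and the abbreviated image list are our assumptions; the docker commands are echoed as a dry run, so remove `echo` to execute them):

```shell
#!/usr/bin/env bash
# Sketch: mirror required images into an internal registry (dry run).
# INTERNAL_REGISTRY and mirror_target are illustrative assumptions.
INTERNAL_REGISTRY="registry.internal.example.com"

# Rewrite docker.io/kinetica/foo:tag -> registry.internal.example.com/kinetica/foo:tag
mirror_target() {
  echo "${INTERNAL_REGISTRY}/${1#*/}"
}

# Abbreviated list; extend with the full set of required images above.
IMAGES=(
  docker.io/kinetica/kinetica-k8s-operator:v7.2.2-3.ga-2
  docker.io/kinetica/kinetica-k8s-cpu:v7.2.2-3.ga-2
  quay.io/brancz/kube-rbac-proxy:v0.14.2
)

for img in "${IMAGES[@]}"; do
  echo docker pull "$img"
  echo docker tag "$img" "$(mirror_target "$img")"
  echo docker push "$(mirror_target "$img")"
done
```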

Optional Container Images

These images are only required if certain features are enabled as part of the Helm installation:

  • CertManager
  • ingress-nginx

quay.io (Optional Supporting Images)

  • quay.io/jetstack/cert-manager-cainjector:v1.13.3 (if optionally installing CertManager via Kinetica Helm Chart)
  • quay.io/jetstack/cert-manager-controller:v1.13.3 (if optionally installing CertManager via Kinetica Helm Chart)
  • quay.io/jetstack/cert-manager-webhook:v1.13.3 (if optionally installing CertManager via Kinetica Helm Chart)

registry.k8s.io (Optional Supporting Images)

  • registry.k8s.io/ingress-nginx/controller:v1.9.4 (if optionally installing Ingress nGinx via Kinetica Helm Chart)
  • registry.k8s.io/ingress-nginx/controller:v1.9.6@sha256:1405cc613bd95b2c6edd8b2a152510ae91c7e62aea4698500d23b2145960ab9c

Which Kinetica Core Image do I use?

| Container Image         | Intel (AMD64) | Intel (AMD64 AVX512) | AMD (AMD64) | Graviton (aarch64) | Apple Silicon (aarch64) |
|-------------------------|---------------|----------------------|-------------|--------------------|-------------------------|
| kinetica-k8s-cpu        | ✓             | ✓ (1)                | ✓           | ✓                  | ✓                       |
| kinetica-k8s-cpu-avx512 |               | ✓                    |             |                    |                         |
| kinetica-k8s-gpu        | ✓ (2)         | ✓ (2)                | ✓ (2)       |                    |                         |

  1. On an AVX512-enabled Intel CPU it is preferable to use the kinetica-k8s-cpu-avx512 container image.
  2. With a supported NVIDIA GPU.
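The selection logic above can be sketched as a small helper (the `choose_image` function and its GPU flag are our illustration, not part of any Kinetica tooling; footnotes (1) and (2) appear as the AVX512 preference and the NVIDIA GPU requirement):

```shell
# Sketch: choose a Kinetica core image from CPU architecture and GPU availability.
# choose_image and its arguments are illustrative assumptions.
choose_image() {
  local arch="$1" has_nvidia_gpu="$2"
  if [ "$arch" = "x86_64" ]; then
    if [ "$has_nvidia_gpu" = "yes" ]; then
      echo "kinetica-k8s-gpu"            # (2) requires a supported NVIDIA GPU
    elif grep -q avx512f /proc/cpuinfo 2>/dev/null; then
      echo "kinetica-k8s-cpu-avx512"     # (1) preferred on AVX512-capable CPUs
    else
      echo "kinetica-k8s-cpu"
    fi
  else
    echo "kinetica-k8s-cpu"              # aarch64: Graviton, Apple Silicon
  fi
}

# Example: pick an image for the current machine, assuming no GPU.
choose_image "$(uname -m)" no
```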

Please select the method to transfer the images:

It is possible to use mesosphere/mindthegap.

mindthegap

mindthegap provides utilities to manage air-gapped image bundles, both creating image bundles and seeding images from a bundle into an existing OCI registry or directly loading them to containerd.

This makes it possible with mindthegap to:

  • create a single archive bundle of all the required images outside the air-gapped environment
  • run mindthegap using the archive bundle on the Kubernetes Nodes to bulk load the images into containerd in a single command.

Kinetica provides two mindthegap yaml files which list all the necessary images for Kinetica for Kubernetes.
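For orientation, an images file for mindthegap groups images by source registry. The shape below is an assumption based on mindthegap's images-file format, with an abbreviated image list; the two Kinetica-provided yaml files are authoritative:

```yaml
# Assumed shape of a mindthegap images file (mtg.yaml) -- the
# Kinetica-provided files are authoritative. Abbreviated list.
docker.io:
  images:
    - kinetica/kinetica-k8s-operator:v7.2.2-3.ga-2
    - kinetica/kinetica-k8s-cpu:v7.2.2-3.ga-2
quay.io:
  images:
    - brancz/kube-rbac-proxy:v0.14.2
```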

Install mindthegap

Install mindthegap
wget https://github.com/mesosphere/mindthegap/releases/download/v1.13.1/mindthegap_v1.13.1_linux_amd64.tar.gz
tar zxvf mindthegap_v1.13.1_linux_amd64.tar.gz

mindthegap - Create the Bundle

mindthegap create image-bundle
mindthegap create image-bundle --images-file mtg.yaml --platform linux/amd64

where --images-file is either the CPU or GPU Kinetica mindthegap yaml file.

mindthegap - Import the Bundle

mindthegap - Import to containerd

mindthegap import image-bundle
mindthegap import image-bundle --image-bundle images.tar [--containerd-namespace k8s.io]

If --containerd-namespace is not specified, images will be imported into the k8s.io namespace.

sudo required

Depending on how containerd has been installed and configured, it may be necessary to run the above command with sudo.

mindthegap - Import to an internal OCI Registry

mindthegap push bundle
mindthegap push bundle --bundle <path/to/bundle.tar> \
--to-registry <registry.address> \
[--to-registry-insecure-skip-tls-verify]

It is possible with containerd to pull images, save them, and load them either into a Container Registry in the air-gapped environment or directly into another containerd instance.

If the target containerd is on a node running a Kubernetes Cluster then these images will be sourced by Kubernetes from the loaded images, via CRI, with no requirement to pull them from an external source e.g. a Registry or Mirror.

sudo required

Depending on how containerd has been installed and configured, many of the example calls below may require running with sudo.

containerd - Using containerd to pull and export an image

Similar to docker pull, we can use ctr image pull to pull the core Kinetica DB CPU-based image.

Pull a remote image (containerd)
ctr image pull docker.io/kinetica/kinetica-k8s-cpu:v7.2.2-3.ga-2

We now need to export the pulled image as an archive to the local filesystem.

Export a local image (containerd)
ctr image export kinetica-k8s-cpu-v7.2.2-3.ga-2.tar \
docker.io/kinetica/kinetica-k8s-cpu:v7.2.2-3.ga-2

We can now transfer this archive (kinetica-k8s-cpu-v7.2.2-3.ga-2.tar) to the Kubernetes Node inside the air-gapped environment.

containerd - Using containerd to import an image

Using containerd to import an image onto a Kubernetes Node on which a Kinetica Cluster is running.

Import the Images
ctr -n=k8s.io images import kinetica-k8s-cpu-v7.2.2-3.ga-2.tar

-n=k8s.io

It is possible to use ctr images import kinetica-k8s-cpu-v7.2.2-3.ga-2.tar to import the image to containerd.

However, in order for the image to be visible to the Kubernetes Cluster running on containerd it is necessary to add the parameter -n=k8s.io.

containerd - Verifying the image is available

To verify the image is loaded into containerd on the node, run the following on the node:

Verify containerd Images
ctr image ls

To verify the image is visible to Kubernetes on the node, run the following:

Verify CRI Images
crictl images

It is possible with docker to pull images, save them, and load them into an OCI Container Registry in the air-gapped environment.

docker - Using docker to pull and export an image

Pull a remote image (docker)
docker pull --platform linux/amd64 docker.io/kinetica/kinetica-k8s-cpu:v7.2.2-3.ga-2

Export a local image (docker)
docker save -o kinetica-k8s-cpu-v7.2.2-3.ga-2.tar \
docker.io/kinetica/kinetica-k8s-cpu:v7.2.2-3.ga-2

We can now transfer this archive (kinetica-k8s-cpu-v7.2.2-3.ga-2.tar) to the Kubernetes Node inside the air-gapped environment.

docker - Using docker to import an image

Using docker to import an image on to a Kubernetes Node on which a Kinetica Cluster is running.

Import the Images
docker load -i kinetica-k8s-cpu-v7.2.2-3.ga-2.tar
docker tag docker.io/kinetica/kinetica-k8s-cpu:v7.2.2-3.ga-2 <registry.address>/kinetica/kinetica-k8s-cpu:v7.2.2-3.ga-2
docker push <registry.address>/kinetica/kinetica-k8s-cpu:v7.2.2-3.ga-2