Kinetica Clusters CRD Reference
This page covers the Kinetica Cluster Kubernetes CRD.
kubectl cli commands
kubectl -n _namespace_ get kc
Lists the KineticaClusters defined within the specified namespace to the console.
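For example, to list the clusters in a namespace and inspect a specific cluster (the "gpudb" namespace and "my-cluster" name below are illustrative placeholders):
kubectl -n gpudb get kc
kubectl -n gpudb get kc my-cluster -o yaml
kubectl -n gpudb describe kc my-cluster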
Full KineticaCluster CR Structure
kineticaclusters.app.kinetica.com_sample.yaml
# APIVersion defines the versioned schema of this representation of an
# object. Servers should convert recognized schemas to the latest
# internal value, and may reject unrecognized values. More info:
# https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources
apiVersion: app.kinetica.com/v1
# Kind is a string value representing the REST resource this object
# represents. Servers may infer this from the endpoint the client
# submits requests to. Cannot be updated. In CamelCase. More info:
# https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds
kind: KineticaCluster
metadata: {}
# KineticaClusterSpec defines the configuration for KineticaCluster DB
spec:
# An optional duration after which the database is stopped and DB
# resources are freed
autoSuspend:
enabled: false
# InactivityDuration - the duration which the cluster should be idle
# before auto-pausing the DB Cluster.
inactivityDuration: "1h"
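# Example (illustrative values): suspend an idle cluster after four
# hours of inactivity:
#   autoSuspend:
#     enabled: true
#     inactivityDuration: "4h"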
# The platform infrastructure provider e.g. azure, aws, gcp, on-prem
# etc.
awsConfig:
# ClusterName - AWS name of the EKS Cluster. NOTE: Marked as
# optional but is mandatory
clusterName: string
# MarketplaceAppConfig - Amazon AWS specific DB Cluster
# information.
marketplaceApp:
# KmsKeyId - Key for disk encryption. The full Amazon Resource
# Name of the key to use when encrypting the volume. If none is
# supplied but encrypted is true, a key is generated by AWS. See
# AWS docs for valid ARN value.
kmsKeyId: string
# ProductCode - used to uniquely identify a product in AWS
# Marketplace. The product code should be the same as the one
# used during the publishing of a new product.
productCode: "1cmucncoyp9pi8xjdwqjimlf8"
# PublicKeyVersion - Public Key Version provided by AWS
# Marketplace
publicKeyVersion: 1
# ResourceId - Identifier of the resource against which usage is
# emitted. Format is GUID (UUID).
# https://github.com/microsoft/commercial-marketplace-openapi/blob/main/Microsoft.Marketplace.Metering/2018-08-31/meteringapi.v1.json
# Optional; exactly one of ResourceId or ResourceUri must be
# specified.
resourceId: string
# NodeGroups - List of NodeGroups for this cluster MUST contain at
# least one of the following keys: -
# * none
# * infra
# * infra_public
# * compute
# * compute-gpu
# * aaw_cpu
# NOTE: Marked as optional but is mandatory
nodeGroups: {}
# OTELTracing - OpenTelemetry Tracing Specifics
otelTracing:
# Endpoint - Set the OpenTelemetry reporting Endpoint
endpoint: ""
# Key - KineticaCluster specific Key required to send Telemetry
# information to the Cloud
key: string
# MaxBatchInterval - Telemetry Reporting Interval to use in seconds.
maxBatchInterval: 10
# MaxBatchSize - Telemetry Maximum Batch Size to send.
maxBatchSize: 1024
# The platform infrastructure provider e.g. azure, aws, gcp, on-prem
# etc.
azureConfig:
# App Insights Specifics
appInsights:
# Endpoint - Override the default AppInsights reporting Endpoint
endpoint: ""
# Key - KineticaCluster specific Application Insights Key required
# to send Telemetry information to the Azure Portal
key: string
# MaxBatchInterval - Telemetry Reporting Interval to use in seconds.
maxBatchInterval: 10
# MaxBatchSize - Telemetry Maximum Batch Size to send.
maxBatchSize: 1024
# AzureManagedAppConfig - Microsoft Azure specific DB Cluster
# information.
managedApp:
# DiskEncryptionSetID - By default, managed disks use
# platform-managed encryption keys. All managed disks, snapshots,
# images, and data written to existing managed disks are
# automatically encrypted-at-rest with platform-managed keys. You
# can choose to manage encryption at the level of each managed
# disk, with your own keys. When you specify a customer-managed
# key, that key is used to protect and control access to the key
# that encrypts your data. Customer-managed keys offer greater
# flexibility to manage access controls.
diskEncryptionSetId: string
# PlanId - The Azure Marketplace Plan/Offer identifier selected by
# the customer for this DB cluster e.g. BYOL, Pay-As-You-Go etc.
planId: string
# ResourceId - Identifier of the resource against which usage is
# emitted. Format is GUID (UUID).
# https://github.com/microsoft/commercial-marketplace-openapi/blob/main/Microsoft.Marketplace.Metering/2018-08-31/meteringapi.v1.json
# Optional; exactly one of ResourceId or ResourceUri must be
# specified.
resourceId: string
# ResourceUri - Identifier of the managed app resource against
# which usage is emitted
# https://github.com/microsoft/commercial-marketplace-openapi/blob/main/Microsoft.Marketplace.Metering/2018-08-31/meteringapi.v1.json
# Optional; exactly one of ResourceId or ResourceUri must be
# specified.
resourceUri: string
# Tells the operator we want to run in Debug mode.
debug: false
# Identifies the type of Kubernetes deployment.
deploymentType:
# CloudRegionEnum - The target Kubernetes type to deploy to.
# Supported Values are: - aws_useast_1 aws_useast_2 aws_uswest_1
# az_useast_1 az_uswest_1
region: string
# DeploymentTypeEnum - The type of the Deployment. Supported Values
# are: - Managed FreeSaaS DedicatedSaaS OnPrem
type: string
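# Example (values taken from the supported enums above):
#   deploymentType:
#     region: "aws_useast_1"
#     type: "OnPrem"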
# The platform infrastructure provider e.g. azure, aws, gcp, on-prem
# etc.
devEditionConfig:
# Host IPv4 address. Used by KinD-based Developer Edition where
# ingress paths are set to *. Provides qualified, routable URLs to
# Workbench.
hostIpAddress: ""
# The GAdmin Dashboard Configuration for the Kinetica Cluster.
gadmin:
# The port that GAdmin will be running on. It runs only on the head
# node pod in the cluster. Default: 8080
containerPort:
# Number of port to expose on the pod's IP address. This must be a
# valid port number, 0 < x < 65536.
containerPort: 1
# What host IP to bind the external port to.
hostIP: string
# Number of port to expose on the host. If specified, this must be
# a valid port number, 0 < x < 65536. If HostNetwork is
# specified, this must match ContainerPort. Most containers do
# not need this.
hostPort: 1
# If specified, this must be an IANA_SVC_NAME and unique within
# the pod. Each named port in a pod must have a unique name. Name
# for the port that can be referred to by services.
name: string
# Protocol for port. Must be UDP, TCP, or SCTP. Defaults
# to "TCP".
protocol: "TCP"
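# Example (illustrative port name): expose GAdmin on its default
# port 8080:
#   containerPort:
#     containerPort: 8080
#     name: "gadmin"
#     protocol: "TCP"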
# The Ingress Endpoint that GAdmin will be running on.
ingressPath:
# backend defines the referenced service endpoint to which the
# traffic will be forwarded to.
backend:
# resource is an ObjectRef to another Kubernetes resource in the
# namespace of the Ingress object. If resource is specified,
# serviceName and servicePort must not be specified.
resource:
# APIGroup is the group for the resource being referenced. If
# APIGroup is not specified, the specified Kind must be in
# the core API group. For any other third-party types,
# APIGroup is required.
apiGroup: string
# Kind is the type of resource being referenced
kind: KineticaCluster
# Name is the name of resource being referenced
name: string
# serviceName specifies the name of the referenced service.
serviceName: string
# servicePort Specifies the port of the referenced service.
servicePort:
# path is matched against the path of an incoming request.
# Currently it can contain characters disallowed from the
# conventional "path" part of a URL as defined by RFC 3986. Paths
# must begin with a '/' and must be present when using PathType
# with value "Exact" or "Prefix".
path: string
# pathType determines the interpretation of the path matching.
# PathType can be one of the following values: * Exact: Matches
# the URL path exactly. * Prefix: Matches based on a URL path
# prefix split by '/'. Matching is done on a path element by
# element basis. A path element refers to the list of labels in
# the path split by the '/' separator. A request is a match for
# path p if every p is an element-wise prefix of p of the request
# path. Note that if the last element of the path is a substring
# of the last element in request path, it is not a match
# (e.g. /foo/bar matches /foo/bar/baz, but does not
# match /foo/barbaz). * ImplementationSpecific: Interpretation of
# the Path matching is up to the IngressClass. Implementations
# can treat this as a separate PathType or treat it identically
# to Prefix or Exact path types. Implementations are required to
# support all path types. Defaults to ImplementationSpecific.
pathType: string
# Whether to enable the GAdmin Dashboard on the Cluster. Default:
# true
isEnabled: true
# Gaia - gaia.properties configuration
gaia:
  admin:
# AdminLoginOnlyGpudbDown - When GPUdb is down, only allow admin
# user to login
admin_login_only_gpudb_down: true
# Username - We do check for admin username in various places
admin_username: "admin"
# LoginAnimationEnabled - Display any animation in login page
login_animation_enabled: true
# LoginBypassEnabled - Convenience setting for dev mode
login_bypass_enabled: false
# RequireStrongPassword - Convenience settings for dev mode
require_strong_password: true
# SSLTruststorePasswordScript - Script used to obtain the SSL
# truststore password
ssl_truststore_password_script: string
# DemoSchema - Schema-related configuration
demo_schema: "demo"
gpudb:
# DataFileStringNullValue - Table import/export null value string
data_file_string_null_value: "\\N"
gpudb_ext_url: "http://127.0.0.1:8082/gpudb-0"
# URL - Current instance of gpudb, when running in HA mode change
# this to load balancer endpoint
gpudb_url: "http://127.0.0.1:9191"
# LoggingLogFileName - Which file to use when displaying logging
# on Cluster page.
logging_log_file_name: "gpudb.log"
# SampleRepoURL - URL of the repository hosting sample data sets.
sample_repo_url: "//s3.amazonaws.com/kinetica-ce-data"
hm:
gpudb_ext_hm_url: "http://127.0.0.1:8082/gpudb-host-manager"
gpudb_hm_url: "http://127.0.0.1:9300"
http:
# ClientTimeout - Number of seconds for proxy request timeout
http_client_timeout: 3600
# ClientTimeoutV2 - Force override of previous default with 0 as
# infinite timeout
http_client_timeout_v2: 0
# TomcatPathKey - Name of folder where Tomcat apps are installed
tomcat_path_key: "tomcat"
# WebappContext - Web App context
webapp_context: "gadmin"
# GAdminIsRemote - True if the gadmin application is running on a
# remote machine (not on same node as gpudb). If running on a
# remote machine the manage options will be disabled.
is_remote: false
# KAgentCLIPath - Path to the KAgent CLI executable
kagent_cli_path: "/opt/gpudb/kagent/bin/kagent"
# KIO - KIO-related configuration
kio:
  kio_log_file_path: "/opt/gpudb/kitools/kio/logs/gadmin.log"
  kio_log_level: "DEBUG"
  kio_log_size_limit: 10485760
kisql:
# QueryResultsLimit - KiSQL limit on the number of results in each
# query
kisql_query_results_limit: 10000
# QueryTimezone - KiSQL TimeZoneId setting for queries
# (use "system" for local system time)
kisql_query_timezone: "GMT"
license:
# Status - Stub for license manager
status: "ok"
# Type - Stub for license manager
type: "unlimited"
# MaxConcurrentUserSessions - Session management configuration
max_concurrent_user_sessions: 0
# PublicSchema - Schema-related configuration
public_schema: "ki_home"
# RevealDBInfoFile - Path to file containing Reveal DB location
reveal_db_info_file: "/opt/gpudb/connectors/reveal/var/REVEAL_DB_DIR"
# RootSchema - Schema-related configuration
root_schema: "root"
stats:
# GraphanaURL -
graphana_url: "http://127.0.0.1:3000"
# GraphiteURL
graphite_url: "http://127.0.0.1:8181"
# StatsGrafanaURL - Port used to host the Grafana user interface
# and embeddable metric dashboards in GAdmin. Note: If this value
# is defaulted then it will be replaced by the name of the Stats
# service if it is deployed & Grafana is enabled e.g.
# cluster-1234.gpudb.svc.cluster.local
stats_grafana_url: "http://127.0.0.1:9091"
# GPUDBCluster is an instance of a Kinetica DB Cluster i.e. its
# StatefulSet, Service, Ingress, ConfigMap etc.
gpudbCluster:
# Affinity - is a group of affinity scheduling rules.
affinity:
# Describes node affinity scheduling rules for the pod.
nodeAffinity:
# The scheduler will prefer to schedule pods to nodes that
# satisfy the affinity expressions specified by this field, but
# it may choose a node that violates one or more of the
# expressions. The node that is most preferred is the one with
# the greatest sum of weights, i.e. for each node that meets
# all of the scheduling requirements (resource request,
# requiredDuringScheduling affinity expressions, etc.), compute
# a sum by iterating through the elements of this field and
# adding "weight" to the sum if the node matches the
# corresponding matchExpressions; the node(s) with the highest
# sum are the most preferred.
preferredDuringSchedulingIgnoredDuringExecution:
- preference:
# A list of node selector requirements by node's labels.
matchExpressions:
- key: string
# Represents a key's relationship to a set of values.
# Valid operators are In, NotIn, Exists, DoesNotExist.
# Gt, and Lt.
operator: string
# An array of string values. If the operator is In or
# NotIn, the values array must be non-empty. If the
# operator is Exists or DoesNotExist, the values array
# must be empty. If the operator is Gt or Lt, the values
# array must have a single element, which will be
# interpreted as an integer. This array is replaced
# during a strategic merge patch.
values: ["string"]
# A list of node selector requirements by node's fields.
matchFields:
- key: string
# Represents a key's relationship to a set of values.
# Valid operators are In, NotIn, Exists, DoesNotExist.
# Gt, and Lt.
operator: string
# An array of string values. If the operator is In or
# NotIn, the values array must be non-empty. If the
# operator is Exists or DoesNotExist, the values array
# must be empty. If the operator is Gt or Lt, the values
# array must have a single element, which will be
# interpreted as an integer. This array is replaced
# during a strategic merge patch.
values: ["string"]
# Weight associated with matching the corresponding
# nodeSelectorTerm, in the range 1-100.
weight: 1
# If the affinity requirements specified by this field are not
# met at scheduling time, the pod will not be scheduled onto
# the node. If the affinity requirements specified by this
# field cease to be met at some point during pod execution
# (e.g. due to an update), the system may or may not try to
# eventually evict the pod from its node.
requiredDuringSchedulingIgnoredDuringExecution:
# Required. A list of node selector terms. The terms are
# ORed.
nodeSelectorTerms:
- matchExpressions:
- key: string
# Represents a key's relationship to a set of values.
# Valid operators are In, NotIn, Exists, DoesNotExist.
# Gt, and Lt.
operator: string
# An array of string values. If the operator is In or
# NotIn, the values array must be non-empty. If the
# operator is Exists or DoesNotExist, the values array
# must be empty. If the operator is Gt or Lt, the values
# array must have a single element, which will be
# interpreted as an integer. This array is replaced
# during a strategic merge patch.
values: ["string"]
# A list of node selector requirements by node's fields.
matchFields:
- key: string
# Represents a key's relationship to a set of values.
# Valid operators are In, NotIn, Exists, DoesNotExist.
# Gt, and Lt.
operator: string
# An array of string values. If the operator is In or
# NotIn, the values array must be non-empty. If the
# operator is Exists or DoesNotExist, the values array
# must be empty. If the operator is Gt or Lt, the values
# array must have a single element, which will be
# interpreted as an integer. This array is replaced
# during a strategic merge patch.
values: ["string"]
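# Example (illustrative label key/value): require DB pods to be
# scheduled only onto nodes carrying a hypothetical pool label:
#   affinity:
#     nodeAffinity:
#       requiredDuringSchedulingIgnoredDuringExecution:
#         nodeSelectorTerms:
#         - matchExpressions:
#           - key: "kinetica.com/node-pool"
#             operator: In
#             values: ["compute"]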
# Describes pod affinity scheduling rules (e.g. co-locate this pod
# in the same node, zone, etc. as some other pod(s)).
podAffinity:
# The scheduler will prefer to schedule pods to nodes that
# satisfy the affinity expressions specified by this field, but
# it may choose a node that violates one or more of the
# expressions. The node that is most preferred is the one with
# the greatest sum of weights, i.e. for each node that meets
# all of the scheduling requirements (resource request,
# requiredDuringScheduling affinity expressions, etc.), compute
# a sum by iterating through the elements of this field and
# adding "weight" to the sum if the node has pods which matches
# the corresponding podAffinityTerm; the node(s) with the
# highest sum are the most preferred.
preferredDuringSchedulingIgnoredDuringExecution:
- podAffinityTerm:
# A label query over a set of resources, in this case pods.
labelSelector:
# matchExpressions is a list of label selector
# requirements. The requirements are ANDed.
matchExpressions:
- key: string
# operator represents a key's relationship to a set of
# values. Valid operators are In, NotIn, Exists and
# DoesNotExist.
operator: string
# values is an array of string values. If the operator
# is In or NotIn, the values array must be non-empty.
# If the operator is Exists or DoesNotExist, the values
# array must be empty. This array is replaced during a
# strategic merge patch.
values: ["string"]
# matchLabels is a map of {key,value} pairs. A single
# {key,value} in the matchLabels map is equivalent to an
# element of matchExpressions, whose key field is "key",
# the operator is "In", and the values array contains
# only "value". The requirements are ANDed.
matchLabels: {}
# A label query over the set of namespaces that the term
# applies to. The term is applied to the union of the
# namespaces selected by this field and the ones listed in
# the namespaces field. null selector and null or empty
# namespaces list means "this pod's namespace". An empty
# selector ({}) matches all namespaces.
namespaceSelector:
# matchExpressions is a list of label selector
# requirements. The requirements are ANDed.
matchExpressions:
- key: string
# operator represents a key's relationship to a set of
# values. Valid operators are In, NotIn, Exists and
# DoesNotExist.
operator: string
# values is an array of string values. If the operator
# is In or NotIn, the values array must be non-empty.
# If the operator is Exists or DoesNotExist, the values
# array must be empty. This array is replaced during a
# strategic merge patch.
values: ["string"]
# matchLabels is a map of {key,value} pairs. A single
# {key,value} in the matchLabels map is equivalent to an
# element of matchExpressions, whose key field is "key",
# the operator is "In", and the values array contains
# only "value". The requirements are ANDed.
matchLabels: {}
# namespaces specifies a static list of namespace names that
# the term applies to. The term is applied to the union of
# the namespaces listed in this field and the ones selected
# by namespaceSelector. null or empty namespaces list and
# null namespaceSelector means "this pod's namespace".
namespaces: ["string"]
# This pod should be co-located (affinity) or not
# co-located (anti-affinity) with the pods matching the
# labelSelector in the specified namespaces, where
# co-located is defined as running on a node whose value of
# the label with key topologyKey matches that of any node
# on which any of the selected pods is running. Empty
# topologyKey is not allowed.
topologyKey: string
# weight associated with matching the corresponding
# podAffinityTerm, in the range 1-100.
weight: 1
# If the affinity requirements specified by this field are not
# met at scheduling time, the pod will not be scheduled onto
# the node. If the affinity requirements specified by this
# field cease to be met at some point during pod execution
# (e.g. due to a pod label update), the system may or may not
# try to eventually evict the pod from its node. When there are
# multiple elements, the lists of nodes corresponding to each
# podAffinityTerm are intersected, i.e. all terms must be
# satisfied.
requiredDuringSchedulingIgnoredDuringExecution:
- labelSelector:
# matchExpressions is a list of label selector requirements.
# The requirements are ANDed.
matchExpressions:
- key: string
# operator represents a key's relationship to a set of
# values. Valid operators are In, NotIn, Exists and
# DoesNotExist.
operator: string
# values is an array of string values. If the operator is
# In or NotIn, the values array must be non-empty. If the
# operator is Exists or DoesNotExist, the values array
# must be empty. This array is replaced during a
# strategic merge patch.
values: ["string"]
# matchLabels is a map of {key,value} pairs. A single
# {key,value} in the matchLabels map is equivalent to an
# element of matchExpressions, whose key field is "key",
# the operator is "In", and the values array contains
# only "value". The requirements are ANDed.
matchLabels: {}
# A label query over the set of namespaces that the term
# applies to. The term is applied to the union of the
# namespaces selected by this field and the ones listed in
# the namespaces field. null selector and null or empty
# namespaces list means "this pod's namespace". An empty
# selector ({}) matches all namespaces.
namespaceSelector:
# matchExpressions is a list of label selector requirements.
# The requirements are ANDed.
matchExpressions:
- key: string
# operator represents a key's relationship to a set of
# values. Valid operators are In, NotIn, Exists and
# DoesNotExist.
operator: string
# values is an array of string values. If the operator is
# In or NotIn, the values array must be non-empty. If the
# operator is Exists or DoesNotExist, the values array
# must be empty. This array is replaced during a
# strategic merge patch.
values: ["string"]
# matchLabels is a map of {key,value} pairs. A single
# {key,value} in the matchLabels map is equivalent to an
# element of matchExpressions, whose key field is "key",
# the operator is "In", and the values array contains
# only "value". The requirements are ANDed.
matchLabels: {}
# namespaces specifies a static list of namespace names that
# the term applies to. The term is applied to the union of
# the namespaces listed in this field and the ones selected
# by namespaceSelector. null or empty namespaces list and
# null namespaceSelector means "this pod's namespace".
namespaces: ["string"]
# This pod should be co-located (affinity) or not co-located
# (anti-affinity) with the pods matching the labelSelector in
# the specified namespaces, where co-located is defined as
# running on a node whose value of the label with key
# topologyKey matches that of any node on which any of the
# selected pods is running. Empty topologyKey is not
# allowed.
topologyKey: string
# Describes pod anti-affinity scheduling rules (e.g. avoid putting
# this pod in the same node, zone, etc. as some other pod(s)).
podAntiAffinity:
# The scheduler will prefer to schedule pods to nodes that
# satisfy the anti-affinity expressions specified by this
# field, but it may choose a node that violates one or more of
# the expressions. The node that is most preferred is the one
# with the greatest sum of weights, i.e. for each node that
# meets all of the scheduling requirements (resource request,
# requiredDuringScheduling anti-affinity expressions, etc.),
# compute a sum by iterating through the elements of this field
# and adding "weight" to the sum if the node has pods which
# matches the corresponding podAffinityTerm; the node(s) with
# the highest sum are the most preferred.
preferredDuringSchedulingIgnoredDuringExecution:
- podAffinityTerm:
# A label query over a set of resources, in this case pods.
labelSelector:
# matchExpressions is a list of label selector
# requirements. The requirements are ANDed.
matchExpressions:
- key: string
# operator represents a key's relationship to a set of
# values. Valid operators are In, NotIn, Exists and
# DoesNotExist.
operator: string
# values is an array of string values. If the operator
# is In or NotIn, the values array must be non-empty.
# If the operator is Exists or DoesNotExist, the values
# array must be empty. This array is replaced during a
# strategic merge patch.
values: ["string"]
# matchLabels is a map of {key,value} pairs. A single
# {key,value} in the matchLabels map is equivalent to an
# element of matchExpressions, whose key field is "key",
# the operator is "In", and the values array contains
# only "value". The requirements are ANDed.
matchLabels: {}
# A label query over the set of namespaces that the term
# applies to. The term is applied to the union of the
# namespaces selected by this field and the ones listed in
# the namespaces field. null selector and null or empty
# namespaces list means "this pod's namespace". An empty
# selector ({}) matches all namespaces.
namespaceSelector:
# matchExpressions is a list of label selector
# requirements. The requirements are ANDed.
matchExpressions:
- key: string
# operator represents a key's relationship to a set of
# values. Valid operators are In, NotIn, Exists and
# DoesNotExist.
operator: string
# values is an array of string values. If the operator
# is In or NotIn, the values array must be non-empty.
# If the operator is Exists or DoesNotExist, the values
# array must be empty. This array is replaced during a
# strategic merge patch.
values: ["string"]
# matchLabels is a map of {key,value} pairs. A single
# {key,value} in the matchLabels map is equivalent to an
# element of matchExpressions, whose key field is "key",
# the operator is "In", and the values array contains
# only "value". The requirements are ANDed.
matchLabels: {}
# namespaces specifies a static list of namespace names that
# the term applies to. The term is applied to the union of
# the namespaces listed in this field and the ones selected
# by namespaceSelector. null or empty namespaces list and
# null namespaceSelector means "this pod's namespace".
namespaces: ["string"]
# This pod should be co-located (affinity) or not
# co-located (anti-affinity) with the pods matching the
# labelSelector in the specified namespaces, where
# co-located is defined as running on a node whose value of
# the label with key topologyKey matches that of any node
# on which any of the selected pods is running. Empty
# topologyKey is not allowed.
topologyKey: string
# weight associated with matching the corresponding
# podAffinityTerm, in the range 1-100.
weight: 1
# If the anti-affinity requirements specified by this field are
# not met at scheduling time, the pod will not be scheduled
# onto the node. If the anti-affinity requirements specified by
# this field cease to be met at some point during pod
# execution (e.g. due to a pod label update), the system may or
# may not try to eventually evict the pod from its node. When
# there are multiple elements, the lists of nodes corresponding
# to each podAffinityTerm are intersected, i.e. all terms must
# be satisfied.
requiredDuringSchedulingIgnoredDuringExecution:
- labelSelector:
# matchExpressions is a list of label selector requirements.
# The requirements are ANDed.
matchExpressions:
- key: string
# operator represents a key's relationship to a set of
# values. Valid operators are In, NotIn, Exists and
# DoesNotExist.
operator: string
# values is an array of string values. If the operator is
# In or NotIn, the values array must be non-empty. If the
# operator is Exists or DoesNotExist, the values array
# must be empty. This array is replaced during a
# strategic merge patch.
values: ["string"]
# matchLabels is a map of {key,value} pairs. A single
# {key,value} in the matchLabels map is equivalent to an
# element of matchExpressions, whose key field is "key",
# the operator is "In", and the values array contains
# only "value". The requirements are ANDed.
matchLabels: {}
# A label query over the set of namespaces that the term
# applies to. The term is applied to the union of the
# namespaces selected by this field and the ones listed in
# the namespaces field. null selector and null or empty
# namespaces list means "this pod's namespace". An empty
# selector ({}) matches all namespaces.
namespaceSelector:
# matchExpressions is a list of label selector requirements.
# The requirements are ANDed.
matchExpressions:
- key: string
# operator represents a key's relationship to a set of
# values. Valid operators are In, NotIn, Exists and
# DoesNotExist.
operator: string
# values is an array of string values. If the operator is
# In or NotIn, the values array must be non-empty. If the
# operator is Exists or DoesNotExist, the values array
# must be empty. This array is replaced during a
# strategic merge patch.
values: ["string"]
# matchLabels is a map of {key,value} pairs. A single
# {key,value} in the matchLabels map is equivalent to an
# element of matchExpressions, whose key field is "key",
# the operator is "In", and the values array contains
# only "value". The requirements are ANDed.
matchLabels: {}
# namespaces specifies a static list of namespace names that
# the term applies to. The term is applied to the union of
# the namespaces listed in this field and the ones selected
# by namespaceSelector. null or empty namespaces list and
# null namespaceSelector means "this pod's namespace".
namespaces: ["string"]
# This pod should be co-located (affinity) or not co-located
# (anti-affinity) with the pods matching the labelSelector in
# the specified namespaces, where co-located is defined as
# running on a node whose value of the label with key
# topologyKey matches that of any node on which any of the
# selected pods is running. Empty topologyKey is not
# allowed.
topologyKey: string
# Annotations - Annotations to be applied to the StatefulSet DB
# pods.
annotations: {}
# The name of the cluster to form.
clusterName: string
# ClusterSize - The T-Shirt size of the Kinetica DB Cluster.
clusterSize:
# ClusterSizeEnum - T-Shirt size of the Kinetica DB Cluster i.e. a
# representation of the number of nodes in a simple to understand
# T-Shirt size scheme. This indicates the size of the cluster
# i.e. the number of nodes. It does not identify the size of the
# cloud provider nodes. For node size see ClusterTypeEnum.
# Supported Values are: - XS S M L XL XXL XXXL
tshirtSize: string
# ClusterTypeEnum - An Enum of the node types of a KineticaCluster
# e.g. CPU, GPU along with the Cloud Provider node size e.g. size
# of the VM.
tshirtType: string
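# Example (illustrative; tshirtSize must be one of the supported
# values listed above, tshirtType is a hypothetical value):
#   clusterSize:
#     tshirtSize: "M"
#     tshirtType: "CPU"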
# Config Kinetica DB Configuration Object
config:
  ai:
    apiKey: string
# Provider - AI API provider type. The default is "sqlgpt"
apiProvider: "sqlgpt"
apiUrl: string
# AlertManagerConfig
alertManager:
# AlertManager IP address (run on head node) default port
# is "2003"
ipAddress: "${gaia.host0.address}"
port: 2003
# AlertConfig
alerts:
  alertDiskAbsolute: [integer]
# Trigger an alert if available disk space on any given node
# falls to or below a certain threshold, either absolute
# (number of bytes) or percentage of total disk space. For
# multiple thresholds, use a comma-delimited list of values.
alertDiskPercentage: [1,5,10,20]
# Trigger generic error message alerts, in cases of various
# significant runtime errors.
alertErrorMessages: true
# Executable to run when an alert condition occurs. This
# executable will only be run on **rank0** and does not need to
# be present on other nodes.
alertExe: ""
# Trigger an alert whenever the status of a host or rank
# changes.
alertHostStatus: true
# Optionally, filter host alerts for a comma-delimited list of
# statuses. If a filter is empty, every host status change will
# trigger an alert.
alertHostStatusFilter: "fatal_init_error"
# The maximum number of triggered alerts guaranteed to be stored
# at any given time. When this number of alerts is exceeded,
# older alerts may be discarded to stay within the limit.
alertMaxStoredAlerts: 100
alertMemoryAbsolute: [integer]
# Trigger an alert if available memory on any given node falls
# to or below a certain threshold, either absolute (number of
# bytes) or percentage of total memory. For multiple
# thresholds, use a comma-delimited list of values.
alertMemoryPercentage: [1,5,10,20]
# Trigger an alert if a CUDA error occurs on a rank.
alertRankCudaError: true
# Trigger alerts when the fallback allocator is employed; e.g.,
# host memory is allocated because GPU allocation fails. NOTE:
# To prevent a flooding of alerts, if a fallback allocator is
# triggered in bursts, not every use will generate an alert.
alertRankFallbackAllocator: true
# Trigger an alert whenever the status of a rank changes.
alertRankStatus: true
# Optionally, filter rank alerts for a comma-delimited list of
# statuses. If a filter is empty, every rank status change will
# trigger an alert.
alertRankStatusFilter:
["fatal_init_error","not_responding","terminated"]
# Enable the alerting system.
enableAlerts: true
# Directory where the trace event and summary files are stored.
# Must be a fully qualified path with sufficient free space for
# required volume of data.
traceDirectory: "/tmp"
# The maximum number of trace events to be collected
traceEventBufferSize: 1000000
# Audit - This section controls the request auditor, which will
# audit all requests received by the server in full or in part
# based on the settings.
audit:
# Controls whether the body of each request is audited (in JSON
# format). If 'enable_audit' is "false" this setting has no
# effect. NOTE: For requests that insert data records, this
# setting does not control the auditing of the records being
# inserted, only the rest of the request body; see 'audit_data'
# below to control this. audit_body = false
body: false
# Controls whether records being inserted are audited (in JSON
# format) for requests that insert data records. If
# either 'enable_audit' or 'audit_body' is "false", this
# setting has no effect. NOTE: Enabling this setting during
# bulk ingestion of data will rapidly produce very large audit
# logs and may cause disk space exhaustion; use with caution.
# audit_data = false
data: false
# Controls whether request auditing is enabled. If set
# to "true", the following information is audited for every
# request: Job ID, URI, User, and Client Address. The settings
# below control whether additional information about each
# request is also audited. If set to "false", all auditing is
# disabled. enable_audit = false
enable: false
# Controls whether HTTP headers are audited for each request.
# If 'enable_audit' is "false" this setting has no effect.
# audit_headers = false
headers: true
# Controls whether the above audit settings can be altered at
# runtime via the /alter/system/properties endpoint. In a
# secure environment where auditing is required at all times,
# this should be set to "true" to lock the settings to what is
# set in this file. lock_audit = false
lock: false
# Controls whether response information is audited for each
# request. If 'enable_audit' is "false" this setting has no
# effect. audit_response = false
response: false
# EventConfig
events:
# Run a statistics server to collect information about Kinetica
# and the machines it runs on.
internal: true
# Statistics server IP address (run on head node) default port
# is "2003"
ipAddress: "${gaia.host0.address}"
port: 2003
# Statistics server namespace - should be a machine identifier
statsServerNamespace: "gpudb"
# ExternalFilesConfig
externalFiles:
# Defines the directory from which external files can be loaded
directory: "/opt/gpudb/persist"
# Parquet files compression type. egress_parquet_compression =
# snappy
egressParquetCompression: "snappy"
# Max file size (in MB) to allow saving to a single file. May be
# overridden by target limitations. egress_single_file_max_size
# = 100
egressSingleFileMaxSize: "100"
# Maximum number of simultaneous threads allocated to a given
# external file read request, on each rank. Note that thread
# allocation may also be limited by resource group limits, the
# subtask_concurrency_limit setting, or system load.
readerNumTasks: "-1"
# GeneralConfig - the root of the gpudb.conf configuration in the
# CRD
general:
# Timeout (in seconds) to wait for a rank to start during a
# cluster event (ex: failover) before the event is considered
# failed.
# Enable (if "true") multiple kernels to run concurrently on the
# same GPU
concurrentKernelExecution: true
# Time-to-live in minutes of non-protected tables before they
# are automatically deleted from the database.
defaultTTL: "20"
# Disallow the /clear/table request to clear all tables.
disableClearAll: true
# Enable overlapped-equi-join filters
enableOverlappedEquiJoin: true
# Enable predicate-equi-join filter plan type
enablePredicateEquiJoin: true
# If "true" then all filter execution will be host-only
# (i.e. CPU). This can be useful for high-concurrency
# situations and when PCIe bandwidth is a limiting factor.
forceHostFilterExecution: false
# Maximum number of kernels that can be running at the same time
# on a given GPU. Set to "0" for no limit. Only takes effect
# if 'concurrent_kernel_execution' is "true"
maxConcurrentKernels: "0"
# Maximum number of records that data retrieval requests such
# as /get/records and /aggregate/groupby will return per
# request.
maxGetRecordsSize: 20000
# Set an optional executable command that will be run once when
# Kinetica is ready for client requests. This can be used to
# perform any initialization logic that needs to be run before
# clients connect. It will be run as the "gpudb" user, so you
# must ensure that any required permissions are set on the file
# to allow it to be executed. If the command cannot be
# executed or returns a non-zero error code, then Kinetica will
# be stopped. Output from the startup script will be logged
# to "/opt/gpudb/core/logs/gpudb-on-start.log" (and its dated
# relatives). The "gpudb_env.sh" script is run directly before
# the command, so the path will be set to include the supplied
# Python runtime. Example: on_startup_script
# = /home/gpudb/on-start.sh param1 param2 ...
onStartupScript: ""
# Size in bytes of the pinned memory pool per-rank process to
# speed up copying data to the GPU. Set to "0" to disable.
pinnedMemoryPoolSize: 2000000000
# Tables and collections with these names will not be deleted
# (comma separated).
protectedSets: "MASTER,_MASTER,_DATASOURCE"
# Timeout (in minutes) for filter-type requests
requestTimeout: "20"
# Timeout (in seconds) to wait for a rank to exit gracefully
# before it is force-killed. Machines with slow disk drives may
# require longer times and data may be lost if a drive is not
# responsive.
timeoutShutdownRank: "300"
# Timeout (in seconds) to wait for each database subsystem to
# exit gracefully before it is force-killed.
timeoutShutdownSubsystem: "20"
# Timeout (in seconds) to wait for each database subsystem to
# startup. Subsystems include the Query Planner, Graph,
# Stats, & HTTP servers, as well as external text-search
# ranks.
timeoutStartupSubsystem: "60"
# GraphConfig
graph:
# Enable the graph server
enable: false
# List of GPU devices to be used by graph server The server
# would ideally be run on a different node with dedicated GPU
# (s)
gpuList: ""
# Specify where the graph server should be run, defaults to head
# node
ipAddress: "${gaia.rank0_ip_address}"
# Maximum memory that can be used by the graph server, set
# to "0" to disable memory restriction
maxMemory: 0
# Port used for responses from the graph server to the database
# server
pullPort: 8100
# Port used for requests from the database server to the graph
# server
pushPort: 8099
# Number of seconds the graph client will wait for a response
# from the graph server
timeout: 1200
# HardwareConfig
hardware:
# Rank0HardwareConfig
rank0:
# Specify the GPU to use for all calculations on the HTTP
# server node, **rank0**. NOTE: The **rank0** GPU may be
# shared with another rank.
gpu: 0
# Set the head HTTP **rank0** numa node(s). If left empty,
# there will be no thread affinity or preferred memory node.
# The node list may be either a single node number or a
# range; e.g., "1-5,7,10". If there will be many simultaneous
# users, specify as many nodes as possible that won't overlap
# the **rank1** to **rankN** worker numa nodes that the GPUs
# are on. If there will be few simultaneous users and WMS
# speed is important, choose the numa node the 'rank0.gpu' is
# on.
numaNode:
ranks:
- baseNumaNode: string
# Set each worker rank's preferred data numa node for CPU
# affinity and memory allocation.
# The 'rank<#>.data_numa_node' is the node or nodes that data
# intensive threads will run in and should be set to the same
# numa node that the GPU specified by the
# corresponding 'rank<#>.taskcalc_gpu' is on for best
# performance. If the 'rank<#>.taskcalc_gpu' is specified
# the 'rank<#>.data_numa_node' will be automatically set to
# the node the GPU is attached to, otherwise there will be no
# CPU thread affinity or preferred node for memory allocation
# if not specified or left empty. The node list may be a
# single node number or a range; e.g., "1-5,7,10".
dataNumaNode: string
# Set the GPU device for each worker rank to use. If no GPUs
# are specified, each rank will round-robin the available
# GPUs per host system. Add 'rank<#>.taskcalc_gpu' as needed
# for the worker ranks, where *#* ranges from "1" to the
# highest *rank #* among the 'rank<#>.host' parameters
# Example setting the GPUs to use for ranks 1 and 2:
# # rank1.taskcalc_gpu = 0 # rank2.taskcalc_gpu = 1
taskCalcGPU:
kafka:
# Maximum number of records to be ingested in a single batch
# kafka.batch_size = 1000
batchSize: 1000
# Maximum time (milliseconds) for each poll to get records from
# kafka kafka.poll_timeout = 0
pollTimeout: 1
# Maximum wait time (seconds) to buffer records received from
# kafka before ingestion kafka.wait_time = 30
waitTime: 30
# KifsConfig
kifs:
# KIFs user data size limit
dataLimit: "4Gi"
# sudo usermod -a -G gpudb_proc <user>
enable: false
# Parent directory of the mount point for the KiFS file system.
# Must be a fully qualified path. The actual mount point will
# be a subdirectory *mount* below this directory. Note that
# this folder must have read, write and execute permissions for
# the "gpudb" user and the "gpudb_proc" group, and it cannot be
# a path on an NFS.
mountPoint: "/gpudb/kifs"
useManagedCredentials: true
ml:
# Enable the ML server.
enable: false
# NetworkConfig
network:
# HAAddress - An optional address to allow inter-cluster
# communication with HA when 'address' is not routable between
# clusters.
HAAddress: string
# CompressNetworkData - Enables compression of inter-node
# network data transfers.
compressNetworkData: false
# EnableHTTPDProxy - Start an HTTP server as a proxy to handle
# LDAP and/or Kerberos authentication. Each host will run an
# HTTP server and access to each rank is available through
# http://host:8082/gpudb-1, where port "8082" is defined
# by 'httpd_proxy_port'. NOTE: HTTP external endpoints are not
# affected by the 'use_https' parameter above. If you wish to
# enable HTTPS, you must edit
# the "/opt/gpudb/httpd/conf/httpd.conf" and setup HTTPS as per
# the Apache httpd documentation at
# https://httpd.apache.org/docs/2.2/
enableHTTPDProxy: true
# EnableWorkerHTTPServers - Enable worker HTTP servers; each
# process runs its own server for multi-head ingest.
enableWorkerHTTPServers: true
# GlobalManagerLocalPubPort - ?
globalManagerLocalPubPort: 5554
# GlobalManagerPortOne - Internal communication ports - Host
# manager status notification channel
globalManagerPortOne: 5552
# GlobalManagerPubPort - Host manager synchronization message
# publishing channel port
globalManagerPubPort: 5553
# HeadIPAddress - Head HTTP server IP address. Set to the
# publicly accessible IP address of the first
# process, **rank0**.
headIPAddress: "172.20.0.10"
# HeadPort - Head HTTP server port to use
# for 'head_ip_address'.
headPort: 9191
# HostManagerHTTPPort - HTTP port for web portal of the host
# manager
hostManagerHTTPPort: 9300
# HTTPAllowOrigin - Value to return via
# Access-Control-Allow-Origin HTTP header (for Cross-Origin
# Resource Sharing). Set to empty to not return the header and
# disallow CORS.
httpAllowOrigin: "*"
# HTTPKeepAlive - Keep HTTP connections alive between requests
httpKeepAlive: false
# HTTPDProxyPort - TCP port that the httpd auth proxy server
# will listen on if 'enable_httpd_proxy' is "true".
httpdProxyPort: 8082
# HTTPDProxyUseHTTPS - Set to "true" if the httpd auth proxy
# server is configured to use HTTPS.
httpdProxyUseHTTPS: false
# HTTPSCertFile - File containing the SSL certificate, e.g.
# cert.pem. If required, a self-signed certificate (expires after
# 10 years) can be generated via the command: openssl req -newkey
# rsa:2048 -new -nodes -x509 -days 3650 -keyout key.pem -out
# cert.pem
httpsCertFile: ""
# HTTPSKeyFile - File containing the SSL private Key e.g.
# key.pem If required, a self-signed certificate (expires after
# 10 years) can be generated via the command: openssl
# req -newkey rsa:2048 -new -nodes -x509 \ -days 3650 -keyout
# key.pem -out cert.pem
httpsKeyFile: ""
# Rank0IPAddress - Internal use IP address of the head HTTP
# server, **rank0**. Set to either a second internal network
# accessible by all ranks or to '${gaia.head_ip_address}'.
rank0IPAddress: "${gaia.rank0.host}"
ranks:
- communicatorPort:
# Number of port to expose on the pod's IP address. This
# must be a valid port number, 0 < x < 65536.
containerPort: 1
# What host IP to bind the external port to.
hostIP: string
# Number of port to expose on the host. If specified, this
# must be a valid port number, 0 < x < 65536. If
# HostNetwork is specified, this must match ContainerPort.
# Most containers do not need this.
hostPort: 1
# If specified, this must be an IANA_SVC_NAME and unique
# within the pod. Each named port in a pod must have a
# unique name. Name for the port that can be referred to by
# services.
name: string
# Protocol for port. Must be UDP, TCP, or SCTP. Defaults
# to "TCP".
protocol: "TCP"
# Specify the hosts to run each rank worker process in the
# cluster. For a single machine system, use "127.0.0.1", but
# if using two or more machines, a hostname or IP address
# must be specified for each rank that is accessible from the
# other ranks. See also 'head_ip_address'
# and 'rank0_ip_address'.
host: string
# Optionally, specify the worker HTTP server ports. The
# default is to use ('head_port' + *rank #*) for each worker
# process where rank number is from "1" to number of ranks
# in 'rank<#>.host' below.
httpServerPort:
# Number of port to expose on the pod's IP address. This
# must be a valid port number, 0 < x < 65536.
containerPort: 1
# What host IP to bind the external port to.
hostIP: string
# Number of port to expose on the host. If specified, this
# must be a valid port number, 0 < x < 65536. If
# HostNetwork is specified, this must match ContainerPort.
# Most containers do not need this.
hostPort: 1
# If specified, this must be an IANA_SVC_NAME and unique
# within the pod. Each named port in a pod must have a
# unique name. Name for the port that can be referred to by
# services.
name: string
# Protocol for port. Must be UDP, TCP, or SCTP. Defaults
# to "TCP".
protocol: "TCP"
# This is the Kubernetes pod IP Address of the current rank
# which we need to populate in the operator. NOTE: Internal
# Attribute
podIP: string
# Optionally, specify a public URL for each worker HTTP server
# that clients should use to connect for multi-head
# operations. NOTE: If specified for any ranks, a public URL
# must be specified for all ranks.
publicURL: "https://:8082/gpudb-{{.Rank}}"
# Define the rank number of this rank.
rank: 1
# SetMonitorPort - Set monitor ZMQ publisher server port (-1 to
# disable), uses the 'head_ip_address' interface.
setMonitorPort: 9002
# SetMonitorProxyPort - Set monitor ZMQ publisher internal proxy
# server port ("-1" to disable), uses the 'head_ip_address'
# interface. IMPORTANT: Disabling this port effectively
# prevents worker nodes from publishing set monitor
# notifications when multi-head ingest is enabled
# (see 'enable_worker_http_servers').
setMonitorProxyPort: 9003
# SetMonitorQueueSize - Set monitor queue size
setMonitorQueueSize: 1000
# TriggerPort - Trigger ZMQ publisher server port ("-1" to
# disable), uses the 'head_ip_address' interface.
triggerPort: -1
# UseHTTPS - Set to "true" to use HTTPS; if "true"
# then 'https_key_file' and 'https_cert_file' must be provided
useHttps: false
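# Example (illustrative file paths): enable HTTPS for the HTTP
# servers:
#   useHttps: true
#   httpsCertFile: "/opt/gpudb/certs/cert.pem"
#   httpsKeyFile: "/opt/gpudb/certs/key.pem"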
# PersistenceConfig
persistence:
# Removed in 7.2
IndexDBFlushImmediate: true
# DataLoadingSchema Startup data-loading scheme
buildMaterializedViewsOnStart: "on_demand"
# DataLoadingSchema Startup data-loading scheme
buildPKIndexOnStart: "on_demand"
# Target maximum data size for any one column in a chunk
# (8 GB) (0 = disable). chunk_column_max_memory = 8192000000
chunkColumnMaxMemory: 8192000000
# Target maximum total data size for all columns in a chunk
# (512 MB) (0 = disable).
chunkMaxMemory: 512000000
# Number of records per chunk ("0" disables chunking)
chunkSize: 8000000
# Determines whether to execute kernels on host (CPU) or device
# (GPU). Possible values are:
# * "default" : engine decides
# * "host"    : execute only on the host
# * "device"  : execute only on the device
# * <rows>    : execute on the host if the chunked column contains
#   the given number of rows or fewer; otherwise, execute on device.
executionMode: "device"
# Removed in 7.2
fsyncIndexDBImmediate: true
# Removed in 7.2
fsyncInodesImmediate: true
# Removed in 7.2
fsyncMetadataImmediate: true
# Removed in 7.2
fsyncOnInterval: true
# Maximum number of open files for IndexedDb object file store.
# Removed in 7.2
indexDBMaxOpenFiles:
# Table of contents size for IndexedDb object file store.
# Removed in 7.2
indexDBTOCSize:
# Disable detection of sparse file support and use the full file
# length which may be an over-estimate of the actual usage in
# the persist tier. Removed in 7.2
indexDBTierByFileLength: false
# Startup data-loading scheme:
# * "always"    : load all the data into memory before accepting
#   requests
# * "lazy"      : load the necessary data to start, but load the
#   remainder lazily
# * "on_demand" : only load data as requests use it
loadVectorsOnStart: "on_demand"
# Removed in 7.2
metadataFlushImmediate: true
# Specify a base directory to store persistence data files.
persistDirectory: "/opt/gpudb/persist"
# Whether to use synchronous persistence file writing.
# If "false", files will be written asynchronously. Removed in
# 7.2
persistSync: true
# Duration in seconds, for which persistence files will be
# force-synced if out of sync, once per minute. NOTE: Files are
# always opportunistically saved; this simply enforces a
# maximum time a file can be out of date. Set to a very high
# number to disable.
persistSyncTime: 5
# The maximum number of bytes in the shadow aggregate cache
shadowAggSize: 100000000
# Whether to enable chunk caching
shadowCubeEnabled: true
# The maximum number of bytes in the shadow filter cache
shadowFilterSize: 100000000
# Base directory to store hashed strings.
smsDirectory: "${gaia.persist_directory}"
# Maximum number of open files (per-TOM) for the SMS
# (string) store.
smsMaxOpenFiles: 128
# Synchronous compression: compress vectors on set compression.
synchronousCompression: false
# Directory for GPUdb to use to store temporary files. Must be a
# fully qualified path, have at least 100Mb of free space, and
# execute permission.
tempDirectory: "${gaia.persist_directory}/tmp"
# Base directory to store the text search index.
textIndexDirectory: "${gaia.persist_directory}"
# Enable checksum protection on the wal entries. New in 7.2
walChecksum: true
# Specifies how frequently wal entries are written with
# background sync. New in 7.2
walFlushFrequency: 60
# Maximum size of each wal segment file New in 7.2
walMaxSegmentSize: 500000000
# Approximate number of segment files to split the wal across. A
# minimum of two is required. The size of the wal is limited by
# segment_count * max_segment_size. (per rank and per tom) Set
# to 0 to remove a size limit on the wal itself, but still be
# bounded by rank tier limits. Set to -1 to have the database
# decide automatically per table. New in 7.2
walSegmentCount:
# Sync mode to use when persisting wal entries to disk:
# * "none"       : Disable the wal
# * "background" : Wal entries are periodically written instead of
#   immediately after each operation
# * "flush"      : Protects entries in the event of a database crash
# * "fsync"      : Protects entries in the event of an OS crash
# New in 7.2
walSyncPolicy: "flush"
# If true, any table that is found to be corrupt after replaying
# its wal at startup will automatically be truncated so that
# the table becomes operable. If false, the user will be
# responsible for resolving the issue via sql REPAIR TABLE or
# similar. New in 7.2
walTruncateCorruptTablesOnStart: true
# PostgresProxy
postgresProxy:
# Postgres Proxy Server - Start a Postgres (TCP) server as a proxy
# to handle Postgres wire protocol messages.
enablePostgresProxy: false
# Set idle connection timeout in seconds. (default: "1200")
idleConnectionTimeout: 1200
# Set max number of queued server connections. (default: "1")
maxQueuedConnections: 1
# Set max number of server threads to spawn. (default: "64")
maxThreads: 64
# Set min number of server threads to spawn. (default: "2")
minThreads: 2
# TCP port that the postgres proxy server will listen on
# if 'enable_postgres_proxy' is "true".
port:
# Number of port to expose on the pod's IP address. This must
# be a valid port number, 0 < x < 65536.
containerPort: 1
# What host IP to bind the external port to.
hostIP: string
# Number of port to expose on the host. If specified, this
# must be a valid port number, 0 < x < 65536. If HostNetwork
# is specified, this must match ContainerPort. Most
# containers do not need this.
hostPort: 1
# If specified, this must be an IANA_SVC_NAME and unique
# within the pod. Each named port in a pod must have a unique
# name. Name for the port that can be referred to by
# services.
name: string
# Protocol for port. Must be UDP, TCP, or SCTP. Defaults
# to "TCP".
protocol: "TCP"
# Set to "true" to use SSL; if "true" then 'ssl_key_file'
# and 'ssl_cert_file' must be provided
ssl: false
sslCertFile: ""
# Files containing the SSL private key and the SSL certificate. If
# required, a self-signed certificate (expires after 10 years) can
# be generated via the command: openssl req -newkey rsa:2048 -new
# -nodes -x509 -days 3650 -keyout key.pem -out cert.pem
sslKeyFile: ""
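# Example (illustrative): enable the Postgres wire-protocol proxy on
# the conventional Postgres port:
#   postgresProxy:
#     enablePostgresProxy: true
#     port:
#       containerPort: 5432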
# ProcessesConfig
processes:
# Set the maximum number of threads per tom for table
# initialization on startup
initTablesNumThreadsPerTom: 8
# Set the number of parallel calculation threads to use for data
# processing. Use "-1" to use the max number of threads
# (not recommended).
kernelOmpThreads: 3
# The maximum number of web server threads to spawn
maxHttpThreads: 512
# Set the maximum number of threads (both workers and masters)
# to be passed to TBB on initialization. Generally
# speaking, 'max_tbb_threads_per_rank' - "1" TBB workers will
# be created. Use "-1" for no limit.
maxTbbThreadsPerRank: "-1"
# The minimum number of web server threads to spawn
minHttpThreads: 8
# Set the number of parallel jobs to create for multi-child set
# calculations. Use "-1" to use the max number of threads
# (not recommended).
smOmpThreads: 2
# Maximum number of simultaneous threads allocated to a given
# request, on each rank. Note that thread allocation may also
# be limited by resource group limits and/or system load.
subtaskConcurrentyLimit: "-1"
# Set the number of TaskCalculators per TOM, GPU data
# processors.
tcsPerTom: "-1"
# Set the number of TOMs (data container shards) per rank
tomsPerRank: 1
# Set the number of TaskProcessors per TOM, CPU data
# processors.
tpsPerTom: "-1"
# ProcsConfig
procs:
# Directory where proc files are stored at runtime. Must be a
# fully qualified path with execute permission. If not
# specified, 'temp_directory' will be used.
directory:
# PersistentVolumeClaim is a user's request for and claim to a
# persistent volume
persistVolumeClaim:
# APIVersion defines the versioned schema of this
# representation of an object. Servers should convert
# recognized schemas to the latest internal value, and may
# reject unrecognized values. More info:
# https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources
apiVersion: app.kinetica.com/v1
# Kind is a string value representing the REST resource this
# object represents. Servers may infer this from the
# endpoint the client submits requests to. Cannot be
# updated. In CamelCase. More info:
# https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds
kind: KineticaCluster
# Standard object's metadata. More info:
# https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata
metadata: {}
# spec defines the desired characteristics of a volume
# requested by a pod author. More info:
# https://kubernetes.io/docs/concepts/storage/persistent-volumes#persistentvolumeclaims
spec:
# accessModes contains the desired access modes the volume
# should have. More info:
# https://kubernetes.io/docs/concepts/storage/persistent-volumes#access-modes-1
accessModes: ["string"]
# dataSource field can be used to specify either: * An
# existing VolumeSnapshot object
# (snapshot.storage.k8s.io/VolumeSnapshot) * An existing
# PVC (PersistentVolumeClaim) If the provisioner or an
# external controller can support the specified data
# source, it will create a new volume based on the
# contents of the specified data source. When the
# AnyVolumeDataSource feature gate is enabled, dataSource
# contents will be copied to dataSourceRef, and
# dataSourceRef contents will be copied to dataSource
# when dataSourceRef.namespace is not specified. If the
# namespace is specified, then dataSourceRef will not be
# copied to dataSource.
dataSource:
# APIGroup is the group for the resource being
# referenced. If APIGroup is not specified, the
# specified Kind must be in the core API group. For any
# other third-party types, APIGroup is required.
apiGroup: string
# Kind is the type of resource being referenced
kind: KineticaCluster
# Name is the name of resource being referenced
name: string
# dataSourceRef specifies the object from which to
# populate the volume with data, if a non-empty volume is
# desired. This may be any object from a non-empty API
# group (non core object) or a PersistentVolumeClaim
# object. When this field is specified, volume binding
# will only succeed if the type of the specified object
# matches some installed volume populator or dynamic
# provisioner. This field will replace the functionality
# of the dataSource field and as such if both fields are
# non-empty, they must have the same value. For backwards
# compatibility, when namespace isn't specified in
# dataSourceRef, both fields (dataSource and
# dataSourceRef) will be set to the same value
# automatically if one of them is empty and the other is
# non-empty. When namespace is specified in
# dataSourceRef, dataSource isn't set to the same value
# and must be empty. There are three important
# differences between dataSource and dataSourceRef: *
# While dataSource only allows two specific types of
# objects, dataSourceRef allows any non-core object, as
# well as PersistentVolumeClaim objects. * While
# dataSource ignores disallowed values (dropping them),
# dataSourceRef preserves all values, and generates an
# error if a disallowed value is specified. * While
# dataSource only allows local objects, dataSourceRef
# allows objects in any namespaces. (Beta) Using this
# field requires the AnyVolumeDataSource feature gate to
# be enabled. (Alpha) Using the namespace field of
# dataSourceRef requires the
# CrossNamespaceVolumeDataSource feature gate to be
# enabled.
dataSourceRef:
# APIGroup is the group for the resource being
# referenced. If APIGroup is not specified, the
# specified Kind must be in the core API group. For any
# other third-party types, APIGroup is required.
apiGroup: string
# Kind is the type of resource being referenced
kind: KineticaCluster
# Name is the name of resource being referenced
name: string
# Namespace is the namespace of resource being
# referenced Note that when a namespace is specified, a
# gateway.networking.k8s.io/ReferenceGrant object is
# required in the referent namespace to allow that
# namespace's owner to accept the reference. See the
# ReferenceGrant documentation for details.(Alpha) This
# field requires the CrossNamespaceVolumeDataSource
# feature gate to be enabled.
namespace: string
# resources represents the minimum resources the volume
# should have. If RecoverVolumeExpansionFailure feature
# is enabled users are allowed to specify resource
# requirements that are lower than previous value but
# must still be higher than capacity recorded in the
# status field of the claim. More info:
# https://kubernetes.io/docs/concepts/storage/persistent-volumes#resources
resources:
# Claims lists the names of resources, defined in
# spec.resourceClaims, that are used by this container.
# This is an alpha field and requires enabling the
# DynamicResourceAllocation feature gate. This field is
# immutable. It can only be set for containers.
claims:
- name: string
# Limits describes the maximum amount of compute
# resources allowed. More info:
# https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/
limits: {}
# Requests describes the minimum amount of compute
# resources required. If Requests is omitted for a
# container, it defaults to Limits if that is
# explicitly specified, otherwise to an
# implementation-defined value. Requests cannot exceed
# Limits. More info:
# https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/
requests: {}
# selector is a label query over volumes to consider for
# binding.
selector:
# matchExpressions is a list of label selector
# requirements. The requirements are ANDed.
matchExpressions:
- key: string
# operator represents a key's relationship to a set of
# values. Valid operators are In, NotIn, Exists and
# DoesNotExist.
operator: string
# values is an array of string values. If the operator
# is In or NotIn, the values array must be non-empty.
# If the operator is Exists or DoesNotExist, the
# values array must be empty. This array is replaced
# during a strategic merge patch.
values: ["string"]
# matchLabels is a map of {key,value} pairs. A single
# {key,value} in the matchLabels map is equivalent to
# an element of matchExpressions, whose key field
# is "key", the operator is "In", and the values array
# contains only "value". The requirements are ANDed.
matchLabels: {}
# storageClassName is the name of the StorageClass
# required by the claim. More info:
# https://kubernetes.io/docs/concepts/storage/persistent-volumes#class-1
storageClassName: string
# volumeMode defines what type of volume is required by
# the claim. Value of Filesystem is implied when not
# included in claim spec.
volumeMode: string
# volumeName is the binding reference to the
# PersistentVolume backing this claim.
volumeName: string
# status represents the current information/status of a
# persistent volume claim. Read-only. More info:
# https://kubernetes.io/docs/concepts/storage/persistent-volumes#persistentvolumeclaims
status:
# accessModes contains the actual access modes the volume
# backing the PVC has. More info:
# https://kubernetes.io/docs/concepts/storage/persistent-volumes#access-modes-1
accessModes: ["string"]
# allocatedResources is the storage resource within
# AllocatedResources tracks the capacity allocated to a
# PVC. It may be larger than the actual capacity when a
# volume expansion operation is requested. For storage
# quota, the larger value from allocatedResources and
# PVC.spec.resources is used. If allocatedResources is
# not set, PVC.spec.resources alone is used for quota
# calculation. If a volume expansion capacity request is
# lowered, allocatedResources is only lowered if there
# are no expansion operations in progress and if the
# actual volume capacity is equal or lower than the
# requested capacity. This is an alpha field and requires
# enabling RecoverVolumeExpansionFailure feature.
allocatedResources: {}
# capacity represents the actual resources of the
# underlying volume.
capacity: {}
# conditions is the current Condition of persistent volume
# claim. If underlying persistent volume is being resized
# then the Condition will be set to 'ResizeStarted'.
conditions:
- lastProbeTime: string
# lastTransitionTime is the time the condition
# transitioned from one status to another.
lastTransitionTime: string
# message is the human-readable message indicating
# details about last transition.
message: string
# reason is a unique, short, machine-understandable string
# that gives the reason for the condition's last transition.
# If it reports "ResizeStarted" that means the underlying
# persistent volume is being resized.
reason: string
status: string
# PersistentVolumeClaimConditionType is a valid value of
# PersistentVolumeClaimCondition.Type
type: string
# phase represents the current phase of
# PersistentVolumeClaim.
phase: string
# resizeStatus stores status of resize operation.
# ResizeStatus is not set by default but when expansion
# is complete resizeStatus is set to empty string by
# resize controller or kubelet. This is an alpha field
# and requires enabling RecoverVolumeExpansionFailure
# feature.
resizeStatus: string
# VolumeMount describes a mounting of a Volume within a
# container.
volumeMount:
# Path within the container at which the volume should be
# mounted. Must not contain ':'.
mountPath: string
# mountPropagation determines how mounts are propagated from
# the host to container and the other way around. When not
# set, MountPropagationNone is used. This field is beta in
# 1.10.
mountPropagation: string
# This must match the Name of a Volume.
name: string
# Mounted read-only if true, read-write otherwise (false or
# unspecified). Defaults to false.
readOnly: true
# Path within the volume from which the container's volume
# should be mounted. Defaults to "" (volume's root).
subPath: string
# Expanded path within the volume from which the container's
# volume should be mounted. Behaves similarly to SubPath
# but environment variable references $(VAR_NAME) are
# expanded using the container's environment. Defaults
# to "" (volume's root). SubPathExpr and SubPath are
# mutually exclusive.
subPathExpr: string
# Enable procs (UDFs)
enable: true
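# --- Example (illustrative only): enabling UDFs (procs) with a
# dedicated runtime directory backed by a PVC. The storage class and
# size below are assumptions.
# procs:
#   enable: true
#   directory:
#     persistVolumeClaim:
#       spec:
#         accessModes: ["ReadWriteOnce"]
#         storageClassName: standard
#         resources:
#           requests:
#             storage: 10Gi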
# SecurityConfig
security:
# Automatically create accounts for externally-authenticated
# users. If 'enable_external_authentication' is "false", this
# setting has no effect. Note that accounts are not
# automatically deleted if users are removed from the external
# authentication provider and will be orphaned.
autoCreateExternalUsers: false
# Automatically add roles passed in via the "KINETICA_ROLES"
# HTTP header to externally-authenticated users. Specified
# roles that do not exist are ignored.
# If 'enable_external_authentication' is "false", this setting
# has no effect. IMPORTANT: DO NOT ENABLE unless the
# authentication proxy is configured to block "KINETICA_ROLES"
# HTTP headers passed in from clients.
autoGrantExternalRoles: false
# Comma-separated list of roles to revoke from
# externally-authenticated users prior to granting roles passed
# in via the "KINETICA_ROLES" HTTP header, or "*" to revoke all
# roles. Preceding a role name with an "!" overrides the
# revocation (e.g. "*,!foo" revokes all roles except "foo").
# Leave blank to disable. If
# either 'enable_external_authentication'
# or 'auto_grant_external_roles' is "false", this setting has
# no effect.
autoRevokeExternalRoles: false
# Enable authorization checks. When disabled, all requests will
# be treated as the administrative user.
enableAuthorization: true
# Enable external (LDAP, Kerberos, etc.) authentication. User
# IDs of externally-authenticated users must be passed in via
# the "REMOTE_USER" HTTP header from the authentication proxy.
# May be used in conjunction with the 'enable_httpd_proxy'
# setting above for an integrated external authentication
# solution. IMPORTANT: DO NOT ENABLE unless external access to
# GPUdb ports has been blocked via firewall AND the
# authentication proxy is configured to block "REMOTE_USER"
# HTTP headers passed in from clients.
enableExternalAuthentication: true
# ExternalSecurity
externalSecurity:
# Ranger
ranger:
# AuthorizerAddress - The network URI for the
# ranger_authorizer to start. The URI can be either TCP or
# IPC. TCP address is used to indicate the remote
# ranger_authorizer which may run at other hosts. The IPC
# address is for a local ranger_authorizer. Example
# addresses for remote or TCP servers: tcp://127.0.0.1:9293
# tcp://HOST_IP:9293 Example address for local IPC servers:
# ipc:///tmp/gpudb-ranger-0
# security.external.ranger_authorizer.address =
# ipc://${gaia.temp_directory}/gpudb-ranger-0
authorizerAddress: "ipc://${gaia.temp_directory}/gpudb-ranger-0"
# Remote debugger port used for the ranger_authorizer.
# Setting the port to "0" disables remote debugging. NOTE:
# Recommended port to use is "5005"
# security.external.ranger_authorizer.remote_debug_port =
# 0
authorizerRemoteDebugPort: 0
# AuthorizerTimeout - Ranger Authorizer timeout in seconds
# security.external.ranger_authorizer.timeout = 120
authorizerTimeout: 120
# CacheMinutes- Maximum minutes to hold on to data from
# Ranger security.external.ranger.cache_minutes = 60
cacheMinutes: 60
# Name of the service created on the Ranger Server to manage
# this Kinetica instance
# security.external.ranger.service_name = kinetica
name: "kinetica"
# ExtURL - URL of Ranger REST API. E.g.,
# https://localhost:6080/ Leave blank for no Ranger Server
# security.external.ranger.url =
url: string
# The minimum allowable password length.
minPasswordLength: 4
# Require all users to be authenticated. Disable this to allow
# users to access the database as the 'unauthenticated' user.
# Useful for situations where the public needs to access the
# data.
requireAuthentication: true
# UnifiedSecurityNamespace - Use a single namespace for internal
# and external user IDs and role names. If false, external user
# IDs must be prefixed with "@" to differentiate them from
# internal user IDs and role names (except in the "REMOTE_USER"
# HTTP header, where the "@" is omitted).
# unified_security_namespace = true
unifiedSecurityNamespace: true
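# --- Example (illustrative only): a security block for a cluster
# fronted by an authenticating reverse proxy. Assumes the proxy strips
# client-supplied "REMOTE_USER" and "KINETICA_ROLES" headers; the
# password length is an assumption.
# security:
#   requireAuthentication: true
#   enableAuthorization: true
#   enableExternalAuthentication: true
#   autoCreateExternalUsers: true
#   autoGrantExternalRoles: true
#   minPasswordLength: 8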
# SQLConfig
sql:
# SQLPlannerAddress is not included as it always uses the
# default
address: "ipc://${gaia.temp_directory}/gpudb-query-engine-0"
# Enable the cost-based optimizer
costBasedOptimization: false
# Enable distributed joins
distributedJoins: true
# Enable distributed operations
distributedOperations: true
# Enable Query Planner
enablePlanner: true
# Perform joins between only 2 tables at a time; default is all
# tables involved in the operation at once
forceBinaryJoins: false
# Perform unions/intersections/exceptions between only 2 tables
# at a time; default is all tables involved in the operation at
# once
forceBinarySetOps: false
# Max parallel steps
maxParallelSteps: 4
# Max allowed view nesting levels. Valid range(1-64)
maxViewNestingLevels: 16
# TTL of the paging results table
pagingTableTTL: 20
# Enable parallel query evaluation
parallelExecution: true
# The maximum number of entries in the SQL plan cache. The
# default is "4000" entries, but the configurable range
# is "1" - "1000000". Plan caching will be disabled if the
# value is set outside of that range.
planCacheSize: 4000
# The maximum memory for the query planner to use in Megabytes.
plannerMaxMemory: 4096
# The maximum stack size for the query planner threads to use in
# Megabytes.
plannerMaxStack: 6
# Query planner timeout in seconds
plannerTimeout: 120
# Max Query planner threads
plannerWorkers: 16
# Remote debugger port used for the query planner. Setting the
# port to "0" disables remote debugging. NOTE: Recommended
# port to use is "5005"
remoteDebugPort: 5005
# TTL of the query cache results table
resultsCacheTTL: 60
# Enable query results caching
resultsCaching: true
# Enable rule-based query rewrites
ruleBasedOptimization: true
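# --- Example (illustrative only): overriding a few SQL planner limits.
# The values shown are assumptions for a mid-sized cluster, not tuned
# recommendations.
# sql:
#   enablePlanner: true
#   plannerMaxMemory: 8192
#   plannerTimeout: 300
#   planCacheSize: 4000
#   resultsCaching: true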
# SQLEngineConfig
sqlEngine:
# Enable the cost-based optimizer
costBasedOptimization: false
# Name of default collection for user tables
defaultSchema: ""
# Enable distributed joins
distributedJoins: true
# Enable distributed operations
distributedOperations: true
# Perform joins between only 2 tables at a time; default is all
# tables involved in the operation at once
forceBinaryJoins: false
# Perform unions/intersections/exceptions between only 2 tables
# at a time; default is all tables involved in the operation at
# once
forceBinarySetOps: false
# Max parallel steps
maxParallelSteps: 4
# Max allowed view nesting levels. Valid range(1-64)
maxViewNestingLevels: 16
# TTL of the paging results table
pagingTableTTL: 20
# Enable parallel query evaluation
parallelExecution: true
# The maximum number of entries in the SQL plan cache. The
# default is "4000" entries, but the configurable range
# is "1" - "1000000". Plan caching will be disabled if the
# value is set outside of that range.
planCacheSize: 4000
# PlannerConfig
planner:
# Enable Query Planner
enablePlanner: true
# The maximum memory for the query planner to use in
# Megabytes.
maxMemory: 4096
# The maximum stack size for the query planner threads to use
# in Megabytes.
maxStack: 6
# The network URI for the query planner to start. The URI can
# be either TCP or IPC. TCP address is used to indicate the
# remote query planner which may run at other hosts. The IPC
# address is for a local query planner. Example addresses for
# remote or TCP servers:
#   sql.planner.address = tcp://127.0.0.1:9293
#   sql.planner.address = tcp://HOST_IP:9293
# Example address for local IPC servers:
#   sql.planner.address = ipc:///tmp/gpudb-query-engine-0
plannerAddress: "ipc:///tmp/gpudb-query-engine-0"
# Remote debugger port used for the query planner. Setting the
# port to "0" disables remote debugging. NOTE: Recommended
# port to use is "5005"
remoteDebugPort: 0
# Query planner timeout in seconds
timeout: 120
# Max Query planner threads
workers: 16
results:
# TTL of the query cache results table
cacheTTL: 60
# Enable query results caching
caching: true
# Enable rule-based query rewrites
ruleBasedOptimization: true
# Name of collection that will be used to store result tables
# generated as part of query execution
tempCollection: "__SQL_TEMP"
# StatisticsConfig
statistics:
# system_metadata.stats_aggr_rowcount = 10000
aggrRowCount: 10000
# system_metadata.stats_aggr_time = 1
aggrTime: 1
# Run a statistics server to collect information about Kinetica
# and the machines it runs on.
enable: true
# Statistics server IP address (run on the head node); the
# default port is "2003"
ipAddress: "${gaia.host0.address}"
# Statistics server namespace - should be a machine identifier
namespace: "gpudb" port: 2003
# System metadata catalog settings
# system_metadata.stats_retention_days = 21
retentionDays: 21
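# --- Example (illustrative only): disabling the statistics server,
# e.g. for a minimal development cluster.
# statistics:
#   enable: false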
# TextSearchConfig
textSearch:
# Enable text search capability within the database.
enableTextSearch: false
# Number of text indices to start for each rank
textIndicesPerTom: 2
# Searcher refresh intervals - specifies the maximum delay
# (in seconds) between writing to the text search index and
# being able to search for the value just written. A value
# of "0" insures that writes to the index are immediately
# available to be searched. A more nominal value of "100"
# should improve ingest speed at the cost of some delay in
# being able to text search newly added values.
textSearcherRefreshInterval: 20
# Use the production capable external text server instead of a
# lightweight internal server which should only be used for
# light testing. Note: The internal text server is deprecated
# and may be removed in future versions.
useExternalTextServer: true
tieredStorage:
# Cold Storage Tiers can be used to extend the storage capacity
# of the Persist Tier. Assign a tier strategy with cold storage
# to objects that will be infrequently accessed since they will
# be moved as needed from the Persist Tier. The Cold Storage
# Tier is typically a much larger capacity physical disk or a
# cloud-based storage system which may not be as performant as
# the Persist Tier storage. A default storage limit and
# eviction thresholds can be set across all ranks for a given
# Cold Storage Tier, while one or more ranks within a Cold
# Storage Tier may be configured to override those defaults.
# NOTE: If an object needs to be pulled out of cold storage
# during a query, it may need to use the local persist
# directory as a temporary swap space. This may trigger an
# eviction of other persisted items to cold storage due to low
# disk space condition defined by the watermark settings for
# the Persist Tier.
coldStorageTier:
# ColdStorageAzure
coldStorageAzure:
# 'base_path' : A base path based on the
# provider type for this tier.
basePath: string
clientID: string
clientSecret: string
# 'connection_timeout' : Timeout in seconds for
# connecting to this storage provider.
connectionTimeout: "30"
# 'container_name' : Name of the Azure storage container to
# use for this cold storage tier.
containerName: "/gpudb/cold_storage"
# * 'high_watermark' : Percentage used eviction threshold.
# Once usage exceeds this value, evictions from this
# tier will be scheduled in the background and continue
# until the 'low_watermark' percentage usage is reached.
# Default is "90", signifying a 90% memory usage
# threshold.
highWatermark: 90
# * 'limit' : The maximum (bytes) per rank that can
# be allocated across all resource groups.
limit: "1Gi"
# * 'low_watermark' : Percentage used recovery threshold.
# Once usage exceeds the 'high_watermark', evictions
# will continue until usage falls below this recovery
# threshold. Default is "80", signifying an 80% usage
# threshold.
lowWatermark: 80
name: string
# A base directory to use as a space for this tier.
path: "default" provisioner: "docker.io/hostpath" sasToken:
string storageAccountKey: string storageAccountName: string
tenantID: string useManagedCredentials: false
# Kubernetes Persistent Volume Claim for this disk tier.
volumeClaim:
# APIVersion defines the versioned schema of this
# representation of an object. Servers should convert
# recognized schemas to the latest internal value, and
# may reject unrecognized values. More info:
# https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources
apiVersion: app.kinetica.com/v1
# Kind is a string value representing the REST resource
# this object represents. Servers may infer this from the
# endpoint the client submits requests to. Cannot be
# updated. In CamelCase. More info:
# https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds
kind: KineticaCluster
# Standard object's metadata. More info:
# https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata
metadata: {}
# spec defines the desired characteristics of a volume
# requested by a pod author. More info:
# https://kubernetes.io/docs/concepts/storage/persistent-volumes#persistentvolumeclaims
spec:
# accessModes contains the desired access modes the
# volume should have. More info:
# https://kubernetes.io/docs/concepts/storage/persistent-volumes#access-modes-1
accessModes: ["string"]
# dataSource field can be used to specify either: * An
# existing VolumeSnapshot object
# (snapshot.storage.k8s.io/VolumeSnapshot) * An
# existing PVC (PersistentVolumeClaim) If the
# provisioner or an external controller can support the
# specified data source, it will create a new volume
# based on the contents of the specified data source.
# When the AnyVolumeDataSource feature gate is enabled,
# dataSource contents will be copied to dataSourceRef,
# and dataSourceRef contents will be copied to
# dataSource when dataSourceRef.namespace is not
# specified. If the namespace is specified, then
# dataSourceRef will not be copied to dataSource.
dataSource:
# APIGroup is the group for the resource being
# referenced. If APIGroup is not specified, the
# specified Kind must be in the core API group. For
# any other third-party types, APIGroup is required.
apiGroup: string
# Kind is the type of resource being referenced
kind: KineticaCluster
# Name is the name of resource being referenced
name: string
# dataSourceRef specifies the object from which to
# populate the volume with data, if a non-empty volume
# is desired. This may be any object from a non-empty
# API group (non core object) or a
# PersistentVolumeClaim object. When this field is
# specified, volume binding will only succeed if the
# type of the specified object matches some installed
# volume populator or dynamic provisioner. This field
# will replace the functionality of the dataSource
# field and as such if both fields are non-empty, they
# must have the same value. For backwards
# compatibility, when namespace isn't specified in
# dataSourceRef, both fields (dataSource and
# dataSourceRef) will be set to the same value
# automatically if one of them is empty and the other
# is non-empty. When namespace is specified in
# dataSourceRef, dataSource isn't set to the same value
# and must be empty. There are three important
# differences between dataSource and dataSourceRef: *
# While dataSource only allows two specific types of
# objects, dataSourceRef allows any non-core object, as
# well as PersistentVolumeClaim objects. * While
# dataSource ignores disallowed values (dropping them),
# dataSourceRef preserves all values, and generates an
# error if a disallowed value is specified. * While
# dataSource only allows local objects, dataSourceRef
# allows objects in any namespaces. (Beta) Using this
# field requires the AnyVolumeDataSource feature gate
# to be enabled. (Alpha) Using the namespace field of
# dataSourceRef requires the
# CrossNamespaceVolumeDataSource feature gate to be
# enabled.
dataSourceRef:
# APIGroup is the group for the resource being
# referenced. If APIGroup is not specified, the
# specified Kind must be in the core API group. For
# any other third-party types, APIGroup is required.
apiGroup: string
# Kind is the type of resource being referenced
kind: KineticaCluster
# Name is the name of resource being referenced
name: string
# Namespace is the namespace of resource being
# referenced Note that when a namespace is specified,
# a gateway.networking.k8s.io/ReferenceGrant object
# is required in the referent namespace to allow that
# namespace's owner to accept the reference. See the
# ReferenceGrant documentation for details.
# (Alpha) This field requires the
# CrossNamespaceVolumeDataSource feature gate to be
# enabled.
namespace: string
# resources represents the minimum resources the volume
# should have. If RecoverVolumeExpansionFailure feature
# is enabled users are allowed to specify resource
# requirements that are lower than previous value but
# must still be higher than capacity recorded in the
# status field of the claim. More info:
# https://kubernetes.io/docs/concepts/storage/persistent-volumes#resources
resources:
# Claims lists the names of resources, defined in
# spec.resourceClaims, that are used by this
# container. This is an alpha field and requires
# enabling the DynamicResourceAllocation feature
# gate. This field is immutable. It can only be set
# for containers.
claims:
- name: string
# Limits describes the maximum amount of compute
# resources allowed. More info:
# https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/
limits: {}
# Requests describes the minimum amount of compute
# resources required. If Requests is omitted for a
# container, it defaults to Limits if that is
# explicitly specified, otherwise to an
# implementation-defined value. Requests cannot
# exceed Limits. More info:
# https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/
requests: {}
# selector is a label query over volumes to consider for
# binding.
selector:
# matchExpressions is a list of label selector
# requirements. The requirements are ANDed.
matchExpressions:
- key: string
# operator represents a key's relationship to a set
# of values. Valid operators are In, NotIn, Exists
# and DoesNotExist.
operator: string
# values is an array of string values. If the
# operator is In or NotIn, the values array must be
# non-empty. If the operator is Exists or
# DoesNotExist, the values array must be empty.
# This array is replaced during a strategic merge
# patch.
values: ["string"]
# matchLabels is a map of {key,value} pairs. A single
# {key,value} in the matchLabels map is equivalent to
# an element of matchExpressions, whose key field
# is "key", the operator is "In", and the values
# array contains only "value". The requirements are
# ANDed.
matchLabels: {}
# storageClassName is the name of the StorageClass
# required by the claim. More info:
# https://kubernetes.io/docs/concepts/storage/persistent-volumes#class-1
storageClassName: string
# volumeMode defines what type of volume is required by
# the claim. Value of Filesystem is implied when not
# included in claim spec.
volumeMode: string
# volumeName is the binding reference to the
# PersistentVolume backing this claim.
volumeName: string
# status represents the current information/status of a
# persistent volume claim. Read-only. More info:
# https://kubernetes.io/docs/concepts/storage/persistent-volumes#persistentvolumeclaims
status:
# accessModes contains the actual access modes the
# volume backing the PVC has. More info:
# https://kubernetes.io/docs/concepts/storage/persistent-volumes#access-modes-1
accessModes: ["string"]
# allocatedResources is the storage resource within
# AllocatedResources tracks the capacity allocated to a
# PVC. It may be larger than the actual capacity when a
# volume expansion operation is requested. For storage
# quota, the larger value from allocatedResources and
# PVC.spec.resources is used. If allocatedResources is
# not set, PVC.spec.resources alone is used for quota
# calculation. If a volume expansion capacity request
# is lowered, allocatedResources is only lowered if
# there are no expansion operations in progress and if
# the actual volume capacity is equal or lower than the
# requested capacity. This is an alpha field and
# requires enabling RecoverVolumeExpansionFailure
# feature.
allocatedResources: {}
# capacity represents the actual resources of the
# underlying volume.
capacity: {}
# conditions is the current Condition of persistent
# volume claim. If underlying persistent volume is
# being resized then the Condition will be set
# to 'ResizeStarted'.
conditions:
- lastProbeTime: string
# lastTransitionTime is the time the condition
# transitioned from one status to another.
lastTransitionTime: string
# message is the human-readable message indicating
# details about last transition.
message: string
# reason is a unique, short, machine-understandable string
# that gives the reason for the condition's last transition.
# If it reports "ResizeStarted" that means the underlying
# persistent volume is being resized.
reason: string
status: string
# PersistentVolumeClaimConditionType is a valid value
# of PersistentVolumeClaimCondition.Type
type: string
# phase represents the current phase of
# PersistentVolumeClaim.
phase: string
# resizeStatus stores status of resize operation.
# ResizeStatus is not set by default but when expansion
# is complete resizeStatus is set to empty string by
# resize controller or kubelet. This is an alpha field
# and requires enabling RecoverVolumeExpansionFailure
# feature.
resizeStatus: string
# 'wait_timeout' : Timeout in seconds for reading
# from or writing to this storage provider.
waitTimeout: "90"
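# --- Example (illustrative only): an Azure cold storage tier (under
# tieredStorage.coldStorageTier) using managed credentials. The account,
# container, path, and limit values below are assumptions.
# coldStorageAzure:
#   storageAccountName: mystorageaccount
#   containerName: "gpudb-cold-storage"
#   useManagedCredentials: true
#   basePath: "cold"
#   limit: "10Ti"
#   highWatermark: 90
#   lowWatermark: 80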
# ColdStorageDisk
coldStorageDisk:
# 'base_path' : A base path based on the
# provider type for this tier.
basePath: string
# 'connection_timeout' : Timeout in seconds for
# connecting to this storage provider.
connectionTimeout: "30"
# * 'high_watermark' : Percentage used eviction threshold.
# Once usage exceeds this value, evictions from this
# tier will be scheduled in the background and continue
# until the 'low_watermark' percentage usage is reached.
# Default is "90", signifying a 90% memory usage
# threshold.
highWatermark: 90
# * 'limit' : The maximum (bytes) per rank that can
# be allocated across all resource groups.
limit: "1Gi"
# * 'low_watermark' : Percentage used recovery threshold.
# Once usage exceeds the 'high_watermark', evictions
# will continue until usage falls below this recovery
# threshold. Default is "80", signifying an 80% usage
# threshold.
lowWatermark: 80
name: string
# A base directory to use as a space for this tier.
path: "default" provisioner: "docker.io/hostpath"
# Kubernetes Persistent Volume Claim for this disk tier.
volumeClaim:
# APIVersion defines the versioned schema of this
# representation of an object. Servers should convert
# recognized schemas to the latest internal value, and
# may reject unrecognized values. More info:
# https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources
apiVersion: app.kinetica.com/v1
# Kind is a string value representing the REST resource
# this object represents. Servers may infer this from the
# endpoint the client submits requests to. Cannot be
# updated. In CamelCase. More info:
# https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds
kind: KineticaCluster
# Standard object's metadata. More info:
# https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata
metadata: {}
# spec defines the desired characteristics of a volume
# requested by a pod author. More info:
# https://kubernetes.io/docs/concepts/storage/persistent-volumes#persistentvolumeclaims
spec:
# accessModes contains the desired access modes the
# volume should have. More info:
# https://kubernetes.io/docs/concepts/storage/persistent-volumes#access-modes-1
accessModes: ["string"]
# dataSource field can be used to specify either: * An
# existing VolumeSnapshot object
# (snapshot.storage.k8s.io/VolumeSnapshot) * An
# existing PVC (PersistentVolumeClaim) If the
# provisioner or an external controller can support the
# specified data source, it will create a new volume
# based on the contents of the specified data source.
# When the AnyVolumeDataSource feature gate is enabled,
# dataSource contents will be copied to dataSourceRef,
# and dataSourceRef contents will be copied to
# dataSource when dataSourceRef.namespace is not
# specified. If the namespace is specified, then
# dataSourceRef will not be copied to dataSource.
dataSource:
# APIGroup is the group for the resource being
# referenced. If APIGroup is not specified, the
# specified Kind must be in the core API group. For
# any other third-party types, APIGroup is required.
apiGroup: string
# Kind is the type of resource being referenced
kind: KineticaCluster
# Name is the name of resource being referenced
name: string
# dataSourceRef specifies the object from which to
# populate the volume with data, if a non-empty volume
# is desired. This may be any object from a non-empty
# API group (non core object) or a
# PersistentVolumeClaim object. When this field is
# specified, volume binding will only succeed if the
# type of the specified object matches some installed
# volume populator or dynamic provisioner. This field
# will replace the functionality of the dataSource
# field and as such if both fields are non-empty, they
# must have the same value. For backwards
# compatibility, when namespace isn't specified in
# dataSourceRef, both fields (dataSource and
# dataSourceRef) will be set to the same value
# automatically if one of them is empty and the other
# is non-empty. When namespace is specified in
# dataSourceRef, dataSource isn't set to the same value
# and must be empty. There are three important
# differences between dataSource and dataSourceRef: *
# While dataSource only allows two specific types of
# objects, dataSourceRef allows any non-core object, as
# well as PersistentVolumeClaim objects. * While
# dataSource ignores disallowed values (dropping them),
# dataSourceRef preserves all values, and generates an
# error if a disallowed value is specified. * While
# dataSource only allows local objects, dataSourceRef
# allows objects in any namespaces. (Beta) Using this
# field requires the AnyVolumeDataSource feature gate
# to be enabled. (Alpha) Using the namespace field of
# dataSourceRef requires the
# CrossNamespaceVolumeDataSource feature gate to be
# enabled.
dataSourceRef:
# APIGroup is the group for the resource being
# referenced. If APIGroup is not specified, the
# specified Kind must be in the core API group. For
# any other third-party types, APIGroup is required.
apiGroup: string
# Kind is the type of resource being referenced
kind: KineticaCluster
# Name is the name of resource being referenced
name: string
# Namespace is the namespace of resource being
# referenced Note that when a namespace is specified,
# a gateway.networking.k8s.io/ReferenceGrant object
# is required in the referent namespace to allow that
# namespace's owner to accept the reference. See the
# ReferenceGrant documentation for details.
# (Alpha) This field requires the
# CrossNamespaceVolumeDataSource feature gate to be
# enabled.
namespace: string
# resources represents the minimum resources the volume
# should have. If RecoverVolumeExpansionFailure feature
# is enabled users are allowed to specify resource
# requirements that are lower than previous value but
# must still be higher than capacity recorded in the
# status field of the claim. More info:
# https://kubernetes.io/docs/concepts/storage/persistent-volumes#resources
resources:
# Claims lists the names of resources, defined in
# spec.resourceClaims, that are used by this
# container. This is an alpha field and requires
# enabling the DynamicResourceAllocation feature
# gate. This field is immutable. It can only be set
# for containers.
claims:
- name: string
# Limits describes the maximum amount of compute
# resources allowed. More info:
# https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/
limits: {}
# Requests describes the minimum amount of compute
# resources required. If Requests is omitted for a
# container, it defaults to Limits if that is
# explicitly specified, otherwise to an
# implementation-defined value. Requests cannot
# exceed Limits. More info:
# https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/
requests: {}
# selector is a label query over volumes to consider for
# binding.
selector:
# matchExpressions is a list of label selector
# requirements. The requirements are ANDed.
matchExpressions:
- key: string
# operator represents a key's relationship to a set
# of values. Valid operators are In, NotIn, Exists
# and DoesNotExist.
operator: string
# values is an array of string values. If the
# operator is In or NotIn, the values array must be
# non-empty. If the operator is Exists or
# DoesNotExist, the values array must be empty.
# This array is replaced during a strategic merge
# patch.
values: ["string"]
# matchLabels is a map of {key,value} pairs. A single
# {key,value} in the matchLabels map is equivalent to
# an element of matchExpressions, whose key field
# is "key", the operator is "In", and the values
# array contains only "value". The requirements are
# ANDed.
matchLabels: {}
# storageClassName is the name of the StorageClass
# required by the claim. More info:
# https://kubernetes.io/docs/concepts/storage/persistent-volumes#class-1
storageClassName: string
# volumeMode defines what type of volume is required by
# the claim. Value of Filesystem is implied when not
# included in claim spec.
volumeMode: string
# volumeName is the binding reference to the
# PersistentVolume backing this claim.
volumeName: string
# status represents the current information/status of a
# persistent volume claim. Read-only. More info:
# https://kubernetes.io/docs/concepts/storage/persistent-volumes#persistentvolumeclaims
status:
# accessModes contains the actual access modes the
# volume backing the PVC has. More info:
# https://kubernetes.io/docs/concepts/storage/persistent-volumes#access-modes-1
accessModes: ["string"]
# allocatedResources is the storage resource within
# AllocatedResources tracks the capacity allocated to a
# PVC. It may be larger than the actual capacity when a
# volume expansion operation is requested. For storage
# quota, the larger value from allocatedResources and
# PVC.spec.resources is used. If allocatedResources is
# not set, PVC.spec.resources alone is used for quota
# calculation. If a volume expansion capacity request
# is lowered, allocatedResources is only lowered if
# there are no expansion operations in progress and if
# the actual volume capacity is equal or lower than the
# requested capacity. This is an alpha field and
# requires enabling RecoverVolumeExpansionFailure
# feature.
allocatedResources: {}
# capacity represents the actual resources of the
# underlying volume.
capacity: {}
# conditions is the current Condition of persistent
# volume claim. If underlying persistent volume is
# being resized then the Condition will be set
# to 'ResizeStarted'.
conditions:
- lastProbeTime: string
# lastTransitionTime is the time the condition
# transitioned from one status to another.
lastTransitionTime: string
# message is the human-readable message indicating
# details about last transition.
message: string
# reason is a unique, short, machine-understandable string
# that gives the reason for the condition's last transition.
# If it reports "ResizeStarted" that means the underlying
# persistent volume is being resized.
reason: string
status: string
# PersistentVolumeClaimConditionType is a valid value
# of PersistentVolumeClaimCondition.Type
type: string
# phase represents the current phase of
# PersistentVolumeClaim.
phase: string
# resizeStatus stores status of resize operation.
# ResizeStatus is not set by default but when expansion
# is complete resizeStatus is set to empty string by
# resize controller or kubelet. This is an alpha field
# and requires enabling RecoverVolumeExpansionFailure
# feature.
resizeStatus: string
# 'wait_timeout' : Timeout in seconds for reading
# from or writing to this storage provider.
waitTimeout: "90"
# ColdStorageGCS - Google Cloud Storage-specific parameter
# names:
#  * BucketName - 'gcs_bucket_name'
#  * ProjectID - 'gcs_project_id' (optional)
#  * AccountID - 'gcs_service_account_id' (optional)
#  * AccountPrivateKey - 'gcs_service_account_private_key'
#    (optional)
#  * AccountKeys - 'gcs_service_account_keys' (optional)
# NOTE: If the 'gcs_service_account_id',
# 'gcs_service_account_private_key' and/or
# 'gcs_service_account_keys' values are not specified, the
# Google Cloud Client Libraries will attempt to find and use
# service account credentials from the
# GOOGLE_APPLICATION_CREDENTIALS environment variable.
coldStorageGCS:
accountID: string
accountKeys: string
accountPrivateKey: string
# 'base_path' : A base path based on the
# provider type for this tier.
basePath: string
bucketName: string
# 'connection_timeout' : Timeout in seconds for
# connecting to this storage provider.
connectionTimeout: "30"
# * 'high_watermark' : Percentage used eviction threshold.
# Once usage exceeds this value, evictions from this
# tier will be scheduled in the background and continue
# until the 'low_watermark' percentage usage is reached.
# Default is "90", signifying a 90% memory usage
# threshold.
highWatermark: 90
# * 'limit' : The maximum (bytes) per rank that can
# be allocated across all resource groups.
limit: "1Gi"
# * 'low_watermark' : Percentage used recovery threshold.
# Once usage exceeds the 'high_watermark', evictions
# will continue until usage falls below this recovery
# threshold. Default is "80", signifying an 80% usage
# threshold.
lowWatermark: 80
name: string
# A base directory to use as a space for this tier.
path: "default" projectID: string
provisioner: "docker.io/hostpath"
# Kubernetes Persistent Volume Claim for this disk tier.
volumeClaim:
# APIVersion defines the versioned schema of this
# representation of an object. Servers should convert
# recognized schemas to the latest internal value, and
# may reject unrecognized values. More info:
# https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources
apiVersion: app.kinetica.com/v1
# Kind is a string value representing the REST resource
# this object represents. Servers may infer this from the
# endpoint the client submits requests to. Cannot be
# updated. In CamelCase. More info:
# https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds
kind: KineticaCluster
# Standard object's metadata. More info:
# https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata
metadata: {}
# spec defines the desired characteristics of a volume
# requested by a pod author. More info:
# https://kubernetes.io/docs/concepts/storage/persistent-volumes#persistentvolumeclaims
spec:
# accessModes contains the desired access modes the
# volume should have. More info:
# https://kubernetes.io/docs/concepts/storage/persistent-volumes#access-modes-1
accessModes: ["string"]
# dataSource field can be used to specify either: * An
# existing VolumeSnapshot object
# (snapshot.storage.k8s.io/VolumeSnapshot) * An
# existing PVC (PersistentVolumeClaim) If the
# provisioner or an external controller can support the
# specified data source, it will create a new volume
# based on the contents of the specified data source.
# When the AnyVolumeDataSource feature gate is enabled,
# dataSource contents will be copied to dataSourceRef,
# and dataSourceRef contents will be copied to
# dataSource when dataSourceRef.namespace is not
# specified. If the namespace is specified, then
# dataSourceRef will not be copied to dataSource.
dataSource:
# APIGroup is the group for the resource being
# referenced. If APIGroup is not specified, the
# specified Kind must be in the core API group. For
# any other third-party types, APIGroup is required.
apiGroup: string
# Kind is the type of resource being referenced
kind: KineticaCluster
# Name is the name of resource being referenced
name: string
# dataSourceRef specifies the object from which to
# populate the volume with data, if a non-empty volume
# is desired. This may be any object from a non-empty
# API group (non core object) or a
# PersistentVolumeClaim object. When this field is
# specified, volume binding will only succeed if the
# type of the specified object matches some installed
# volume populator or dynamic provisioner. This field
# will replace the functionality of the dataSource
# field and as such if both fields are non-empty, they
# must have the same value. For backwards
# compatibility, when namespace isn't specified in
# dataSourceRef, both fields (dataSource and
# dataSourceRef) will be set to the same value
# automatically if one of them is empty and the other
# is non-empty. When namespace is specified in
# dataSourceRef, dataSource isn't set to the same value
# and must be empty. There are three important
# differences between dataSource and dataSourceRef: *
# While dataSource only allows two specific types of
# objects, dataSourceRef allows any non-core object, as
# well as PersistentVolumeClaim objects. * While
# dataSource ignores disallowed values (dropping them),
# dataSourceRef preserves all values, and generates an
# error if a disallowed value is specified. * While
# dataSource only allows local objects, dataSourceRef
# allows objects in any namespaces. (Beta) Using this
# field requires the AnyVolumeDataSource feature gate
# to be enabled. (Alpha) Using the namespace field of
# dataSourceRef requires the
# CrossNamespaceVolumeDataSource feature gate to be
# enabled.
dataSourceRef:
# APIGroup is the group for the resource being
# referenced. If APIGroup is not specified, the
# specified Kind must be in the core API group. For
# any other third-party types, APIGroup is required.
apiGroup: string
# Kind is the type of resource being referenced
kind: KineticaCluster
# Name is the name of resource being referenced
name: string
# Namespace is the namespace of resource being
# referenced Note that when a namespace is specified,
# a gateway.networking.k8s.io/ReferenceGrant object
# is required in the referent namespace to allow that
# namespace's owner to accept the reference. See the
# ReferenceGrant documentation for details.
# (Alpha) This field requires the
# CrossNamespaceVolumeDataSource feature gate to be
# enabled.
namespace: string
# resources represents the minimum resources the volume
# should have. If RecoverVolumeExpansionFailure feature
# is enabled users are allowed to specify resource
# requirements that are lower than previous value but
# must still be higher than capacity recorded in the
# status field of the claim. More info:
# https://kubernetes.io/docs/concepts/storage/persistent-volumes#resources
resources:
# Claims lists the names of resources, defined in
# spec.resourceClaims, that are used by this
# container. This is an alpha field and requires
# enabling the DynamicResourceAllocation feature
# gate. This field is immutable. It can only be set
# for containers.
claims:
- name: string
# Limits describes the maximum amount of compute
# resources allowed. More info:
# https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/
limits: {}
# Requests describes the minimum amount of compute
# resources required. If Requests is omitted for a
# container, it defaults to Limits if that is
# explicitly specified, otherwise to an
# implementation-defined value. Requests cannot
# exceed Limits. More info:
# https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/
requests: {}
# selector is a label query over volumes to consider for
# binding.
selector:
# matchExpressions is a list of label selector
# requirements. The requirements are ANDed.
matchExpressions:
- key: string
# operator represents a key's relationship to a set
# of values. Valid operators are In, NotIn, Exists
# and DoesNotExist.
operator: string
# values is an array of string values. If the
# operator is In or NotIn, the values array must be
# non-empty. If the operator is Exists or
# DoesNotExist, the values array must be empty.
# This array is replaced during a strategic merge
# patch.
values: ["string"]
# matchLabels is a map of {key,value} pairs. A single
# {key,value} in the matchLabels map is equivalent to
# an element of matchExpressions, whose key field
# is "key", the operator is "In", and the values
# array contains only "value". The requirements are
# ANDed.
matchLabels: {}
# storageClassName is the name of the StorageClass
# required by the claim. More info:
# https://kubernetes.io/docs/concepts/storage/persistent-volumes#class-1
storageClassName: string
# volumeMode defines what type of volume is required by
# the claim. Value of Filesystem is implied when not
# included in claim spec.
volumeMode: string
# volumeName is the binding reference to the
# PersistentVolume backing this claim.
volumeName: string
# status represents the current information/status of a
# persistent volume claim. Read-only. More info:
# https://kubernetes.io/docs/concepts/storage/persistent-volumes#persistentvolumeclaims
status:
# accessModes contains the actual access modes the
# volume backing the PVC has. More info:
# https://kubernetes.io/docs/concepts/storage/persistent-volumes#access-modes-1
accessModes: ["string"]
# allocatedResources is the storage resource within
# AllocatedResources tracks the capacity allocated to a
# PVC. It may be larger than the actual capacity when a
# volume expansion operation is requested. For storage
# quota, the larger value from allocatedResources and
# PVC.spec.resources is used. If allocatedResources is
# not set, PVC.spec.resources alone is used for quota
# calculation. If a volume expansion capacity request
# is lowered, allocatedResources is only lowered if
# there are no expansion operations in progress and if
# the actual volume capacity is equal or lower than the
# requested capacity. This is an alpha field and
# requires enabling RecoverVolumeExpansionFailure
# feature.
allocatedResources: {}
# capacity represents the actual resources of the
# underlying volume.
capacity: {}
# conditions is the current Condition of persistent
# volume claim. If underlying persistent volume is
# being resized then the Condition will be set
# to 'ResizeStarted'.
conditions:
- lastProbeTime: string
# lastTransitionTime is the time the condition
# transitioned from one status to another.
lastTransitionTime: string
# message is the human-readable message indicating
# details about last transition.
message: string
# reason is a unique, short, machine-understandable string
# that gives the reason for the condition's last transition.
# If it reports "ResizeStarted" that means the underlying
# persistent volume is being resized.
reason: string
status: string
# PersistentVolumeClaimConditionType is a valid value
# of PersistentVolumeClaimCondition.Type
type: string
# phase represents the current phase of
# PersistentVolumeClaim.
phase: string
# resizeStatus stores status of resize operation.
# ResizeStatus is not set by default but when expansion
# is complete resizeStatus is set to empty string by
# resize controller or kubelet. This is an alpha field
# and requires enabling RecoverVolumeExpansionFailure
# feature.
resizeStatus: string
# 'wait_timeout' : Timeout in seconds for reading
# from or writing to this storage provider.
waitTimeout: "90"
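# --- Example (illustrative only): a GCS cold storage tier that relies
# on GOOGLE_APPLICATION_CREDENTIALS for authentication. The bucket,
# project, and limit values below are assumptions.
# coldStorageGCS:
#   bucketName: my-gpudb-cold-storage
#   projectID: my-gcp-project
#   limit: "10Ti"
#   highWatermark: 90
#   lowWatermark: 80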
# ColdStorageHDFS
coldStorageHDFS:
# ColdStorageDisk
default:
# 'base_path' : A base path based on the
# provider type for this tier.
basePath: string
# 'connection_timeout' : Timeout in seconds for
# connecting to this storage provider.
connectionTimeout: "30"
# * 'high_watermark' : Percentage used eviction threshold.
# Once usage exceeds this value, evictions from this
# tier will be scheduled in the background and
# continue until the 'low_watermark' percentage usage
# is reached. Default is "90", signifying a 90%
# memory usage threshold.
highWatermark: 90
# * 'limit' : The maximum (bytes) per rank that
# can be allocated across all resource groups.
limit: "1Gi"
# * 'low_watermark' : Percentage used recovery threshold.
# Once usage exceeds the 'high_watermark', evictions
# will continue until usage falls below this recovery
# threshold. Default is "80", signifying an 80% usage
# threshold.
lowWatermark: 80
name: string
# A base directory to use as a space for this tier.
path: "default" provisioner: "docker.io/hostpath"
# Kubernetes Persistent Volume Claim for this disk tier.
volumeClaim:
# APIVersion defines the versioned schema of this
# representation of an object. Servers should convert
# recognized schemas to the latest internal value, and
# may reject unrecognized values. More info:
# https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources
apiVersion: app.kinetica.com/v1
# Kind is a string value representing the REST resource
# this object represents. Servers may infer this from
# the endpoint the client submits requests to. Cannot
# be updated. In CamelCase. More info:
# https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds
kind: KineticaCluster
# Standard object's metadata. More info:
# https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata
metadata: {}
# spec defines the desired characteristics of a volume
# requested by a pod author. More info:
# https://kubernetes.io/docs/concepts/storage/persistent-volumes#persistentvolumeclaims
spec:
# accessModes contains the desired access modes the
# volume should have. More info:
# https://kubernetes.io/docs/concepts/storage/persistent-volumes#access-modes-1
accessModes: ["string"]
# dataSource field can be used to specify either: * An
# existing VolumeSnapshot object
# (snapshot.storage.k8s.io/VolumeSnapshot) * An
# existing PVC (PersistentVolumeClaim) If the
# provisioner or an external controller can support
# the specified data source, it will create a new
# volume based on the contents of the specified data
# source. When the AnyVolumeDataSource feature gate
# is enabled, dataSource contents will be copied to
# dataSourceRef, and dataSourceRef contents will be
# copied to dataSource when dataSourceRef.namespace
# is not specified. If the namespace is specified,
# then dataSourceRef will not be copied to
# dataSource.
dataSource:
# APIGroup is the group for the resource being
# referenced. If APIGroup is not specified, the
# specified Kind must be in the core API group. For
# any other third-party types, APIGroup is
# required.
apiGroup: string
# Kind is the type of resource being referenced
kind: KineticaCluster
# Name is the name of resource being referenced
name: string
# dataSourceRef specifies the object from which to
# populate the volume with data, if a non-empty
# volume is desired. This may be any object from a
# non-empty API group (non core object) or a
# PersistentVolumeClaim object. When this field is
# specified, volume binding will only succeed if the
# type of the specified object matches some installed
# volume populator or dynamic provisioner. This field
# will replace the functionality of the dataSource
# field and as such if both fields are non-empty,
# they must have the same value. For backwards
# compatibility, when namespace isn't specified in
# dataSourceRef, both fields (dataSource and
# dataSourceRef) will be set to the same value
# automatically if one of them is empty and the other
# is non-empty. When namespace is specified in
# dataSourceRef, dataSource isn't set to the same
# value and must be empty. There are three important
# differences between dataSource and dataSourceRef: *
# While dataSource only allows two specific types of
# objects, dataSourceRef allows any non-core object,
# as well as PersistentVolumeClaim objects. * While
# dataSource ignores disallowed values
# (dropping them), dataSourceRef preserves all
# values, and generates an error if a disallowed
# value is specified. * While dataSource only allows
# local objects, dataSourceRef allows objects in any
# namespaces. (Beta) Using this field requires the
# AnyVolumeDataSource feature gate to be enabled.
# (Alpha) Using the namespace field of dataSourceRef
# requires the CrossNamespaceVolumeDataSource feature
# gate to be enabled.
dataSourceRef:
# APIGroup is the group for the resource being
# referenced. If APIGroup is not specified, the
# specified Kind must be in the core API group. For
# any other third-party types, APIGroup is
# required.
apiGroup: string
# Kind is the type of resource being referenced
kind: KineticaCluster
# Name is the name of resource being referenced
name: string
# Namespace is the namespace of resource being
# referenced Note that when a namespace is
# specified, a
# gateway.networking.k8s.io/ReferenceGrant object
# is required in the referent namespace to allow
# that namespace's owner to accept the reference.
# See the ReferenceGrant documentation for
# details. (Alpha) This field requires the
# CrossNamespaceVolumeDataSource feature gate to be
# enabled.
namespace: string
# resources represents the minimum resources the
# volume should have. If
# RecoverVolumeExpansionFailure feature is enabled
# users are allowed to specify resource requirements
# that are lower than previous value but must still
# be higher than capacity recorded in the status
# field of the claim. More info:
# https://kubernetes.io/docs/concepts/storage/persistent-volumes#resources
resources:
# Claims lists the names of resources, defined in
# spec.resourceClaims, that are used by this
# container. This is an alpha field and requires
# enabling the DynamicResourceAllocation feature
# gate. This field is immutable. It can only be set
# for containers.
claims:
- name: string
# Limits describes the maximum amount of compute
# resources allowed. More info:
# https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/
limits: {}
# Requests describes the minimum amount of compute
# resources required. If Requests is omitted for a
# container, it defaults to Limits if that is
# explicitly specified, otherwise to an
# implementation-defined value. Requests cannot
# exceed Limits. More info:
# https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/
requests: {}
# selector is a label query over volumes to consider
# for binding.
selector:
# matchExpressions is a list of label selector
# requirements. The requirements are ANDed.
matchExpressions:
- key: string
# operator represents a key's relationship to a
# set of values. Valid operators are In, NotIn,
# Exists and DoesNotExist.
operator: string
# values is an array of string values. If the
# operator is In or NotIn, the values array must
# be non-empty. If the operator is Exists or
# DoesNotExist, the values array must be empty.
# This array is replaced during a strategic merge
# patch.
values: ["string"]
# matchLabels is a map of {key,value} pairs. A
# single {key,value} in the matchLabels map is
# equivalent to an element of matchExpressions,
# whose key field is "key", the operator is "In",
# and the values array contains only "value". The
# requirements are ANDed.
matchLabels: {}
# storageClassName is the name of the StorageClass
# required by the claim. More info:
# https://kubernetes.io/docs/concepts/storage/persistent-volumes#class-1
storageClassName: string
# volumeMode defines what type of volume is required
# by the claim. Value of Filesystem is implied when
# not included in claim spec.
volumeMode: string
# volumeName is the binding reference to the
# PersistentVolume backing this claim.
volumeName: string
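# --- Illustrative sketch of a minimal volumeClaim spec for a tier,
# assuming a StorageClass "premium-rwo" and a VolumeSnapshot
# "tier-snap" already exist in the cluster; both names are
# hypothetical placeholders, not defaults.
#
#   volumeClaim:
#     spec:
#       accessModes: ["ReadWriteOnce"]
#       storageClassName: "premium-rwo"
#       resources:
#         requests:
#           storage: 100Gi
#       dataSourceRef:
#         apiGroup: snapshot.storage.k8s.io
#         kind: VolumeSnapshot
#         name: tier-snap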
# status represents the current information/status of a
# persistent volume claim. Read-only. More info:
# https://kubernetes.io/docs/concepts/storage/persistent-volumes#persistentvolumeclaims
status:
# accessModes contains the actual access modes the
# volume backing the PVC has. More info:
# https://kubernetes.io/docs/concepts/storage/persistent-volumes#access-modes-1
accessModes: ["string"]
# allocatedResources is the storage resource within
# AllocatedResources tracks the capacity allocated to
# a PVC. It may be larger than the actual capacity
# when a volume expansion operation is requested. For
# storage quota, the larger value from
# allocatedResources and PVC.spec.resources is used.
# If allocatedResources is not set,
# PVC.spec.resources alone is used for quota
# calculation. If a volume expansion capacity request
# is lowered, allocatedResources is only lowered if
# there are no expansion operations in progress and
# if the actual volume capacity is equal or lower
# than the requested capacity. This is an alpha field
# and requires enabling RecoverVolumeExpansionFailure
# feature.
allocatedResources: {}
# capacity represents the actual resources of the
# underlying volume.
capacity: {}
# conditions is the current Condition of persistent
# volume claim. If underlying persistent volume is
# being resized then the Condition will be set
# to 'ResizeStarted'.
conditions:
- lastProbeTime: string
# lastTransitionTime is the time the condition
# transitioned from one status to another.
lastTransitionTime: string
# message is the human-readable message indicating
# details about last transition.
message: string
# reason is a unique, this should be a short,
# machine understandable string that gives the
# reason for condition's last transition. If it
# reports "ResizeStarted" that means the underlying
# persistent volume is being resized.
reason: string
status: string
# PersistentVolumeClaimConditionType is a valid
# value of PersistentVolumeClaimCondition.Type
type: string
# phase represents the current phase of
# PersistentVolumeClaim.
phase: string
# resizeStatus stores status of resize operation.
# ResizeStatus is not set by default but when
# expansion is complete resizeStatus is set to empty
# string by resize controller or kubelet. This is an
# alpha field and requires enabling
# RecoverVolumeExpansionFailure feature.
resizeStatus: string
# 'wait_timeout' : Timeout in seconds for reading
# from or writing to this storage provider.
waitTimeout: "90"
# 'hdfs_kerberos_keytab' : The Kerberos keytab file used to
# authenticate the "gpudb" Kerberos principal.
kerberosKeytab: string
# 'hdfs_principal' : The effective principal name to
# use when connecting to the hadoop cluster.
principal: string
# 'hdfs_uri' : The host IP address & port for
# the hadoop distributed file system. For example:
# hdfs://localhost:8020
uri: string
# 'hdfs_use_kerberos' : Set to "true" to enable Kerberos
# authentication to an HDFS storage server. The
# credentials of the principal are in the file specified
# by the 'hdfs_kerberos_keytab' parameter. Note that
# Kerberos's *kinit* command will be run when the database
# is started.
useKerberos: true
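# --- Illustrative example of a Kerberos-secured HDFS cold storage
# tier. The namenode address, principal, keytab path, base path and
# limit below are assumed placeholder values, not defaults.
#
#   coldStorageHDFS:
#     uri: "hdfs://namenode.example.com:8020"
#     principal: "gpudb/namenode.example.com@EXAMPLE.COM"
#     kerberosKeytab: "/opt/gpudb/kerberos/gpudb.keytab"
#     useKerberos: true
#     default:
#       basePath: "/kinetica/cold-storage"
#       limit: "10Ti"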
# ColdStorageS3
coldStorageS3:
awsAccessKeyId: string
awsRoleARN: string
awsSecretAccessKey: string
# 'base_path' : A base path based on the
# provider type for this tier.
basePath: string
bucketName: string
# 'connection_timeout' : Timeout in seconds for
# connecting to this storage provider.
connectionTimeout: "30" encryptionCustomerAlgorithm: string
encryptionCustomerKey: string
# EncryptionType - This is optional and valid values are
# sse-s3 (Encryption key is managed by Amazon S3) and
# sse-kms (Encryption key is managed by AWS Key Management
# Service (kms)).
encryptionType: string
# Endpoint - s3_endpoint
endpoint: string
# * 'high_watermark' : Percentage used eviction threshold.
# Once usage exceeds this value, evictions from this
# tier will be scheduled in the background and continue
# until the 'low_watermark' percentage usage is reached.
# Default is "90", signifying a 90% memory usage
# threshold.
highWatermark: 90
# KMSKeyID - This is optional and must be specified when
# encryption type is sse-kms.
kmsKeyID: string
# * 'limit' : The maximum (bytes) per rank that can
# be allocated across all resource groups.
limit: "1Gi"
# * 'low_watermark' : Percentage used recovery threshold.
# Once usage exceeds the 'high_watermark', evictions
# will continue until usage falls below this recovery
# threshold. Default is "80", signifying an 80% usage
# threshold.
lowWatermark: 80
name: string
# A base directory to use as a space for this tier.
path: "default" provisioner: "docker.io/hostpath" region:
string useManagedCredentials: true
# UseVirtualAddressing - 's3_use_virtual_addressing' : If
# true (default), S3 endpoints will be constructed using
# the 'virtual' style which includes the bucket name as
# part of the hostname. Set to false to use the 'path'
# style which treats the bucket name as if it is a path in
# the URI.
useVirtualAddressing: true
# Kubernetes Persistent Volume Claim for this disk tier.
volumeClaim:
# APIVersion defines the versioned schema of this
# representation of an object. Servers should convert
# recognized schemas to the latest internal value, and
# may reject unrecognized values. More info:
# https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources
apiVersion: app.kinetica.com/v1
# Kind is a string value representing the REST resource
# this object represents. Servers may infer this from the
# endpoint the client submits requests to. Cannot be
# updated. In CamelCase. More info:
# https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds
kind: KineticaCluster
# Standard object's metadata. More info:
# https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata
metadata: {}
# spec defines the desired characteristics of a volume
# requested by a pod author. More info:
# https://kubernetes.io/docs/concepts/storage/persistent-volumes#persistentvolumeclaims
spec:
# accessModes contains the desired access modes the
# volume should have. More info:
# https://kubernetes.io/docs/concepts/storage/persistent-volumes#access-modes-1
accessModes: ["string"]
# dataSource field can be used to specify either: * An
# existing VolumeSnapshot object
# (snapshot.storage.k8s.io/VolumeSnapshot) * An
# existing PVC (PersistentVolumeClaim) If the
# provisioner or an external controller can support the
# specified data source, it will create a new volume
# based on the contents of the specified data source.
# When the AnyVolumeDataSource feature gate is enabled,
# dataSource contents will be copied to dataSourceRef,
# and dataSourceRef contents will be copied to
# dataSource when dataSourceRef.namespace is not
# specified. If the namespace is specified, then
# dataSourceRef will not be copied to dataSource.
dataSource:
# APIGroup is the group for the resource being
# referenced. If APIGroup is not specified, the
# specified Kind must be in the core API group. For
# any other third-party types, APIGroup is required.
apiGroup: string
# Kind is the type of resource being referenced
kind: KineticaCluster
# Name is the name of resource being referenced
name: string
# dataSourceRef specifies the object from which to
# populate the volume with data, if a non-empty volume
# is desired. This may be any object from a non-empty
# API group (non core object) or a
# PersistentVolumeClaim object. When this field is
# specified, volume binding will only succeed if the
# type of the specified object matches some installed
# volume populator or dynamic provisioner. This field
# will replace the functionality of the dataSource
# field and as such if both fields are non-empty, they
# must have the same value. For backwards
# compatibility, when namespace isn't specified in
# dataSourceRef, both fields (dataSource and
# dataSourceRef) will be set to the same value
# automatically if one of them is empty and the other
# is non-empty. When namespace is specified in
# dataSourceRef, dataSource isn't set to the same value
# and must be empty. There are three important
# differences between dataSource and dataSourceRef: *
# While dataSource only allows two specific types of
# objects, dataSourceRef allows any non-core object, as
# well as PersistentVolumeClaim objects. * While
# dataSource ignores disallowed values (dropping them),
# dataSourceRef preserves all values, and generates an
# error if a disallowed value is specified. * While
# dataSource only allows local objects, dataSourceRef
# allows objects in any namespaces. (Beta) Using this
# field requires the AnyVolumeDataSource feature gate
# to be enabled. (Alpha) Using the namespace field of
# dataSourceRef requires the
# CrossNamespaceVolumeDataSource feature gate to be
# enabled.
dataSourceRef:
# APIGroup is the group for the resource being
# referenced. If APIGroup is not specified, the
# specified Kind must be in the core API group. For
# any other third-party types, APIGroup is required.
apiGroup: string
# Kind is the type of resource being referenced
kind: KineticaCluster
# Name is the name of resource being referenced
name: string
# Namespace is the namespace of resource being
# referenced Note that when a namespace is specified,
# a gateway.networking.k8s.io/ReferenceGrant object
# is required in the referent namespace to allow that
# namespace's owner to accept the reference. See the
# ReferenceGrant documentation for details.
# (Alpha) This field requires the
# CrossNamespaceVolumeDataSource feature gate to be
# enabled.
namespace: string
# resources represents the minimum resources the volume
# should have. If RecoverVolumeExpansionFailure feature
# is enabled users are allowed to specify resource
# requirements that are lower than previous value but
# must still be higher than capacity recorded in the
# status field of the claim. More info:
# https://kubernetes.io/docs/concepts/storage/persistent-volumes#resources
resources:
# Claims lists the names of resources, defined in
# spec.resourceClaims, that are used by this
# container. This is an alpha field and requires
# enabling the DynamicResourceAllocation feature
# gate. This field is immutable. It can only be set
# for containers.
claims:
- name: string
# Limits describes the maximum amount of compute
# resources allowed. More info:
# https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/
limits: {}
# Requests describes the minimum amount of compute
# resources required. If Requests is omitted for a
# container, it defaults to Limits if that is
# explicitly specified, otherwise to an
# implementation-defined value. Requests cannot
# exceed Limits. More info:
# https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/
requests: {}
# selector is a label query over volumes to consider for
# binding.
selector:
# matchExpressions is a list of label selector
# requirements. The requirements are ANDed.
matchExpressions:
- key: string
# operator represents a key's relationship to a set
# of values. Valid operators are In, NotIn, Exists
# and DoesNotExist.
operator: string
# values is an array of string values. If the
# operator is In or NotIn, the values array must be
# non-empty. If the operator is Exists or
# DoesNotExist, the values array must be empty.
# This array is replaced during a strategic merge
# patch.
values: ["string"]
# matchLabels is a map of {key,value} pairs. A single
# {key,value} in the matchLabels map is equivalent to
# an element of matchExpressions, whose key field
# is "key", the operator is "In", and the values
# array contains only "value". The requirements are
# ANDed.
matchLabels: {}
# storageClassName is the name of the StorageClass
# required by the claim. More info:
# https://kubernetes.io/docs/concepts/storage/persistent-volumes#class-1
storageClassName: string
# volumeMode defines what type of volume is required by
# the claim. Value of Filesystem is implied when not
# included in claim spec.
volumeMode: string
# volumeName is the binding reference to the
# PersistentVolume backing this claim.
volumeName: string
# status represents the current information/status of a
# persistent volume claim. Read-only. More info:
# https://kubernetes.io/docs/concepts/storage/persistent-volumes#persistentvolumeclaims
status:
# accessModes contains the actual access modes the
# volume backing the PVC has. More info:
# https://kubernetes.io/docs/concepts/storage/persistent-volumes#access-modes-1
accessModes: ["string"]
# allocatedResources is the storage resource within
# AllocatedResources tracks the capacity allocated to a
# PVC. It may be larger than the actual capacity when a
# volume expansion operation is requested. For storage
# quota, the larger value from allocatedResources and
# PVC.spec.resources is used. If allocatedResources is
# not set, PVC.spec.resources alone is used for quota
# calculation. If a volume expansion capacity request
# is lowered, allocatedResources is only lowered if
# there are no expansion operations in progress and if
# the actual volume capacity is equal or lower than the
# requested capacity. This is an alpha field and
# requires enabling RecoverVolumeExpansionFailure
# feature.
allocatedResources: {}
# capacity represents the actual resources of the
# underlying volume.
capacity: {}
# conditions is the current Condition of persistent
# volume claim. If underlying persistent volume is
# being resized then the Condition will be set
# to 'ResizeStarted'.
conditions:
- lastProbeTime: string
# lastTransitionTime is the time the condition
# transitioned from one status to another.
lastTransitionTime: string
# message is the human-readable message indicating
# details about last transition.
message: string
# reason is a unique, this should be a short, machine
# understandable string that gives the reason for
# condition's last transition. If it
# reports "ResizeStarted" that means the underlying
# persistent volume is being resized.
reason: string
status: string
# PersistentVolumeClaimConditionType is a valid value
# of PersistentVolumeClaimCondition.Type
type: string
# phase represents the current phase of
# PersistentVolumeClaim.
phase: string
# resizeStatus stores status of resize operation.
# ResizeStatus is not set by default but when expansion
# is complete resizeStatus is set to empty string by
# resize controller or kubelet. This is an alpha field
# and requires enabling RecoverVolumeExpansionFailure
# feature.
resizeStatus: string
# 'wait_timeout' : Timeout in seconds for reading
# from or writing to this storage provider.
waitTimeout: "90"
# ColdStorageType - The storage provider type. Currently
# supports "none", "disk" (local/network storage), "hdfs"
# (Hadoop distributed filesystem), "s3" (Amazon S3
# bucket), "azure_blob" (Microsoft Azure Blob Storage)
# and "gcs" (Google GCS Bucket).
coldStorageType: "none"
name: string
# The DiskCacheTier is used as temporary swap space for data that
# doesn't fit in RAM or VRAM. The disk should be as fast or
# faster than the Persist Tier storage since this tier is used as
# an intermediary cache between the RAM and Persist Tiers.
diskCacheTier:
# DiskTierStorageLimit
default:
# * 'high_watermark' : Percentage used eviction threshold.
# Once usage exceeds this value, evictions from this
# tier will be scheduled in the background and continue
# until the 'low_watermark' percentage usage is reached.
# Default is "90", signifying a 90% memory usage
# threshold.
highWatermark: 90
# * 'limit' : The maximum (bytes) per rank that can
# be allocated across all resource groups.
limit: "1Gi"
# * 'low_watermark' : Percentage used recovery threshold.
# Once usage exceeds the 'high_watermark', evictions
# will continue until usage falls below this recovery
# threshold. Default is "80", signifying an 80% usage
# threshold.
lowWatermark: 80
name: string
# A base directory to use as a space for this tier.
path: "default" provisioner: "docker.io/hostpath"
# Kubernetes Persistent Volume Claim for this disk tier.
volumeClaim:
# APIVersion defines the versioned schema of this
# representation of an object. Servers should convert
# recognized schemas to the latest internal value, and
# may reject unrecognized values. More info:
# https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources
apiVersion: app.kinetica.com/v1
# Kind is a string value representing the REST resource
# this object represents. Servers may infer this from the
# endpoint the client submits requests to. Cannot be
# updated. In CamelCase. More info:
# https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds
kind: KineticaCluster
# Standard object's metadata. More info:
# https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata
metadata: {}
# spec defines the desired characteristics of a volume
# requested by a pod author. More info:
# https://kubernetes.io/docs/concepts/storage/persistent-volumes#persistentvolumeclaims
spec:
# accessModes contains the desired access modes the
# volume should have. More info:
# https://kubernetes.io/docs/concepts/storage/persistent-volumes#access-modes-1
accessModes: ["string"]
# dataSource field can be used to specify either: * An
# existing VolumeSnapshot object
# (snapshot.storage.k8s.io/VolumeSnapshot) * An
# existing PVC (PersistentVolumeClaim) If the
# provisioner or an external controller can support the
# specified data source, it will create a new volume
# based on the contents of the specified data source.
# When the AnyVolumeDataSource feature gate is enabled,
# dataSource contents will be copied to dataSourceRef,
# and dataSourceRef contents will be copied to
# dataSource when dataSourceRef.namespace is not
# specified. If the namespace is specified, then
# dataSourceRef will not be copied to dataSource.
dataSource:
# APIGroup is the group for the resource being
# referenced. If APIGroup is not specified, the
# specified Kind must be in the core API group. For
# any other third-party types, APIGroup is required.
apiGroup: string
# Kind is the type of resource being referenced
kind: KineticaCluster
# Name is the name of resource being referenced
name: string
# dataSourceRef specifies the object from which to
# populate the volume with data, if a non-empty volume
# is desired. This may be any object from a non-empty
# API group (non core object) or a
# PersistentVolumeClaim object. When this field is
# specified, volume binding will only succeed if the
# type of the specified object matches some installed
# volume populator or dynamic provisioner. This field
# will replace the functionality of the dataSource
# field and as such if both fields are non-empty, they
# must have the same value. For backwards
# compatibility, when namespace isn't specified in
# dataSourceRef, both fields (dataSource and
# dataSourceRef) will be set to the same value
# automatically if one of them is empty and the other
# is non-empty. When namespace is specified in
# dataSourceRef, dataSource isn't set to the same value
# and must be empty. There are three important
# differences between dataSource and dataSourceRef: *
# While dataSource only allows two specific types of
# objects, dataSourceRef allows any non-core object, as
# well as PersistentVolumeClaim objects. * While
# dataSource ignores disallowed values (dropping them),
# dataSourceRef preserves all values, and generates an
# error if a disallowed value is specified. * While
# dataSource only allows local objects, dataSourceRef
# allows objects in any namespaces. (Beta) Using this
# field requires the AnyVolumeDataSource feature gate
# to be enabled. (Alpha) Using the namespace field of
# dataSourceRef requires the
# CrossNamespaceVolumeDataSource feature gate to be
# enabled.
dataSourceRef:
# APIGroup is the group for the resource being
# referenced. If APIGroup is not specified, the
# specified Kind must be in the core API group. For
# any other third-party types, APIGroup is required.
apiGroup: string
# Kind is the type of resource being referenced
kind: KineticaCluster
# Name is the name of resource being referenced
name: string
# Namespace is the namespace of resource being
# referenced Note that when a namespace is specified,
# a gateway.networking.k8s.io/ReferenceGrant object
# is required in the referent namespace to allow that
# namespace's owner to accept the reference. See the
# ReferenceGrant documentation for details.
# (Alpha) This field requires the
# CrossNamespaceVolumeDataSource feature gate to be
# enabled.
namespace: string
# resources represents the minimum resources the volume
# should have. If RecoverVolumeExpansionFailure feature
# is enabled users are allowed to specify resource
# requirements that are lower than previous value but
# must still be higher than capacity recorded in the
# status field of the claim. More info:
# https://kubernetes.io/docs/concepts/storage/persistent-volumes#resources
resources:
# Claims lists the names of resources, defined in
# spec.resourceClaims, that are used by this
# container. This is an alpha field and requires
# enabling the DynamicResourceAllocation feature
# gate. This field is immutable. It can only be set
# for containers.
claims:
- name: string
# Limits describes the maximum amount of compute
# resources allowed. More info:
# https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/
limits: {}
# Requests describes the minimum amount of compute
# resources required. If Requests is omitted for a
# container, it defaults to Limits if that is
# explicitly specified, otherwise to an
# implementation-defined value. Requests cannot
# exceed Limits. More info:
# https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/
requests: {}
# selector is a label query over volumes to consider for
# binding.
selector:
# matchExpressions is a list of label selector
# requirements. The requirements are ANDed.
matchExpressions:
- key: string
# operator represents a key's relationship to a set
# of values. Valid operators are In, NotIn, Exists
# and DoesNotExist.
operator: string
# values is an array of string values. If the
# operator is In or NotIn, the values array must be
# non-empty. If the operator is Exists or
# DoesNotExist, the values array must be empty.
# This array is replaced during a strategic merge
# patch.
values: ["string"]
# matchLabels is a map of {key,value} pairs. A single
# {key,value} in the matchLabels map is equivalent to
# an element of matchExpressions, whose key field
# is "key", the operator is "In", and the values
# array contains only "value". The requirements are
# ANDed.
matchLabels: {}
# storageClassName is the name of the StorageClass
# required by the claim. More info:
# https://kubernetes.io/docs/concepts/storage/persistent-volumes#class-1
storageClassName: string
# volumeMode defines what type of volume is required by
# the claim. Value of Filesystem is implied when not
# included in claim spec.
volumeMode: string
# volumeName is the binding reference to the
# PersistentVolume backing this claim.
volumeName: string
# status represents the current information/status of a
# persistent volume claim. Read-only. More info:
# https://kubernetes.io/docs/concepts/storage/persistent-volumes#persistentvolumeclaims
status:
# accessModes contains the actual access modes the
# volume backing the PVC has. More info:
# https://kubernetes.io/docs/concepts/storage/persistent-volumes#access-modes-1
accessModes: ["string"]
# allocatedResources is the storage resource within
# AllocatedResources tracks the capacity allocated to a
# PVC. It may be larger than the actual capacity when a
# volume expansion operation is requested. For storage
# quota, the larger value from allocatedResources and
# PVC.spec.resources is used. If allocatedResources is
# not set, PVC.spec.resources alone is used for quota
# calculation. If a volume expansion capacity request
# is lowered, allocatedResources is only lowered if
# there are no expansion operations in progress and if
# the actual volume capacity is equal or lower than the
# requested capacity. This is an alpha field and
# requires enabling RecoverVolumeExpansionFailure
# feature.
allocatedResources: {}
# capacity represents the actual resources of the
# underlying volume.
capacity: {}
# conditions is the current Condition of persistent
# volume claim. If underlying persistent volume is
# being resized then the Condition will be set
# to 'ResizeStarted'.
conditions:
- lastProbeTime: string
# lastTransitionTime is the time the condition
# transitioned from one status to another.
lastTransitionTime: string
# message is the human-readable message indicating
# details about last transition.
message: string
# reason is a unique, this should be a short, machine
# understandable string that gives the reason for
# condition's last transition. If it
# reports "ResizeStarted" that means the underlying
# persistent volume is being resized.
reason: string
status: string
# PersistentVolumeClaimConditionType is a valid value
# of PersistentVolumeClaimCondition.Type
type: string
# phase represents the current phase of
# PersistentVolumeClaim.
phase: string
# resizeStatus stores status of resize operation.
# ResizeStatus is not set by default but when expansion
# is complete resizeStatus is set to empty string by
# resize controller or kubelet. This is an alpha field
# and requires enabling RecoverVolumeExpansionFailure
# feature.
resizeStatus: string
defaultStorePersistentObjects: true
ranks:
- highWatermark: 90
# * 'limit' : The maximum (bytes) per rank that can
# be allocated across all resource groups.
limit: "1Gi"
# * 'low_watermark' : Percentage used recovery threshold.
# Once usage exceeds the 'high_watermark', evictions
# will continue until usage falls below this recovery
# threshold. Default is "80", signifying an 80% usage
# threshold.
lowWatermark: 80
name: string
# A base directory to use as a space for this tier.
path: "default" provisioner: "docker.io/hostpath"
# Kubernetes Persistent Volume Claim for this disk tier.
volumeClaim:
# APIVersion defines the versioned schema of this
# representation of an object. Servers should convert
# recognized schemas to the latest internal value, and
# may reject unrecognized values. More info:
# https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources
apiVersion: app.kinetica.com/v1
# Kind is a string value representing the REST resource
# this object represents. Servers may infer this from the
# endpoint the client submits requests to. Cannot be
# updated. In CamelCase. More info:
# https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds
kind: KineticaCluster
# Standard object's metadata. More info:
# https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata
metadata: {}
# spec defines the desired characteristics of a volume
# requested by a pod author. More info:
# https://kubernetes.io/docs/concepts/storage/persistent-volumes#persistentvolumeclaims
spec:
# accessModes contains the desired access modes the
# volume should have. More info:
# https://kubernetes.io/docs/concepts/storage/persistent-volumes#access-modes-1
accessModes: ["string"]
# dataSource field can be used to specify either: * An
# existing VolumeSnapshot object
# (snapshot.storage.k8s.io/VolumeSnapshot) * An
# existing PVC (PersistentVolumeClaim) If the
# provisioner or an external controller can support the
# specified data source, it will create a new volume
# based on the contents of the specified data source.
# When the AnyVolumeDataSource feature gate is enabled,
# dataSource contents will be copied to dataSourceRef,
# and dataSourceRef contents will be copied to
# dataSource when dataSourceRef.namespace is not
# specified. If the namespace is specified, then
# dataSourceRef will not be copied to dataSource.
dataSource:
# APIGroup is the group for the resource being
# referenced. If APIGroup is not specified, the
# specified Kind must be in the core API group. For
# any other third-party types, APIGroup is required.
apiGroup: string
# Kind is the type of resource being referenced
kind: KineticaCluster
# Name is the name of resource being referenced
name: string
# dataSourceRef specifies the object from which to
# populate the volume with data, if a non-empty volume
# is desired. This may be any object from a non-empty
# API group (non core object) or a
# PersistentVolumeClaim object. When this field is
# specified, volume binding will only succeed if the
# type of the specified object matches some installed
# volume populator or dynamic provisioner. This field
# will replace the functionality of the dataSource
# field and as such if both fields are non-empty, they
# must have the same value. For backwards
# compatibility, when namespace isn't specified in
# dataSourceRef, both fields (dataSource and
# dataSourceRef) will be set to the same value
# automatically if one of them is empty and the other
# is non-empty. When namespace is specified in
# dataSourceRef, dataSource isn't set to the same value
# and must be empty. There are three important
# differences between dataSource and dataSourceRef: *
# While dataSource only allows two specific types of
# objects, dataSourceRef allows any non-core object, as
# well as PersistentVolumeClaim objects. * While
# dataSource ignores disallowed values (dropping them),
# dataSourceRef preserves all values, and generates an
# error if a disallowed value is specified. * While
# dataSource only allows local objects, dataSourceRef
# allows objects in any namespaces. (Beta) Using this
# field requires the AnyVolumeDataSource feature gate
# to be enabled. (Alpha) Using the namespace field of
# dataSourceRef requires the
# CrossNamespaceVolumeDataSource feature gate to be
# enabled.
dataSourceRef:
# APIGroup is the group for the resource being
# referenced. If APIGroup is not specified, the
# specified Kind must be in the core API group. For
# any other third-party types, APIGroup is required.
apiGroup: string
# Kind is the type of resource being referenced
kind: KineticaCluster
# Name is the name of resource being referenced
name: string
# Namespace is the namespace of resource being
# referenced Note that when a namespace is specified,
# a gateway.networking.k8s.io/ReferenceGrant object
# is required in the referent namespace to allow that
# namespace's owner to accept the reference. See the
# ReferenceGrant documentation for details.
# (Alpha) This field requires the
# CrossNamespaceVolumeDataSource feature gate to be
# enabled.
namespace: string
# resources represents the minimum resources the volume
# should have. If RecoverVolumeExpansionFailure feature
# is enabled users are allowed to specify resource
# requirements that are lower than previous value but
# must still be higher than capacity recorded in the
# status field of the claim. More info:
# https://kubernetes.io/docs/concepts/storage/persistent-volumes#resources
resources:
# Claims lists the names of resources, defined in
# spec.resourceClaims, that are used by this
# container. This is an alpha field and requires
# enabling the DynamicResourceAllocation feature
# gate. This field is immutable. It can only be set
# for containers.
claims:
- name: string
# Limits describes the maximum amount of compute
# resources allowed. More info:
# https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/
limits: {}
# Requests describes the minimum amount of compute
# resources required. If Requests is omitted for a
# container, it defaults to Limits if that is
# explicitly specified, otherwise to an
# implementation-defined value. Requests cannot
# exceed Limits. More info:
# https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/
requests: {}
# selector is a label query over volumes to consider for
# binding.
selector:
# matchExpressions is a list of label selector
# requirements. The requirements are ANDed.
matchExpressions:
- key: string
# operator represents a key's relationship to a set
# of values. Valid operators are In, NotIn, Exists
# and DoesNotExist.
operator: string
# values is an array of string values. If the
# operator is In or NotIn, the values array must be
# non-empty. If the operator is Exists or
# DoesNotExist, the values array must be empty.
# This array is replaced during a strategic merge
# patch.
values: ["string"]
# matchLabels is a map of {key,value} pairs. A single
# {key,value} in the matchLabels map is equivalent to
# an element of matchExpressions, whose key field
# is "key", the operator is "In", and the values
# array contains only "value". The requirements are
# ANDed.
matchLabels: {}
# storageClassName is the name of the StorageClass
# required by the claim. More info:
# https://kubernetes.io/docs/concepts/storage/persistent-volumes#class-1
storageClassName: string
# volumeMode defines what type of volume is required by
# the claim. Value of Filesystem is implied when not
# included in claim spec.
volumeMode: string
# volumeName is the binding reference to the
# PersistentVolume backing this claim.
volumeName: string
# status represents the current information/status of a
# persistent volume claim. Read-only. More info:
# https://kubernetes.io/docs/concepts/storage/persistent-volumes#persistentvolumeclaims
status:
# accessModes contains the actual access modes the
# volume backing the PVC has. More info:
# https://kubernetes.io/docs/concepts/storage/persistent-volumes#access-modes-1
accessModes: ["string"]
# allocatedResources is the storage resource within
# AllocatedResources tracks the capacity allocated to a
# PVC. It may be larger than the actual capacity when a
# volume expansion operation is requested. For storage
# quota, the larger value from allocatedResources and
# PVC.spec.resources is used. If allocatedResources is
# not set, PVC.spec.resources alone is used for quota
# calculation. If a volume expansion capacity request
# is lowered, allocatedResources is only lowered if
# there are no expansion operations in progress and if
# the actual volume capacity is equal or lower than the
# requested capacity. This is an alpha field and
# requires enabling RecoverVolumeExpansionFailure
# feature.
allocatedResources: {}
# capacity represents the actual resources of the
# underlying volume.
capacity: {}
# conditions is the current Condition of persistent
# volume claim. If underlying persistent volume is
# being resized then the Condition will be set
# to 'ResizeStarted'.
conditions:
- lastProbeTime: string
# lastTransitionTime is the time the condition
# transitioned from one status to another.
lastTransitionTime: string
# message is the human-readable message indicating
# details about last transition.
message: string
# reason is a unique, this should be a short, machine
# understandable string that gives the reason for
# condition's last transition. If it
# reports "ResizeStarted" that means the underlying
# persistent volume is being resized.
reason: string
status: string
# PersistentVolumeClaimConditionType is a valid value
# of PersistentVolumeClaimCondition.Type
type: string
# phase represents the current phase of
# PersistentVolumeClaim.
phase: string
# resizeStatus stores status of resize operation.
# ResizeStatus is not set by default but when expansion
# is complete resizeStatus is set to empty string by
# resize controller or kubelet. This is an alpha field
# and requires enabling RecoverVolumeExpansionFailure
# feature.
resizeStatus: string
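# --- Illustrative sketch of a disk cache tier backed by a fast
# StorageClass. The class name "fast-ssd" and the sizes shown are
# assumed placeholder values, not defaults.
#
#   diskCacheTier:
#     default:
#       limit: "250Gi"
#       highWatermark: 90
#       lowWatermark: 80
#       volumeClaim:
#         spec:
#           accessModes: ["ReadWriteOnce"]
#           storageClassName: "fast-ssd"
#           resources:
#             requests:
#               storage: 250Gi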
# GlobalTier Parameters
globalTier:
# Co-locates all disks to a single disk i.e. persist, cache,
# UDF will be on a single PVC.
colocateDisks: true
# Timeout in seconds for subsequent requests to wait on a
# locked resource
concurrentWaitTimeout: 120
# EncryptDataAtRest - Enable disk encryption of data at rest
encryptDataAtRest: true
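# --- Illustrative example: disabling disk co-location so each tier
# gets its own PVC, while keeping at-rest encryption on. Values here
# are shown only to illustrate the shape of the globalTier block.
#
#   globalTier:
#     colocateDisks: false
#     concurrentWaitTimeout: 120
#     encryptDataAtRest: true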
# The PersistTier is the durable, on-disk storage tier for the
# database. It holds the permanent copy of the data and acts as the
# repository of record behind the RAM and disk cache tiers,
# surviving database restarts.
persistTier:
# DiskTierStorageLimit
default:
# * 'high_watermark' : Percentage used eviction threshold.
# Once usage exceeds this value, evictions from this
# tier will be scheduled in the background and continue
# until the 'low_watermark' percentage usage is reached.
# Default is "90", signifying a 90% memory usage
# threshold.
highWatermark: 90
# * 'limit' : The maximum (bytes) per rank that can
# be allocated across all resource groups.
limit: "1Gi"
# * 'low_watermark' : Percentage used recovery threshold.
# Once usage exceeds the 'high_watermark', evictions
# will continue until usage falls below this recovery
# threshold. Default is "80", signifying an 80% usage
# threshold.
lowWatermark: 80
name: string
# A base directory to use as a space for this tier.
path: "default" provisioner: "docker.io/hostpath"
# Kubernetes Persistent Volume Claim for this disk tier.
volumeClaim:
# APIVersion defines the versioned schema of this
# representation of an object. Servers should convert
# recognized schemas to the latest internal value, and
# may reject unrecognized values. More info:
# https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources
apiVersion: app.kinetica.com/v1
# Kind is a string value representing the REST resource
# this object represents. Servers may infer this from the
# endpoint the client submits requests to. Cannot be
# updated. In CamelCase. More info:
# https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds
kind: KineticaCluster
# Standard object's metadata. More info:
# https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata
metadata: {}
# spec defines the desired characteristics of a volume
# requested by a pod author. More info:
# https://kubernetes.io/docs/concepts/storage/persistent-volumes#persistentvolumeclaims
spec:
# accessModes contains the desired access modes the
# volume should have. More info:
# https://kubernetes.io/docs/concepts/storage/persistent-volumes#access-modes-1
accessModes: ["string"]
# dataSource field can be used to specify either: * An
# existing VolumeSnapshot object
# (snapshot.storage.k8s.io/VolumeSnapshot) * An
# existing PVC (PersistentVolumeClaim) If the
# provisioner or an external controller can support the
# specified data source, it will create a new volume
# based on the contents of the specified data source.
# When the AnyVolumeDataSource feature gate is enabled,
# dataSource contents will be copied to dataSourceRef,
# and dataSourceRef contents will be copied to
# dataSource when dataSourceRef.namespace is not
# specified. If the namespace is specified, then
# dataSourceRef will not be copied to dataSource.
dataSource:
# APIGroup is the group for the resource being
# referenced. If APIGroup is not specified, the
# specified Kind must be in the core API group. For
# any other third-party types, APIGroup is required.
apiGroup: string
# Kind is the type of resource being referenced
kind: KineticaCluster
# Name is the name of resource being referenced
name: string
# dataSourceRef specifies the object from which to
# populate the volume with data, if a non-empty volume
# is desired. This may be any object from a non-empty
# API group (non core object) or a
# PersistentVolumeClaim object. When this field is
# specified, volume binding will only succeed if the
# type of the specified object matches some installed
# volume populator or dynamic provisioner. This field
# will replace the functionality of the dataSource
# field and as such if both fields are non-empty, they
# must have the same value. For backwards
# compatibility, when namespace isn't specified in
# dataSourceRef, both fields (dataSource and
# dataSourceRef) will be set to the same value
# automatically if one of them is empty and the other
# is non-empty. When namespace is specified in
# dataSourceRef, dataSource isn't set to the same value
# and must be empty. There are three important
# differences between dataSource and dataSourceRef: *
# While dataSource only allows two specific types of
# objects, dataSourceRef allows any non-core object, as
# well as PersistentVolumeClaim objects. * While
# dataSource ignores disallowed values (dropping them),
# dataSourceRef preserves all values, and generates an
# error if a disallowed value is specified. * While
# dataSource only allows local objects, dataSourceRef
# allows objects in any namespaces. (Beta) Using this
# field requires the AnyVolumeDataSource feature gate
# to be enabled. (Alpha) Using the namespace field of
# dataSourceRef requires the
# CrossNamespaceVolumeDataSource feature gate to be
# enabled.
dataSourceRef:
# APIGroup is the group for the resource being
# referenced. If APIGroup is not specified, the
# specified Kind must be in the core API group. For
# any other third-party types, APIGroup is required.
apiGroup: string
# Kind is the type of resource being referenced
kind: KineticaCluster
# Name is the name of resource being referenced
name: string
# Namespace is the namespace of resource being
# referenced Note that when a namespace is specified,
# a gateway.networking.k8s.io/ReferenceGrant object
# is required in the referent namespace to allow that
# namespace's owner to accept the reference. See the
# ReferenceGrant documentation for details.
# (Alpha) This field requires the
# CrossNamespaceVolumeDataSource feature gate to be
# enabled.
namespace: string
# resources represents the minimum resources the volume
# should have. If RecoverVolumeExpansionFailure feature
# is enabled users are allowed to specify resource
# requirements that are lower than previous value but
# must still be higher than capacity recorded in the
# status field of the claim. More info:
# https://kubernetes.io/docs/concepts/storage/persistent-volumes#resources
resources:
# Claims lists the names of resources, defined in
# spec.resourceClaims, that are used by this
# container. This is an alpha field and requires
# enabling the DynamicResourceAllocation feature
# gate. This field is immutable. It can only be set
# for containers.
claims:
- name: string
# Limits describes the maximum amount of compute
# resources allowed. More info:
# https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/
limits: {}
# Requests describes the minimum amount of compute
# resources required. If Requests is omitted for a
# container, it defaults to Limits if that is
# explicitly specified, otherwise to an
# implementation-defined value. Requests cannot
# exceed Limits. More info:
# https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/
requests: {}
# selector is a label query over volumes to consider for
# binding.
selector:
# matchExpressions is a list of label selector
# requirements. The requirements are ANDed.
matchExpressions:
- key: string
# operator represents a key's relationship to a set
# of values. Valid operators are In, NotIn, Exists
# and DoesNotExist.
operator: string
# values is an array of string values. If the
# operator is In or NotIn, the values array must be
# non-empty. If the operator is Exists or
# DoesNotExist, the values array must be empty.
# This array is replaced during a strategic merge
# patch.
values: ["string"]
# matchLabels is a map of {key,value} pairs. A single
# {key,value} in the matchLabels map is equivalent to
# an element of matchExpressions, whose key field
# is "key", the operator is "In", and the values
# array contains only "value". The requirements are
# ANDed.
matchLabels: {}
# storageClassName is the name of the StorageClass
# required by the claim. More info:
# https://kubernetes.io/docs/concepts/storage/persistent-volumes#class-1
storageClassName: string
# volumeMode defines what type of volume is required by
# the claim. Value of Filesystem is implied when not
# included in claim spec.
volumeMode: string
# volumeName is the binding reference to the
# PersistentVolume backing this claim.
volumeName: string
# status represents the current information/status of a
# persistent volume claim. Read-only. More info:
# https://kubernetes.io/docs/concepts/storage/persistent-volumes#persistentvolumeclaims
status:
# accessModes contains the actual access modes the
# volume backing the PVC has. More info:
# https://kubernetes.io/docs/concepts/storage/persistent-volumes#access-modes-1
accessModes: ["string"]
# allocatedResources is the storage resource within
# AllocatedResources tracks the capacity allocated to a
# PVC. It may be larger than the actual capacity when a
# volume expansion operation is requested. For storage
# quota, the larger value from allocatedResources and
# PVC.spec.resources is used. If allocatedResources is
# not set, PVC.spec.resources alone is used for quota
# calculation. If a volume expansion capacity request
# is lowered, allocatedResources is only lowered if
# there are no expansion operations in progress and if
# the actual volume capacity is equal or lower than the
# requested capacity. This is an alpha field and
# requires enabling RecoverVolumeExpansionFailure
# feature.
allocatedResources: {}
# capacity represents the actual resources of the
# underlying volume.
capacity: {}
# conditions is the current Condition of persistent
# volume claim. If underlying persistent volume is
# being resized then the Condition will be set
# to 'ResizeStarted'.
conditions:
- lastProbeTime: string
# lastTransitionTime is the time the condition
# transitioned from one status to another.
lastTransitionTime: string
# message is the human-readable message indicating
# details about last transition.
message: string
# reason is a unique, this should be a short, machine
# understandable string that gives the reason for
# condition's last transition. If it
# reports "ResizeStarted" that means the underlying
# persistent volume is being resized.
reason: string
status: string
# PersistentVolumeClaimConditionType is a valid value
# of PersistentVolumeClaimCondition.Type
type: string
# phase represents the current phase of
# PersistentVolumeClaim.
phase: string
# resizeStatus stores status of resize operation.
# ResizeStatus is not set by default but when expansion
# is complete resizeStatus is set to empty string by
# resize controller or kubelet. This is an alpha field
# and requires enabling RecoverVolumeExpansionFailure
# feature.
resizeStatus: string
defaultStorePersistentObjects: true
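# --- Illustrative sketch of a persist tier with a per-rank override.
# The StorageClass "persist-ssd", the rank name and the sizes are
# assumed placeholder values, not defaults.
#
#   persistTier:
#     default:
#       limit: "1Ti"
#       volumeClaim:
#         spec:
#           accessModes: ["ReadWriteOnce"]
#           storageClassName: "persist-ssd"
#           resources:
#             requests:
#               storage: 1Ti
#     ranks:
#       - name: "rank1"
#         limit: "2Ti"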
ranks:
- highWatermark: 90
# * 'limit' : The maximum (bytes) per rank that can
# be allocated across all resource groups.
limit: "1Gi"
# * 'low_watermark' : Percentage used recovery threshold.
# Once usage exceeds the 'high_watermark', evictions
# will continue until usage falls below this recovery
# threshold. Default is "80", signifying an 80% usage
# threshold.
lowWatermark: 80
name: string
# A base directory to use as a space for this tier.
path: "default" provisioner: "docker.io/hostpath"
# Kubernetes Persistent Volume Claim for this disk tier.
volumeClaim:
# APIVersion defines the versioned schema of this
# representation of an object. Servers should convert
# recognized schemas to the latest internal value, and
# may reject unrecognized values. More info:
# https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources
apiVersion: app.kinetica.com/v1
# Kind is a string value representing the REST resource
# this object represents. Servers may infer this from the
# endpoint the client submits requests to. Cannot be
# updated. In CamelCase. More info:
# https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds
kind: KineticaCluster
# Standard object's metadata. More info:
# https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata
metadata: {}
# spec defines the desired characteristics of a volume
# requested by a pod author. More info:
# https://kubernetes.io/docs/concepts/storage/persistent-volumes#persistentvolumeclaims
spec:
# accessModes contains the desired access modes the
# volume should have. More info:
# https://kubernetes.io/docs/concepts/storage/persistent-volumes#access-modes-1
accessModes: ["string"]
# dataSource field can be used to specify either: * An
# existing VolumeSnapshot object
# (snapshot.storage.k8s.io/VolumeSnapshot) * An
# existing PVC (PersistentVolumeClaim) If the
# provisioner or an external controller can support the
# specified data source, it will create a new volume
# based on the contents of the specified data source.
# When the AnyVolumeDataSource feature gate is enabled,
# dataSource contents will be copied to dataSourceRef,
# and dataSourceRef contents will be copied to
# dataSource when dataSourceRef.namespace is not
# specified. If the namespace is specified, then
# dataSourceRef will not be copied to dataSource.
dataSource:
# APIGroup is the group for the resource being
# referenced. If APIGroup is not specified, the
# specified Kind must be in the core API group. For
# any other third-party types, APIGroup is required.
apiGroup: string
# Kind is the type of resource being referenced
kind: KineticaCluster
# Name is the name of resource being referenced
name: string
# dataSourceRef specifies the object from which to
# populate the volume with data, if a non-empty volume
# is desired. This may be any object from a non-empty
# API group (non core object) or a
# PersistentVolumeClaim object. When this field is
# specified, volume binding will only succeed if the
# type of the specified object matches some installed
# volume populator or dynamic provisioner. This field
# will replace the functionality of the dataSource
# field and as such if both fields are non-empty, they
# must have the same value. For backwards
# compatibility, when namespace isn't specified in
# dataSourceRef, both fields (dataSource and
# dataSourceRef) will be set to the same value
# automatically if one of them is empty and the other
# is non-empty. When namespace is specified in
# dataSourceRef, dataSource isn't set to the same value
# and must be empty. There are three important
# differences between dataSource and dataSourceRef: *
# While dataSource only allows two specific types of
# objects, dataSourceRef allows any non-core object, as
# well as PersistentVolumeClaim objects. * While
# dataSource ignores disallowed values (dropping them),
# dataSourceRef preserves all values, and generates an
# error if a disallowed value is specified. * While
# dataSource only allows local objects, dataSourceRef
# allows objects in any namespaces. (Beta) Using this
# field requires the AnyVolumeDataSource feature gate
# to be enabled. (Alpha) Using the namespace field of
# dataSourceRef requires the
# CrossNamespaceVolumeDataSource feature gate to be
# enabled.
dataSourceRef:
# APIGroup is the group for the resource being
# referenced. If APIGroup is not specified, the
# specified Kind must be in the core API group. For
# any other third-party types, APIGroup is required.
apiGroup: string
# Kind is the type of resource being referenced
kind: KineticaCluster
# Name is the name of resource being referenced
name: string
# Namespace is the namespace of resource being
# referenced Note that when a namespace is specified,
# a gateway.networking.k8s.io/ReferenceGrant object
# is required in the referent namespace to allow that
# namespace's owner to accept the reference. See the
# ReferenceGrant documentation for details.
# (Alpha) This field requires the
# CrossNamespaceVolumeDataSource feature gate to be
# enabled.
namespace: string
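# Example (illustrative only): pre-populating the claim from an
# existing VolumeSnapshot; the snapshot name below is hypothetical.
#   dataSource:
#     apiGroup: snapshot.storage.k8s.io
#     kind: VolumeSnapshot
#     name: gpudb-persist-snapshot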
# resources represents the minimum resources the volume
# should have. If RecoverVolumeExpansionFailure feature
# is enabled users are allowed to specify resource
# requirements that are lower than previous value but
# must still be higher than capacity recorded in the
# status field of the claim. More info:
# https://kubernetes.io/docs/concepts/storage/persistent-volumes#resources
resources:
# Claims lists the names of resources, defined in
# spec.resourceClaims, that are used by this
# container. This is an alpha field and requires
# enabling the DynamicResourceAllocation feature
# gate. This field is immutable. It can only be set
# for containers.
claims:
- name: string
# Limits describes the maximum amount of compute
# resources allowed. More info:
# https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/
limits: {}
# Requests describes the minimum amount of compute
# resources required. If Requests is omitted for a
# container, it defaults to Limits if that is
# explicitly specified, otherwise to an
# implementation-defined value. Requests cannot
# exceed Limits. More info:
# https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/
requests: {}
# selector is a label query over volumes to consider for
# binding.
selector:
# matchExpressions is a list of label selector
# requirements. The requirements are ANDed.
matchExpressions:
- key: string
# operator represents a key's relationship to a set
# of values. Valid operators are In, NotIn, Exists
# and DoesNotExist.
operator: string
# values is an array of string values. If the
# operator is In or NotIn, the values array must be
# non-empty. If the operator is Exists or
# DoesNotExist, the values array must be empty.
# This array is replaced during a strategic merge
# patch.
values: ["string"]
# matchLabels is a map of {key,value} pairs. A single
# {key,value} in the matchLabels map is equivalent to
# an element of matchExpressions, whose key field
# is "key", the operator is "In", and the values
# array contains only "value". The requirements are
# ANDed.
matchLabels: {}
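# Example (illustrative only): bind only to volumes carrying
# particular labels; the label keys and values are hypothetical.
#   selector:
#     matchLabels:
#       app: gpudb
#     matchExpressions:
#     - key: tier
#       operator: In
#       values: ["persist"]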
# storageClassName is the name of the StorageClass
# required by the claim. More info:
# https://kubernetes.io/docs/concepts/storage/persistent-volumes#class-1
storageClassName: string
# volumeMode defines what type of volume is required by
# the claim. Value of Filesystem is implied when not
# included in claim spec.
volumeMode: string
# volumeName is the binding reference to the
# PersistentVolume backing this claim.
volumeName: string
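# Example (illustrative only): a minimal claim spec; the storage
# class name and requested size are placeholders, not sizing
# guidance.
#   spec:
#     accessModes: ["ReadWriteOnce"]
#     storageClassName: "standard"
#     volumeMode: Filesystem
#     resources:
#       requests:
#         storage: 100Gi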
# status represents the current information/status of a
# persistent volume claim. Read-only. More info:
# https://kubernetes.io/docs/concepts/storage/persistent-volumes#persistentvolumeclaims
status:
# accessModes contains the actual access modes the
# volume backing the PVC has. More info:
# https://kubernetes.io/docs/concepts/storage/persistent-volumes#access-modes-1
accessModes: ["string"]
# allocatedResources tracks the storage capacity allocated to a
# PVC. It may be larger than the actual capacity when a
# volume expansion operation is requested. For storage
# quota, the larger value from allocatedResources and
# PVC.spec.resources is used. If allocatedResources is
# not set, PVC.spec.resources alone is used for quota
# calculation. If a volume expansion capacity request
# is lowered, allocatedResources is only lowered if
# there are no expansion operations in progress and if
# the actual volume capacity is equal or lower than the
# requested capacity. This is an alpha field and
# requires enabling RecoverVolumeExpansionFailure
# feature.
allocatedResources: {}
# capacity represents the actual resources of the
# underlying volume.
capacity: {}
# conditions is the current Condition of persistent
# volume claim. If underlying persistent volume is
# being resized then the Condition will be set
# to 'ResizeStarted'.
conditions:
- lastProbeTime: string
# lastTransitionTime is the time the condition
# transitioned from one status to another.
lastTransitionTime: string
# message is the human-readable message indicating
# details about last transition.
message: string
# reason is a unique, short, machine-understandable string that
# gives the reason for the condition's last transition. If it
# reports "ResizeStarted" that means the underlying
# persistent volume is being resized.
reason: string
status: string
# PersistentVolumeClaimConditionType is a valid value
# of PersistentVolumeClaimCondition.Type
type: string
# phase represents the current phase of
# PersistentVolumeClaim.
phase: string
# resizeStatus stores status of resize operation.
# ResizeStatus is not set by default but when expansion
# is complete resizeStatus is set to empty string by
# resize controller or kubelet. This is an alpha field
# and requires enabling RecoverVolumeExpansionFailure
# feature.
resizeStatus: string
# The RAMTier represents the RAM available for data storage per
# rank. The RAM Tier is NOT used for small, non-data objects or
# variables that are allocated and deallocated for program flow
# control or used to store metadata or other similar
# information; these continue to use either the stack or the
# regular runtime memory allocator. This tier should be sized
# on each machine such that there is sufficient RAM left over
# to handle this overhead, as well as the needs of other
# processes running on the same machine.
ramTier:
# The RAM Tier represents the RAM available for data storage
# per rank. The RAM Tier is NOT used for small, non-data
# objects or variables that are allocated and deallocated for
# program flow control or used to store metadata or other
# similar information; these continue to use either the stack
# or the regular runtime memory allocator. This tier should
# be sized on each machine such that there is sufficient RAM
# left over to handle this overhead, as well as the needs of
# other processes running on the same machine. A default
# memory limit and eviction thresholds can be set across all
# ranks, while one or more ranks may be configured to
# override those defaults. The general format for RAM
# settings:
# # tier.ram.[default|rank<#>].<parameter>
# Valid *parameter* names include:
# * 'limit' : The maximum RAM (bytes) per rank that can be
#   allocated across all resource groups. Default is -1,
#   signifying no limit and ignore watermark settings.
# * 'high_watermark' : RAM percentage used eviction threshold.
#   Once memory usage exceeds this value, evictions from this
#   tier will be scheduled in the background and continue until
#   the 'low_watermark' percentage usage is reached. Default
#   is "90", signifying a 90% memory usage threshold.
# * 'low_watermark' : RAM percentage used recovery threshold.
#   Once memory usage exceeds the 'high_watermark', evictions
#   will continue until memory usage falls below this recovery
#   threshold. Default is "50", signifying a 50% memory usage
#   threshold.
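# Example (illustrative only): equivalent gpudb.conf-style settings
# following the format above, assuming a hypothetical machine with
# roughly 64 GiB of RAM and leaving rank 0 unlimited.
#   tier.ram.default.limit          = 50000000000
#   tier.ram.default.high_watermark = 90
#   tier.ram.default.low_watermark  = 50
#   tier.ram.rank0.limit            = -1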
default:
# * 'high_watermark' : Percentage used eviction threshold.
# Once usage exceeds this value, evictions from this
# tier will be scheduled in the background and continue
# until the 'low_watermark' percentage usage is reached.
# Default is "90", signifying a 90% memory usage
# threshold.
highWatermark: 90
# * 'limit' : The maximum (bytes) per rank that can
# be allocated across all resource groups.
limit: "1Gi"
# * 'low_watermark' : Percentage used recovery threshold.
# Once usage exceeds the 'high_watermark', evictions
# will continue until usage falls below this recovery
# threshold. Default is "80", signifying an 80% usage
# threshold.
lowWatermark: 80
name: string
# The maximum RAM (bytes) for processing data at rank 0.
# Overrides the overall default RAM tier
# limit. #tier.ram.rank0.limit = -1
ranks:
- highWatermark: 90
# * 'limit' : The maximum (bytes) per rank that can
# be allocated across all resource groups.
limit: "1Gi"
# * 'low_watermark' : Percentage used recovery threshold.
# Once usage exceeds the 'high_watermark', evictions
# will continue until usage falls below this recovery
# threshold. Default is "80", signifying an 80% usage
# threshold.
lowWatermark: 80
name: string
tieredStrategy:
# Default strategy to apply to tables or columns when one was
# not provided during table creation. This strategy is also
# applied to a resource group that does not specify one at time
# of creation. The strategy is formed by chaining together the
# tier types and their respective eviction priorities. Any
# given tier may appear no more than once in the chain and the
# priority must be in range "1" - "10", where "1" is the lowest
# priority (first to be evicted) and "9" is the highest
# priority (last to be evicted). A priority of "10" indicates
# that an object is unevictable. Each tier's priority is in
# relation to the priority of other objects in the same tier;
# e.g., "RAM 9, DISK2 1" indicates that an object will be the
# highest evictable priority among objects in the RAM Tier
# (last evicted), but that it will be the lowest priority among
# objects in the Disk Tier named 'disk2' (first evicted). Note
# that since an object can only have one Disk Tier instance in
# its strategy, the corresponding priority will only apply in
# relation to other objects in Disk Tier instance 'disk2'. See
# the Tiered Storage section for more information about tier
# type names.
# Format: <tier1> <priority>, <tier2> <priority>, <tier3> <priority>, ...
# Examples using a Disk Tier named 'disk2' and a Cold Storage
# Tier 'cold0':
#   vram 3, ram 5, disk2 3, persist 10
#   vram 3, ram 5, disk2 3, persist 6, cold0 10
# tier_strategy.default = VRAM 1, RAM 5, PERSIST 5
default: "VRAM 1, RAM 5, PERSIST 5"
# Predicate evaluation interval (in minutes) - indicates the
# interval at which the tier strategy predicates are evaluated
predicateEvaluationInterval: 60
video:
# System default TTL for videos. Time-to-live (TTL) is the
# number of minutes before a video will expire and be removed,
# or -1 to disable. video_default_ttl = -1
defaultTTL: "-1"
# The maximum number of videos to allow on the system. Set to 0
# to disable video rendering. Set to -1 to allow an unlimited
# number of videos. video_max_count = -1
maxCount: "-1"
# Directory where video files should be temporarily stored while
# rendering. Only accessed by rank 0. video_temp_directory =
# ${gaia.temp_directory}/gpudb-temp-videos
tmpDir: "${gaia.temp_directory}/gpudb-temp-videos"
# VisualizationConfig
visualization:
# Enable level-of-details rendering for fast interaction with
# large WKT polygon data. Only available for the OpenGL
# renderer (when 'enable_opengl_renderer' is "true").
enableLODRendering: true
# If "true", enable hardware-accelerated OpenGL renderer;
# if "false", use the software-based Cairo renderer.
enableOpenGLRenderer: true
# If "true", enable Vector Tile Service (VTS) to support
# client-side visualization of geospatial data. Enabling this
# option increases memory usage on ingestion.
enableVectorTileService: false
# Longitude and latitude ranges of geospatial data for which
# level-of-details representations are being generated. The
# parameter order is: <min_longitude> <min_latitude>
# <max_longitude> <max_latitude> The default values span over
# the world, but the level-of-details rendering becomes more
# efficient when the precise extent of geospatial data is
# specified. Default: [-180, -90, 180, 90]
lodDataExtent: [integer]
# The extent to which shape data are pre-processed for
# level-of-details rendering during data insert/load or
# processed on-the-fly in rendering time. This is a trade-off
# between speed and memory. The higher the value, the faster
# level-of-details rendering is, but the more memory is used
# for storing processed shape data. The maximum level is "10"
# (most shape data are pre-processed) and the minimum level
# is "0".
lodPreProcessingLevel: 5
# The number of subregions in horizontal and vertical geospatial
# data extent. The default values of "12 6" divide the world
# into subregions of 30 degree (lon.) x 30 degree (lat.)
lodSubRegionNum: [12,6]
# A base image resolution (width and height in pixels) at which
# a subregion would be rendered in a global view spanning over
# the whole dataset. Based on this resolution level-of-details
# representations are generated for the polygons located in the
# subregion.
lodSubRegionResolution: [512,512]
# Maximum heatmap size (in pixels) that can be generated. This
# reserves 'max_heatmap_size' ^ 2 * 8 bytes of GPU memory
# at **rank0**
maxHeatmapSize: 3072
# The maximum number of levels in the level-of-details
# rendering. As the number increases, level-of-details
# rendering becomes effective at higher zoom levels, but it may
# increase memory usage for storing level-of-details
# representations.
maxLODLevel: 8
# Input geometries are pre-processed upon ingestion for faster
# vector tile generation. This parameter determines the
# zoomlevel at which the vector tile pre-processing stops. A
# vector tile request for a higher zoomlevel than this
# parameter takes additional time because the vector tile needs
# to be generated on the fly.
maxVectorTileZoomLevel: 8
# Input geometries are pre-processed upon ingestion for faster
# vector tile generation. This parameter determines the
# zoomlevel from which the vector tile pre-processing starts. A
# vector tile request for a lower zoomlevel than this parameter
# takes additional time because the vector tile needs to be
# generated on the fly.
minVectorTileZoomLevel: 1
# The number of samples to use for antialiasing. Higher numbers
# will improve image quality but require more GPU memory to
# store the samples on worker ranks. This affects only the
# OpenGL renderer. Value may be "0", "4", "8" or "16". When "0"
# antialiasing is disabled. The default value is "0".
openGLAntialiasingLevel: 1
# Threshold number of points (per-TOM) at which point rendering
# switches to fast mode.
pointRenderThreshold: 100000
# Single-precision coordinates are used for usual rendering
# processes, but depending on the precision of geometry data
# and use case, double precision processing may be required at
# a high zoomlevel. Double precision rendering processes are
# used from the zoomlevel specified by this parameter, which is
# corresponding to a zoomlevel of TMS or Google map service.
renderingPrecisionThreshold: 30
# The image width/height (in pixels) of svg symbols cached in
# the OpenGL symbol cache.
symbolResolution: 100
# The width/height (in pixels) of an OpenGL texture which caches
# symbol images for OpenGL rendering.
symbolTextureSize: 4000
# Threshold for the number of points (per-TOM) after which
# symbology rendering falls back to regular rendering
symbologyRenderThreshold: 10000
# The name of map tiler used for Vector Tile Service. "google"
# and "tms" map tilers are supported currently. This parameter
# should be matched with the map tiler of clients' vector tile
# renderer.
vectorTileMapTiler: "google" workbench:
# Start the Workbench app on the head host when host manager is
# started. enable_workbench = false
enable: false
# # HTTP server port for Workbench if enabled. workbench_port =
# 8000
port:
# Number of port to expose on the pod's IP address. This must
# be a valid port number, 0 < x < 65536.
containerPort: 1
# What host IP to bind the external port to.
hostIP: string
# Number of port to expose on the host. If specified, this
# must be a valid port number, 0 < x < 65536. If HostNetwork
# is specified, this must match ContainerPort. Most
# containers do not need this.
hostPort: 1
# If specified, this must be an IANA_SVC_NAME and unique
# within the pod. Each named port in a pod must have a unique
# name. Name for the port that can be referred to by
# services.
name: string
# Protocol for port. Must be UDP, TCP, or SCTP. Defaults
# to "TCP".
protocol: "TCP"
# The fully qualified URL used on the Ingress records for any
# exposed services. Completed by the Operator. DO NOT POPULATE
# MANUALLY.
fqdn: ""
# The name of the parent HA Ring this cluster belongs to.
haRingName: "default"
# Whether to enable the separate node 'pools' for "infra", "compute"
# pod scheduling. Default: false
hasPools: true
# The port the HostManager will be running in each pod in the
# cluster. Default: 9300, TCP
hostManagerPort:
# Number of port to expose on the pod's IP address. This must be a
# valid port number, 0 < x < 65536.
containerPort: 1
# What host IP to bind the external port to.
hostIP: string
# Number of port to expose on the host. If specified, this must be
# a valid port number, 0 < x < 65536. If HostNetwork is
# specified, this must match ContainerPort. Most containers do
# not need this.
hostPort: 1
# If specified, this must be an IANA_SVC_NAME and unique within
# the pod. Each named port in a pod must have a unique name. Name
# for the port that can be referred to by services.
name: string
# Protocol for port. Must be UDP, TCP, or SCTP. Defaults
# to "TCP".
protocol: "TCP"
# Set the name of the container image to use.
image: "kinetica/kinetica-k8s-intel:v7.1.6.0"
# Set the policy for pulling container images.
imagePullPolicy: "IfNotPresent"
# ImagePullSecrets is an optional list of references to secrets in
# the same gpudb-namespace to use for pulling any of the images
# used by this PodSpec. If specified, these secrets will be passed
# to individual puller implementations for them to use. For
# example, in the case of docker, only DockerConfig type secrets
# are honored.
imagePullSecrets:
- name: string
# Labels - Pod labels to be applied to the Statefulset DB pods.
labels: {}
# The Ingress Endpoint that GAdmin will be running on.
letsEncrypt:
# Enable LetsEncrypt for Certificate generation.
enabled: false
# LetsEncryptEnvironment
environment: "staging"
# Set the Kinetica DB License.
license: string
# Periodic probe of container liveness. Container will be restarted
# if the probe fails. Cannot be updated. More info:
# https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle#container-probes
livenessProbe:
# Minimum consecutive failures for the probe to be considered
# failed after having succeeded. Defaults to 3. Minimum value is
# 1.
failureThreshold: 3
# Number of seconds after the container has started before
# liveness probes are initiated. More info:
# https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle#container-probes
initialDelaySeconds: 10
# How often (in seconds) to perform the probe. Default to 10
# seconds. Minimum value is 1.
periodSeconds: 10
# LoggerConfig Kinetica DB Logger Configuration Object Configure the
# LOG4CPLUS logger for the DB. Field takes a string containing the
# full configuration. If not specified a template file is used
# during DB configuration generation.
loggerConfig:
configString: string
# Metrics - DB Metrics scrape & forward configuration for
# `fluent-bit`.
metricsRegistryRepositoryTag:
# Set the policy for pulling container images.
imagePullPolicy: "IfNotPresent"
# ImagePullSecrets is an optional list of references to secrets in
# the same gpudb-namespace to use for pulling any of the images
# used by this PodSpec. If specified, these secrets will be
# passed to individual puller implementations for them to use.
# For example, in the case of docker, only DockerConfig type
# secrets are honored.
imagePullSecrets:
- name: string
# The image registry & optional port containing the repository.
registry: "docker.io"
# The image repository path.
repository: "kineticadevcloud/"
# SemVer = Semantic Version for the Tag SemVer semver.Version
semVer: string
# The image sha.
sha: ""
# The image tag.
tag: "v7.1.5.2"
# Metrics - `fluent-bit` container requests/limits.
metricsResources:
# Claims lists the names of resources, defined in
# spec.resourceClaims, that are used by this container. This is
# an alpha field and requires enabling the
# DynamicResourceAllocation feature gate. This field is
# immutable. It can only be set for containers.
claims:
- name: string
# Limits describes the maximum amount of compute resources
# allowed. More info:
# https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/
limits: {}
# Requests describes the minimum amount of compute resources
# required. If Requests is omitted for a container, it defaults
# to Limits if that is explicitly specified, otherwise to an
# implementation-defined value. Requests cannot exceed Limits.
# More info:
# https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/
requests: {}
# NodeSelector - NodeSelector to be applied to the DB Pods
nodeSelector: {}
# Internal Operator field only - do not use.
originalReplicas: 1
# podManagementPolicy controls how pods are created during initial
# scale up, when replacing pods on nodes, or when scaling down. The
# default policy is `OrderedReady`, where pods are created in
# increasing order (pod-0, then pod-1, etc) and the controller will
# wait until each pod is ready before continuing. When scaling
# down, the pods are removed in the opposite order. The alternative
# policy is `Parallel` which will create pods in parallel to match
# the desired scale without waiting, and on scale down will delete
# all pods at once.
podManagementPolicy: "Parallel"
# Number of ranks per node as a uint16 i.e. 1-65535 ranks per node.
# Default: 1
ranksPerNode: 1
# Periodic probe of container service readiness. Container will be
# removed from service endpoints if the probe fails. Cannot be
# updated. More info:
# https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle#container-probes
readinessProbe:
# Minimum consecutive failures for the probe to be considered
# failed after having succeeded. Defaults to 3. Minimum value is
# 1.
failureThreshold: 3
# Number of seconds after the container has started before
# liveness probes are initiated. More info:
# https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle#container-probes
initialDelaySeconds: 10
# How often (in seconds) to perform the probe. Default to 10
# seconds. Minimum value is 1.
periodSeconds: 10
# The number of DB ranks i.e. replicas that the cluster will spin
# up. Default: 3
replicas: 3
# Limit the resources a DB Pod can consume.
resources:
# Claims lists the names of resources, defined in
# spec.resourceClaims, that are used by this container. This is
# an alpha field and requires enabling the
# DynamicResourceAllocation feature gate. This field is
# immutable. It can only be set for containers.
claims:
- name: string
# Limits describes the maximum amount of compute resources
# allowed. More info:
# https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/
limits: {}
# Requests describes the minimum amount of compute resources
# required. If Requests is omitted for a container, it defaults
# to Limits if that is explicitly specified, otherwise to an
# implementation-defined value. Requests cannot exceed Limits.
# More info:
# https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/
requests: {}
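# Example (illustrative only): cap each DB pod's CPU and memory; the
# figures are placeholders, not sizing guidance.
#   resources:
#     requests:
#       cpu: "4"
#       memory: 16Gi
#     limits:
#       cpu: "8"
#       memory: 32Gi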
# SecurityContext holds security configuration that will be applied
# to a container. Some fields are present in both SecurityContext
# and PodSecurityContext. When both are set, the values in
# SecurityContext take precedence.
securityContext:
# AllowPrivilegeEscalation controls whether a process can gain
# more privileges than its parent process. This bool directly
# controls if the no_new_privs flag will be set on the container
# process. AllowPrivilegeEscalation is true always when the
# container is: 1) run as Privileged 2) has CAP_SYS_ADMIN Note
# that this field cannot be set when spec.os.name is windows.
allowPrivilegeEscalation: true
# The capabilities to add/drop when running containers. Defaults
# to the default set of capabilities granted by the container
# runtime. Note that this field cannot be set when spec.os.name
# is windows.
capabilities:
# Added capabilities
add: ["string"]
# Removed capabilities
drop: ["string"]
# Run container in privileged mode. Processes in privileged
# containers are essentially equivalent to root on the host.
# Defaults to false. Note that this field cannot be set when
# spec.os.name is windows.
privileged: true
# procMount denotes the type of proc mount to use for the
# containers. The default is DefaultProcMount which uses the
# container runtime defaults for readonly paths and masked paths.
# This requires the ProcMountType feature flag to be enabled.
# Note that this field cannot be set when spec.os.name is
# windows.
procMount: string
# Whether this container has a read-only root filesystem. Default
# is false. Note that this field cannot be set when spec.os.name
# is windows.
readOnlyRootFilesystem: true
# The GID to run the entrypoint of the container process. Uses
# runtime default if unset. May also be set in
# PodSecurityContext. If set in both SecurityContext and
# PodSecurityContext, the value specified in SecurityContext
# takes precedence. Note that this field cannot be set when
# spec.os.name is windows.
runAsGroup: 1
# Indicates that the container must run as a non-root user. If
# true, the Kubelet will validate the image at runtime to ensure
# that it does not run as UID 0 (root) and fail to start the
# container if it does. If unset or false, no such validation
# will be performed. May also be set in PodSecurityContext. If
# set in both SecurityContext and PodSecurityContext, the value
# specified in SecurityContext takes precedence.
runAsNonRoot: true
# The UID to run the entrypoint of the container process. Defaults
# to user specified in image metadata if unspecified. May also be
# set in PodSecurityContext. If set in both SecurityContext and
# PodSecurityContext, the value specified in SecurityContext
# takes precedence. Note that this field cannot be set when
# spec.os.name is windows.
runAsUser: 1
# The SELinux context to be applied to the container. If
# unspecified, the container runtime will allocate a random
# SELinux context for each container. May also be set in
# PodSecurityContext. If set in both SecurityContext and
# PodSecurityContext, the value specified in SecurityContext
# takes precedence. Note that this field cannot be set when
# spec.os.name is windows.
seLinuxOptions:
# Level is SELinux level label that applies to the container.
level: string
# Role is a SELinux role label that applies to the container.
role: string
# Type is a SELinux type label that applies to the container.
type: string
# User is a SELinux user label that applies to the container.
user: string
# The seccomp options to use by this container. If seccomp options
# are provided at both the pod & container level, the container
# options override the pod options. Note that this field cannot
# be set when spec.os.name is windows.
seccompProfile:
# localhostProfile indicates a profile defined in a file on the
# node should be used. The profile must be preconfigured on the
# node to work. Must be a descending path, relative to the
# kubelet's configured seccomp profile location. Must only be
# set if type is "Localhost".
localhostProfile: string
# type indicates which kind of seccomp profile will be applied.
# Valid options are: Localhost - a profile defined in a file on
# the node should be used. RuntimeDefault - the container
# runtime default profile should be used. Unconfined - no
# profile should be applied.
type: string
# The Windows specific settings applied to all containers. If
# unspecified, the options from the PodSecurityContext will be
# used. If set in both SecurityContext and PodSecurityContext,
# the value specified in SecurityContext takes precedence. Note
# that this field cannot be set when spec.os.name is linux.
windowsOptions:
# GMSACredentialSpec is where the GMSA admission webhook
# (https://github.com/kubernetes-sigs/windows-gmsa) inlines the
# contents of the GMSA credential spec named by the
# GMSACredentialSpecName field.
gmsaCredentialSpec: string
# GMSACredentialSpecName is the name of the GMSA credential spec
# to use.
gmsaCredentialSpecName: string
# HostProcess determines if a container should be run as a 'Host
# Process' container. This field is alpha-level and will only
# be honored by components that enable the
# WindowsHostProcessContainers feature flag. Setting this field
# without the feature flag will result in errors when
# validating the Pod. All of a Pod's containers must have the
# same effective HostProcess value (it is not allowed to have a
# mix of HostProcess containers and non-HostProcess
# containers). In addition, if HostProcess is true then
# HostNetwork must also be set to true.
hostProcess: true
# The UserName in Windows to run the entrypoint of the container
# process. Defaults to the user specified in image metadata if
# unspecified. May also be set in PodSecurityContext. If set in
# both SecurityContext and PodSecurityContext, the value
# specified in SecurityContext takes precedence.
runAsUserName: string
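# Example (illustrative only): a typical restricted container
# security context; the UID/GID values are placeholders.
#   securityContext:
#     runAsNonRoot: true
#     runAsUser: 1000
#     runAsGroup: 1000
#     allowPrivilegeEscalation: false
#     readOnlyRootFilesystem: true
#     capabilities:
#       drop: ["ALL"]
#     seccompProfile:
#       type: RuntimeDefault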
# StartupProbe indicates that the Pod has successfully initialized.
# If specified, no other probes are executed until this completes
# successfully. If this probe fails, the Pod will be restarted,
# just as if the livenessProbe failed. This can be used to provide
# different probe parameters at the beginning of a Pod's lifecycle,
# when it might take a long time to load data or warm a cache, than
# during steady-state operation. This cannot be updated. This is an
# alpha feature enabled by the StartupProbe feature flag. More
# info:
# https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle#container-probes
startupProbe:
# Minimum consecutive failures for the probe to be considered
# failed after having succeeded. Defaults to 3. Minimum value is
# 1.
failureThreshold: 3
# Number of seconds after the container has started before
# liveness probes are initiated. More info:
# https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle#container-probes
initialDelaySeconds: 10
# How often (in seconds) to perform the probe. Default to 10
# seconds. Minimum value is 1.
periodSeconds: 10
# HostManagerMonitor is used to monitor the Kinetica DB Ranks. If a
# rank is unavailable for the specified time (MaxRankFailureCount) the
# cluster will be restarted.
hostManagerMonitor:
# The HostMonitor Port for the DB StartupProbe, ReadinessProbe and
# Liveness probes. Default: 8888
db_healthz_port:
# Number of port to expose on the pod's IP address. This must be a
# valid port number, 0 < x < 65536.
containerPort: 1
# What host IP to bind the external port to.
hostIP: string
# Number of port to expose on the host. If specified, this must be
# a valid port number, 0 < x < 65536. If HostNetwork is
# specified, this must match ContainerPort. Most containers do
# not need this.
hostPort: 1
# If specified, this must be an IANA_SVC_NAME and unique within
# the pod. Each named port in a pod must have a unique name. Name
# for the port that can be referred to by services.
name: string
# Protocol for port. Must be UDP, TCP, or SCTP. Defaults
# to "TCP".
protocol: "TCP"
# The HostMonitor Port for the DB StartupProbe, ReadinessProbe and
# Liveness probes. Default: 8889
hm_healthz_port:
# Number of port to expose on the pod's IP address. This must be a
# valid port number, 0 < x < 65536.
containerPort: 1
# What host IP to bind the external port to.
hostIP: string
# Number of port to expose on the host. If specified, this must be
# a valid port number, 0 < x < 65536. If HostNetwork is
# specified, this must match ContainerPort. Most containers do
# not need this.
hostPort: 1
# If specified, this must be an IANA_SVC_NAME and unique within
# the pod. Each named port in a pod must have a unique name. Name
# for the port that can be referred to by services.
name: string
# Protocol for port. Must be UDP, TCP, or SCTP. Defaults
# to "TCP".
protocol: "TCP"
# Periodic probe of container liveness. Container will be restarted
# if the probe fails. Cannot be updated. More info:
# https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle#container-probes
livenessProbe:
# Minimum consecutive failures for the probe to be considered
# failed after having succeeded. Defaults to 3. Minimum value is
# 1.
failureThreshold: 3
# Number of seconds after the container has started before
# liveness probes are initiated. More info:
# https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle#container-probes
initialDelaySeconds: 10
# How often (in seconds) to perform the probe. Default to 10
# seconds. Minimum value is 1.
periodSeconds: 10
# Set the name of the container image to use.
monitorRegistryRepositoryTag:
# Set the policy for pulling container images.
imagePullPolicy: "IfNotPresent"
# ImagePullSecrets is an optional list of references to secrets in
# the same gpudb-namespace to use for pulling any of the images
# used by this PodSpec. If specified, these secrets will be
# passed to individual puller implementations for them to use.
# For example, in the case of docker, only DockerConfig type
# secrets are honored.
imagePullSecrets:
- name: string
# The image registry & optional port containing the repository.
registry: "docker.io"
# The image repository path.
repository: "kineticadevcloud/"
# SemVer = Semantic Version for the Tag SemVer semver.Version
semVer: string
# The image sha.
sha: ""
# The image tag.
tag: "v7.1.5.2"
# Periodic probe of container service readiness. Container will be
# removed from service endpoints if the probe fails. Cannot be
# updated. More info:
# https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle#container-probes
readinessProbe:
# Minimum consecutive failures for the probe to be considered
# failed after having succeeded. Defaults to 3. Minimum value is
# 1.
failureThreshold: 3
# Number of seconds after the container has started before
# liveness probes are initiated. More info:
# https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle#container-probes
initialDelaySeconds: 10
# How often (in seconds) to perform the probe. Default to 10
# seconds. Minimum value is 1.
periodSeconds: 10
# Allow for overriding resource requests/limits.
resources:
# Claims lists the names of resources, defined in
# spec.resourceClaims, that are used by this container. This is
# an alpha field and requires enabling the
# DynamicResourceAllocation feature gate. This field is
# immutable. It can only be set for containers.
claims:
- name: string
# Limits describes the maximum amount of compute resources
# allowed. More info:
# https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/
limits: {}
# Requests describes the minimum amount of compute resources
# required. If Requests is omitted for a container, it defaults
# to Limits if that is explicitly specified, otherwise to an
# implementation-defined value. Requests cannot exceed Limits.
# More info:
# https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/
requests: {}
# StartupProbe indicates that the Pod has successfully initialized.
# If specified, no other probes are executed until this completes
# successfully. If this probe fails, the Pod will be restarted,
# just as if the livenessProbe failed. This can be used to provide
# different probe parameters at the beginning of a Pod's lifecycle,
# when it might take a long time to load data or warm a cache, than
# during steady-state operation. This cannot be updated. This is an
# alpha feature enabled by the StartupProbe feature flag. More
# info:
# https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle#container-probes
startupProbe:
# Minimum consecutive failures for the probe to be considered
# failed after having succeeded. Defaults to 3. Minimum value is
# 1.
failureThreshold: 3
# Number of seconds after the container has started before
# liveness probes are initiated. More info:
# https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle#container-probes
initialDelaySeconds: 10
# How often (in seconds) to perform the probe. Default to 10
# seconds. Minimum value is 1.
periodSeconds: 10
# The platform infrastructure provider e.g. azure, aws, gcp, on-prem
# etc.
infra: "on-prem"
# The Kubernetes Ingress Controller the cluster will be running
# behind e.g. ingress-nginx, Traefik, Ambassador, Gloo, Kong etc.
ingressController: "nginx"
# The LDAP server to connect to.
ldap:
# BaseDN - The root base LDAP Distinguished Name to use as the base
# for the LDAP usage
baseDN: "dc=kinetica,dc=com"
# BindDN - The LDAP Distinguished Name to use for the LDAP
# connectivity/data connectivity/bind
bindDN: "cn=admin,dc=kinetica,dc=com"
# Host - The name of the host to connect to. If IsInLocalK8S=true
# then supply only the name e.g. `openldap` Default: openldap
host: "openldap"
# IsInLocalK8S - Is the LDAP server co-located in the same K8s
# cluster the operator is running in. Default: true
isInLocalK8S: true
# IsLDAPS - Use LDAPS instead of LDAP. Default: false
isLDAPS: false
# Namespace - The namespace the LDAP server is deployed in.
# Default: openldap
namespace: "gpudb"
# Port - Defaults to LDAP Port 389 Default: 389
port: 389
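# Example (illustrative only): point the cluster at an external
# LDAPS server; the host and DNs below are hypothetical.
#   ldap:
#     host: "ldap.example.com"
#     port: 636
#     isLDAPS: true
#     isInLocalK8S: false
#     baseDN: "dc=example,dc=com"
#     bindDN: "cn=admin,dc=example,dc=com"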
# Tells the operator to use Cloud Provider Pay As You Go
# functionality.
payAsYouGo: false
# The Reveal Dashboard Configuration for the Kinetica Cluster.
reveal:
# The port that Reveal will be running on. It runs only on the head
# node pod in the cluster. Default: 8080
containerPort:
# Number of port to expose on the pod's IP address. This must be a
# valid port number, 0 < x < 65536.
containerPort: 1
# What host IP to bind the external port to.
hostIP: string
# Number of port to expose on the host. If specified, this must be
# a valid port number, 0 < x < 65536. If HostNetwork is
# specified, this must match ContainerPort. Most containers do
# not need this.
hostPort: 1
# If specified, this must be an IANA_SVC_NAME and unique within
# the pod. Each named port in a pod must have a unique name. Name
# for the port that can be referred to by services.
name: string
# Protocol for port. Must be UDP, TCP, or SCTP. Defaults
# to "TCP".
protocol: "TCP"
# The Ingress Endpoint that Reveal will be running on.
ingressPath:
# backend defines the referenced service endpoint to which the
# traffic will be forwarded to.
backend:
# resource is an ObjectRef to another Kubernetes resource in the
# namespace of the Ingress object. If resource is specified,
# serviceName and servicePort must not be specified.
resource:
# APIGroup is the group for the resource being referenced. If
# APIGroup is not specified, the specified Kind must be in
# the core API group. For any other third-party types,
# APIGroup is required.
apiGroup: string
# Kind is the type of resource being referenced
kind: KineticaCluster
# Name is the name of resource being referenced
name: string
# serviceName specifies the name of the referenced service.
serviceName: string
# servicePort Specifies the port of the referenced service.
servicePort:
# path is matched against the path of an incoming request.
# Currently it can contain characters disallowed from the
# conventional "path" part of a URL as defined by RFC 3986. Paths
# must begin with a '/' and must be present when using PathType
# with value "Exact" or "Prefix".
path: string
# pathType determines the interpretation of the path matching.
# PathType can be one of the following values: * Exact: Matches
# the URL path exactly. * Prefix: Matches based on a URL path
# prefix split by '/'. Matching is done on a path element by
# element basis. A path element refers to the list of labels in
# the path split by the '/' separator. A request is a match for
# path p if every p is an element-wise prefix of p of the request
# path. Note that if the last element of the path is a substring
# of the last element in request path, it is not a match
# (e.g. /foo/bar matches /foo/bar/baz, but does not
# match /foo/barbaz). * ImplementationSpecific: Interpretation of
# the Path matching is up to the IngressClass. Implementations
# can treat this as a separate PathType or treat it identically
# to Prefix or Exact path types. Implementations are required to
# support all path types. Defaults to ImplementationSpecific.
pathType: string
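# Example (illustrative only): expose Reveal under a path prefix;
# the path, service name and service port are hypothetical.
#   ingressPath:
#     path: "/reveal"
#     pathType: "Prefix"
#     backend:
#       serviceName: "reveal"
#       servicePort: 8080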
# Whether to enable the Reveal Dashboard on the Cluster. Default:
# true
isEnabled: true
# The Stats server to deploy & connect to if required.
stats:
# AlertManager - AlertManager specific configuration.
alertManager:
# Set the arguments for the command within the container to run.
args:
["-c","/opt/gpudb/kagent/stats/prometheus/prometheus --log.level=debug
--config.file=/opt/gpudb/kagent/stats/prometheus/prometheus.yml --web.listen-address=0.0.0.0:9090
--storage.tsdb.path=/opt/gpudb/kagent/stats/storage/prometheus-storage
--storage.tsdb.retention.time=7d --web.enable-lifecycle"]
# Set the command within the container to run.
command: ["/bin/sh"]
# ConfigFile - Set the location of the Loki configuration file.
configFile: "/opt/gpudb/kagent/stats/loki/loki.yml"
# ConfigFileAsConfigMap - If true the ConfigFile is mounted from a
# ConfigMap
configFileAsConfigMap: true
# The port that Stats will be running on. It runs only on the head
# node pod in the cluster. Default: 9091
containerPort:
# Number of port to expose on the pod's IP address. This must be
# a valid port number, 0 < x < 65536.
containerPort: 1
# What host IP to bind the external port to.
hostIP: string
# Number of port to expose on the host. If specified, this must
# be a valid port number, 0 < x < 65536. If HostNetwork is
# specified, this must match ContainerPort. Most containers do
# not need this.
hostPort: 1
# If specified, this must be an IANA_SVC_NAME and unique within
# the pod. Each named port in a pod must have a unique name.
# Name for the port that can be referred to by services.
name: string
# Protocol for port. Must be UDP, TCP, or SCTP. Defaults
# to "TCP".
protocol: "TCP"
# List of environment variables to set in the container.
env:
- name: string
# Variable references $(VAR_NAME) are expanded using the
# previously defined environment variables in the container and
# any service environment variables. If a variable cannot be
# resolved, the reference in the input string will be
# unchanged. Double $$ are reduced to a single $, which allows
# for escaping the $(VAR_NAME) syntax: i.e. "$$(VAR_NAME)" will
# produce the string literal "$(VAR_NAME)". Escaped references
# will never be expanded, regardless of whether the variable
# exists or not. Defaults to "".
value: string
# Source for the environment variable's value. Cannot be used if
# value is not empty.
valueFrom:
# Selects a key of a ConfigMap.
configMapKeyRef:
# The key to select.
key: string
# Name of the referent. More info:
# https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names
# TODO: Add other useful fields. apiVersion, kind, uid?
name: string
# Specify whether the ConfigMap or its key must be defined
optional: true
# Selects a field of the pod: supports metadata.name,
# metadata.namespace, `metadata.labels
# ['<KEY>']`, `metadata.annotations['<KEY>']`, spec.nodeName,
# spec.serviceAccountName, status.hostIP, status.podIP,
# status.podIPs.
fieldRef:
# Version of the schema the FieldPath is written in terms
# of, defaults to "v1".
apiVersion: app.kinetica.com/v1
# Path of the field to select in the specified API version.
fieldPath: string
# Selects a resource of the container: only resources limits
# and requests (limits.cpu, limits.memory,
# limits.ephemeral-storage, requests.cpu, requests.memory and
# requests.ephemeral-storage) are currently supported.
resourceFieldRef:
# Container name: required for volumes, optional for env
# vars
containerName: string
# Specifies the output format of the exposed resources,
# defaults to "1"
divisor:
# Required: resource to select
resource: string
# Selects a key of a secret in the pod's namespace
secretKeyRef:
# The key of the secret to select from. Must be a valid
# secret key.
key: string
# Name of the referent. More info:
# https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names
# TODO: Add other useful fields. apiVersion, kind, uid?
name: string
# Specify whether the Secret or its key must be defined
optional: true
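# Example (illustrative only): common env-var sources - a literal
# value and the Downward-API pod name; the variable names are
# hypothetical.
#   env:
#   - name: LOG_LEVEL
#     value: "debug"
#   - name: POD_NAME
#     valueFrom:
#       fieldRef:
#         fieldPath: metadata.name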
# Set the name of the container image to use.
image:
# Set the policy for pulling container images.
imagePullPolicy: "IfNotPresent"
# ImagePullSecrets is an optional list of references to secrets
# in the same gpudb-namespace to use for pulling any of the
# images used by this PodSpec. If specified, these secrets will
# be passed to individual puller implementations for them to
# use. For example, in the case of docker, only DockerConfig
# type secrets are honored.
imagePullSecrets:
- name: string
# The image registry & optional port containing the repository.
registry: "docker.io"
# The image repository path.
repository: "kineticadevcloud/"
# SemVer = Semantic Version for the Tag SemVer semver.Version
semVer: string
# The image sha.
sha: ""
# The image tag.
tag: "v7.1.5.2"
# Whether to enable the Stats Server on the Cluster. Default:
# true
isEnabled: true
# Periodic probe of container liveness. Container will be
# restarted if the probe fails. Cannot be updated. More info:
# https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle#container-probes
livenessProbe:
# Exec specifies the action to take.
exec:
# Command is the command line to execute inside the container,
# the working directory for the command is root ('/') in the
# container's filesystem. The command is simply exec'd, it is
# not run inside a shell, so traditional shell instructions
# ('|', etc) won't work. To use a shell, you need to
# explicitly call out to that shell. Exit status of 0 is
# treated as live/healthy and non-zero is unhealthy.
command: ["string"]
# Minimum consecutive failures for the probe to be considered
# failed after having succeeded. Defaults to 3. Minimum value
# is 1.
failureThreshold: 1
# GRPC specifies an action involving a GRPC port.
grpc:
# Port number of the gRPC service. Number must be in the range
# 1 to 65535.
port: 1
# Service is the name of the service to place in the gRPC
# HealthCheckRequest
# (see
# https://github.com/grpc/grpc/blob/master/doc/health-checking.md).
# If this is not specified, the default behavior is defined
# by gRPC.
service: string
# HTTPGet specifies the http request to perform.
httpGet:
# Host name to connect to, defaults to the pod IP. You
# probably want to set "Host" in httpHeaders instead.
host: string
# Custom headers to set in the request. HTTP allows repeated
# headers.
httpHeaders:
- name: string
# The header field value
value: string
# Path to access on the HTTP server.
path: string
# Name or number of the port to access on the container.
# Number must be in the range 1 to 65535. Name must be an
# IANA_SVC_NAME.
port:
# Scheme to use for connecting to the host. Defaults to HTTP.
scheme: string
# Number of seconds after the container has started before
# liveness probes are initiated. More info:
# https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle#container-probes
initialDelaySeconds: 1
# How often (in seconds) to perform the probe. Default to 10
# seconds. Minimum value is 1.
periodSeconds: 1
# Minimum consecutive successes for the probe to be considered
# successful after having failed. Defaults to 1. Must be 1 for
# liveness and startup. Minimum value is 1.
successThreshold: 1
# TCPSocket specifies an action involving a TCP port.
tcpSocket:
# Optional: Host name to connect to, defaults to the pod IP.
host: string
# Number or name of the port to access on the container.
# Number must be in the range 1 to 65535. Name must be an
# IANA_SVC_NAME.
port:
# Optional duration in seconds the pod needs to terminate
# gracefully upon probe failure. The grace period is the
# duration in seconds after the processes running in the pod
# are sent a termination signal and the time when the processes
# are forcibly halted with a kill signal. Set this value longer
# than the expected cleanup time for your process. If this
# value is nil, the pod's terminationGracePeriodSeconds will be
# used. Otherwise, this value overrides the value provided by
# the pod spec. Value must be non-negative integer. The value
# zero indicates stop immediately via the kill signal
# (no opportunity to shut down). This is a beta field and
# requires enabling ProbeTerminationGracePeriod feature gate.
# Minimum value is 1. spec.terminationGracePeriodSeconds is
# used if unset.
terminationGracePeriodSeconds: 1
# Number of seconds after which the probe times out. Defaults to
# 1 second. Minimum value is 1. More info:
# https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle#container-probes
timeoutSeconds: 1
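# Example (illustrative only): an HTTP liveness probe against a
# health endpoint; the path and port below are hypothetical.
#   livenessProbe:
#     httpGet:
#       path: "/healthz"
#       port: 9091
#     initialDelaySeconds: 10
#     periodSeconds: 10
#     failureThreshold: 3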
# Logs - Set the location of the Loki configuration file.
logs: "/opt/gpudb/kagent/stats/logs" name: "stats"
# Periodic probe of container service readiness. Container will be
# removed from service endpoints if the probe fails. Cannot be
# updated. More info:
# https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle#container-probes
readinessProbe:
# Exec specifies the action to take.
exec:
# Command is the command line to execute inside the container,
# the working directory for the command is root ('/') in the
# container's filesystem. The command is simply exec'd, it is
# not run inside a shell, so traditional shell instructions
# ('|', etc) won't work. To use a shell, you need to
# explicitly call out to that shell. Exit status of 0 is
# treated as live/healthy and non-zero is unhealthy.
command: ["string"]
# Minimum consecutive failures for the probe to be considered
# failed after having succeeded. Defaults to 3. Minimum value
# is 1.
failureThreshold: 1
# GRPC specifies an action involving a GRPC port.
grpc:
# Port number of the gRPC service. Number must be in the range
# 1 to 65535.
port: 1
# Service is the name of the service to place in the gRPC
# HealthCheckRequest
# (see
# https://github.com/grpc/grpc/blob/master/doc/health-checking.md).
# If this is not specified, the default behavior is defined
# by gRPC.
service: string
# HTTPGet specifies the http request to perform.
httpGet:
# Host name to connect to, defaults to the pod IP. You
# probably want to set "Host" in httpHeaders instead.
host: string
# Custom headers to set in the request. HTTP allows repeated
# headers.
httpHeaders:
- name: string
# The header field value
value: string
# Path to access on the HTTP server.
path: string
# Name or number of the port to access on the container.
# Number must be in the range 1 to 65535. Name must be an
# IANA_SVC_NAME.
port:
# Scheme to use for connecting to the host. Defaults to HTTP.
scheme: string
# Number of seconds after the container has started before
# liveness probes are initiated. More info:
# https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle#container-probes
initialDelaySeconds: 1
# How often (in seconds) to perform the probe. Default to 10
# seconds. Minimum value is 1.
periodSeconds: 1
# Minimum consecutive successes for the probe to be considered
# successful after having failed. Defaults to 1. Must be 1 for
# liveness and startup. Minimum value is 1.
successThreshold: 1
# TCPSocket specifies an action involving a TCP port.
tcpSocket:
# Optional: Host name to connect to, defaults to the pod IP.
host: string
# Number or name of the port to access on the container.
# Number must be in the range 1 to 65535. Name must be an
# IANA_SVC_NAME.
port:
# Optional duration in seconds the pod needs to terminate
# gracefully upon probe failure. The grace period is the
# duration in seconds after the processes running in the pod
# are sent a termination signal and the time when the processes
# are forcibly halted with a kill signal. Set this value longer
# than the expected cleanup time for your process. If this
# value is nil, the pod's terminationGracePeriodSeconds will be
# used. Otherwise, this value overrides the value provided by
# the pod spec. Value must be non-negative integer. The value
# zero indicates stop immediately via the kill signal
# (no opportunity to shut down). This is a beta field and
# requires enabling ProbeTerminationGracePeriod feature gate.
# Minimum value is 1. spec.terminationGracePeriodSeconds is
# used if unset.
terminationGracePeriodSeconds: 1
# Number of seconds after which the probe times out. Defaults to
# 1 second. Minimum value is 1. More info:
# https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle#container-probes
timeoutSeconds: 1
# Resource Requests & Limits for the Stats Pod.
resources:
# Claims lists the names of resources, defined in
# spec.resourceClaims, that are used by this container. This is
# an alpha field and requires enabling the
# DynamicResourceAllocation feature gate. This field is
# immutable. It can only be set for containers.
claims:
- name: string
# Limits describes the maximum amount of compute resources
# allowed. More info:
# https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/
limits: {}
# Requests describes the minimum amount of compute resources
# required. If Requests is omitted for a container, it defaults
# to Limits if that is explicitly specified, otherwise to an
# implementation-defined value. Requests cannot exceed Limits.
# More info:
# https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/
requests: {}
# StoragePath - Set the location of the AlertManager file
# storage.
storagePath: "/opt/gpudb/kagent/stats/storage/alertmanager/alertmanager"
# WebConfigFile - Set the location of the AlertManager
# alertmanager-web-config.yml.
webConfigFile: "/opt/gpudb/kagent/stats/alertmanager/alertmanager-web-config.yml"
# WebListenAddress - Set the location of the AlertManager
# alertmanager-web-config.yml.
webListenAddress: "0.0.0.0:9089"
# Grafana - Grafana specific configuration.
grafana:
# Set the arguments for the command within the container to run.
args:
["-c","/opt/gpudb/kagent/stats/prometheus/prometheus --log.level=debug
--config.file=/opt/gpudb/kagent/stats/prometheus/prometheus.yml --web.listen-address=0.0.0.0:9090
--storage.tsdb.path=/opt/gpudb/kagent/stats/storage/prometheus-storage
--storage.tsdb.retention.time=7d --web.enable-lifecycle"]
# Set the command within the container to run.
command: ["/bin/sh"]
# ConfigFile - Set the location of the Loki configuration file.
configFile: "/opt/gpudb/kagent/stats/loki/loki.yml"
# ConfigFileAsConfigMap - If true the ConfigFile is mounted from a
# ConfigMap
configFileAsConfigMap: true
# The port that Stats will be running on. It runs only on the head
# node pod in the cluster. Default: 9091
containerPort:
# Number of port to expose on the pod's IP address. This must be
# a valid port number, 0 < x < 65536.
containerPort: 1
# What host IP to bind the external port to.
hostIP: string
# Number of port to expose on the host. If specified, this must
# be a valid port number, 0 < x < 65536. If HostNetwork is
# specified, this must match ContainerPort. Most containers do
# not need this.
hostPort: 1
# If specified, this must be an IANA_SVC_NAME and unique within
# the pod. Each named port in a pod must have a unique name.
# Name for the port that can be referred to by services.
name: string
# Protocol for port. Must be UDP, TCP, or SCTP. Defaults
# to "TCP".
protocol: "TCP"
# List of environment variables to set in the container.
env:
- name: string
# Variable references $(VAR_NAME) are expanded using the
# previously defined environment variables in the container and
# any service environment variables. If a variable cannot be
# resolved, the reference in the input string will be
# unchanged. Double $$ are reduced to a single $, which allows
# for escaping the $(VAR_NAME) syntax: i.e. "$$(VAR_NAME)" will
# produce the string literal "$(VAR_NAME)". Escaped references
# will never be expanded, regardless of whether the variable
# exists or not. Defaults to "".
value: string
# Source for the environment variable's value. Cannot be used if
# value is not empty.
valueFrom:
# Selects a key of a ConfigMap.
configMapKeyRef:
# The key to select.
key: string
# Name of the referent. More info:
# https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names
# TODO: Add other useful fields. apiVersion, kind, uid?
name: string
# Specify whether the ConfigMap or its key must be defined
optional: true
# Selects a field of the pod: supports metadata.name,
# metadata.namespace, `metadata.labels
# ['<KEY>']`, `metadata.annotations['<KEY>']`, spec.nodeName,
# spec.serviceAccountName, status.hostIP, status.podIP,
# status.podIPs.
fieldRef:
# Version of the schema the FieldPath is written in terms
# of, defaults to "v1".
apiVersion: app.kinetica.com/v1
# Path of the field to select in the specified API version.
fieldPath: string
# Selects a resource of the container: only resources limits
# and requests (limits.cpu, limits.memory,
# limits.ephemeral-storage, requests.cpu, requests.memory and
# requests.ephemeral-storage) are currently supported.
resourceFieldRef:
# Container name: required for volumes, optional for env
# vars
containerName: string
# Specifies the output format of the exposed resources,
# defaults to "1"
divisor:
# Required: resource to select
resource: string
# Selects a key of a secret in the pod's namespace
secretKeyRef:
# The key of the secret to select from. Must be a valid
# secret key.
key: string
# Name of the referent. More info:
# https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names
# TODO: Add other useful fields. apiVersion, kind, uid?
name: string
# Specify whether the Secret or its key must be defined
optional: true
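# Example (illustrative sketch): injecting a Grafana admin password from
# an existing Secret via valueFrom/secretKeyRef. The Secret name
# "grafana-admin" and its "password" key are assumptions; the variable
# name is Grafana's stock GF_SECURITY_ADMIN_PASSWORD setting.
env:
  - name: GF_SECURITY_ADMIN_PASSWORD
    valueFrom:
      secretKeyRef:
        name: grafana-admin
        key: password
        optional: false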
# HomePath - Set the location of the Grafana home directory.
homePath: "/opt/gpudb/kagent/stats/grafana"
# GraphiteHost - Host Address
host: "0.0.0.0"
# Set the name of the container image to use.
image:
# Set the policy for pulling container images.
imagePullPolicy: "IfNotPresent"
# ImagePullSecrets is an optional list of references to secrets
# in the same gpudb-namespace to use for pulling any of the
# images used by this PodSpec. If specified, these secrets will
# be passed to individual puller implementations for them to
# use. For example, in the case of docker, only DockerConfig
# type secrets are honored.
imagePullSecrets:
- name: string
# The image registry & optional port containing the repository.
registry: "docker.io"
# The image repository path.
repository: "kineticadevcloud/"
# SemVer - Semantic Version for the Tag.
semVer: string
# The image sha.
sha: ""
# The image tag.
tag: "v7.1.5.2"
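# Example (illustrative sketch): pinning the Grafana stats image
# explicitly to the default coordinates shown above rather than relying
# on operator-resolved values.
image:
  registry: "docker.io"
  repository: "kineticadevcloud/"
  tag: "v7.1.5.2"
  imagePullPolicy: "IfNotPresent"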
# Whether to enable the Stats Server on the Cluster. Default:
# true
isEnabled: true
# Periodic probe of container liveness. Container will be
# restarted if the probe fails. Cannot be updated. More info:
# https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle#container-probes
livenessProbe:
# Exec specifies the action to take.
exec:
# Command is the command line to execute inside the container,
# the working directory for the command is root ('/') in the
# container's filesystem. The command is simply exec'd, it is
# not run inside a shell, so traditional shell instructions
# ('|', etc) won't work. To use a shell, you need to
# explicitly call out to that shell. Exit status of 0 is
# treated as live/healthy and non-zero is unhealthy.
command: ["string"]
# Minimum consecutive failures for the probe to be considered
# failed after having succeeded. Defaults to 3. Minimum value
# is 1.
failureThreshold: 1
# GRPC specifies an action involving a GRPC port.
grpc:
# Port number of the gRPC service. Number must be in the range
# 1 to 65535.
port: 1
# Service is the name of the service to place in the gRPC
# HealthCheckRequest
# (see
# https://github.com/grpc/grpc/blob/master/doc/health-checking.md).
# If this is not specified, the default behavior is defined
# by gRPC.
service: string
# HTTPGet specifies the http request to perform.
httpGet:
# Host name to connect to, defaults to the pod IP. You
# probably want to set "Host" in httpHeaders instead.
host: string
# Custom headers to set in the request. HTTP allows repeated
# headers.
httpHeaders:
- name: string
# The header field value
value: string
# Path to access on the HTTP server.
path: string
# Name or number of the port to access on the container.
# Number must be in the range 1 to 65535. Name must be an
# IANA_SVC_NAME.
port:
# Scheme to use for connecting to the host. Defaults to HTTP.
scheme: string
# Number of seconds after the container has started before
# liveness probes are initiated. More info:
# https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle#container-probes
initialDelaySeconds: 1
# How often (in seconds) to perform the probe. Default to 10
# seconds. Minimum value is 1.
periodSeconds: 1
# Minimum consecutive successes for the probe to be considered
# successful after having failed. Defaults to 1. Must be 1 for
# liveness and startup. Minimum value is 1.
successThreshold: 1
# TCPSocket specifies an action involving a TCP port.
tcpSocket:
# Optional: Host name to connect to, defaults to the pod IP.
host: string
# Number or name of the port to access on the container.
# Number must be in the range 1 to 65535. Name must be an
# IANA_SVC_NAME.
port:
# Optional duration in seconds the pod needs to terminate
# gracefully upon probe failure. The grace period is the
# duration in seconds after the processes running in the pod
# are sent a termination signal and the time when the processes
# are forcibly halted with a kill signal. Set this value longer
# than the expected cleanup time for your process. If this
# value is nil, the pod's terminationGracePeriodSeconds will be
# used. Otherwise, this value overrides the value provided by
# the pod spec. Value must be non-negative integer. The value
# zero indicates stop immediately via the kill signal
# (no opportunity to shut down). This is a beta field and
# requires enabling ProbeTerminationGracePeriod feature gate.
# Minimum value is 1. spec.terminationGracePeriodSeconds is
# used if unset.
terminationGracePeriodSeconds: 1
# Number of seconds after which the probe times out. Defaults to
# 1 second. Minimum value is 1. More info:
# https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle#container-probes
timeoutSeconds: 1
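# Example (illustrative sketch): an httpGet liveness probe for the
# Grafana container. The /api/health path and port 3000 are Grafana's
# stock health endpoint and are assumptions here, not operator
# defaults.
livenessProbe:
  httpGet:
    path: /api/health
    port: 3000
    scheme: HTTP
  initialDelaySeconds: 30
  periodSeconds: 10
  timeoutSeconds: 5
  failureThreshold: 3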
# Logs - Set the location of the stats log files.
logs: "/opt/gpudb/kagent/stats/logs"
name: "stats"
# Periodic probe of container service readiness. Container will be
# removed from service endpoints if the probe fails. Cannot be
# updated. More info:
# https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle#container-probes
readinessProbe:
# Exec specifies the action to take.
exec:
# Command is the command line to execute inside the container,
# the working directory for the command is root ('/') in the
# container's filesystem. The command is simply exec'd, it is
# not run inside a shell, so traditional shell instructions
# ('|', etc) won't work. To use a shell, you need to
# explicitly call out to that shell. Exit status of 0 is
# treated as live/healthy and non-zero is unhealthy.
command: ["string"]
# Minimum consecutive failures for the probe to be considered
# failed after having succeeded. Defaults to 3. Minimum value
# is 1.
failureThreshold: 1
# GRPC specifies an action involving a GRPC port.
grpc:
# Port number of the gRPC service. Number must be in the range
# 1 to 65535.
port: 1
# Service is the name of the service to place in the gRPC
# HealthCheckRequest
# (see
# https://github.com/grpc/grpc/blob/master/doc/health-checking.md).
# If this is not specified, the default behavior is defined
# by gRPC.
service: string
# HTTPGet specifies the http request to perform.
httpGet:
# Host name to connect to, defaults to the pod IP. You
# probably want to set "Host" in httpHeaders instead.
host: string
# Custom headers to set in the request. HTTP allows repeated
# headers.
httpHeaders:
- name: string
# The header field value
value: string
# Path to access on the HTTP server.
path: string
# Name or number of the port to access on the container.
# Number must be in the range 1 to 65535. Name must be an
# IANA_SVC_NAME.
port:
# Scheme to use for connecting to the host. Defaults to HTTP.
scheme: string
# Number of seconds after the container has started before
# liveness probes are initiated. More info:
# https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle#container-probes
initialDelaySeconds: 1
# How often (in seconds) to perform the probe. Default to 10
# seconds. Minimum value is 1.
periodSeconds: 1
# Minimum consecutive successes for the probe to be considered
# successful after having failed. Defaults to 1. Must be 1 for
# liveness and startup. Minimum value is 1.
successThreshold: 1
# TCPSocket specifies an action involving a TCP port.
tcpSocket:
# Optional: Host name to connect to, defaults to the pod IP.
host: string
# Number or name of the port to access on the container.
# Number must be in the range 1 to 65535. Name must be an
# IANA_SVC_NAME.
port:
# Optional duration in seconds the pod needs to terminate
# gracefully upon probe failure. The grace period is the
# duration in seconds after the processes running in the pod
# are sent a termination signal and the time when the processes
# are forcibly halted with a kill signal. Set this value longer
# than the expected cleanup time for your process. If this
# value is nil, the pod's terminationGracePeriodSeconds will be
# used. Otherwise, this value overrides the value provided by
# the pod spec. Value must be non-negative integer. The value
# zero indicates stop immediately via the kill signal
# (no opportunity to shut down). This is a beta field and
# requires enabling ProbeTerminationGracePeriod feature gate.
# Minimum value is 1. spec.terminationGracePeriodSeconds is
# used if unset.
terminationGracePeriodSeconds: 1
# Number of seconds after which the probe times out. Defaults to
# 1 second. Minimum value is 1. More info:
# https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle#container-probes
timeoutSeconds: 1
# Resource Requests & Limits for the Stats Pod.
resources:
# Claims lists the names of resources, defined in
# spec.resourceClaims, that are used by this container. This is
# an alpha field and requires enabling the
# DynamicResourceAllocation feature gate. This field is
# immutable. It can only be set for containers.
claims:
- name: string
# Limits describes the maximum amount of compute resources
# allowed. More info:
# https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/
limits: {}
# Requests describes the minimum amount of compute resources
# required. If Requests is omitted for a container, it defaults
# to Limits if that is explicitly specified, otherwise to an
# implementation-defined value. Requests cannot exceed Limits.
# More info:
# https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/
requests: {}
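# Example (illustrative sketch): a conservative requests/limits pairing
# for the stats container; the quantities are assumptions.
resources:
  requests:
    cpu: "100m"
    memory: "256Mi"
  limits:
    cpu: "500m"
    memory: "1Gi"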
# Whether to enable the Stats Server on the Cluster. Default: true
isEnabled: true
# Loki - Loki specific configuration.
loki:
# Set the arguments for the command within the container to run.
args:
["-c","/opt/gpudb/kagent/stats/prometheus/prometheus --log.level=debug
--config.file=/opt/gpudb/kagent/stats/prometheus/prometheus.yml --web.listen-address=0.0.0.0:9090
--storage.tsdb.path=/opt/gpudb/kagent/stats/storage/prometheus-storage
--storage.tsdb.retention.time=7d --web.enable-lifecycle"]
# Set the command within the container to run.
command: ["/bin/sh"]
# ConfigFile - Set the location of the Loki configuration file.
configFile: "/opt/gpudb/kagent/stats/loki/loki.yml"
# ConfigFileAsConfigMap - If true the ConfigFile is mounted from a
# ConfigMap
configFileAsConfigMap: true
# The port that Stats will be running on. It runs only on the head
# node pod in the cluster. Default: 9091
containerPort:
# Number of port to expose on the pod's IP address. This must be
# a valid port number, 0 < x < 65536.
containerPort: 1
# What host IP to bind the external port to.
hostIP: string
# Number of port to expose on the host. If specified, this must
# be a valid port number, 0 < x < 65536. If HostNetwork is
# specified, this must match ContainerPort. Most containers do
# not need this.
hostPort: 1
# If specified, this must be an IANA_SVC_NAME and unique within
# the pod. Each named port in a pod must have a unique name.
# Name for the port that can be referred to by services.
name: string
# Protocol for port. Must be UDP, TCP, or SCTP. Defaults
# to "TCP".
protocol: "TCP"
# List of environment variables to set in the container.
env:
- name: string
# Variable references $(VAR_NAME) are expanded using the
# previously defined environment variables in the container and
# any service environment variables. If a variable cannot be
# resolved, the reference in the input string will be
# unchanged. Double $$ are reduced to a single $, which allows
# for escaping the $(VAR_NAME) syntax: i.e. "$$(VAR_NAME)" will
# produce the string literal "$(VAR_NAME)". Escaped references
# will never be expanded, regardless of whether the variable
# exists or not. Defaults to "".
value: string
# Source for the environment variable's value. Cannot be used if
# value is not empty.
valueFrom:
# Selects a key of a ConfigMap.
configMapKeyRef:
# The key to select.
key: string
# Name of the referent. More info:
# https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names
# TODO: Add other useful fields. apiVersion, kind, uid?
name: string
# Specify whether the ConfigMap or its key must be defined
optional: true
# Selects a field of the pod: supports metadata.name,
# metadata.namespace, `metadata.labels
# ['<KEY>']`, `metadata.annotations['<KEY>']`, spec.nodeName,
# spec.serviceAccountName, status.hostIP, status.podIP,
# status.podIPs.
fieldRef:
# Version of the schema the FieldPath is written in terms
# of, defaults to "v1".
apiVersion: app.kinetica.com/v1
# Path of the field to select in the specified API version.
fieldPath: string
# Selects a resource of the container: only resources limits
# and requests (limits.cpu, limits.memory,
# limits.ephemeral-storage, requests.cpu, requests.memory and
# requests.ephemeral-storage) are currently supported.
resourceFieldRef:
# Container name: required for volumes, optional for env
# vars
containerName: string
# Specifies the output format of the exposed resources,
# defaults to "1"
divisor:
# Required: resource to select
resource: string
# Selects a key of a secret in the pod's namespace
secretKeyRef:
# The key of the secret to select from. Must be a valid
# secret key.
key: string
# Name of the referent. More info:
# https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names
# TODO: Add other useful fields. apiVersion, kind, uid?
name: string
# Specify whether the Secret or its key must be defined
optional: true
# ExpandEnv
expandEnv: true
# Set the name of the container image to use.
image:
# Set the policy for pulling container images.
imagePullPolicy: "IfNotPresent"
# ImagePullSecrets is an optional list of references to secrets
# in the same gpudb-namespace to use for pulling any of the
# images used by this PodSpec. If specified, these secrets will
# be passed to individual puller implementations for them to
# use. For example, in the case of docker, only DockerConfig
# type secrets are honored.
imagePullSecrets:
- name: string
# The image registry & optional port containing the repository.
registry: "docker.io"
# The image repository path.
repository: "kineticadevcloud/"
# SemVer - Semantic Version for the Tag.
semVer: string
# The image sha.
sha: ""
# The image tag.
tag: "v7.1.5.2"
# Whether to enable the Stats Server on the Cluster. Default:
# true
isEnabled: true
# Periodic probe of container liveness. Container will be
# restarted if the probe fails. Cannot be updated. More info:
# https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle#container-probes
livenessProbe:
# Exec specifies the action to take.
exec:
# Command is the command line to execute inside the container,
# the working directory for the command is root ('/') in the
# container's filesystem. The command is simply exec'd, it is
# not run inside a shell, so traditional shell instructions
# ('|', etc) won't work. To use a shell, you need to
# explicitly call out to that shell. Exit status of 0 is
# treated as live/healthy and non-zero is unhealthy.
command: ["string"]
# Minimum consecutive failures for the probe to be considered
# failed after having succeeded. Defaults to 3. Minimum value
# is 1.
failureThreshold: 1
# GRPC specifies an action involving a GRPC port.
grpc:
# Port number of the gRPC service. Number must be in the range
# 1 to 65535.
port: 1
# Service is the name of the service to place in the gRPC
# HealthCheckRequest
# (see
# https://github.com/grpc/grpc/blob/master/doc/health-checking.md).
# If this is not specified, the default behavior is defined
# by gRPC.
service: string
# HTTPGet specifies the http request to perform.
httpGet:
# Host name to connect to, defaults to the pod IP. You
# probably want to set "Host" in httpHeaders instead.
host: string
# Custom headers to set in the request. HTTP allows repeated
# headers.
httpHeaders:
- name: string
# The header field value
value: string
# Path to access on the HTTP server.
path: string
# Name or number of the port to access on the container.
# Number must be in the range 1 to 65535. Name must be an
# IANA_SVC_NAME.
port:
# Scheme to use for connecting to the host. Defaults to HTTP.
scheme: string
# Number of seconds after the container has started before
# liveness probes are initiated. More info:
# https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle#container-probes
initialDelaySeconds: 1
# How often (in seconds) to perform the probe. Default to 10
# seconds. Minimum value is 1.
periodSeconds: 1
# Minimum consecutive successes for the probe to be considered
# successful after having failed. Defaults to 1. Must be 1 for
# liveness and startup. Minimum value is 1.
successThreshold: 1
# TCPSocket specifies an action involving a TCP port.
tcpSocket:
# Optional: Host name to connect to, defaults to the pod IP.
host: string
# Number or name of the port to access on the container.
# Number must be in the range 1 to 65535. Name must be an
# IANA_SVC_NAME.
port:
# Optional duration in seconds the pod needs to terminate
# gracefully upon probe failure. The grace period is the
# duration in seconds after the processes running in the pod
# are sent a termination signal and the time when the processes
# are forcibly halted with a kill signal. Set this value longer
# than the expected cleanup time for your process. If this
# value is nil, the pod's terminationGracePeriodSeconds will be
# used. Otherwise, this value overrides the value provided by
# the pod spec. Value must be non-negative integer. The value
# zero indicates stop immediately via the kill signal
# (no opportunity to shut down). This is a beta field and
# requires enabling ProbeTerminationGracePeriod feature gate.
# Minimum value is 1. spec.terminationGracePeriodSeconds is
# used if unset.
terminationGracePeriodSeconds: 1
# Number of seconds after which the probe times out. Defaults to
# 1 second. Minimum value is 1. More info:
# https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle#container-probes
timeoutSeconds: 1
# Logs - Set the location of the stats log files.
logs: "/opt/gpudb/kagent/stats/logs"
name: "stats"
# Periodic probe of container service readiness. Container will be
# removed from service endpoints if the probe fails. Cannot be
# updated. More info:
# https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle#container-probes
readinessProbe:
# Exec specifies the action to take.
exec:
# Command is the command line to execute inside the container,
# the working directory for the command is root ('/') in the
# container's filesystem. The command is simply exec'd, it is
# not run inside a shell, so traditional shell instructions
# ('|', etc) won't work. To use a shell, you need to
# explicitly call out to that shell. Exit status of 0 is
# treated as live/healthy and non-zero is unhealthy.
command: ["string"]
# Minimum consecutive failures for the probe to be considered
# failed after having succeeded. Defaults to 3. Minimum value
# is 1.
failureThreshold: 1
# GRPC specifies an action involving a GRPC port.
grpc:
# Port number of the gRPC service. Number must be in the range
# 1 to 65535.
port: 1
# Service is the name of the service to place in the gRPC
# HealthCheckRequest
# (see
# https://github.com/grpc/grpc/blob/master/doc/health-checking.md).
# If this is not specified, the default behavior is defined
# by gRPC.
service: string
# HTTPGet specifies the http request to perform.
httpGet:
# Host name to connect to, defaults to the pod IP. You
# probably want to set "Host" in httpHeaders instead.
host: string
# Custom headers to set in the request. HTTP allows repeated
# headers.
httpHeaders:
- name: string
# The header field value
value: string
# Path to access on the HTTP server.
path: string
# Name or number of the port to access on the container.
# Number must be in the range 1 to 65535. Name must be an
# IANA_SVC_NAME.
port:
# Scheme to use for connecting to the host. Defaults to HTTP.
scheme: string
# Number of seconds after the container has started before
# liveness probes are initiated. More info:
# https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle#container-probes
initialDelaySeconds: 1
# How often (in seconds) to perform the probe. Default to 10
# seconds. Minimum value is 1.
periodSeconds: 1
# Minimum consecutive successes for the probe to be considered
# successful after having failed. Defaults to 1. Must be 1 for
# liveness and startup. Minimum value is 1.
successThreshold: 1
# TCPSocket specifies an action involving a TCP port.
tcpSocket:
# Optional: Host name to connect to, defaults to the pod IP.
host: string
# Number or name of the port to access on the container.
# Number must be in the range 1 to 65535. Name must be an
# IANA_SVC_NAME.
port:
# Optional duration in seconds the pod needs to terminate
# gracefully upon probe failure. The grace period is the
# duration in seconds after the processes running in the pod
# are sent a termination signal and the time when the processes
# are forcibly halted with a kill signal. Set this value longer
# than the expected cleanup time for your process. If this
# value is nil, the pod's terminationGracePeriodSeconds will be
# used. Otherwise, this value overrides the value provided by
# the pod spec. Value must be non-negative integer. The value
# zero indicates stop immediately via the kill signal
# (no opportunity to shut down). This is a beta field and
# requires enabling ProbeTerminationGracePeriod feature gate.
# Minimum value is 1. spec.terminationGracePeriodSeconds is
# used if unset.
terminationGracePeriodSeconds: 1
# Number of seconds after which the probe times out. Defaults to
# 1 second. Minimum value is 1. More info:
# https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle#container-probes
timeoutSeconds: 1
# Resource Requests & Limits for the Stats Pod.
resources:
# Claims lists the names of resources, defined in
# spec.resourceClaims, that are used by this container. This is
# an alpha field and requires enabling the
# DynamicResourceAllocation feature gate. This field is
# immutable. It can only be set for containers.
claims:
- name: string
# Limits describes the maximum amount of compute resources
# allowed. More info:
# https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/
limits: {}
# Requests describes the minimum amount of compute resources
# required. If Requests is omitted for a container, it defaults
# to Limits if that is explicitly specified, otherwise to an
# implementation-defined value. Requests cannot exceed Limits.
# More info:
# https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/
requests: {}
# Storage - Set the path of the Loki storage.
storage: "/opt/gpudb/kagent/stats/storage/loki-storage"
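# Example (illustrative sketch): a minimal Loki block that keeps the
# defaults shown above and mounts the configuration from a ConfigMap.
loki:
  isEnabled: true
  configFileAsConfigMap: true
  configFile: "/opt/gpudb/kagent/stats/loki/loki.yml"
  storage: "/opt/gpudb/kagent/stats/storage/loki-storage"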
# Which vmss/node group etc. to use as the NodeSelector
pool: "compute"
# Prometheus - Prometheus specific configuration.
prometheus:
# Set the arguments for the command within the container to run.
args:
["-c","/opt/gpudb/kagent/stats/prometheus/prometheus --log.level=debug
--config.file=/opt/gpudb/kagent/stats/prometheus/prometheus.yml --web.listen-address=0.0.0.0:9090
--storage.tsdb.path=/opt/gpudb/kagent/stats/storage/prometheus-storage
--storage.tsdb.retention.time=7d --web.enable-lifecycle"]
# Set the command within the container to run.
command: ["/bin/sh"]
# ConfigFile - Set the location of the Loki configuration file.
configFile: "/opt/gpudb/kagent/stats/loki/loki.yml"
# ConfigFileAsConfigMap - If true the ConfigFile is mounted from a
# ConfigMap
configFileAsConfigMap: true
# The port that Stats will be running on. It runs only on the head
# node pod in the cluster. Default: 9091
containerPort:
# Number of port to expose on the pod's IP address. This must be
# a valid port number, 0 < x < 65536.
containerPort: 1
# What host IP to bind the external port to.
hostIP: string
# Number of port to expose on the host. If specified, this must
# be a valid port number, 0 < x < 65536. If HostNetwork is
# specified, this must match ContainerPort. Most containers do
# not need this.
hostPort: 1
# If specified, this must be an IANA_SVC_NAME and unique within
# the pod. Each named port in a pod must have a unique name.
# Name for the port that can be referred to by services.
name: string
# Protocol for port. Must be UDP, TCP, or SCTP. Defaults
# to "TCP".
protocol: "TCP"
# List of environment variables to set in the container.
env:
- name: string
# Variable references $(VAR_NAME) are expanded using the
# previously defined environment variables in the container and
# any service environment variables. If a variable cannot be
# resolved, the reference in the input string will be
# unchanged. Double $$ are reduced to a single $, which allows
# for escaping the $(VAR_NAME) syntax: i.e. "$$(VAR_NAME)" will
# produce the string literal "$(VAR_NAME)". Escaped references
# will never be expanded, regardless of whether the variable
# exists or not. Defaults to "".
value: string
# Source for the environment variable's value. Cannot be used if
# value is not empty.
valueFrom:
# Selects a key of a ConfigMap.
configMapKeyRef:
# The key to select.
key: string
# Name of the referent. More info:
# https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names
# TODO: Add other useful fields. apiVersion, kind, uid?
name: string
# Specify whether the ConfigMap or its key must be defined
optional: true
# Selects a field of the pod: supports metadata.name,
# metadata.namespace, `metadata.labels
# ['<KEY>']`, `metadata.annotations['<KEY>']`, spec.nodeName,
# spec.serviceAccountName, status.hostIP, status.podIP,
# status.podIPs.
fieldRef:
# Version of the schema the FieldPath is written in terms
# of, defaults to "v1".
apiVersion: app.kinetica.com/v1
# Path of the field to select in the specified API version.
fieldPath: string
# Selects a resource of the container: only resources limits
# and requests (limits.cpu, limits.memory,
# limits.ephemeral-storage, requests.cpu, requests.memory and
# requests.ephemeral-storage) are currently supported.
resourceFieldRef:
# Container name: required for volumes, optional for env
# vars
containerName: string
# Specifies the output format of the exposed resources,
# defaults to "1"
divisor:
# Required: resource to select
resource: string
# Selects a key of a secret in the pod's namespace
secretKeyRef:
# The key of the secret to select from. Must be a valid
# secret key.
key: string
# Name of the referent. More info:
# https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names
# TODO: Add other useful fields. apiVersion, kind, uid?
name: string
# Specify whether the Secret or its key must be defined
optional: true
# Set the name of the container image to use.
image:
# Set the policy for pulling container images.
imagePullPolicy: "IfNotPresent"
# ImagePullSecrets is an optional list of references to secrets
# in the same gpudb-namespace to use for pulling any of the
# images used by this PodSpec. If specified, these secrets will
# be passed to individual puller implementations for them to
# use. For example, in the case of docker, only DockerConfig
# type secrets are honored.
imagePullSecrets:
- name: string
# The image registry & optional port containing the repository.
registry: "docker.io"
# The image repository path.
repository: "kineticadevcloud/"
# SemVer - Semantic Version for the Tag.
semVer: string
# The image sha.
sha: ""
# The image tag.
tag: "v7.1.5.2"
# Whether to enable the Stats Server on the Cluster. Default:
# true
isEnabled: true
# Periodic probe of container liveness. Container will be
# restarted if the probe fails. Cannot be updated. More info:
# https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle#container-probes
livenessProbe:
# Exec specifies the action to take.
exec:
# Command is the command line to execute inside the container,
# the working directory for the command is root ('/') in the
# container's filesystem. The command is simply exec'd, it is
# not run inside a shell, so traditional shell instructions
# ('|', etc) won't work. To use a shell, you need to
# explicitly call out to that shell. Exit status of 0 is
# treated as live/healthy and non-zero is unhealthy.
command: ["string"]
# Minimum consecutive failures for the probe to be considered
# failed after having succeeded. Defaults to 3. Minimum value
# is 1.
failureThreshold: 1
# GRPC specifies an action involving a GRPC port.
grpc:
# Port number of the gRPC service. Number must be in the range
# 1 to 65535.
port: 1
# Service is the name of the service to place in the gRPC
# HealthCheckRequest
# (see
# https://github.com/grpc/grpc/blob/master/doc/health-checking.md).
# If this is not specified, the default behavior is defined
# by gRPC.
service: string
# HTTPGet specifies the http request to perform.
httpGet:
# Host name to connect to, defaults to the pod IP. You
# probably want to set "Host" in httpHeaders instead.
host: string
# Custom headers to set in the request. HTTP allows repeated
# headers.
httpHeaders:
- name: string
# The header field value
value: string
# Path to access on the HTTP server.
path: string
# Name or number of the port to access on the container.
# Number must be in the range 1 to 65535. Name must be an
# IANA_SVC_NAME.
port:
# Scheme to use for connecting to the host. Defaults to HTTP.
scheme: string
# Number of seconds after the container has started before
# liveness probes are initiated. More info:
# https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle#container-probes
initialDelaySeconds: 1
# How often (in seconds) to perform the probe. Default to 10
# seconds. Minimum value is 1.
periodSeconds: 1
# Minimum consecutive successes for the probe to be considered
# successful after having failed. Defaults to 1. Must be 1 for
# liveness and startup. Minimum value is 1.
successThreshold: 1
# TCPSocket specifies an action involving a TCP port.
tcpSocket:
# Optional: Host name to connect to, defaults to the pod IP.
host: string
# Number or name of the port to access on the container.
# Number must be in the range 1 to 65535. Name must be an
# IANA_SVC_NAME.
port:
# Optional duration in seconds the pod needs to terminate
# gracefully upon probe failure. The grace period is the
# duration in seconds after the processes running in the pod
# are sent a termination signal and the time when the processes
# are forcibly halted with a kill signal. Set this value longer
# than the expected cleanup time for your process. If this
# value is nil, the pod's terminationGracePeriodSeconds will be
# used. Otherwise, this value overrides the value provided by
# the pod spec. Value must be non-negative integer. The value
# zero indicates stop immediately via the kill signal
# (no opportunity to shut down). This is a beta field and
# requires enabling ProbeTerminationGracePeriod feature gate.
# Minimum value is 1. spec.terminationGracePeriodSeconds is
# used if unset.
terminationGracePeriodSeconds: 1
# Number of seconds after which the probe times out. Defaults to
# 1 second. Minimum value is 1. More info:
# https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle#container-probes
timeoutSeconds: 1
# Set the Prometheus logging level.
logLevel: "debug"
# Logs - Set the location of the stats log files.
logs: "/opt/gpudb/kagent/stats/logs"
name: "stats"
# Periodic probe of container service readiness. Container will be
# removed from service endpoints if the probe fails. Cannot be
# updated. More info:
# https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle#container-probes
readinessProbe:
# Exec specifies the action to take.
exec:
# Command is the command line to execute inside the container,
# the working directory for the command is root ('/') in the
# container's filesystem. The command is simply exec'd, it is
# not run inside a shell, so traditional shell instructions
# ('|', etc) won't work. To use a shell, you need to
# explicitly call out to that shell. Exit status of 0 is
# treated as live/healthy and non-zero is unhealthy.
command: ["string"]
# Minimum consecutive failures for the probe to be considered
# failed after having succeeded. Defaults to 3. Minimum value
# is 1.
failureThreshold: 1
# GRPC specifies an action involving a GRPC port.
grpc:
# Port number of the gRPC service. Number must be in the range
# 1 to 65535.
port: 1
# Service is the name of the service to place in the gRPC
# HealthCheckRequest
# (see
# https://github.com/grpc/grpc/blob/master/doc/health-checking.md).
# If this is not specified, the default behavior is defined
# by gRPC.
service: string
# HTTPGet specifies the http request to perform.
httpGet:
# Host name to connect to, defaults to the pod IP. You
# probably want to set "Host" in httpHeaders instead.
host: string
# Custom headers to set in the request. HTTP allows repeated
# headers.
httpHeaders:
- name: string
# The header field value
value: string
# Path to access on the HTTP server.
path: string
# Name or number of the port to access on the container.
# Number must be in the range 1 to 65535. Name must be an
# IANA_SVC_NAME.
port:
# Scheme to use for connecting to the host. Defaults to HTTP.
scheme: string
# Number of seconds after the container has started before
# liveness probes are initiated. More info:
# https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle#container-probes
initialDelaySeconds: 1
# How often (in seconds) to perform the probe. Default to 10
# seconds. Minimum value is 1.
periodSeconds: 1
# Minimum consecutive successes for the probe to be considered
# successful after having failed. Defaults to 1. Must be 1 for
# liveness and startup. Minimum value is 1.
successThreshold: 1
# TCPSocket specifies an action involving a TCP port.
tcpSocket:
# Optional: Host name to connect to, defaults to the pod IP.
host: string
# Number or name of the port to access on the container.
# Number must be in the range 1 to 65535. Name must be an
# IANA_SVC_NAME.
port:
# Optional duration in seconds the pod needs to terminate
# gracefully upon probe failure. The grace period is the
# duration in seconds after the processes running in the pod
# are sent a termination signal and the time when the processes
# are forcibly halted with a kill signal. Set this value longer
# than the expected cleanup time for your process. If this
# value is nil, the pod's terminationGracePeriodSeconds will be
# used. Otherwise, this value overrides the value provided by
# the pod spec. Value must be non-negative integer. The value
# zero indicates stop immediately via the kill signal
# (no opportunity to shut down). This is a beta field and
# requires enabling ProbeTerminationGracePeriod feature gate.
# Minimum value is 1. spec.terminationGracePeriodSeconds is
# used if unset.
terminationGracePeriodSeconds: 1
# Number of seconds after which the probe times out. Defaults to
# 1 second. Minimum value is 1. More info:
# https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle#container-probes
timeoutSeconds: 1
# Resource Requests & Limits for the Stats Pod.
resources:
# Claims lists the names of resources, defined in
# spec.resourceClaims, that are used by this container. This is
# an alpha field and requires enabling the
# DynamicResourceAllocation feature gate. This field is
# immutable. It can only be set for containers.
claims:
- name: string
# Limits describes the maximum amount of compute resources
# allowed. More info:
# https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/
limits: {}
# Requests describes the minimum amount of compute resources
# required. If Requests is omitted for a container, it defaults
# to Limits if that is explicitly specified, otherwise to an
# implementation-defined value. Requests cannot exceed Limits.
# More info:
# https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/
requests: {}
# Set the location of the TSDB database.
storageTSDBPath: "/opt/gpudb/kagent/stats/storage/prometheus-storage"
# Set the time to hold data in the TSDB database.
storageTSDBRetentionTime: "7d"
# Timings - Prometheus Intervals & Timeouts
timings:
evaluationInterval: "30s"
scrapeInterval: "30s"
scrapeTimeout: "10s"
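# Example (illustrative sketch): tuning Prometheus retention and scrape
# timings. The 14d retention is an assumption; the other values are the
# defaults shown above.
prometheus:
  isEnabled: true
  storageTSDBPath: "/opt/gpudb/kagent/stats/storage/prometheus-storage"
  storageTSDBRetentionTime: "14d"
  timings:
    evaluationInterval: "30s"
    scrapeInterval: "30s"
    scrapeTimeout: "10s"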
# Whether to share a single PV for Loki, Prometheus & Grafana or
# have a separate PV for each. Default: true
sharedPV: true
# Resource block specifically for use with SharedPV = true to set
# storage `requests` & `limits`
sharedPVResources:
# Claims lists the names of resources, defined in
# spec.resourceClaims, that are used by this container. This is
# an alpha field and requires enabling the
# DynamicResourceAllocation feature gate. This field is
# immutable. It can only be set for containers.
claims:
- name: string
# Limits describes the maximum amount of compute resources
# allowed. More info:
# https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/
limits: {}
# Requests describes the minimum amount of compute resources
# required. If Requests is omitted for a container, it defaults
# to Limits if that is explicitly specified, otherwise to an
# implementation-defined value. Requests cannot exceed Limits.
# More info:
# https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/
requests: {}
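# Example (illustrative sketch): sizing the single shared PV used by
# Loki, Prometheus & Grafana when sharedPV is true; the storage
# quantities are assumptions.
sharedPV: true
sharedPVResources:
  requests:
    storage: "10Gi"
  limits:
    storage: "20Gi"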
# Supporting images like socat, busybox, etc.
supportingImages:
# Set the resource requests/limits for the BusyBox Pod(s).
busyBoxResources:
# Claims lists the names of resources, defined in
# spec.resourceClaims, that are used by this container. This is
# an alpha field and requires enabling the
# DynamicResourceAllocation feature gate. This field is
# immutable. It can only be set for containers.
claims:
- name: string
# Limits describes the maximum amount of compute resources
# allowed. More info:
# https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/
limits: {}
# Requests describes the minimum amount of compute resources
# required. If Requests is omitted for a container, it defaults
# to Limits if that is explicitly specified, otherwise to an
# implementation-defined value. Requests cannot exceed Limits.
# More info:
# https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/
requests: {}
# Set the name of the container image to use.
busybox:
# Set the policy for pulling container images.
imagePullPolicy: "IfNotPresent"
# ImagePullSecrets is an optional list of references to secrets in
# the same gpudb-namespace to use for pulling any of the images
# used by this PodSpec. If specified, these secrets will be
# passed to individual puller implementations for them to use.
# For example, in the case of docker, only DockerConfig type
# secrets are honored.
imagePullSecrets:
- name: string
# The image registry & optional port containing the repository.
registry: "docker.io"
# The image repository path.
repository: "kineticadevcloud/"
# SemVer - Semantic Version for the Tag.
semVer: string
# The image sha.
sha: ""
# The image tag.
tag: "v7.1.5.2"
# Set the name of the container image to use.
socat:
# Set the policy for pulling container images.
imagePullPolicy: "IfNotPresent"
# ImagePullSecrets is an optional list of references to secrets in
# the same gpudb-namespace to use for pulling any of the images
# used by this PodSpec. If specified, these secrets will be
# passed to individual puller implementations for them to use.
# For example, in the case of docker, only DockerConfig type
# secrets are honored.
imagePullSecrets:
- name: string
# The image registry & optional port containing the repository.
registry: "docker.io"
# The image repository path.
repository: "kineticadevcloud/"
# SemVer - Semantic Version for the Tag.
semVer: string
# The image sha.
sha: ""
# The image tag.
tag: "v7.1.5.2"
# Set the resource requests/limits for the Socat Pod.
socatResources:
# Claims lists the names of resources, defined in
# spec.resourceClaims, that are used by this container. This is
# an alpha field and requires enabling the
# DynamicResourceAllocation feature gate. This field is
# immutable. It can only be set for containers.
claims:
- name: string
# Limits describes the maximum amount of compute resources
# allowed. More info:
# https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/
limits: {}
# Requests describes the minimum amount of compute resources
# required. If Requests is omitted for a container, it defaults
# to Limits if that is explicitly specified, otherwise to an
# implementation-defined value. Requests cannot exceed Limits.
# More info:
# https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/
requests: {}
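# Example (illustrative sketch): pointing the supporting busybox & socat
# images at a private registry with a pull secret; the registry host
# and secret name are assumptions.
supportingImages:
  busybox:
    registry: "registry.example.com"
    imagePullPolicy: "IfNotPresent"
    imagePullSecrets:
      - name: regcred
  socat:
    registry: "registry.example.com"
    imagePullPolicy: "IfNotPresent"
    imagePullSecrets:
      - name: regcred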
# KineticaClusterStatus defines the observed state of KineticaCluster
status:
# CloudProvider the DB is deployed on
cloudProvider: string
# CloudRegion the DB is deployed on
cloudRegion: string
# ClusterSize the current number of ranks & type i.e. CPU or GPU of
# the cluster
clusterSize:
# ClusterSizeEnum - T-Shirt size of the Kinetica DB Cluster i.e. a
# representation of the number of nodes in a simple to understand
# T-Shirt size scheme. This indicates the size of the cluster i.e.
# the number of nodes. It does not identify the size of the cloud
# provider nodes. For node size see ClusterTypeEnum. Supported
# values are: XS, S, M, L, XL, XXL, XXXL
tshirtSize: string
# ClusterTypeEnum - An Enum of the node types of a KineticaCluster
# e.g. CPU, GPU along with the Cloud Provider node size e.g. size
# of the VM.
tshirtType: string
# The number of ranks (replicas) that the cluster was last run with
currentReplicas: 0
# The first start of a new cluster has completed.
firstStartComplete: false
# HostManagerStatusResponse - The contents of polling the HostManager
# on port 9300 are added to the CR status field. This allows clients
# to get the Host/Rank/Graph/ML status information.
hmStatus:
cluster_leader: string
cluster_operation: string
graph:
status: string
graph_status: string
host_httpd_status: string
host_mode: string
host_num_gpus: string
host_pid: 1
host_stats_status: string
host_status: string
hostname: string
hosts:
graph_status: string
host_httpd_status: string
host_mode: string
host_pid: 1
host_stats_status: string
host_status: string
ml_status: string
query_planner_status: string
reveal_status: string
license_expiration: string
license_status: string
license_type: string
ml_status: string
query_planner_status: string
ranks:
mode: string
# Pid - The OS Process Id for the Rank.
pid: 1
status: string
reveal_status: string
system_idle_time: string
system_mode: string
system_rebalancing: 1
system_status: string
text:
status: string
version: string
# The fully qualified Ingress routes.
ingressUrls:
aaw: string
dbMonitor: string
files: string
gadmin: string
postgresProxy: string
ranks: {}
reveal: string
# The fully qualified in-cluster Ingress routes.
internalIngressUrls:
aaw: string
dbMonitor: string
files: string
gadmin: string
postgresProxy: string
ranks: {}
reveal: string
# Identify FreeSaaS Cluster
isFreeSaaS: false
# HostOptions used during DB Cluster Scaling Functions
options:
ram_limit: 1
# OutstandingBilling - A list of hours not yet billed for. Will only
# be present if the plan is Pay As You Go and the operator was unable
# to send the billing information due to an issue with the cloud
# providers billing APIs.
outstandingBillableHour:
- billable: true
billed: true
billedAt: string
duration: string
end: string
start: string
# The state or phase of the current DB installation
phase: string