@helm-charts/bitnami-consul v4.2.4-0.1.0
@helm-charts/bitnami-consul
Highly available and distributed service discovery and key-value store designed with support for the modern data center to make distributed systems and configuration easy.
Field | Value |
---|---|
Repository Name | bitnami |
Chart Name | consul |
Chart Version | 4.2.4 |
NPM Package Version | 0.1.0 |
## Global Docker image parameters
## Please, note that this will override the image parameters, including dependencies, configured to use the global value
## Current available global Docker image parameters: imageRegistry and imagePullSecrets
##
# global:
#   imageRegistry: myRegistryName
#   imagePullSecrets:
#     - myRegistryKeySecretName
## Bitnami HashiCorp Consul image version
## ref: https://hub.docker.com/r/bitnami/consul/tags/
##
image:
  registry: docker.io
  repository: bitnami/consul
  tag: 1.4.4
  ## Specify an imagePullPolicy
  ## Defaults to 'Always' if image tag is 'latest', else set to 'IfNotPresent'
  ## ref: http://kubernetes.io/docs/user-guide/images/#pre-pulling-images
  ##
  pullPolicy: Always
  ## Optionally specify an array of imagePullSecrets.
  ## Secrets must be manually created in the namespace.
  ## ref: https://kubernetes.io/docs/tasks/configure-pod-container/pull-image-private-registry/
  ##
  # pullSecrets:
  #   - myRegistryKeySecretName
## Consul replicas
replicas: 3
service:
  ## Consul service ports
  port: 8500
  rpcPort: 8400
  serflanPort: 8301
  serverPort: 8300
  consulDnsPort: 8600
  uiPort: 80
## Pod Security Context
## ref: https://kubernetes.io/docs/tasks/configure-pod-container/security-context/
##
securityContext:
  enabled: true
  fsGroup: 1001
  runAsUser: 1001
## Datacenter name for consul. If not supplied, will use the consul
## default 'dc1'
datacenterName: dc1
## Predefined value for gossip key.
## The key must be 16 bytes and can be generated with $(consul keygen)
# gossipKey: 887Syd/BOvbtvRAKviazMg==
## Use TLS to verify the authenticity of servers and clients.
## Check README for more information.
# tlsEncryptionSecretName: your-already-created-secret
## Extra configuration that will be added to the default one.
# localConfig: |-
#   {
#     "key": "value"
#   }
## Consul domain name.
domain: consul
## Consul raft multiplier.
raftMultiplier: '1'
## updateStrategy for Consul statefulset
## ref: https://kubernetes.io/docs/concepts/workloads/controllers/statefulset/#update-strategies
updateStrategy:
  type: RollingUpdate
## Consul data Persistent Volume Storage Class
## If defined, storageClassName: <storageClass>
## If set to "-", storageClassName: "", which disables dynamic provisioning
## If undefined (the default) or set to null, no storageClassName spec is
## set, choosing the default provisioner. (gp2 on AWS, standard on
## GKE, AWS & OpenStack)
##
persistence:
  enabled: true
  # storageClass: "-"
  accessModes:
    - ReadWriteOnce
  size: 8Gi
  annotations: {}
resources: {}
# requests:
#   memory: 256Mi
#   cpu: 100m
## Setting maxUnavailable will create a pod disruption budget that will prevent
## voluntary cluster administration from taking down too many consul pods. If
## you set maxUnavailable, you should set it to ceil((n/2) - 1), where
## n = Replicas. For example, if you have 5 or 6 Replicas, you'll want to set
## maxUnavailable = 2. If you are using the default of 3 Replicas, you'll want
## to set maxUnavailable to 1.
maxUnavailable: 1
## nodeAffinity settings
# nodeAffinity:
# requiredDuringSchedulingIgnoredDuringExecution:
# nodeSelectorTerms:
# - matchExpressions:
# - key: cloud.google.com/gke-preemptible
# operator: NotIn
# values:
# - true
## Anti-Affinity setting. The default "hard" will use pod anti-affinity that is
## requiredDuringSchedulingIgnoredDuringExecution to ensure 2 services don't
## end up on the same node. Setting this to "soft" will use
## preferredDuringSchedulingIgnoredDuringExecution. If set to anything else,
## no anti-affinity rules will be configured.
antiAffinity: 'soft'
## Create dedicated UI service
##
ui:
  service:
    enabled: true
    type: 'ClusterIP'
    ## Provide any additional annotations which may be required. This can be used to
    ## set the LoadBalancer service type to internal only.
    ## ref: https://kubernetes.io/docs/concepts/services-networking/service/#internal-load-balancer
    ##
    annotations: {}
    # cloud.google.com/load-balancer-type: "Internal"
    # service.beta.kubernetes.io/aws-load-balancer-internal: 0.0.0.0/0
    # service.beta.kubernetes.io/azure-load-balancer-internal: "true"
    loadBalancerIP:
## Configure the ingress resource that allows you to access the
## Consul UI. Set up the URL
## ref: http://kubernetes.io/docs/user-guide/ingress/
##
ingress:
  ## Set to true to enable ingress record generation
  enabled: false
  ## Set this to true in order to add the corresponding annotations for cert-manager
  certManager: false
  ## Ingress annotations done as key:value pairs
  ## For a full list of possible ingress annotations, please see
  ## ref: https://github.com/kubernetes/ingress-nginx/blob/master/docs/annotations.md
  ##
  ## If tls is set to true, annotation ingress.kubernetes.io/secure-backends: "true" will automatically be set
  ## If certManager is set to true, annotation kubernetes.io/tls-acme: "true" will automatically be set
  annotations:
    # kubernetes.io/ingress.class: nginx
  ## The list of hostnames to be covered with this ingress record.
  ## Most likely this will be just one host, but in the event more hosts are needed, this is an array
  hosts:
    - name: consul-ui.local
      path: /
      # Set this to true in order to enable TLS on the ingress record
      tls: false
      ## If TLS is set to true, you must declare what secret will store the key/certificate for TLS
      tlsSecret: consul-ui.local-tls
  secrets:
    ## If you're providing your own certificates, please use this to add the certificates as secrets
    ## key and certificate should start with -----BEGIN CERTIFICATE----- or
    ## -----BEGIN RSA PRIVATE KEY-----
    ##
    ## name should line up with a tlsSecret set further up
    ## If you're using cert-manager, this is unneeded, as it will create the secret for you if it is not set
    ##
    ## It is also possible to create and manage the certificates outside of this helm chart
    ## Please see README.md for more information
    # - name: consul-ui.local-tls
    #   key:
    #   certificate:
## Consul configmap
# configmap: |
#   {
#     "datacenter":"dc2",
#     "domain":"consul",
#     "data_dir":"/opt/bitnami/consul/data",
#     "pid_file":"/opt/bitnami/consul/tmp/consul.pid",
#     "server":true,
#     "ui":false,
#     "bootstrap_expect":3,
#     "addresses": {
#       "http":"0.0.0.0"
#     },
#     "ports": {
#       "http":8500,
#       "dns":8600,
#       "serf_lan":8301,
#       "server":8300
#     },
#     "serf_lan":"0.0.0.0"
#   }
## Pod annotations
## ref: https://kubernetes.io/docs/concepts/overview/working-with-objects/annotations/
##
podAnnotations: {}
metrics:
  enabled: false
  image:
    registry: docker.io
    repository: prom/consul-exporter
    tag: v0.3.0
    pullPolicy: IfNotPresent
    ## Optionally specify an array of imagePullSecrets.
    ## Secrets must be manually created in the namespace.
    ## ref: https://kubernetes.io/docs/tasks/configure-pod-container/pull-image-private-registry/
    ##
    # pullSecrets:
    #   - myRegistryKeySecretName
  resources: {}
  podAnnotations:
    prometheus.io/scrape: 'true'
    prometheus.io/port: '9107'
## Node selector
## ref: https://kubernetes.io/docs/concepts/configuration/assign-pod-node/#nodeselector
nodeSelector: {'beta.kubernetes.io/arch': 'amd64'}
## Configure extra options for liveness and readiness probes
## ref: https://kubernetes.io/docs/tasks/configure-pod-container/configure-liveness-readiness-probes/#configure-probes
livenessProbe:
  enabled: true
  initialDelaySeconds: 30
  periodSeconds: 10
  timeoutSeconds: 5
  failureThreshold: 6
  successThreshold: 1
readinessProbe:
  enabled: true
  initialDelaySeconds: 5
  periodSeconds: 10
  timeoutSeconds: 5
  failureThreshold: 6
  successThreshold: 1
HashiCorp Consul Helm Chart
HashiCorp Consul has multiple components, but as a whole, it is a tool for discovering and configuring services in your infrastructure.
TL;DR
$ helm repo add bitnami https://charts.bitnami.com/incubator
$ helm install bitnami/consul
Introduction
This chart bootstraps a HashiCorp Consul deployment on a Kubernetes cluster using the Helm package manager.
Bitnami charts can be used with Kubeapps for deployment and management of Helm Charts in clusters. This Helm chart has been tested on top of Bitnami Kubernetes Production Runtime (BKPR). Deploy BKPR to get automated TLS certificates, logging and monitoring for your applications.
Prerequisites
- Kubernetes 1.4+ with Beta APIs enabled
- PV provisioner support in the underlying infrastructure
Installing the Chart
To install the chart with the release name my-release:
$ helm install --name my-release bitnami/consul
The command deploys HashiCorp Consul on the Kubernetes cluster in the default configuration. The configuration section lists the parameters that can be configured during installation.
Tip: List all releases using helm list
Uninstalling the Chart
To uninstall/delete the my-release deployment:
$ helm delete my-release
The command removes all the Kubernetes components associated with the chart and deletes the release. To completely remove the release from the store as well, use the --purge option:
$ helm delete --purge my-release
Configuration
The following table lists the configurable parameters of the HashiCorp Consul chart and their default values.
Parameter | Description | Default |
---|---|---|
global.imageRegistry | Global Docker image registry | nil |
global.imagePullSecrets | Global Docker registry secret names as an array | [] (does not add image pull secrets to deployed pods) |
image.registry | HashiCorp Consul image registry | docker.io |
image.repository | HashiCorp Consul image name | bitnami/consul |
image.tag | HashiCorp Consul image tag | {VERSION} |
image.pullPolicy | Image pull policy | Always |
image.pullSecrets | Specify docker-registry secret names as an array | [] (does not add image pull secrets to deployed pods) |
replicas | Number of replicas | 3 |
service.port | HashiCorp Consul http listening port | 8500 |
service.rpcPort | HashiCorp Consul rpc listening port | 8400 |
service.serflanPort | Container serf lan listening port | 8301 |
service.serverPort | Container server listening port | 8300 |
service.consulDnsPort | Container dns listening port | 8600 |
service.uiPort | HashiCorp Consul UI port | 80 |
datacenterName | HashiCorp Consul datacenter name | dc1 |
gossipKey | Gossip key for all members | nil |
domain | HashiCorp Consul domain | consul |
clientAddress | Address in which HashiCorp Consul will bind client interfaces | 0.0.0.0 |
serflanAddress | Address used for Serf LAN communications | 0.0.0.0 |
raftMultiplier | Multiplier used to scale key Raft timing parameters | 1 |
securityContext.enabled | Enable security context | true |
securityContext.fsGroup | Group ID for the container | 1001 |
securityContext.runAsUser | User ID for the container | 1001 |
updateStrategy.type | Statefulset update strategy policy | RollingUpdate |
persistence.enabled | Use a PVC to persist data | true |
persistence.storageClass | Storage class of backing PVC | nil (uses alpha storage class annotation) |
persistence.accessModes | Use volume as ReadOnly or ReadWrite | [ReadWriteOnce] |
persistence.size | Size of data volume | 8Gi |
persistence.annotations | Annotations for the persistent volume | nil |
resources | Container resource requests and limits | {} |
maxUnavailable | Pod disruption Budget maxUnavailable | 1 |
nodeAffinity | HashiCorp Consul pod node-affinity setting | nil |
antiAffinity | HashiCorp Consul pod anti-affinity setting | soft |
ui.service.enabled | Use a service to access HashiCorp Consul Ui | true |
ui.service.type | Kubernetes Service Type | ClusterIP |
ui.service.annotations | Annotations for HashiCorp Consul UI service | {} |
ui.service.loadBalancerIP | IP if HashiCorp Consul UI service type is LoadBalancer | nil |
ingress.enabled | Enable ingress controller resource | false |
ingress.certManager | Add annotations for cert-manager | false |
ingress.annotations | Ingress annotations | [] |
ingress.hosts[0].name | Hostname to your HashiCorp Consul installation | consul-ui.local |
ingress.hosts[0].path | Path within the url structure | / |
ingress.hosts[0].tls | Utilize TLS backend in ingress | false |
ingress.hosts[0].tlsSecret | TLS Secret (certificates) | consul-ui.local-tls |
ingress.secrets[0].name | TLS Secret Name | nil |
ingress.secrets[0].certificate | TLS Secret Certificate | nil |
ingress.secrets[0].key | TLS Secret Key | nil |
configmap | HashiCorp Consul configuration to be injected as ConfigMap | nil |
metrics.enabled | Start a side-car prometheus exporter | false |
metrics.image | Exporter image | prom/consul-exporter |
metrics.imageTag | Exporter image tag | v0.3.0 |
metrics.imagePullPolicy | Exporter image pull policy | IfNotPresent |
metrics.resources | Exporter resource requests/limit | {} |
metrics.podAnnotations | Exporter annotations | {} |
nodeSelector | Node labels for pod assignment | {} |
livenessProbe.initialDelaySeconds | Delay before liveness probe is initiated | 30 |
livenessProbe.periodSeconds | How often to perform the probe | 10 |
livenessProbe.timeoutSeconds | When the probe times out | 5 |
livenessProbe.successThreshold | Minimum consecutive successes for the probe to be considered successful after having failed. | 1 |
livenessProbe.failureThreshold | Minimum consecutive failures for the probe to be considered failed after having succeeded. | 6 |
podAnnotations | Pod annotations | {} |
readinessProbe.initialDelaySeconds | Delay before readiness probe is initiated | 5 |
readinessProbe.periodSeconds | How often to perform the probe | 10 |
readinessProbe.timeoutSeconds | When the probe times out | 5 |
readinessProbe.successThreshold | Minimum consecutive successes for the probe to be considered successful after having failed. | 1 |
readinessProbe.failureThreshold | Minimum consecutive failures for the probe to be considered failed after having succeeded. | 6 |
Specify each parameter using the --set key=value[,key=value] argument to helm install. For example,
$ helm install --name my-release --set domain=consul-domain,gossipKey=secretkey bitnami/consul
The above command sets the HashiCorp Consul domain to consul-domain and the gossip key to secretkey.
Alternatively, a YAML file that specifies the values for the above parameters can be provided while installing the chart. For example,
$ helm install --name my-release -f values.yaml bitnami/consul
Tip: You can use the default values.yaml
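As a reference, a minimal custom values file could look like the sketch below; the replica count, domain and LoadBalancer UI service are illustrative choices (not chart defaults), the gossip key reuses the sample value from values.yaml, and maxUnavailable follows the ceil((n/2) - 1) guidance documented above:
replicas: 5
maxUnavailable: 2
domain: consul-domain
gossipKey: 887Syd/BOvbtvRAKviazMg==
ui:
  service:
    type: LoadBalancer
Such a file is then passed at install time:
$ helm install --name my-release -f custom-values.yaml bitnami/consul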
Persistence
The Bitnami HashiCorp Consul image stores the HashiCorp Consul data at the /bitnami path of the container.
Persistent Volume Claims are used to keep the data across deployments. This is known to work in GCE, AWS, and minikube. See the Configuration section to configure the PVC or to disable persistence.
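For example, the data volume size and storage class can be overridden at install time, or persistence can be disabled entirely; this is a sketch, and the storage class name standard is a placeholder for one available in your cluster:
$ helm install --name my-release --set persistence.size=20Gi,persistence.storageClass=standard bitnami/consul
$ helm install --name my-release --set persistence.enabled=false bitnami/consul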
Ingress
This chart provides support for ingress resources. If you have an ingress controller installed on your cluster, such as nginx-ingress or traefik you can utilize the ingress controller to service your HashiCorp Consul UI application.
To enable ingress integration, please set ingress.enabled to true.
Hosts
Most likely you will only want to have one hostname that maps to this HashiCorp Consul installation; however, it is possible to have more than one host. To facilitate this, the ingress.hosts object is an array. For each item, please indicate a name, tls, tlsSecret, and any annotations that you may want the ingress controller to know about.
Indicating TLS will cause HashiCorp Consul to generate HTTPS URLs, and HashiCorp Consul will be reachable on port 443. The actual secret that tlsSecret references does not have to be generated by this chart. However, please note that if TLS is enabled, the ingress record will not work until this secret exists.
For annotations, please see this document. Not all annotations are supported by all ingress controllers, but this document does a good job of indicating which annotation is supported by many popular ingress controllers.
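Putting this together, an ingress definition for a single host might look like the following sketch in a custom values file; the consul-ui.local hostname, matching tlsSecret name and nginx ingress class are illustrative and should be adapted to your environment:
ingress:
  enabled: true
  annotations:
    kubernetes.io/ingress.class: nginx
  hosts:
    - name: consul-ui.local
      path: /
      tls: true
      tlsSecret: consul-ui.local-tls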
TLS Secrets
This chart will facilitate the creation of TLS secrets for use with the ingress controller; however, this is not required. There are three common use cases:
- helm generates / manages certificate secrets
- user generates / manages certificates separately
- an additional tool (like kube-lego) manages the secrets for the application
In the first two cases, one will need a certificate and a key. We would expect them to look like this:
- certificate files should look like (and there can be more than one certificate if there is a certificate chain)
-----BEGIN CERTIFICATE-----
MIID6TCCAtGgAwIBAgIJAIaCwivkeB5EMA0GCSqGSIb3DQEBCwUAMFYxCzAJBgNV
...
jScrvkiBO65F46KioCL9h5tDvomdU1aqpI/CBzhvZn1c0ZTf87tGQR8NK7v7
-----END CERTIFICATE-----
- keys should look like:
-----BEGIN RSA PRIVATE KEY-----
MIIEogIBAAKCAQEAvLYcyu8f3skuRyUgeeNpeDvYBCDcgq+LsWap6zbX5f8oLqp4
...
wrj2wDbCDCFmfqnSJ+dKI3vFLlEz44sAV8jX/kd4Y6ZTQhlLbYc=
-----END RSA PRIVATE KEY-----
If you are going to use helm to manage the certificates, please copy these values into the certificate and key values for a given ingress.secrets entry.
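For instance, an ingress.secrets entry holding such a pair would look roughly like the sketch below (certificate and key bodies truncated; the secret name should match the tlsSecret configured for the host):
ingress:
  secrets:
    - name: consul-ui.local-tls
      certificate: |-
        -----BEGIN CERTIFICATE-----
        ...
        -----END CERTIFICATE-----
      key: |-
        -----BEGIN RSA PRIVATE KEY-----
        ...
        -----END RSA PRIVATE KEY-----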
If you are going to manage TLS secrets outside of helm, please know that you can create a TLS secret by doing the following:
kubectl create secret tls consul.local-tls --key /path/to/key.key --cert /path/to/cert.crt
Please see this example for more information.
Enable TLS encryption between servers
You must manually create a secret containing your PEM-encoded certificate authority, your PEM-encoded certificate, and your PEM-encoded private key.
kubectl create secret generic consul-tls-encryption \
--from-file=ca.pem \
--from-file=consul.pem \
--from-file=consul-key.pem
Take into account that you will need to create a config map with the proper configuration.
If the secret is specified, the chart will locate those files at /opt/bitnami/consul/certs/, so you will want to use the snippet below to configure HashiCorp Consul TLS encryption in your config map:
"ca_file": "/opt/bitnami/consul/certs/ca.pem",
"cert_file": "/opt/bitnami/consul/certs/consul.pem",
"key_file": "/opt/bitnami/consul/certs/consul-key.pem",
"verify_incoming": true,
"verify_outgoing": true,
"verify_server_hostname": true,
After creating the secret, you can install the helm chart specifying the secret name:
helm install bitnami/consul --set tlsEncryptionSecretName=consul-tls-encryption
Metrics
The chart can optionally start a metrics exporter endpoint on port 9107 for Prometheus. The data exposed by the endpoint is intended to be consumed by a Prometheus chart deployed within the cluster; as such, the endpoint is not exposed outside the cluster.
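For example, the exporter side-car can be enabled at install time; the pods then carry the prometheus.io/scrape and prometheus.io/port: '9107' annotations defined in values.yaml so an in-cluster Prometheus can discover them:
$ helm install --name my-release --set metrics.enabled=true bitnami/consul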
Upgrading
To 3.1.0
The Consul container was moved to a non-root approach. There shouldn't be any issue when upgrading since the corresponding securityContext is enabled by default. Both the container image and the chart can be upgraded by running the command below:
$ helm upgrade my-release stable/consul
If you use a previous container image (previous to 1.4.0-r16), disable the securityContext by running the command below:
$ helm upgrade my-release stable/consul --set securityContext.enabled=false,image.tag=XXX
To 2.0.0
Backwards compatibility is not guaranteed unless you modify the labels used on the chart's deployments. Use the workaround below to upgrade from versions previous to 2.0.0. The following example assumes that the release name is consul:
$ kubectl delete statefulset consul --cascade=false
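After the old statefulset has been deleted (its pods are left running because of --cascade=false), the release can be upgraded as usual, for example:
$ helm upgrade my-release bitnami/consul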