@helm-charts/bitnami-consul v4.2.4-0.1.0 • Published 5 years ago

Weekly downloads: 1 • License: MIT • Repository: - • Last release: 5 years ago

Highly available and distributed service discovery and key-value store designed with support for the modern data center to make distributed systems and configuration easy.

Field | Value
Repository Name | bitnami
Chart Name | consul
Chart Version | 4.2.4
NPM Package Version | 0.1.0

Default values (values.yaml):
## Global Docker image parameters
## Please note that this will override the image parameters, including those of dependencies, configured to use the global value
## Current available global Docker image parameters: imageRegistry and imagePullSecrets
##
# global:
#   imageRegistry: myRegistryName
#   imagePullSecrets:
#     - myRegistryKeySecretName

## Bitnami HashiCorp Consul image version
## ref: https://hub.docker.com/r/bitnami/consul/tags/
##
image:
  registry: docker.io
  repository: bitnami/consul
  tag: 1.4.4
  ## Specify an imagePullPolicy
  ## Defaults to 'Always' if image tag is 'latest', else set to 'IfNotPresent'
  ## ref: http://kubernetes.io/docs/user-guide/images/#pre-pulling-images
  ##
  pullPolicy: Always
  ## Optionally specify an array of imagePullSecrets.
  ## Secrets must be manually created in the namespace.
  ## ref: https://kubernetes.io/docs/tasks/configure-pod-container/pull-image-private-registry/
  ##
  # pullSecrets:
  #   - myRegistryKeySecretName

## Consul replicas
replicas: 3

service:
  ## Consul service ports
  port: 8500
  rpcPort: 8400
  serflanPort: 8301
  serverPort: 8300
  consulDnsPort: 8600
  uiPort: 80

## Pod Security Context
## ref: https://kubernetes.io/docs/tasks/configure-pod-container/security-context/
##
securityContext:
  enabled: true
  fsGroup: 1001
  runAsUser: 1001

## Datacenter name for consul. If not supplied, will use the consul
## default 'dc1'
datacenterName: dc1

## Predefined value for gossip key.
## The key must be 16 bytes and can be generated with $(consul keygen)
# gossipKey: 887Syd/BOvbtvRAKviazMg==
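## A hedged example (not part of the chart defaults): instead of hard-coding the
## key here, you could generate one locally and pass it at install time:
##   $ helm install bitnami/consul --set gossipKey="$(consul keygen)"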

## Use TLS to verify the authenticity of servers and clients.
## Check README for more information.
# tlsEncryptionSecretName: your-already-created-secret

## Extra configuration that will be added to the default one.
#localConfig: |-
#  {
#    "key": "value"
#  }

## Consul domain name.
domain: consul

## Consul raft multiplier.
raftMultiplier: '1'

## updateStrategy for Consul statefulset
## ref: https://kubernetes.io/docs/concepts/workloads/controllers/statefulset/#update-strategies
updateStrategy:
  type: RollingUpdate

## Consul data Persistent Volume Storage Class
## If defined, storageClassName: <storageClass>
## If set to "-", storageClassName: "", which disables dynamic provisioning
## If undefined (the default) or set to null, no storageClassName spec is
##   set, choosing the default provisioner.  (gp2 on AWS, standard on
##   GKE, AWS & OpenStack)
##
persistence:
  enabled: true
  # storageClass: "-"
  accessModes:
    - ReadWriteOnce
  size: 8Gi
  annotations: {}

resources: {}
#  requests:
#    memory: 256Mi
#    cpu: 100m

## Setting maxUnavailable will create a pod disruption budget that will prevent
## voluntary cluster administration from taking down too many consul pods. If
## you set maxUnavailable, you should set it to ceil((n/2) - 1), where
## n = Replicas. For example, if you have 5 or 6 Replicas, you'll want to set
## maxUnavailable = 2. If you are using the default of 3 Replicas, you'll want
## to set maxUnavailable to 1.
maxUnavailable: 1

## nodeAffinity settings
# nodeAffinity:
#   requiredDuringSchedulingIgnoredDuringExecution:
#     nodeSelectorTerms:
#     - matchExpressions:
#       - key: cloud.google.com/gke-preemptible
#         operator: NotIn
#         values:
#         - true

## Anti-Affinity setting. The default "hard" will use pod anti-affinity that is
## requiredDuringSchedulingIgnoredDuringExecution to ensure 2 services don't
## end up on the same node. Setting this to "soft" will use
## preferredDuringSchedulingIgnoredDuringExecution. If set to anything else,
## no anti-affinity rules will be configured.
antiAffinity: 'soft'

## Create dedicated UI service
##
ui:
  service:
    enabled: true
    type: 'ClusterIP'
    ## Provide any additional annotations which may be required. This can be used to
    ## set the LoadBalancer service type to internal only.
    ## ref: https://kubernetes.io/docs/concepts/services-networking/service/#internal-load-balancer
    ##
    annotations: {}
    #  cloud.google.com/load-balancer-type: "Internal"
    #  service.beta.kubernetes.io/aws-load-balancer-internal: 0.0.0.0/0
    #  service.beta.kubernetes.io/azure-load-balancer-internal: "true"
    loadBalancerIP:

## Configure the ingress resource that allows you to access the
## Consul UI. Set up the URL
## ref: http://kubernetes.io/docs/user-guide/ingress/
##
ingress:
  ## Set to true to enable ingress record generation
  enabled: false

  ## Set this to true in order to add the corresponding annotations for cert-manager
  certManager: false

  ## Ingress annotations done as key:value pairs
  ## For a full list of possible ingress annotations, please see
  ## ref: https://github.com/kubernetes/ingress-nginx/blob/master/docs/annotations.md
  ##
  ## If tls is set to true, annotation ingress.kubernetes.io/secure-backends: "true" will automatically be set
  ## If certManager is set to true, annotation kubernetes.io/tls-acme: "true" will automatically be set
  annotations:
  #  kubernetes.io/ingress.class: nginx

  ## The list of hostnames to be covered with this ingress record.
  ## Most likely this will be just one host, but in the event more hosts are needed, this is an array
  hosts:
    - name: consul-ui.local
      path: /

      # Set this to true in order to enable TLS on the ingress record
      tls: false

      ## If TLS is set to true, you must declare what secret will store the key/certificate for TLS
      tlsSecret: consul-ui.local-tls

  secrets:
  ## If you're providing your own certificates, please use this to add the certificates as secrets
  ## key and certificate should start with -----BEGIN CERTIFICATE----- or
  ## -----BEGIN RSA PRIVATE KEY-----
  ##
  ## name should line up with a tlsSecret set further up
  ## If you're using cert-manager, this is unneeded, as it will create the secret for you if it is not set
  ##
  ## It is also possible to create and manage the certificates outside of this helm chart
  ## Please see README.md for more information
  # - name: consul-ui.local-tls
  #   key:
  #   certificate:

## Consul configmap
# configmap: |
#    {
#    "datacenter":"dc2",
#    "domain":"consul",
#    "data_dir":"/opt/bitnami/consul/data",
#    "pid_file":"/opt/bitnami/consul/tmp/consul.pid",
#    "server":true,
#    "ui":false,
#    "bootstrap_expect":3,
#    "addresses": {
#        "http":"0.0.0.0"
#    },
#    "ports": {
#        "http":8500,
#        "dns":8600,
#        "serf_lan":8301,
#        "server":8300
#    },
#    "serf_lan":"0.0.0.0"
#    }

## Pod annotations
## ref: https://kubernetes.io/docs/concepts/overview/working-with-objects/annotations/
##
podAnnotations: {}

metrics:
  enabled: false
  image:
    registry: docker.io
    repository: prom/consul-exporter
    tag: v0.3.0
    pullPolicy: IfNotPresent
    ## Optionally specify an array of imagePullSecrets.
    ## Secrets must be manually created in the namespace.
    ## ref: https://kubernetes.io/docs/tasks/configure-pod-container/pull-image-private-registry/
    ##
    # pullSecrets:
    #   - myRegistryKeySecretName
  resources: {}
  podAnnotations:
    prometheus.io/scrape: 'true'
    prometheus.io/port: '9107'

## Node selector
## ref: https://kubernetes.io/docs/concepts/configuration/assign-pod-node/#nodeselector
nodeSelector: {'beta.kubernetes.io/arch': 'amd64'}

## Configure extra options for liveness and readiness probes
## ref: https://kubernetes.io/docs/tasks/configure-pod-container/configure-liveness-readiness-probes/#configure-probes
livenessProbe:
  enabled: true
  initialDelaySeconds: 30
  periodSeconds: 10
  timeoutSeconds: 5
  failureThreshold: 6
  successThreshold: 1

readinessProbe:
  enabled: true
  initialDelaySeconds: 5
  periodSeconds: 10
  timeoutSeconds: 5
  failureThreshold: 6
  successThreshold: 1

HashiCorp Consul Helm Chart

HashiCorp Consul has multiple components, but as a whole, it is a tool for discovering and configuring services in your infrastructure.

TL;DR

$ helm repo add bitnami https://charts.bitnami.com/incubator
$ helm install bitnami/consul

Introduction

This chart bootstraps a HashiCorp Consul deployment on a Kubernetes cluster using the Helm package manager.

Bitnami charts can be used with Kubeapps for deployment and management of Helm Charts in clusters. This Helm chart has been tested on top of Bitnami Kubernetes Production Runtime (BKPR). Deploy BKPR to get automated TLS certificates, logging and monitoring for your applications.

Prerequisites

  • Kubernetes 1.4+ with Beta APIs enabled
  • PV provisioner support in the underlying infrastructure

Installing the Chart

To install the chart with the release name my-release:

$ helm install --name my-release bitnami/consul

The command deploys HashiCorp Consul on the Kubernetes cluster in the default configuration. The configuration section lists the parameters that can be configured during installation.

Tip: List all releases using helm list

Uninstalling the Chart

To uninstall/delete the my-release deployment:

$ helm delete my-release

The command removes all the Kubernetes components associated with the chart and deletes the release. To also remove the release metadata, use the --purge option:

$ helm delete --purge my-release

Configuration

The following table lists the configurable parameters of the HashiCorp Consul chart and their default values.

Parameter | Description | Default
global.imageRegistry | Global Docker image registry | nil
global.imagePullSecrets | Global Docker registry secret names as an array | [] (does not add image pull secrets to deployed pods)
image.registry | HashiCorp Consul image registry | docker.io
image.repository | HashiCorp Consul image name | bitnami/consul
image.tag | HashiCorp Consul image tag | {VERSION}
image.pullPolicy | Image pull policy | Always
image.pullSecrets | Specify docker-registry secret names as an array | [] (does not add image pull secrets to deployed pods)
replicas | Number of replicas | 3
service.port | HashiCorp Consul HTTP listening port | 8500
service.rpcPort | HashiCorp Consul RPC listening port | 8400
service.serflanPort | Container Serf LAN listening port | 8301
service.serverPort | Container server listening port | 8300
service.consulDnsPort | Container DNS listening port | 8600
service.uiPort | HashiCorp Consul UI port | 80
datacenterName | HashiCorp Consul datacenter name | dc1
gossipKey | Gossip key for all members | nil
domain | HashiCorp Consul domain | consul
clientAddress | Address to which HashiCorp Consul will bind client interfaces | 0.0.0.0
serflanAddress | Address used for Serf LAN communications | 0.0.0.0
raftMultiplier | Multiplier used to scale key Raft timing parameters | 1
securityContext.enabled | Enable security context | true
securityContext.fsGroup | Group ID for the container | 1001
securityContext.runAsUser | User ID for the container | 1001
updateStrategy.type | StatefulSet update strategy policy | RollingUpdate
persistence.enabled | Use a PVC to persist data | true
persistence.storageClass | Storage class of backing PVC | nil (uses alpha storage class annotation)
persistence.accessModes | Use volume as ReadOnly or ReadWrite | [ReadWriteOnce]
persistence.size | Size of data volume | 8Gi
persistence.annotations | Annotations for the persistent volume | {}
resources | Container resource requests and limits | {}
maxUnavailable | Pod Disruption Budget maxUnavailable | 1
nodeAffinity | HashiCorp Consul pod node-affinity setting | nil
antiAffinity | HashiCorp Consul pod anti-affinity setting | soft
ui.service.enabled | Use a service to access the HashiCorp Consul UI | true
ui.service.type | Kubernetes Service type | ClusterIP
ui.service.annotations | Annotations for the HashiCorp Consul UI service | {}
ui.service.loadBalancerIP | IP to use if the HashiCorp Consul UI service type is LoadBalancer | nil
ingress.enabled | Enable ingress controller resource | false
ingress.certManager | Add annotations for cert-manager | false
ingress.annotations | Ingress annotations | {}
ingress.hosts[0].name | Hostname for your HashiCorp Consul installation | consul-ui.local
ingress.hosts[0].path | Path within the URL structure | /
ingress.hosts[0].tls | Utilize TLS backend in ingress | false
ingress.hosts[0].tlsSecret | TLS secret (certificates) | consul-ui.local-tls
ingress.secrets[0].name | TLS secret name | nil
ingress.secrets[0].certificate | TLS secret certificate | nil
ingress.secrets[0].key | TLS secret key | nil
configmap | HashiCorp Consul configuration to be injected as a ConfigMap | nil
metrics.enabled | Start a side-car Prometheus exporter | false
metrics.image.repository | Exporter image name | prom/consul-exporter
metrics.image.tag | Exporter image tag | v0.3.0
metrics.image.pullPolicy | Exporter image pull policy | IfNotPresent
metrics.resources | Exporter resource requests/limits | {}
metrics.podAnnotations | Exporter pod annotations | {prometheus.io/scrape: "true", prometheus.io/port: "9107"}
nodeSelector | Node labels for pod assignment | {beta.kubernetes.io/arch: amd64}
livenessProbe.initialDelaySeconds | Delay before liveness probe is initiated | 30
livenessProbe.periodSeconds | How often to perform the probe | 10
livenessProbe.timeoutSeconds | When the probe times out | 5
livenessProbe.successThreshold | Minimum consecutive successes for the probe to be considered successful after having failed | 1
livenessProbe.failureThreshold | Minimum consecutive failures for the probe to be considered failed after having succeeded | 6
podAnnotations | Pod annotations | {}
readinessProbe.initialDelaySeconds | Delay before readiness probe is initiated | 5
readinessProbe.periodSeconds | How often to perform the probe | 10
readinessProbe.timeoutSeconds | When the probe times out | 5
readinessProbe.successThreshold | Minimum consecutive successes for the probe to be considered successful after having failed | 1
readinessProbe.failureThreshold | Minimum consecutive failures for the probe to be considered failed after having succeeded | 6

Specify each parameter using the --set key=value[,key=value] argument to helm install. For example,

$ helm install --name my-release --set domain=consul-domain,gossipKey=secretkey bitnami/consul

The above command sets the HashiCorp Consul domain to consul-domain and sets the gossip key to secretkey.

Alternatively, a YAML file that specifies the values for the above parameters can be provided while installing the chart. For example,

$ helm install --name my-release -f values.yaml bitnami/consul

Tip: You can use the default values.yaml
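For reference, a minimal custom values.yaml might look like the following sketch (the specific values are illustrative, not recommendations; every key shown exists in the default values.yaml above):

replicas: 5
datacenterName: dc2
persistence:
  size: 20Gi
ui:
  service:
    type: LoadBalancer

Values supplied this way are merged over the chart defaults, so only the keys you override need to be listed.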

Persistence

The Bitnami HashiCorp Consul image stores the HashiCorp Consul data at the /bitnami path of the container.

Persistent Volume Claims are used to keep the data across deployments. This is known to work in GCE, AWS, and minikube. See the Configuration section to configure the PVC or to disable persistence.
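For example, a hedged values.yaml fragment that keeps persistence enabled but selects a specific storage class and a larger volume could look like this (it assumes a storage class named "standard" exists in your cluster):

persistence:
  enabled: true
  storageClass: standard
  size: 20Gi

Setting persistence.enabled to false disables the PVC entirely, in which case data is lost whenever a pod is rescheduled.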

Ingress

This chart provides support for ingress resources. If you have an ingress controller installed on your cluster, such as nginx-ingress or Traefik, you can use it to serve your HashiCorp Consul UI application.

To enable ingress integration, please set ingress.enabled to true.
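For example, a hedged one-liner that enables the ingress record with a custom hostname (the hostname is illustrative):

$ helm install --name my-release bitnami/consul \
    --set ingress.enabled=true,ingress.hosts[0].name=consul-ui.example.com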

Hosts

Most likely you will only want one hostname that maps to this HashiCorp Consul installation; however, it is possible to have more than one host. To facilitate this, the ingress.hosts object is an array.

For each item, please indicate a name, tls, tlsSecret, and any annotations that you may want the ingress controller to know about.

Enabling TLS will cause HashiCorp Consul to generate HTTPS URLs, and the application will be reachable on port 443. The actual secret that tlsSecret references does not have to be generated by this chart. However, please note that if TLS is enabled, the ingress record will not work until this secret exists.

For annotations, please see this document. Not all annotations are supported by all ingress controllers, but it does a good job of indicating which annotations are supported by many popular ingress controllers.
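Putting these pieces together, a hedged values.yaml fragment for a single TLS-enabled host might look like this (the hostname matches the chart default and the annotation is just one common example):

ingress:
  enabled: true
  annotations:
    kubernetes.io/ingress.class: nginx
  hosts:
    - name: consul-ui.local
      path: /
      tls: true
      tlsSecret: consul-ui.local-tls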

TLS Secrets

This chart facilitates the creation of TLS secrets for use with the ingress controller; however, this is not required. There are three common use cases:

  • helm generates / manages certificate secrets
  • user generates / manages certificates separately
  • an additional tool (like kube-lego) manages the secrets for the application

In the first two cases, one will need a certificate and a key. We would expect them to look like this:

  • certificate files should look like (and there can be more than one certificate if there is a certificate chain)
-----BEGIN CERTIFICATE-----
MIID6TCCAtGgAwIBAgIJAIaCwivkeB5EMA0GCSqGSIb3DQEBCwUAMFYxCzAJBgNV
...
jScrvkiBO65F46KioCL9h5tDvomdU1aqpI/CBzhvZn1c0ZTf87tGQR8NK7v7
-----END CERTIFICATE-----
  • keys should look like:
-----BEGIN RSA PRIVATE KEY-----
MIIEogIBAAKCAQEAvLYcyu8f3skuRyUgeeNpeDvYBCDcgq+LsWap6zbX5f8oLqp4
...
wrj2wDbCDCFmfqnSJ+dKI3vFLlEz44sAV8jX/kd4Y6ZTQhlLbYc=
-----END RSA PRIVATE KEY-----

If you are going to use helm to manage the certificates, please copy these values into the certificate and key values for a given ingress.secrets entry.
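As a sketch, an ingress.secrets entry matching the tlsSecret above could look like the following; the certificate and key bodies are placeholders to be replaced with your own PEM data:

ingress:
  secrets:
    - name: consul-ui.local-tls
      certificate: |
        -----BEGIN CERTIFICATE-----
        ...
        -----END CERTIFICATE-----
      key: |
        -----BEGIN RSA PRIVATE KEY-----
        ...
        -----END RSA PRIVATE KEY-----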

If you are going to manage TLS secrets outside of helm, please know that you can create a TLS secret by doing the following:

kubectl create secret tls consul.local-tls --key /path/to/key.key --cert /path/to/cert.crt

Please see this example for more information.

Enable TLS encryption between servers

You must manually create a secret containing your PEM-encoded certificate authority, your PEM-encoded certificate, and your PEM-encoded private key.

kubectl create secret generic consul-tls-encryption \
  --from-file=ca.pem \
  --from-file=consul.pem \
  --from-file=consul-key.pem

Note that you will also need to create a ConfigMap with the proper configuration.

If the secret is specified, the chart will locate those files at /opt/bitnami/consul/certs/, so you will want to use the below snippet to configure HashiCorp Consul TLS encryption in your config map:

  "ca_file": "/opt/bitnami/consul/certs/ca.pem",
  "cert_file": "/opt/bitnami/consul/certs/consul.pem",
  "key_file": "/opt/bitnami/consul/certs/consul-key.pem",
  "verify_incoming": true,
  "verify_outgoing": true,
  "verify_server_hostname": true,

After creating the secret, you can install the Helm chart specifying the secret name:

helm install bitnami/consul --set tlsEncryptionSecretName=consul-tls-encryption

Metrics

The chart can optionally start a metrics exporter endpoint on port 9107 for Prometheus. The data exposed by the endpoint is intended to be consumed by a Prometheus chart deployed within the cluster, and as such the endpoint is not exposed outside the cluster.
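For example, a hedged install command that enables the exporter side-car:

$ helm install --name my-release bitnami/consul --set metrics.enabled=true

A Prometheus server running in the cluster can then discover the pods through the prometheus.io/scrape and prometheus.io/port annotations set in metrics.podAnnotations.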

Upgrading

To 3.1.0

The Consul container was moved to a non-root approach. There shouldn't be any issue when upgrading since the corresponding securityContext is enabled by default. Both the container image and the chart can be upgraded by running the command below:

$ helm upgrade my-release bitnami/consul

If you use an older container image (prior to 1.4.0-r16), disable the securityContext by running the command below:

$ helm upgrade my-release bitnami/consul --set securityContext.enabled=false,image.tag=XXX

To 2.0.0

Backwards compatibility is not guaranteed unless you modify the labels used on the chart's deployments. Use the workaround below to upgrade from versions prior to 2.0.0. The following example assumes that the release name is consul:

$ kubectl delete statefulset consul --cascade=false

Version history

4.2.4-0.1.0 (5 years ago)
4.2.3-0.1.0 (5 years ago)
4.2.2-0.1.0 (5 years ago)
4.2.1-0.1.0 (5 years ago)
4.2.0-0.1.0 (5 years ago)
4.1.0-0.1.0 (5 years ago)
4.0.6-0.1.0 (5 years ago)
4.0.5-0.1.0 (5 years ago)
4.0.4-0.1.0 (5 years ago)
4.0.3-0.1.0 (5 years ago)
4.0.2-0.1.0 (5 years ago)
4.0.1-0.1.0 (5 years ago)
4.0.0-0.1.0 (5 years ago)
3.1.2-0.1.0 (5 years ago)
3.1.1-0.1.0 (5 years ago)
3.1.0-0.1.0 (5 years ago)
3.0.0-0.1.0 (5 years ago)
2.4.3-0.1.0 (5 years ago)
2.4.2-0.1.0 (5 years ago)
2.4.1-0.1.0 (5 years ago)
2.4.0-0.1.0 (5 years ago)
2.3.0-0.1.0 (5 years ago)
2.2.0-0.1.0 (5 years ago)
2.1.1-0.1.0 (5 years ago)
2.1.0-0.1.0 (5 years ago)
2.0.4-0.1.0 (5 years ago)
2.0.3-0.1.0 (5 years ago)
2.0.2-0.1.0 (5 years ago)
2.0.1-0.1.0 (5 years ago)
2.0.0-0.1.0 (5 years ago)
1.1.0-0.1.0 (5 years ago)
1.0.1-0.1.0 (5 years ago)
1.0.0-0.1.0 (5 years ago)
0.0.2-0.1.0 (5 years ago)
0.0.1-0.1.0 (5 years ago)