@helm-charts/elastic-elasticsearch v7.0.0-alpha1-0.1.0
@helm-charts/elastic-elasticsearch
Elasticsearch
Field | Value |
---|---|
Repository Name | elastic |
Chart Name | elasticsearch |
Chart Version | 7.0.0-alpha1 |
NPM Package Version | 0.1.0 |
---
```yaml
clusterName: 'elasticsearch'
nodeGroup: 'master'

# The service that non master groups will try to connect to when joining the cluster
# This should be set to clusterName + "-" + nodeGroup for your master group
masterService: ''

# Elasticsearch roles that will be applied to this nodeGroup
# These will be set as environment variables. E.g. node.master=true
roles:
  master: 'true'
  ingest: 'true'
  data: 'true'

replicas: 3
minimumMasterNodes: 2

esMajorVersion: 7

# Allows you to add any config files in /usr/share/elasticsearch/config/
# such as elasticsearch.yml and log4j2.properties
esConfig: {}
#  elasticsearch.yml: |
#    key:
#      nestedkey: value
#  log4j2.properties: |
#    key = value

# Extra environment variables to append to this nodeGroup
# This will be appended to the current 'env:' key. You can use any of the kubernetes env
# syntax here
extraEnvs: []
#  - name: MY_ENVIRONMENT_VAR
#    value: the_value_goes_here

# A list of secrets and their paths to mount inside the pod
# This is useful for mounting certificates for security and for mounting
# the X-Pack license
secretMounts: []
#  - name: elastic-certificates
#    secretName: elastic-certificates
#    path: /usr/share/elasticsearch/config/certs

image: 'docker.elastic.co/elasticsearch/elasticsearch'
imageTag: '7.0.0'
imagePullPolicy: 'IfNotPresent'

podAnnotations: {}
#  iam.amazonaws.com/role: es-cluster

esJavaOpts: '-Xmx1g -Xms1g'

resources:
  requests:
    cpu: '100m'
    memory: '2Gi'
  limits:
    cpu: '1000m'
    memory: '2Gi'

initResources: {}
#  limits:
#    cpu: "25m"
#    memory: "128Mi"
#  requests:
#    cpu: "25m"
#    memory: "128Mi"

networkHost: '0.0.0.0'

volumeClaimTemplate:
  accessModes: ['ReadWriteOnce']
  resources:
    requests:
      storage: 30Gi

persistence:
  annotations: {}

extraVolumes: []
#  - name: extras
#    emptyDir: {}

extraVolumeMounts: []
#  - name: extras
#    mountPath: /usr/share/extras
#    readOnly: true

extraInitContainers: []
#  - name: do-something
#    image: busybox
#    command: ['do', 'something']

# By default this will make sure two pods don't end up on the same node
# Changing this to a region would allow you to spread pods across regions
antiAffinityTopologyKey: 'kubernetes.io/hostname'

# Hard means that by default pods will only be scheduled if there are enough nodes for them
# and that they will never end up on the same node. Setting this to soft will do this "best effort"
antiAffinity: 'hard'

# This is the node affinity settings as defined in
# https://kubernetes.io/docs/concepts/configuration/assign-pod-node/#node-affinity-beta-feature
nodeAffinity: {}

# The default is to deploy all pods serially. By setting this to parallel all pods are started at
# the same time when bootstrapping the cluster
podManagementPolicy: 'Parallel'

protocol: http
httpPort: 9200
transportPort: 9300

updateStrategy: RollingUpdate

# This is the max unavailable setting for the pod disruption budget
# The default value of 1 will make sure that kubernetes won't allow more than 1
# of your pods to be unavailable during maintenance
maxUnavailable: 1

# GroupID for the elasticsearch user. The official elastic docker images always have the id of 1000
fsGroup: 1000

# How long to wait for elasticsearch to stop gracefully
terminationGracePeriod: 120

sysctlVmMaxMapCount: 262144

readinessProbe:
  failureThreshold: 3
  initialDelaySeconds: 10
  periodSeconds: 10
  successThreshold: 3
  timeoutSeconds: 5

# https://www.elastic.co/guide/en/elasticsearch/reference/current/cluster-health.html#request-params wait_for_status
clusterHealthCheckParams: 'wait_for_status=green&timeout=1s'

imagePullSecrets: []
nodeSelector: {}
tolerations: []

# Enabling this will publicly expose your Elasticsearch instance.
# Only enable this if you have security enabled on your cluster
ingress:
  enabled: false
  annotations: {}
  #  kubernetes.io/ingress.class: nginx
  #  kubernetes.io/tls-acme: "true"
  path: /
  hosts:
    - chart-example.local
  tls: []
  #  - secretName: chart-example-tls
  #    hosts:
  #      - chart-example.local

nameOverride: ''
fullnameOverride: ''
```
Elasticsearch Helm Chart
This functionality is in alpha status and may be changed or removed completely in a future release. Elastic will take a best effort approach to fix any issues, but alpha features are not subject to the support SLA of official GA features.
This helm chart is a lightweight way to configure and run our official Elasticsearch docker image
Requirements
- Helm >= 2.8.0
- Kubernetes >= 1.8
- Minimum cluster requirements include the following to run this chart with default settings. All of these settings are configurable.
  - Three Kubernetes nodes to respect the default "hard" affinity settings
  - 1GB of RAM for the JVM heap
Usage notes and getting started
- This repo includes a number of example configurations which can be used as a reference. They are also used in the automated testing of this chart
- Automated testing of this chart is currently only run against GKE (Google Kubernetes Engine). If you are using a different Kubernetes provider you will likely need to adjust the `storageClassName` in the `volumeClaimTemplate`
- The default storage class for GKE is `standard` which by default will give you `pd-ssd` type persistent volumes. This is network attached storage and will not perform as well as local storage. If you are using Kubernetes version 1.10 or greater you can use Local Persistent Volumes for increased performance
- The chart deploys a statefulset and by default will do an automated rolling update of your cluster. It does this by waiting for the cluster health to become green after each instance is updated. If you prefer to update manually you can set `updateStrategy: OnDelete`
- It is important to verify that the JVM heap size in `esJavaOpts` and the CPU/memory `resources` are set to something suitable for your cluster
- To simplify chart and maintenance each set of node groups is deployed as a separate helm release. Take a look at the multi example to get an idea for how this works. Without doing this it isn't possible to resize persistent volumes in a statefulset. By setting it up this way it makes it possible to add more nodes with a new storage size and then drain the old ones. It also solves the problem of allowing the user to determine which node groups to update first when doing upgrades or changes.
- We have designed this chart to be very un-opinionated about how to configure Elasticsearch. It exposes ways to set environment variables and mount secrets inside of the container. Doing this makes it much easier for this chart to support multiple versions with minimal changes.
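As an illustration of the environment-variable approach described above, a minimal override file might look like the following sketch. The file name and the variable name/value are hypothetical; any Kubernetes `env` syntax is accepted.

```yaml
# my-values.yaml (hypothetical file name)
# Apply with: helm install --name elasticsearch elastic/elasticsearch -f my-values.yaml
extraEnvs:
  # Illustrative name/value pair appended to the container's env: list
  - name: ES_TMPDIR
    value: /tmp
```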
Migration from helm/charts stable
If you currently have a cluster deployed with the helm/charts stable chart you can follow the migration guide
Installing
- Add the elastic helm charts repo

  ```shell
  helm repo add elastic https://helm.elastic.co
  ```

- Install it

  ```shell
  helm install --name elasticsearch elastic/elasticsearch --version 7.0.0-alpha1
  ```
Compatibility
This chart is tested with the latest supported versions. The currently tested versions are:
5.x | 6.x | 7.x |
---|---|---|
5.6.16 | 6.7.1 | 7.0.0 |
Examples of installing older major versions can be found in the examples directory.
While only the latest releases are tested, it is possible to easily install old or new releases by overriding the `imageTag`. To install version `7.0.0` of Elasticsearch it would look like this:

```shell
helm install --name elasticsearch elastic/elasticsearch --set imageTag=7.0.0
```
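For an older major version, `esMajorVersion` should be set to match the `imageTag` so that version-specific configuration is applied. A sketch using the tested 6.7.1 release from the table above:

```yaml
# Override file for running a 6.x cluster with this chart
imageTag: '6.7.1'
# Tells the chart to apply 6.x-specific configuration (e.g. zen discovery settings)
esMajorVersion: 6
```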
Configuration
Parameter | Description | Default |
---|---|---|
clusterName | This will be used as the Elasticsearch cluster.name and should be unique per cluster in the namespace | elasticsearch |
nodeGroup | This is the name that will be used for each group of nodes in the cluster. The name will be clusterName-nodeGroup-X | master |
masterService | Optional. The service name used to connect to the masters. You only need to set this if your master nodeGroup is set to something other than master. See Clustering and Node Discovery for more information. | |
roles | A hash map with the specific roles for the node group | master: true data: true ingest: true |
replicas | Kubernetes replica count for the statefulset (i.e. how many pods) | 3 |
minimumMasterNodes | The value for discovery.zen.minimum_master_nodes. Should be set to (master_eligible_nodes / 2) + 1 . Ignored in Elasticsearch versions >= 7. | 2 |
esMajorVersion | Used to set major version specific configuration | 7 |
esConfig | Allows you to add any config files in /usr/share/elasticsearch/config/ such as elasticsearch.yml and log4j2.properties . See values.yaml for an example of the formatting. | {} |
extraEnvs | Extra environment variables which will be appended to the env: definition for the container | [] |
extraVolumes | Additional volumes to be passed to the tpl function | |
extraVolumeMounts | Additional volumeMounts to be passed to the tpl function | |
extraInitContainers | Additional init containers to be passed to the tpl function | |
secretMounts | Allows you easily mount a secret as a file inside the statefulset. Useful for mounting certificates and other secrets. See values.yaml for an example | [] |
image | The Elasticsearch docker image | docker.elastic.co/elasticsearch/elasticsearch |
imageTag | The Elasticsearch docker image tag | 7.0.0 |
imagePullPolicy | The Kubernetes imagePullPolicy value | IfNotPresent |
podAnnotations | Configurable annotations applied to all Elasticsearch pods | {} |
esJavaOpts | Java options for Elasticsearch. This is where you should configure the jvm heap size | -Xmx1g -Xms1g |
resources | Allows you to set the resources for the statefulset | requests.cpu: 100m requests.memory: 2Gi limits.cpu: 1000m limits.memory: 2Gi |
initResources | Allows you to set the resources for the initContainer in the statefulset | {} |
networkHost | Value for the network.host Elasticsearch setting | 0.0.0.0 |
volumeClaimTemplate | Configuration for the volumeClaimTemplate for statefulsets. You will want to adjust the storage (default 30Gi ) and the storageClassName if you are using a different storage class | accessModes: [ "ReadWriteOnce" ] resources.requests.storage: 30Gi |
persistence.annotations | Additional persistence annotations for the volumeClaimTemplate | {} |
antiAffinityTopologyKey | The anti-affinity topology key. By default this will prevent multiple Elasticsearch nodes from running on the same Kubernetes node | kubernetes.io/hostname |
antiAffinity | Setting this to hard enforces the anti-affinity rules. If it is set to soft it will be done "best effort". Other values will be ignored. | hard |
nodeAffinity | Value for the node affinity settings | {} |
podManagementPolicy | By default Kubernetes deploys statefulsets serially. This deploys them in parallel so that they can discover each other | Parallel |
protocol | The protocol that will be used for the readinessProbe. Change this to https if you have xpack.security.http.ssl.enabled set | http |
httpPort | The http port that Kubernetes will use for the healthchecks and the service. If you change this you will also need to set http.port in extraEnvs | 9200 |
transportPort | The transport port that Kubernetes will use for the service. If you change this you will also need to set transport port configuration in extraEnvs | 9300 |
updateStrategy | The updateStrategy for the statefulset. By default Kubernetes will wait for the cluster to be green after upgrading each pod. Setting this to OnDelete will allow you to manually delete each pod during upgrades | RollingUpdate |
maxUnavailable | The maxUnavailable value for the pod disruption budget. By default this will prevent Kubernetes from having more than 1 unhealthy pod in the node group | 1 |
fsGroup | The Group ID (GID) for securityContext.fsGroup so that the Elasticsearch user can read from the persistent volume | 1000 |
terminationGracePeriod | The terminationGracePeriod in seconds used when trying to stop the pod | 120 |
sysctlVmMaxMapCount | Sets the sysctl vm.max_map_count needed for Elasticsearch | 262144 |
readinessProbe | Configuration fields for the readinessProbe | failureThreshold: 3 initialDelaySeconds: 10 periodSeconds: 10 successThreshold: 3 timeoutSeconds: 5 |
clusterHealthCheckParams | The Elasticsearch cluster health status params that will be used by readinessProbe command | wait_for_status=green&timeout=1s |
imagePullSecrets | Configuration for imagePullSecrets so that you can use a private registry for your image | [] |
nodeSelector | Configurable nodeSelector so that you can target specific nodes for your Elasticsearch cluster | {} |
tolerations | Configurable tolerations | [] |
ingress | Configurable ingress to expose the Elasticsearch service. See values.yaml for an example | enabled: false |
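Putting a few of these parameters together: a sketch of an override file that doubles the default heap, memory, and storage (the specific numbers are illustrative; keep the heap at roughly half the container memory):

```yaml
# Heap is set via esJavaOpts and should track the memory request/limit
esJavaOpts: '-Xmx2g -Xms2g'
resources:
  requests:
    cpu: '100m'
    memory: '4Gi'
  limits:
    cpu: '1000m'
    memory: '4Gi'
volumeClaimTemplate:
  accessModes: ['ReadWriteOnce']
  resources:
    requests:
      storage: 60Gi
```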
Try it out
In examples/ you will find some example configurations. These examples are used for the automated testing of this helm chart
Default
To deploy a cluster with all default values and run the integration tests
```shell
cd examples/default
make
```
Multi
A cluster with dedicated node types
```shell
cd examples/multi
make
```
Security
A cluster with X-Pack security enabled
- Generate SSL certificates following the official docs
- Make sure you have a copy of your license handy.
- Create Kubernetes secrets for authentication credentials, X-Pack license and certificates

  ```shell
  kubectl create secret generic elastic-credentials --from-literal=password=changeme --from-literal=username=elastic
  kubectl create secret generic elastic-license --from-file=license.json
  kubectl create secret generic elastic-certificates --from-file=elastic-certificates.p12
  ```

- Deploy!

  ```shell
  cd examples/security
  make
  ```
- Attach into one of the containers

  ```shell
  kubectl exec -ti $(kubectl get pods -l release=helm-es-security -o name | awk -F'/' '{ print $NF }' | head -n 1) bash
  ```

- Install the X-Pack license

  ```shell
  curl -XPUT 'http://localhost:9200/_xpack/license' -H "Content-Type: application/json" -d @/usr/share/elasticsearch/config/license/license.json
  ```
- Test that authentication is now enabled

  ```shell
  curl 'http://localhost:9200/' # This one will fail
  curl -u elastic:changeme 'http://localhost:9200/'
  ```
- Install some test data to play around with

  ```shell
  wget https://download.elastic.co/demos/kibana/gettingstarted/logs.jsonl.gz && gunzip logs.jsonl.gz && curl -u elastic:changeme -H 'Content-Type: application/x-ndjson' -XPOST 'localhost:9200/_bulk?pretty' --data-binary @logs.jsonl
  ```
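The values used by the security example follow the patterns described earlier in this README; a simplified sketch is below. The secret names match the `kubectl create secret` commands above; the exact `esConfig` keys are illustrative and should be checked against the X-Pack security documentation for your version.

```yaml
# Readiness probe must use https once TLS is enabled on the http layer
protocol: https
esConfig:
  elasticsearch.yml: |
    xpack.security.enabled: true
    xpack.security.transport.ssl.enabled: true
    xpack.security.transport.ssl.keystore.path: /usr/share/elasticsearch/config/certs/elastic-certificates.p12
    xpack.security.transport.ssl.truststore.path: /usr/share/elasticsearch/config/certs/elastic-certificates.p12
extraEnvs:
  # Pull the bootstrap password from the elastic-credentials secret
  - name: ELASTIC_PASSWORD
    valueFrom:
      secretKeyRef:
        name: elastic-credentials
        key: password
secretMounts:
  # Mount the certificate secret at the path referenced in esConfig
  - name: elastic-certificates
    secretName: elastic-certificates
    path: /usr/share/elasticsearch/config/certs
```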
FAQ
How to install plugins?
The recommended way to install plugins into our docker images is to create a custom docker image.
The Dockerfile would look something like:

```dockerfile
ARG elasticsearch_version
FROM docker.elastic.co/elasticsearch/elasticsearch:${elasticsearch_version}

RUN bin/elasticsearch-plugin install --batch repository-gcs
```

And then updating the `image` in values to point to your custom image.
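Assuming the custom image was pushed as, say, `someregistry/elasticsearch-gcs:7.0.0` (a hypothetical name), the corresponding values override would be:

```yaml
# Point the chart at the custom image built from the Dockerfile above
image: 'someregistry/elasticsearch-gcs'  # hypothetical registry/repository
imageTag: '7.0.0'
```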
There are a couple of reasons we recommend this.
- Tying the availability of Elasticsearch to the download service to install plugins is not a great idea or something that we recommend. Especially in Kubernetes where it is normal and expected for a container to be moved to another host at random times.
- Mutating the state of a running docker image (by installing plugins) goes against best practices of containers and immutable infrastructure.
How to use the keystore?
- Create a Kubernetes secret containing the keystore

  ```shell
  kubectl create secret generic elasticsearch-keystore --from-file=./elasticsearch.keystore
  ```

- Mount it into the container via `secretMounts`

  ```yaml
  secretMounts:
    - name: elasticsearch-keystore
      secretName: elasticsearch-keystore
      path: /usr/share/elasticsearch/config/elasticsearch.keystore
      subPath: elasticsearch.keystore
  ```
Local development environments
This chart is designed to run on production scale Kubernetes clusters with multiple nodes, lots of memory and persistent storage. For that reason it can be a bit tricky to run them against local Kubernetes environments such as minikube. Below are some examples of how to get this working locally.
Minikube
This chart also works successfully on minikube in addition to typical hosted Kubernetes environments.
An example `values.yaml` file for minikube is provided under `examples/`.
In order to properly support the required persistent volume claims for the Elasticsearch `StatefulSet`, the `default-storageclass` and `storage-provisioner` minikube addons must be enabled.

```shell
minikube addons enable default-storageclass
minikube addons enable storage-provisioner
cd examples/minikube
make
```

Note that if `helm` or `kubectl` timeouts occur, you may consider creating a minikube VM with more CPU cores or memory allocated.
Docker for Mac - Kubernetes
It is also possible to run this chart with the built-in Kubernetes cluster that comes with docker-for-mac.

```shell
cd examples/docker-for-mac
make
```
Clustering and Node Discovery
This chart facilitates Elasticsearch node discovery and services by creating two `Service` definitions in Kubernetes, one with the name `$clusterName-$nodeGroup` and another named `$clusterName-$nodeGroup-headless`.
Only `Ready` pods are a part of the `$clusterName-$nodeGroup` service, while all pods (`Ready` or not) are a part of `$clusterName-$nodeGroup-headless`.
If your group of master nodes has the default `nodeGroup: master` then you can just add new groups of nodes with a different `nodeGroup` and they will automatically discover the correct master. If your master nodes have a different `nodeGroup` name then you will need to set `masterService` to `$clusterName-$masterNodeGroup`.
The chart value for `masterService` is used to populate `discovery.zen.ping.unicast.hosts`, which Elasticsearch nodes will use to contact master nodes and form a cluster.
Therefore, to add a group of nodes to an existing cluster, setting `masterService` to the desired `Service` name of the related cluster is sufficient.
For an example of deploying both a group of master nodes and data nodes using multiple releases of this chart, see the accompanying values files in `examples/multi`.
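For instance, a values file for a data-only group joining a cluster whose masters were deployed under a non-default `nodeGroup` might look like this sketch (the group names are illustrative):

```yaml
clusterName: 'elasticsearch'
nodeGroup: 'data'
# The masters in this illustration were deployed with nodeGroup: 'controller',
# so masterService is clusterName + "-" + that nodeGroup
masterService: 'elasticsearch-controller'
roles:
  master: 'false'
  ingest: 'true'
  data: 'true'
```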
Testing
This chart uses pytest to test the templating logic. The dependencies for testing can be installed from the `requirements.txt` in the parent directory.

```shell
pip install -r ../requirements.txt
make pytest
```

You can also use `helm template` to look at the YAML being generated

```shell
make template
```

It is possible to run all of the tests and linting inside of a docker container

```shell
make test
```
Integration Testing
Integration tests are run using goss which is a serverspec like tool written in golang. See goss.yaml for an example of what the tests look like.
To run the goss tests against the default example:

```shell
cd examples/default
make goss
```
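As a rough idea of what such a test looks like, a goss check against the cluster health endpoint might be sketched as below; the actual goss.yaml in this repo is more thorough, so treat this only as an illustration of the format.

```yaml
# Sketch of a goss http check: assert the health endpoint answers and reports green
http:
  http://localhost:9200/_cluster/health:
    status: 200
    timeout: 2000
    body:
      - 'green'
```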