@helm-charts/elastic-kibana v6.5.4-alpha2-0.1.0
Kibana
| Field | Value |
|---|---|
| Repository Name | elastic |
| Chart Name | kibana |
| Chart Version | 6.5.4-alpha2 |
| NPM Package Version | 0.1.0 |
```yaml
---
elasticsearchURL: 'http://elasticsearch-master:9200'

replicas: 1

# Extra environment variables to append to this nodeGroup
# This will be appended to the current 'env:' key. You can use any of the kubernetes env
# syntax here
extraEnvs:
#  - name: MY_ENVIRONMENT_VAR
#    value: the_value_goes_here

# A list of secrets and their paths to mount inside the pod
# This is useful for mounting certificates for security and for mounting
# the X-Pack license
secretMounts:
#  - name: elastic-certificates
#    secretName: elastic-certificates
#    path: /usr/share/elasticsearch/config/certs

image: 'docker.elastic.co/kibana/kibana'
imageTag: '6.5.4'
imagePullPolicy: 'IfNotPresent'

resources:
  requests:
    cpu: '100m'
    memory: '500m'
  limits:
    cpu: '1000m'
    memory: '1Gi'

# By default this will make sure two pods don't end up on the same node
# Changing this to a region would allow you to spread pods across regions
antiAffinityTopologyKey: 'kubernetes.io/hostname'

# Hard means that by default pods will only be scheduled if there are enough nodes for them
# and that they will never end up on the same node. Setting this to soft will do this "best effort"
antiAffinity: 'hard'

httpPort: 5601

# This is the max unavailable setting for the pod disruption budget
# The default value of 1 will make sure that kubernetes won't allow more than 1
# of your pods to be unavailable during maintenance
maxUnavailable: 1

updateStrategy:
  type: 'Recreate'

service:
  type: ClusterIP
  port: 5601

# Set this if you are setting server.ssl.enabled in Kibana.
# This value is a hostname accepted by the SSL certificate provided to kibana.
# Configuring this allows the readinessProbe to successfully check Kibana over HTTPS.
kibanaSSLHostname: localhost

ingress:
  enabled: false
  annotations:
    {}
    # kubernetes.io/ingress.class: nginx
    # kubernetes.io/tls-acme: "true"
  path: /
  hosts:
    - chart-example.local
  tls: []
  #  - secretName: chart-example-tls
  #    hosts:
  #      - chart-example.local

readinessProbe:
  failureThreshold: 3
  initialDelaySeconds: 10
  periodSeconds: 10
  successThreshold: 3
  timeoutSeconds: 5

imagePullSecrets: []
nodeSelector: {}
tolerations: []
affinity: {}
```

Kibana Helm Chart
This functionality is in alpha status and may be changed or removed completely in a future release. Elastic will take a best effort approach to fix any issues, but alpha features are not subject to the support SLA of official GA features.
This Helm chart is a lightweight way to configure and run our official Kibana Docker image.
Requirements
- Kubernetes 1.8/1.9/1.10/1.11
- Helm
Installing
- Add the elastic helm charts repo

  ```
  helm repo add elastic https://helm.elastic.co
  ```

- Install it

  ```
  helm install --name kibana elastic/kibana --version 6.5.4-alpha2
  ```
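Any of the values documented below can be overridden at install time, either with individual `--set` flags or with a custom values file passed via `-f`. The snippet below is a minimal sketch of such a file; the file name and the Elasticsearch URL are placeholders, not anything the chart requires.

```yaml
# custom-values.yaml -- hypothetical override file, not shipped with the chart.
# Point Kibana at a different Elasticsearch service and run two replicas.
elasticsearchURL: 'http://my-elasticsearch:9200'  # placeholder URL
replicas: 2

# Optionally raise the resource requests/limits from the chart defaults.
resources:
  requests:
    cpu: '500m'
    memory: '1Gi'
  limits:
    cpu: '1000m'
    memory: '2Gi'
```

It would then be installed with something like `helm install --name kibana elastic/kibana --version 6.5.4-alpha2 -f custom-values.yaml`.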
Configuration
| Parameter | Description | Default |
|---|---|---|
| `elasticsearchURL` | The URL used to connect to Elasticsearch | `http://elasticsearch-master:9200` |
| `replicas` | Kubernetes replica count for the deployment (i.e. how many pods) | `1` |
| `extraEnvs` | Extra environment variables which will be appended to the `env:` definition for the container | `{}` |
| `secretMounts` | Allows you to easily mount a secret as a file inside the deployment. Useful for mounting certificates and other secrets. See values.yaml for an example | `{}` |
| `image` | The Kibana docker image | `docker.elastic.co/kibana/kibana` |
| `imageTag` | The Kibana docker image tag | `6.5.4` |
| `imagePullPolicy` | The Kubernetes imagePullPolicy value | `IfNotPresent` |
| `resources` | Allows you to set the resources for the deployment | `requests.cpu: 100m`<br>`requests.memory: 500m`<br>`limits.cpu: 1000m`<br>`limits.memory: 1Gi` |
| `antiAffinityTopologyKey` | The anti-affinity topology key. By default this will prevent multiple Kibana instances from running on the same Kubernetes node | `kubernetes.io/hostname` |
| `antiAffinity` | Setting this to hard enforces the anti-affinity rules. If it is set to soft it will be done "best effort" | `hard` |
| `httpPort` | The http port that Kubernetes will use for the healthchecks and the service | `5601` |
| `maxUnavailable` | The maxUnavailable value for the pod disruption budget. By default this will prevent Kubernetes from having more than 1 unhealthy pod | `1` |
| `updateStrategy` | Allows you to change the default update strategy for the deployment. A standard upgrade of Kibana requires a full stop and start which is why the default strategy is set to Recreate | `Recreate` |
| `readinessProbe` | Configuration for the readinessProbe | `failureThreshold: 3`<br>`initialDelaySeconds: 10`<br>`periodSeconds: 10`<br>`successThreshold: 3`<br>`timeoutSeconds: 5` |
| `imagePullSecrets` | Configuration for imagePullSecrets so that you can use a private registry for your image | `[]` |
| `nodeSelector` | Configurable nodeSelector so that you can target specific nodes for your Kibana instances | `{}` |
| `tolerations` | Configurable tolerations | `[]` |
| `ingress` | Configurable ingress to expose the Kibana service. See values.yaml for an example | `enabled: false` |
| `kibanaSSLHostname` | A hostname matched by the SSL certificate used in Kibana. Note: this only matters if you have enabled SSL in Kibana by setting `SERVER_SSL_ENABLED=true` in `extraEnvs` | `localhost` |
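To show how `extraEnvs`, `secretMounts`, and `kibanaSSLHostname` fit together, here is a hedged sketch of values that enable SSL in Kibana. Only the chart keys come from values.yaml; the secret name, mount path, file names, and the certificate-related environment variables are assumptions that should be checked against your own setup and the Kibana documentation.

```yaml
# Sketch only: the secret below is assumed to already exist and to contain
# kibana.crt and kibana.key; none of this is created by the chart.
extraEnvs:
  - name: SERVER_SSL_ENABLED
    value: 'true'
  - name: SERVER_SSL_CERTIFICATE
    value: /usr/share/kibana/config/certs/kibana.crt  # assumed path/file
  - name: SERVER_SSL_KEY
    value: /usr/share/kibana/config/certs/kibana.key  # assumed path/file

secretMounts:
  - name: kibana-certificates        # assumed pre-existing Kubernetes secret
    secretName: kibana-certificates
    path: /usr/share/kibana/config/certs

# Must match a hostname on the certificate so the readinessProbe can reach
# Kibana over HTTPS.
kibanaSSLHostname: localhost
```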
Examples
In examples/ you will find some example configurations. These examples are used for the automated testing of this Helm chart.
Default
- Deploy the default Elasticsearch helm chart
- Deploy Kibana with the default values
  ```
  cd examples/default
  make
  ```

- You can now set up a port forward and access Kibana at http://localhost:5601

  ```
  kubectl port-forward deployment/helm-kibana-default-kibana 5601
  ```
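Port-forwarding is convenient for a quick look. If you would rather have the cluster expose Kibana itself, the `service` block from values.yaml can be overridden instead. A sketch (whether a load balancer is actually provisioned depends on your Kubernetes provider):

```yaml
# Sketch: expose Kibana through the Kubernetes service instead of port-forwarding.
service:
  type: LoadBalancer  # assumes your cluster can provision external load balancers
  port: 5601
```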
Security
- Deploy a security-enabled Elasticsearch cluster
- Deploy Kibana with the security example

  ```
  cd examples/security
  make
  ```

- You can now set up a port forward and access Kibana at http://localhost:5601 with the credentials `elastic:changeme`

  ```
  kubectl port-forward deployment/helm-kibana-default-kibana 5601
  ```
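The exact values used by the security example live in examples/security and are not reproduced here, but connecting Kibana to a security-enabled cluster generally uses the same chart keys shown above. A rough, hypothetical sketch (the HTTPS URL, secret name, and secret keys are placeholders and may differ from the real example):

```yaml
# Hypothetical sketch of security-related overrides; see examples/security
# for the values actually used by this example.
elasticsearchURL: 'https://security-master:9200'  # placeholder HTTPS endpoint

extraEnvs:
  - name: ELASTICSEARCH_USERNAME
    valueFrom:
      secretKeyRef:
        name: elastic-credentials  # assumed pre-existing secret
        key: username
  - name: ELASTICSEARCH_PASSWORD
    valueFrom:
      secretKeyRef:
        name: elastic-credentials
        key: password
```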
Testing
This chart uses pytest to test the templating logic. The dependencies for testing can be installed from the requirements.txt in the parent directory.
```
pip install -r ../requirements.txt
make test
```

You can also use `helm template` to look at the YAML being generated:

```
make template
```

It is possible to run all of the tests and linting inside of a Docker container:

```
make test
```