
@helm-charts/banzaicloud-stable-tidb

A TiDB Helm chart for Kubernetes

| Field | Value |
| ----- | ----- |
| Repository Name | banzaicloud-stable |
| Chart Name | tidb |
| Chart Version | 0.0.2 |
| NPM Package Version | 0.1.0 |
| License | MIT |
```yaml
# Default values for tidb
# This is a YAML-formatted file.
# Declare variables to be passed into your templates.

pd:
  ## pd container name
  ##
  name: pd

  ## pd image version
  ## ref: https://hub.docker.com/r/pingcap/pd/tags/
  ##
  ## Default: none
  image: pingcap/pd:v1.0.5

  replicaCount: 3

  ## Specify an imagePullPolicy (Required)
  ## It's recommended to change this to 'Always' if the image tag is 'latest'
  ## ref: http://kubernetes.io/docs/user-guide/images/#updating-images
  imagePullPolicy: IfNotPresent

  service:
    ## Kubernetes service type
    type: ClusterIP

    ## Specify the nodePort value for the LoadBalancer and NodePort service types.
    ## ref: https://kubernetes.io/docs/concepts/services-networking/service/#type-nodeport
    ##
    # nodePort:

    ## Provide any additional annotations which may be required. This can be used to
    ## set the LoadBalancer service type to internal only.
    ## ref: https://kubernetes.io/docs/concepts/services-networking/service/#internal-load-balancer
    ##
    # annotations:

    PeerPort: 2380
    ClientPort: 2379

  ## Configure resource requests and limits
  ## ref: http://kubernetes.io/docs/user-guide/compute-resources/
  ##
  resources:
    requests:
      memory: 256Mi
      cpu: 120m

tidb:
  ## db container name
  ##
  name: db

  ## tidb image version
  ## ref: https://hub.docker.com/r/pingcap/tidb/tags/
  ##
  ## Default: none
  image: pingcap/tidb:v1.0.5

  replicaCount: 2

  ## Specify an imagePullPolicy (Required)
  ## It's recommended to change this to 'Always' if the image tag is 'latest'
  ## ref: http://kubernetes.io/docs/user-guide/images/#updating-images
  imagePullPolicy: IfNotPresent

  service:
    ## Kubernetes service type
    type: ClusterIP

    ## Specify the nodePort value for the LoadBalancer and NodePort service types.
    ## ref: https://kubernetes.io/docs/concepts/services-networking/service/#type-nodeport
    ##
    # nodePort:

    ## Provide any additional annotations which may be required. This can be used to
    ## set the LoadBalancer service type to internal only.
    ## ref: https://kubernetes.io/docs/concepts/services-networking/service/#internal-load-balancer
    ##
    # annotations:

    mysql: 4000
    status: 10080

  ## Configure resource requests and limits
  ## ref: http://kubernetes.io/docs/user-guide/compute-resources/
  ##
  resources:
    requests:
      memory: 128Mi
      cpu: 100m

tikv:
  ## tikv container name
  ##
  name: kv

  ## tikv image version
  ## ref: https://hub.docker.com/r/pingcap/tikv/tags/
  ##
  ## Default: none
  image: pingcap/tikv:v1.0.5

  ## Specify an imagePullPolicy (Required)
  ## It's recommended to change this to 'Always' if the image tag is 'latest'
  ## ref: http://kubernetes.io/docs/user-guide/images/#updating-images
  imagePullPolicy: IfNotPresent

  replicaCount: 3

  service:
    ## Kubernetes service type
    type: ClusterIP

    ## Specify the nodePort value for the LoadBalancer and NodePort service types.
    ## ref: https://kubernetes.io/docs/concepts/services-networking/service/#type-nodeport
    ##
    # nodePort:

    ## Provide any additional annotations which may be required. This can be used to
    ## set the LoadBalancer service type to internal only.
    ## ref: https://kubernetes.io/docs/concepts/services-networking/service/#internal-load-balancer
    ##
    # annotations:
    ClientPort: 20160

  ## Enable persistence using Persistent Volume Claims
  ## ref: http://kubernetes.io/docs/user-guide/persistent-volumes/
  ##
  persistence:
    enabled: false
    ## tikv data Persistent Volume Storage Class
    ## If defined, storageClassName: <storageClass>
    ## If set to "-", storageClassName: "", which disables dynamic provisioning
    ## If undefined (the default) or set to null, no storageClassName spec is
    ##   set, choosing the default provisioner.  (gp2 on AWS, standard on
    ##   GKE, AWS & OpenStack)
    ##
    # storageClass: "-"
    # accessMode: ReadWriteOnce
    # size: 8Gi

  ## Configure resource requests and limits
  ## ref: http://kubernetes.io/docs/user-guide/compute-resources/
  ##
  resources:
    requests:
      memory: 128Mi
      cpu: 100m
```

TiDB

TiDB on Kubernetes: https://banzaicloud.com/blog/tidb-kubernetes/

TiDB (The pronunciation is: /‘taɪdiːbi:/ tai-D-B, etymology: titanium) is a Hybrid Transactional/Analytical Processing (HTAP) database. Inspired by the design of Google F1 and Google Spanner, TiDB features infinite horizontal scalability, strong consistency, and high availability. The goal of TiDB is to serve as a one-stop solution for online transactions and analyses.

tl;dr:

```bash
$ helm repo add banzaicloud-incubator http://kubernetes-charts-incubator.banzaicloud.com
$ helm repo update
$ helm install banzaicloud-incubator/tidb
```

Introduction

This chart bootstraps a TiDB deployment on a Kubernetes cluster using the Helm package manager.

Prerequisites

  • Kubernetes 1.7+ with Beta APIs enabled
  • PV provisioner support in the underlying infrastructure (a quick check is sketched below)
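A quick, non-authoritative way to sanity-check both prerequisites from your workstation (the exact output depends on your cluster):

```bash
# Show client and API server versions; the server should report 1.7 or newer
kubectl version

# List available storage classes; dynamic PV provisioning needs at least one
# (ideally one marked "(default)") when persistence is enabled
kubectl get storageclass
```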

Installing the Chart

To install the chart with the release name my-release:

```bash
$ helm install --name my-release banzaicloud-incubator/tidb
```

The command deploys TiDB on the Kubernetes cluster in the default configuration. The configuration section lists the parameters that can be configured during installation.
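Once the pods are running, you can reach the TiDB SQL layer on its MySQL-compatible port (4000 by default). The service name below is only a guess based on the release and chart names; check `kubectl get svc` for the name your release actually created.

```bash
# List resources created by the release (the label selector is an assumption
# about how the chart labels its objects; `helm status my-release` also works)
kubectl get pods -l release=my-release

# Forward the MySQL-compatible port locally; replace the service name with
# whatever `kubectl get svc` shows for your release
kubectl port-forward svc/my-release-tidb-db 4000:4000

# Connect with any MySQL client; TiDB ships with user "root" and no password by default
mysql -h 127.0.0.1 -P 4000 -u root
```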

Uninstalling the Chart

To uninstall/delete the my-release deployment:

```bash
$ helm delete my-release
```

The command removes all the Kubernetes components associated with the chart and deletes the release.

Configuration

The following table lists the configurable parameters of the TiDB chart and their default values.

| Parameter | Description | Default |
| --------- | ----------- | ------- |
| `pd.name` | Placement Driver container name | `pd` |
| `pd.image` | Placement Driver container image | `pingcap/pd:{VERSION}` |
| `pd.replicaCount` | Replica count | `3` |
| `pd.service.type` | Kubernetes service type to expose | `ClusterIP` |
| `pd.service.nodePort` | Port to bind to for NodePort service type | `nil` |
| `pd.service.annotations` | Additional annotations to add to service | `nil` |
| `pd.service.PeerPort` | PD peer communication port | `2380` |
| `pd.service.ClientPort` | PD client port | `2379` |
| `pd.imagePullPolicy` | Image pull policy | `IfNotPresent` |
| `pd.resources` | CPU/Memory resource requests/limits | Memory: `256Mi`, CPU: `120m` |
| `tidb.name` | TiDB container name | `db` |
| `tidb.image` | TiDB container image | `pingcap/tidb:{VERSION}` |
| `tidb.replicaCount` | Replica count | `2` |
| `tidb.service.type` | Kubernetes service type to expose | `ClusterIP` |
| `tidb.service.nodePort` | Port to bind to for NodePort service type | `nil` |
| `tidb.service.annotations` | Additional annotations to add to service | `nil` |
| `tidb.service.mysql` | MySQL protocol port | `4000` |
| `tidb.service.status` | Status report port | `10080` |
| `tidb.imagePullPolicy` | Image pull policy | `IfNotPresent` |
| `tidb.persistence.enabled` | Use a PVC to persist data | `false` |
| `tidb.persistence.existingClaim` | Use an existing PVC | `nil` |
| `tidb.persistence.storageClass` | Storage class of backing PVC | `nil` (uses alpha storage class annotation) |
| `tidb.persistence.accessMode` | Use volume as ReadOnly or ReadWrite | `ReadWriteOnce` |
| `tidb.persistence.size` | Size of data volume | `8Gi` |
| `tidb.resources` | CPU/Memory resource requests/limits | Memory: `128Mi`, CPU: `100m` |
| `tikv.name` | TiKV container name | `kv` |
| `tikv.image` | TiKV container image | `pingcap/tikv:{VERSION}` |
| `tikv.replicaCount` | Replica count | `3` |
| `tikv.service.type` | Kubernetes service type to expose | `ClusterIP` |
| `tikv.service.nodePort` | Port to bind to for NodePort service type | `nil` |
| `tikv.service.annotations` | Additional annotations to add to service | `nil` |
| `tikv.service.ClientPort` | TiKV client port | `20160` |
| `tikv.imagePullPolicy` | Image pull policy | `IfNotPresent` |
| `tikv.persistence.enabled` | Use a PVC to persist data | `false` |
| `tikv.persistence.existingClaim` | Use an existing PVC | `nil` |
| `tikv.persistence.storageClass` | Storage class of backing PVC | `nil` (uses alpha storage class annotation) |
| `tikv.persistence.accessMode` | Use volume as ReadOnly or ReadWrite | `ReadWriteOnce` |
| `tikv.persistence.size` | Size of data volume | `8Gi` |
| `tikv.resources` | CPU/Memory resource requests/limits | Memory: `128Mi`, CPU: `100m` |

Specify each parameter using the `--set key=value[,key=value]` argument to `helm install`.
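For instance, to run three TiDB instances and expose the SQL service as a NodePort (both keys come from the table above), something along these lines should work:

```bash
# Minimal sketch: override individual values from the table at install time
helm install --name my-release \
  --set tidb.replicaCount=3,tidb.service.type=NodePort \
  banzaicloud-incubator/tidb
```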

Alternatively, a YAML file that specifies the values for the parameters can be provided while installing the chart. For example:

```bash
$ helm install --name my-release -f values.yaml banzaicloud-incubator/tidb
```

Tip: You can use the default values.yaml
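A minimal override file, touching only keys that already exist in the default values.yaml above, might look like this (the file name is arbitrary):

```yaml
# my-values.yaml -- pass with: helm install --name my-release -f my-values.yaml banzaicloud-incubator/tidb
pd:
  replicaCount: 5

tidb:
  service:
    type: NodePort

tikv:
  persistence:
    enabled: true
    size: 20Gi
```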

## Persistence

The chart mounts a [Persistent Volume](http://kubernetes.io/docs/user-guide/persistent-volumes/) for the TiKV data. By default, the volume is created using dynamic volume provisioning. An existing PersistentVolumeClaim can be used instead.

### Existing PersistentVolumeClaims

1. Create the PersistentVolume
1. Create the PersistentVolumeClaim (an example manifest is sketched after the install command below)
1. Install the chart

```bash
$ helm install --set tikv.persistence.existingClaim=PVC_NAME banzaicloud-incubator/tidb
```
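For step 2 above, a bare-bones claim such as the following would do; the claim name is a placeholder (use it as PVC_NAME) and it must be created in the namespace the release will be installed into:

```yaml
# pvc.yaml -- example PersistentVolumeClaim to create before installing the chart
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: tikv-data            # placeholder; substitute for PVC_NAME above
spec:
  accessModes:
    - ReadWriteOnce          # matches the chart's default accessMode
  resources:
    requests:
      storage: 8Gi           # matches the chart's default size
```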