@helm-charts/bitnami-kafka v1.9.0-0.1.0

License: MIT
Last release: 5 years ago

Apache Kafka is a distributed streaming platform.

| Field               | Value   |
| ------------------- | ------- |
| Repository Name     | bitnami |
| Chart Name          | kafka   |
| Chart Version       | 1.9.0   |
| NPM Package Version | 0.1.0   |

The chart's default values.yaml follows:
## Global Docker image parameters
## Please note that this will override the image parameters, including dependencies, configured to use the global value
## Current available global Docker image parameters: imageRegistry and imagePullSecrets
##
# global:
#   imageRegistry: myRegistryName
#   imagePullSecrets:
#     - myRegistryKeySecretName

## Bitnami Kafka image version
## ref: https://hub.docker.com/r/bitnami/kafka/tags/
##
image:
  registry: docker.io
  repository: bitnami/kafka
  tag: 2.2.0
  ## Specify an imagePullPolicy
  ## Defaults to 'Always' if image tag is 'latest', else set to 'IfNotPresent'
  ## ref: http://kubernetes.io/docs/user-guide/images/#pre-pulling-images
  ##
  pullPolicy: Always
  ## Optionally specify an array of imagePullSecrets.
  ## Secrets must be manually created in the namespace.
  ## ref: https://kubernetes.io/docs/tasks/configure-pod-container/pull-image-private-registry/
  ##
  # pullSecrets:
  #   - myRegistryKeySecretName

  ## Set to true if you would like to see extra information in the logs
  ## It enables BASH and NAMI debugging in minideb
  ## ref: https://github.com/bitnami/minideb-extras/#turn-on-bash-debugging
  debug: false

## StatefulSet controller supports automated updates. There are two valid update strategies: RollingUpdate and OnDelete
## ref: https://kubernetes.io/docs/tutorials/stateful-application/basic-stateful-set/#updating-statefulsets
##
updateStrategy: RollingUpdate

## Partition update strategy
## https://kubernetes.io/docs/concepts/workloads/controllers/statefulset/#partitions
##
# rollingUpdatePartition:

replicaCount: 1

config: |-
#  broker.id=-1
#  listeners=PLAINTEXT://:9092
#  advertised.listeners=PLAINTEXT://KAFKA_IP:9092
#  num.network.threads=3
#  num.io.threads=8
#  socket.send.buffer.bytes=102400
#  socket.receive.buffer.bytes=102400
#  socket.request.max.bytes=104857600
#  log.dirs=/opt/bitnami/kafka/data
#  num.partitions=1
#  num.recovery.threads.per.data.dir=1
#  offsets.topic.replication.factor=1
#  transaction.state.log.replication.factor=1
#  transaction.state.log.min.isr=1
#  log.flush.interval.messages=10000
#  log.flush.interval.ms=1000
#  log.retention.hours=168
#  log.retention.bytes=1073741824
#  log.segment.bytes=1073741824
#  log.retention.check.interval.ms=300000
#  zookeeper.connect=ZOOKEEPER_SERVICE_NAME
#  zookeeper.connection.timeout.ms=6000
#  group.initial.rebalance.delay.ms=0

## Available customizations for the Kafka Docker image
## https://github.com/bitnami/bitnami-docker-kafka#configuration
##
## Allow use of the PLAINTEXT listener.
allowPlaintextListener: true

## The address the socket server listens on.
# listeners:

## Hostname and port the broker will advertise to producers and consumers.
# advertisedListeners:
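## Example (a commented sketch, not a default; the hostname below is a hypothetical placeholder):
# listeners: PLAINTEXT://:9092
# advertisedListeners: PLAINTEXT://kafka.example.com:9092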

## ID of the Kafka node.
brokerId: -1

## Switch to enable topic deletion or not.
deleteTopicEnable: false

## Kafka's Java Heap size.
heapOpts: -Xmx1024m -Xms1024m

## The number of messages to accept before forcing a flush of data to disk.
logFlushIntervalMessages: 10000

## The maximum amount of time a message can sit in a log before we force a flush.
logFlushIntervalMs: 1000

## A size-based retention policy for logs.
logRetentionBytes: _1073741824

## The interval at which log segments are checked to see if they can be deleted.
logRetentionCheckIntervalMs: 300000

## The minimum age of a log file to be eligible for deletion due to age.
logRetentionHours: 168

## The maximum size of a log segment file. When this size is reached a new log segment will be created.
logSegmentBytes: _1073741824

## Log message format version
logMessageFormatVersion: ''

## A comma separated list of directories under which to store log files.
logsDirs: /opt/bitnami/kafka/data

## The largest record batch size allowed by Kafka
maxMessageBytes: _1000012

## Default replication factors for automatically created topics
defaultReplicationFactor: 1

## The replication factor for the offsets topic
offsetsTopicReplicationFactor: 1

## The replication factor for the transaction topic
transactionStateLogReplicationFactor: 1

## Overridden min.insync.replicas config for the transaction topic
transactionStateLogMinIsr: 1

## The number of threads doing disk I/O.
numIoThreads: 8

## The number of threads handling network requests.
numNetworkThreads: 3

## The default number of log partitions per topic.
numPartitions: 1

## The number of threads per data directory to be used for log recovery at startup and flushing at shutdown.
numRecoveryThreadsPerDataDir: 1

## The receive buffer (SO_RCVBUF) used by the socket server.
socketReceiveBufferBytes: 102400

## The maximum size of a request that the socket server will accept (protection against OOM).
socketRequestMaxBytes: _104857600

## The send buffer (SO_SNDBUF) used by the socket server.
socketSendBufferBytes: 102400

## Timeout in ms for connecting to zookeeper.
zookeeperConnectionTimeoutMs: 6000

## Authentication parameters
## https://github.com/bitnami/bitnami-docker-kafka#security
##
auth:
  ## Switch to enable Kafka authentication.
  enabled: false

  ## Name of the existing secret containing credentials for brokerUser, interBrokerUser and zookeeperUser.
  #existingSecret:

  ## Name of the existing secret containing the certificate files that will be used by Kafka.
  #certificatesSecret:

  ## Password for the above certificates if they are password protected.
  #certificatesPassword:

  ## Kafka client user.
  brokerUser: user

  ## Kafka client password.
  # brokerPassword:

  ## Kafka inter broker communication user.
  interBrokerUser: admin

  ## Kafka inter broker communication password.
  # interBrokerPassword:
  ## Kafka Zookeeper user.
  #zookeeperUser:
  ## Kafka Zookeeper password.
  #zookeeperPassword:

## Kubernetes Security Context
## https://kubernetes.io/docs/tasks/configure-pod-container/security-context/
##
securityContext:
  enabled: true
  fsGroup: 1001
  runAsUser: 1001

## Kubernetes configuration
## For minikube, set this to NodePort; elsewhere, use LoadBalancer
##
service:
  type: ClusterIP
  port: 9092

  ## Specify the NodePort value for the LoadBalancer and NodePort service types.
  ## ref: https://kubernetes.io/docs/concepts/services-networking/service/#type-nodeport
  ##
  # nodePort:

  ## Use loadBalancerIP to request a specific static IP.
  # loadBalancerIP:

  ## Service annotations, specified as key:value pairs
  annotations:

## Kafka data Persistent Volume Storage Class
## If defined, storageClassName: <storageClass>
## If set to "-", storageClassName: "", which disables dynamic provisioning
## If undefined (the default) or set to null, no storageClassName spec is
##   set, choosing the default provisioner.  (gp2 on AWS, standard on
##   GKE, AWS & OpenStack)
##
persistence:
  enabled: true
  # storageClass: "-"
  accessModes:
    - ReadWriteOnce
  size: 8Gi
  annotations: {}

## Node labels and tolerations for pod assignment
## ref: https://kubernetes.io/docs/concepts/configuration/assign-pod-node/#nodeselector
## ref: https://kubernetes.io/docs/concepts/configuration/assign-pod-node/#taints-and-tolerations-beta-feature
nodeSelector: {}
tolerations: []

## Configure resource requests and limits
## ref: http://kubernetes.io/docs/user-guide/compute-resources/
##
resources:
#  limits:
#    cpu: 200m
#    memory: 1Gi
#  requests:
#    memory: 256Mi
#    cpu: 250m

## Configure extra options for liveness and readiness probes
## ref: https://kubernetes.io/docs/tasks/configure-pod-container/configure-liveness-readiness-probes/#configure-probes
livenessProbe:
  enabled: true
  initialDelaySeconds: 10
  periodSeconds: 10
  timeoutSeconds: 5
  failureThreshold: 2
  successThreshold: 1

readinessProbe:
  enabled: true
  initialDelaySeconds: 5
  periodSeconds: 10
  timeoutSeconds: 5
  failureThreshold: 6
  successThreshold: 1

## Prometheus Exporters / Metrics
##
metrics:
  ## Prometheus Kafka Exporter: exposes complementary metrics to the JMX Exporter
  kafka:
    enabled: false

    image:
      registry: docker.io
      repository: danielqsj/kafka-exporter
      tag: v1.0.1
      pullPolicy: Always
      ## Optionally specify an array of imagePullSecrets.
      ## Secrets must be manually created in the namespace.
      ## ref: https://kubernetes.io/docs/tasks/configure-pod-container/pull-image-private-registry/
      ##
      # pullSecrets:
      #   - myRegistryKeySecretName

    ## Interval at which Prometheus scrapes metrics. Note: only used by the Prometheus Operator
    interval: 10s

    ## Port kafka-exporter exposes for Prometheus to scrape metrics
    port: 9308

    ## Resource limits
    resources: {}
  #      limits:
  #        cpu: 200m
  #        memory: 1Gi
  #      requests:
  #        cpu: 100m
  #        memory: 100Mi

  ## Prometheus JMX Exporter: exposes the majority of Kafka's metrics
  jmx:
    enabled: false

    image:
      registry: docker.io
      repository: solsson/kafka-prometheus-jmx-exporter@sha256
      tag: a23062396cd5af1acdf76512632c20ea6be76885dfc20cd9ff40fb23846557e8
      pullPolicy: Always
      ## Optionally specify an array of imagePullSecrets.
      ## Secrets must be manually created in the namespace.
      ## ref: https://kubernetes.io/docs/tasks/configure-pod-container/pull-image-private-registry/
      ##
      # pullSecrets:
      #   - myRegistryKeySecretName

    ## Interval at which Prometheus scrapes metrics. Note: only used by the Prometheus Operator
    interval: 10s

    ## Port on which jmx-exporter exposes metrics in Prometheus format for scraping
    exporterPort: 5556

    resources:
      {}
      # limits:
      #   cpu: 200m
      #   memory: 1Gi
      # requests:
      #   cpu: 100m
      #   memory: 100Mi

    ## Credits to the incubator/kafka chart for the JMX configuration.
    ## https://github.com/helm/charts/tree/master/incubator/kafka
    ##
    ## Rules to apply to the Prometheus JMX Exporter. Note that while many stats have been cleaned up and
    ## exposed, there are still more stats to clean up and expose; others will never be exposed, as they
    ## are largely duplicates that can be derived easily. The ConfigMap in this chart cleans up the metrics
    ## it exposes into a Prometheus format, e.g. topic and broker are labels and not part of the metric
    ## name. Improvements are gladly accepted and encouraged.
    configMap:
      ## Allows disabling the default ConfigMap; note that a ConfigMap is still needed
      enabled: true
      ## Allows setting values to generate the ConfigMap
      ## To allow all metrics through (warning: this is excessive), comment out `overrideConfig` below and set
      ## `whitelistObjectNames: []`
      overrideConfig:
        {}
        # jmxUrl: service:jmx:rmi:///jndi/rmi://127.0.0.1:5555/jmxrmi
        # lowercaseOutputName: true
        # lowercaseOutputLabelNames: true
        # ssl: false
        # rules:
        # - pattern: ".*"
      ## If you would like to supply your own ConfigMap for JMX metrics, supply the name of that
      ## ConfigMap as an `overrideName` here.
      overrideName: ''
    ## Port on which the JMX metrics are exposed in native JMX format, not in Prometheus format
    jmxPort: 5555
    ## JMX whitelist objects; can be set to control which JMX metrics are exposed. Only whitelisted
    ## values will be exposed via the JMX Exporter, and they must also be exposed via Rules. To expose
    ## all metrics (warning: this is excessive, and they are not formatted in a Prometheus style),
    ## (1) set `whitelistObjectNames: []` and (2) comment out `overrideConfig` above.
    whitelistObjectNames: # []
      - kafka.controller:*
      - kafka.server:*
      - java.lang:*
      - kafka.network:*
      - kafka.log:*

##
## Zookeeper chart configuration
##
## https://github.com/bitnami/charts/blob/master/bitnami/zookeeper/values.yaml
##
zookeeper:
  enabled: true

externalZookeeper:
  ## This value is only used when zookeeper.enabled is set to false
  ## Server or list of external zookeeper servers to use.
  # servers:
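  ## Example (a commented sketch, not a default; the hostnames below are hypothetical placeholders):
  # servers: zk-0.example.com:2181,zk-1.example.com:2181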

Kafka

Kafka is a distributed streaming platform used for building real-time data pipelines and streaming apps. It is horizontally scalable, fault-tolerant, wicked fast, and runs in production in thousands of companies.

TL;DR;

$ helm install bitnami/kafka

Introduction

This chart bootstraps a Kafka deployment on a Kubernetes cluster using the Helm package manager.

Bitnami charts can be used with Kubeapps for deployment and management of Helm Charts in clusters. This Helm chart has been tested on top of Bitnami Kubernetes Production Runtime (BKPR). Deploy BKPR to get automated TLS certificates, logging and monitoring for your applications.

Prerequisites

  • Kubernetes 1.4+ with Beta APIs enabled
  • PV provisioner support in the underlying infrastructure

Installing the Chart

To install the chart with the release name my-release:

$ helm install --name my-release bitnami/kafka

The command deploys Kafka on the Kubernetes cluster in the default configuration. The configuration section lists the parameters that can be configured during installation.

Tip: List all releases using helm list

Uninstalling the Chart

To uninstall/delete the my-release deployment:

$ helm delete my-release

The command removes all the Kubernetes components associated with the chart and deletes the release.

Configuration

The following table lists the configurable parameters of the Kafka chart and their default values.

| Parameter | Description | Default |
| --------- | ----------- | ------- |
| global.imageRegistry | Global Docker image registry | nil |
| global.imagePullSecrets | Global Docker registry secret names as an array | [] (does not add image pull secrets to deployed pods) |
| image.registry | Kafka image registry | docker.io |
| image.repository | Kafka image name | bitnami/kafka |
| image.tag | Kafka image tag | {VERSION} |
| image.pullPolicy | Kafka image pull policy | Always |
| image.pullSecrets | Specify docker-registry secret names as an array | [] (does not add image pull secrets to deployed pods) |
| image.debug | Specify if debug values should be set | false |
| updateStrategy | Update strategy for the StatefulSet | RollingUpdate |
| rollingUpdatePartition | Partition update strategy | nil |
| replicaCount | Number of Kafka nodes | 1 |
| config | Configuration file for Kafka | nil |
| allowPlaintextListener | Allow use of the PLAINTEXT listener | true |
| listeners | The address the socket server listens on | nil |
| advertisedListeners | Hostname and port the broker will advertise to producers and consumers | nil |
| brokerId | ID of the Kafka node | -1 |
| deleteTopicEnable | Switch to enable topic deletion or not | false |
| heapOpts | Kafka's Java heap size | -Xmx1024m -Xms1024m |
| logFlushIntervalMessages | The number of messages to accept before forcing a flush of data to disk | 10000 |
| logFlushIntervalMs | The maximum amount of time a message can sit in a log before we force a flush | 1000 |
| logRetentionBytes | A size-based retention policy for logs | _1073741824 |
| logRetentionCheckIntervalMs | The interval at which log segments are checked to see if they can be deleted | 300000 |
| logRetentionHours | The minimum age of a log file to be eligible for deletion due to age | 168 |
| logSegmentBytes | The maximum size of a log segment file; when this size is reached a new log segment will be created | _1073741824 |
| logMessageFormatVersion | Log message format version | '' |
| logsDirs | A comma-separated list of directories under which to store log files | /opt/bitnami/kafka/data |
| maxMessageBytes | The largest record batch size allowed by Kafka | _1000012 |
| defaultReplicationFactor | Default replication factor for automatically created topics | 1 |
| offsetsTopicReplicationFactor | The replication factor for the offsets topic | 1 |
| transactionStateLogReplicationFactor | The replication factor for the transaction topic | 1 |
| transactionStateLogMinIsr | Overridden min.insync.replicas config for the transaction topic | 1 |
| numIoThreads | The number of threads doing disk I/O | 8 |
| numNetworkThreads | The number of threads handling network requests | 3 |
| numPartitions | The default number of log partitions per topic | 1 |
| numRecoveryThreadsPerDataDir | The number of threads per data directory to be used for log recovery at startup and flushing at shutdown | 1 |
| socketReceiveBufferBytes | The receive buffer (SO_RCVBUF) used by the socket server | 102400 |
| socketRequestMaxBytes | The maximum size of a request that the socket server will accept (protection against OOM) | _104857600 |
| socketSendBufferBytes | The send buffer (SO_SNDBUF) used by the socket server | 102400 |
| zookeeperConnectionTimeoutMs | Timeout in ms for connecting to Zookeeper | 6000 |
| auth.enabled | Switch to enable Kafka authentication | false |
| auth.existingSecret | Name of the existing secret containing credentials for brokerUser, interBrokerUser and zookeeperUser | nil |
| auth.certificatesSecret | Name of the existing secret containing the certificate files that will be used by Kafka | nil |
| auth.certificatesPassword | Password for the above certificates if they are password protected | nil |
| auth.brokerUser | Kafka client user | user |
| auth.brokerPassword | Kafka client password | nil |
| auth.interBrokerUser | Kafka inter-broker communication user | admin |
| auth.interBrokerPassword | Kafka inter-broker communication password | nil |
| auth.zookeeperUser | Kafka Zookeeper user | nil |
| auth.zookeeperPassword | Kafka Zookeeper password | nil |
| securityContext.enabled | Enable security context | true |
| securityContext.fsGroup | Group ID for the container | 1001 |
| securityContext.runAsUser | User ID for the container | 1001 |
| service.type | Kubernetes Service type | ClusterIP |
| service.port | Kafka port | 9092 |
| service.nodePort | Kubernetes Service nodePort | nil |
| service.loadBalancerIP | loadBalancerIP for the Kafka Service | nil |
| service.annotations | Service annotations | |
| persistence.enabled | Enable persistence using a PVC | true |
| persistence.storageClass | PVC Storage Class for the Kafka volume | nil |
| persistence.accessMode | PVC Access Mode for the Kafka volume | ReadWriteOnce |
| persistence.size | PVC Storage Request for the Kafka volume | 8Gi |
| persistence.annotations | Annotations for the PVC | {} |
| nodeSelector | Node labels for pod assignment | {} |
| tolerations | Toleration labels for pod assignment | [] |
| resources | CPU/Memory resource requests/limits | Memory: 256Mi, CPU: 250m |
| livenessProbe.enabled | Enable the liveness probe | true |
| livenessProbe.initialDelaySeconds | Delay before the liveness probe is initiated | 30 |
| livenessProbe.periodSeconds | How often to perform the probe | 10 |
| livenessProbe.timeoutSeconds | When the probe times out | 5 |
| livenessProbe.failureThreshold | Minimum consecutive failures for the probe to be considered failed after having succeeded | 6 |
| livenessProbe.successThreshold | Minimum consecutive successes for the probe to be considered successful after having failed | 1 |
| readinessProbe.enabled | Enable the readiness probe | true |
| readinessProbe.initialDelaySeconds | Delay before the readiness probe is initiated | 5 |
| readinessProbe.periodSeconds | How often to perform the probe | 10 |
| readinessProbe.timeoutSeconds | When the probe times out | 5 |
| readinessProbe.failureThreshold | Minimum consecutive failures for the probe to be considered failed after having succeeded | 6 |
| readinessProbe.successThreshold | Minimum consecutive successes for the probe to be considered successful after having failed | 1 |
| metrics.kafka.enabled | Whether or not to create a separate Kafka exporter | false |
| metrics.kafka.image.registry | Kafka exporter image registry | docker.io |
| metrics.kafka.image.repository | Kafka exporter image name | danielqsj/kafka-exporter |
| metrics.kafka.image.tag | Kafka exporter image tag | v1.0.1 |
| metrics.kafka.image.pullPolicy | Kafka exporter image pull policy | Always |
| metrics.kafka.image.pullSecrets | Specify docker-registry secret names as an array | [] (does not add image pull secrets to deployed pods) |
| metrics.kafka.interval | Interval at which Prometheus scrapes Kafka metrics when using the Prometheus Operator | 10s |
| metrics.kafka.port | Kafka exporter port which exposes metrics in Prometheus format for scraping | 9308 |
| metrics.kafka.resources | Allows setting resource limits for the kafka-exporter pod | {} |
| metrics.jmx.resources | Allows setting resource limits for the JMX sidecar container | {} |
| metrics.jmx.enabled | Whether or not to expose JMX metrics to Prometheus | false |
| metrics.jmx.image.registry | JMX exporter image registry | docker.io |
| metrics.jmx.image.repository | JMX exporter image name | solsson/kafka-prometheus-jmx-exporter@sha256 |
| metrics.jmx.image.tag | JMX exporter image tag | a23062396cd5af1acdf76512632c20ea6be76885dfc20cd9ff40fb23846557e8 |
| metrics.jmx.image.pullPolicy | JMX exporter image pull policy | Always |
| metrics.jmx.image.pullSecrets | Specify docker-registry secret names as an array | [] (does not add image pull secrets to deployed pods) |
| metrics.jmx.interval | Interval at which Prometheus scrapes JMX metrics when using the Prometheus Operator | 10s |
| metrics.jmx.exporterPort | JMX exporter port which exposes metrics in Prometheus format for scraping | 5556 |
| metrics.jmx.configMap.enabled | Enable the default ConfigMap for JMX | true |
| metrics.jmx.configMap.overrideConfig | Allows the config file to be generated by passing values to the ConfigMap | {} |
| metrics.jmx.configMap.overrideName | Allows setting the name of the ConfigMap to be used | "" |
| metrics.jmx.jmxPort | The port on which JMX-style metrics are exposed (note: these are not scrapeable by Prometheus) | 5555 |
| metrics.jmx.whitelistObjectNames | Allows setting which JMX objects to expose via the JMX exporter | (see values.yaml) |
| zookeeper.enabled | Switch to enable or disable the Zookeeper Helm chart | true |
| externalZookeeper.servers | Server or list of external Zookeeper servers to use | nil |

Specify each parameter using the --set key=value[,key=value] argument to helm install. For example,

$ helm install --name my-release \
  --set kafkaPassword=secretpassword,kafkaDatabase=my-database \
    bitnami/kafka

The above command sets the Kafka account password to secretpassword. Additionally, it creates a database named my-database.

Alternatively, a YAML file that specifies the values for the parameters can be provided while installing the chart. For example,

$ helm install --name my-release -f values.yaml bitnami/kafka

Tip: You can use the default values.yaml
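For example, a small custom values file overriding a few of the parameters listed above might look like the following (a sketch; the values shown are illustrative, not recommendations):

service:
  type: NodePort
deleteTopicEnable: true
heapOpts: -Xmx2048m -Xms2048m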

Production and horizontal scaling

The following repo contains the recommended production settings for the Kafka server in an alternative values file. Please read the comments in the values-production.yaml file carefully to set up your environment.

To horizontally scale this chart, first download the values-production.yaml file to your local folder, then:

$ helm install --name my-release -f ./values-production.yaml bitnami/kafka
$ kubectl scale statefulset my-kafka-slave --replicas=3
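Alternatively, the broker count can be set declaratively through the chart's replicaCount parameter in a values override (a sketch assuming three Kafka nodes):

replicaCount: 3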

Enable security for Kafka and Zookeeper

If you enabled authentication for Kafka, the SASL_SSL listener will be configured with the inputs you provide. In particular, you can set the following pairs of credentials:

  • brokerUser/brokerPassword: to authenticate Kafka clients against Kafka brokers.
  • interBrokerUser/interBrokerPassword: to authenticate Kafka brokers with each other.
  • zookeeperUser/zookeeperPassword: used when the Zookeeper chart is deployed with SASL authentication enabled.

In order to configure authentication, you must create a secret containing the kafka.keystore.jks and kafka.truststore.jks certificates, and pass the secret name with the --set auth.certificatesSecret option when deploying the chart.

You can create the secret with this command, assuming you have your certificates in your working directory:

kubectl create secret generic kafka-certificates --from-file=./kafka.keystore.jks --from-file=./kafka.truststore.jks

As an example of installing Kafka with authentication, you can use this command:

helm install --name my-release bitnami/kafka --set auth.enabled=true \
             --set auth.brokerUser=brokerUser --set auth.brokerPassword=brokerPassword \
             --set auth.interBrokerUser=interBrokerUser --set auth.interBrokerPassword=interBrokerPassword \
             --set auth.zookeeperUser=zookeeperUser --set auth.zookeeperPassword=zookeeperPassword \
             --set zookeeper.auth.enabled=true --set zookeeper.auth.serverUser=zookeeperUser --set zookeeper.auth.serverPassword=zookeeperPassword \
             --set zookeeper.auth.clientUser=zookeeperUser --set zookeeper.auth.clientPassword=zookeeperPassword \
             --set auth.certificatesSecret=kafka-certificates

Note: If the JKS files are password protected (recommended), you will need to provide the password to get access to the keystores. To do so, use the --set auth.certificatesPassword option to provide your password.
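Equivalently, the same authentication settings can be kept in a values file and passed with -f (a sketch mirroring the command above; all user names, passwords, and the secret name are placeholders):

auth:
  enabled: true
  brokerUser: brokerUser
  brokerPassword: brokerPassword
  interBrokerUser: interBrokerUser
  interBrokerPassword: interBrokerPassword
  zookeeperUser: zookeeperUser
  zookeeperPassword: zookeeperPassword
  certificatesSecret: kafka-certificates
  # certificatesPassword: myJksPassword  # only needed if the JKS files are password protected
zookeeper:
  auth:
    enabled: true
    serverUser: zookeeperUser
    serverPassword: zookeeperPassword
    clientUser: zookeeperUser
    clientPassword: zookeeperPassword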

Persistence

The Bitnami Kafka image stores the Kafka data at the /bitnami/kafka path of the container.

Persistent Volume Claims are used to keep the data across deployments. This is known to work in GCE, AWS, and minikube. See the Configuration section to configure the PVC or to disable persistence.
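For example, to request a specific storage class and a larger volume, override the persistence block in a values file (a sketch; the storage class name is a hypothetical example, and setting persistence.enabled to false disables persistence entirely):

persistence:
  enabled: true
  storageClass: standard
  size: 20Gi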

Upgrading

To 1.0.0

Backwards compatibility is not guaranteed unless you modify the labels used on the chart's deployments. Use the workaround below to upgrade from versions previous to 1.0.0. The following example assumes that the release name is kafka:

$ kubectl delete statefulset kafka-kafka --cascade=false
$ kubectl delete statefulset kafka-zookeeper --cascade=false