@helm-charts/banzaicloud-stable-zeppelin v0.0.17-0.1.0 • Published 5 years ago

Weekly downloads: 1
License: MIT
Repository: -
Last release: 5 years ago

@helm-charts/banzaicloud-stable-zeppelin

A Helm chart for Kubernetes

| Field               | Value              |
| ------------------- | ------------------ |
| Repository Name     | banzaicloud-stable |
| Chart Name          | zeppelin           |
| Chart Version       | 0.0.17             |
| NPM Package Version | 0.1.0              |
# Default values for zeppelin
# This is a YAML-formatted file.
# Declare variables to be passed into your templates.
replicaCount: 1

image:
  repository: banzaicloud/zeppelin-server
  tag: v0.9.0-k8s-1.0.76
  pullPolicy: IfNotPresent

resources:
  limits:
    cpu: 2
    memory: 2048
  requests:
    cpu: 500m
    memory: 1024

additionalOptions:

nodeSelector: {}

tolerations: []

affinity: {}

service:
  type: ClusterIP
  internalPort: 8080
  externalPort: 8080
  rpcPort: 38853

userCredentialSecretName: ''
username: 'admin'
password: 'zeppelin'

interpreterConnectTimeout: 120000

ingress:
  baseUrl: '/zeppelin'
  enabled: false
  annotations: {}
    # kubernetes.io/ingress.class: traefik
    # ingress.kubernetes.io/ssl-redirect: "false"
    # traefik.frontend.rule.type: PathPrefix
  hosts:
    - '/'
    # - "domain.com/xyz"
    # - "domain.com"
  tls: []
  #  - secretName: chart-example-tls
  #    hosts:
  #      - chart-example.local

logService:
  zeppelinLogLevel: DEBUG
  zeppelinFacility: LOCAL4
  zeppelinLogPattern: '%5p [%d] ({%t} %F[%M]:%L) - %m%n'
  sparkLogLevel: INFO
  sparkFacility: LOCAL4
  sparkLogPattern: '[%p] XXX %c:%L - %m%n'
  applicationLoggerName: application
  applicationLogLevel: INFO
  applicationFacility: LOCAL4
  applicationLogPattern: '[%p] XXX %c:%L - %m%n'

notebookStorage:
  type: ''
  path: ''
azureStorageAccountName: ''
azureStorageAccessKey: ''

sparkSubmitOptions:
  sparkImage:
    name: banzaicloud/zeppelin-spark
    tag: v0.9.0-k8s-1.0.76
  k8sNameSpace: default
  sparkDriverCores: 1
  sparkDriverLimitCores: 1
  sparkExecutorCores: 1
  sparkDriverMemory: 1G
  sparkExecutorMemory: 1G
  dynamicAllocation: false
  shuffleService: false
  driverServiceAccountName: 'spark'
  sparkLocalDir: /tmp/spark-local

sparkEventLogStorage:
  cloudProvider: ''
  logDirectory: ''
  secretName: ''

  awsAccessKeyId: ''
  awsSecretAccessKey: ''

  aliOssEndpoint: ''
  aliOssRegion: ''
  aliAccessKeyId: ''
  aliSecretAccessKey: ''

  azureStorageAccessKey: ''
  azureStorageAccountName: ''

  googleJson: ''

  oracleRegion: ''
  oracleHost: ''
  oracleTenancyId: ''
  oracleUserId: ''
  oracleApiKeyFingerprint: ''

Zeppelin Chart

Zeppelin is a web-based notebook for interactive data analytics with Spark, SQL and Scala.

Chart Details

Installing the Chart

To install the chart:

$ helm install banzaicloud-stable/zeppelin
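To customize the deployment, you can override the chart's defaults in a values file. A minimal sketch — the file name, password, and host below are placeholders, not values shipped with the chart; only keys that appear in the default values above are used:

```yaml
# my-values.yaml — hypothetical overrides for the zeppelin chart
username: admin
password: my-salted-password   # placeholder; see the Configuration table for salting

ingress:
  enabled: true
  baseUrl: /zeppelin
  hosts:
    - 'zeppelin.example.com/zeppelin'   # placeholder host
```

and install it with `helm install -f my-values.yaml banzaicloud-stable/zeppelin`.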

Configuration

The following table lists the configurable parameters of the Zeppelin Server chart and their default values, including the settings needed to preserve your Spark application logs on S3 or Azure storage.

| Parameter | Required | Description | Example |
| --------- | -------- | ----------- | ------- |
| `username` | no | Admin username | by default `admin` |
| `password` | no | Salted password of the admin user; you can salt your own password using the Shiro CLI tool: `java -jar ~/dev/tools/shiro-tools-hasher-1.3.2-cli.jar -p` | by default `zeppelin` |
| `userCredentialSecretName` | no | The credentials above are set in a K8s secret; instead of specifying username & password directly you can provide the name of a K8s secret containing these fields | |
| `logService.host` | yes, if you want to send logs to Syslog | Host address of the Syslog service | `10.44.0.12` |
| `logService.zeppelinLogPort` | yes, if you want to send logs to Syslog | UDP port for Zeppelin logs | `512` |
| `logService.sparkLogPort` | yes, if you want to send logs to Syslog | UDP port for Spark driver and executor logs | `512` |
| `logService.applicationLogPort` | yes, if you want to send logs to Syslog | UDP port for application logs | `512` |
| `logService.applicationLoggerName` | no | Name of the log4j logger for application logs | by default `application` |
| `logService.zeppelinLogLevel` | no | log4j log level for Zeppelin logs | by default `DEBUG` |
| `logService.zeppelinLogPattern` | no | log4j log pattern for Zeppelin logs | by default `"%5p [%d] ({%t} %F[%M]:%L) - %m%n"` |
| `logService.sparkLogLevel` | no | log4j log level for Spark logs | by default `INFO` |
| `logService.sparkLogPattern` | no | log4j log pattern for Spark logs | by default `"[%p] XXX %c:%L - %m%n"` |
| `logService.applicationLogLevel` | no | log4j log level for application logs | by default `INFO` |
| `logService.applicationLogPattern` | no | log4j log pattern for application logs | by default `"[%p] XXX %c:%L - %m%n"` |
| `sparkSubmitOptions.eventLogDirectory` | yes | URL of the directory for event logs | `s3a://yourBucketName`, `wasb://your_blob_container_name@you_storage_account_name.blob.core.windows.net`, `gs://yourBucketName` |
| `notebookStorage.type` | no | Storage type for notebooks (`s3`, `azure`, `gs`); by default no storage is configured | |
| `notebookStorage.path` | no | Storage path for notebooks | bucket name in case of S3 / GS, file share name for Azure |
| `azureStorageAccountName` | only when using Azure Storage | Name of your Azure storage account | see Notes |
| `azureStorageAccessKey` | only when using Azure Storage | Access key for your Azure storage account | see Notes |
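If you use `userCredentialSecretName` instead of inline credentials, the secret can be created ahead of time. A minimal sketch, assuming the secret carries `username` and `password` fields as described above (the secret name `zeppelin-credentials` is a placeholder):

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: zeppelin-credentials   # placeholder; reference it via userCredentialSecretName
type: Opaque
stringData:
  username: admin
  password: my-salted-password   # salt it first with the Shiro hasher, as above
```

Then set `userCredentialSecretName: zeppelin-credentials` in your values.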

Notes

  • in case of S3 and Google Storage we don't pass credentials or access keys; we rely on IAM roles and policies on Amazon and Service Account based access on Google Cloud
  • in case of Azure the storage account name is the DNS prefix under which the account was created (e.g. for mystorage.blob.core.windows.net the name would be mystorage), and you can use either the primary or the secondary key
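Putting the Azure notes together, a values fragment for Azure-backed notebook storage might look like this (the account name, key, and file share name are placeholders):

```yaml
# Storage account mystorage.blob.core.windows.net → account name "mystorage"
azureStorageAccountName: mystorage
azureStorageAccessKey: '<primary-or-secondary-key>'   # placeholder
notebookStorage:
  type: azure
  path: notebooks   # file share name for Azure
```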