@shapeshiftoss/cluster-launcher v0.20.1


Cluster launcher

The cluster launcher is a Pulumi package used to create an EKS cluster. It is currently used to deploy Unchained's cluster. Eventually it could support other cloud Kubernetes providers such as GKE, AKS, etc.

Dependencies

Helm setup

The following Helm chart repositories must be added to your repo list:

helm repo add traefik https://traefik.github.io/charts
helm repo add aws-ebs-csi-driver https://kubernetes-sigs.github.io/aws-ebs-csi-driver
helm repo add piraeus-charts https://piraeus.io/helm-charts/
helm repo add bitnami https://charts.bitnami.com/bitnami
helm repo add eks https://aws.github.io/eks-charts/
helm repo update

Installing

To use from JavaScript or TypeScript in Node.js, install using either:

npm:

$ npm install @shapeshiftoss/cluster-launcher

or yarn:

$ yarn add @shapeshiftoss/cluster-launcher

Example Usage

Configure Route53 / DNS Registrar

In order for external-dns and cert-manager to operate correctly, a hosted zone for the rootDomainName must be created in Route53 manually, and the NS records must be updated at your domain registrar:

  1. Go to Route53 in the AWS console
  2. Create a new Hosted Zone by clicking Create hosted zone
  3. Enter the domain name that you own and plan to use for this EKS cluster. Leave it public and save.
  4. Copy the name servers found in the NS record. There should be 4 values, looking something like:
    ns-1570.awsdns-04.co.uk.
    ns-810.awsdns-37.net.
    ns-265.awsdns-33.com.
    ns-1050.awsdns-03.org.
  5. Update / change the nameservers wherever your domain is currently registered.
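
If you prefer to manage the hosted zone as code rather than through the console, a minimal sketch using @pulumi/aws is shown below. The domain is a placeholder, the package itself does not create the zone for you, and the exported NS values still have to be copied to your registrar:

import * as aws from '@pulumi/aws'

// Public hosted zone for the domain you plan to pass as rootDomainName
const zone = new aws.route53.Zone('root-zone', {
    name: 'example.com'
})

// Export the NS records so `pulumi stack output nameServers` shows the
// values to configure at your domain registrar
export const nameServers = zone.nameServers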

Configure AWS CLI credentials

  1. If you do not have an aws_access_key_id and aws_secret_access_key, create one for your user in IAM

    • Select your user --> Security credentials --> Create access key
  2. If you do not have a credentials file at ~/.aws/credentials, create one.

    $ touch ~/.aws/credentials
  3. If you haven't set up a profile before, just set up a default profile by copying in the credentials you created

    $ cat <<EOT >> ~/.aws/credentials
    [default]
    aws_access_key_id = <Your Access Key ID>
    aws_secret_access_key = <Your Secret Access Key>
    EOT

    If you already have a default profile, simply create a new profile named whatever you want inside the brackets. You will need to specify profile in the EKSClusterLauncherArgs, otherwise the default profile is used (see the example following the basic usage snippet below).

    $ cat <<EOT >> ~/.aws/credentials
    [New-Profile-Name]
    aws_access_key_id = <Access-Key-ID>
    aws_secret_access_key = <Secret-Access-Key>
    EOT

Now you are ready to use the EKSClusterLauncher:

import { EKSClusterLauncher } from '@shapeshiftoss/cluster-launcher'
import { Provider } from '@pulumi/kubernetes'

const cluster = await EKSClusterLauncher.create(app, {
    rootDomainName: 'example.com', // Domain configured in Route53
    instanceTypes: ['t3.small', 't3.medium', 't3.large'] // List of instance types to be used for worker nodes
})

const kubeconfig = cluster.kubeconfig

const k8sProvider = new Provider('kube-provider', { kubeconfig })
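
If you created a named profile in ~/.aws/credentials rather than using the default, pass it through the profile argument mentioned in the credentials section. A short sketch, reusing the placeholder values from the example above:

const clusterWithProfile = await EKSClusterLauncher.create(app, {
    rootDomainName: 'example.com',
    instanceTypes: ['t3.small', 't3.medium', 't3.large'],
    profile: 'New-Profile-Name' // profile name from ~/.aws/credentials; the default profile is used if omitted
})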

Deployed resources

This package deploys everything necessary for an operational EKS cluster, including:

  • VPC (subnets, route tables, NAT, Internet Gateway)
  • EKS Cluster (Master Node)
  • Managed Node group per AZ (Worker Nodes)
  • Namespace in the cluster for all of the additional services: <name>-infra
  • Additional Services:
    • Cert Manager configured for Let's Encrypt
    • Traefik as the Ingress Controller
    • External DNS for dynamic configuration of Route53 records from Ingress objects (see the sketch after this list)
    • AWS Node Termination Handler to ensure we can gracefully stop services if SPOT instances are preempted
    • A simple Hello World app at helloworld.<rootDomainName> to see that all components are working correctly
    • A PLG (Promtail, Loki, Grafana) stack for log aggregation is available but not deployed by default
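
As a sketch of how these pieces fit together, here is a hypothetical Ingress deployed with the k8sProvider from the usage example above: external-dns creates the Route53 record for the host and Traefik routes traffic to the backing service. The hostname, namespace, and service name are placeholders, not part of this package:

import * as k8s from '@pulumi/kubernetes'

const ingress = new k8s.networking.v1.Ingress('hello-ingress', {
    metadata: {
        annotations: {
            // Route through the Traefik ingress controller deployed by this package
            'kubernetes.io/ingress.class': 'traefik'
        }
    },
    spec: {
        rules: [{
            host: 'hello.example.com', // a subdomain of rootDomainName; external-dns creates the matching Route53 record
            http: {
                paths: [{
                    path: '/',
                    pathType: 'Prefix',
                    backend: { service: { name: 'hello-service', port: { number: 80 } } }
                }]
            }
        }]
    }
}, { provider: k8sProvider }) // provider built from cluster.kubeconfig above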

Access Grafana

A very basic PLG stack can be deployed to aid in troubleshooting. Here is how you can access Grafana from outside your cluster.

Replace <templated variables> with variables specific to your deployment

  1. In the namespace where Grafana is hosted, get the admin password:

    kubectl get secret <grafana secret> -o jsonpath="{.data.admin-password}" | base64 --decode ; echo

  2. Forward the Grafana port to your local machine:

    kubectl port-forward service/<grafana service> 8080:80

  3. On your local machine, navigate to localhost:8080 and log in with admin / <password retrieved during step 1>

Additional Notes

  • The Traefik dashboard is accessible through port forwarding at the path /dashboard/#
  • We are currently using the instance role for Route53 access, but this can be dangerous because ALL pods in the cluster are allowed to modify Route53. Be careful with what workloads are running in this cluster (more information).
  • If using persistent volumes in the Loki stack, you'll want to ensure EBS volumes are cleaned up if logging is disabled. The default behavior is that persistent volume claims are not deleted when their parent StatefulSet is deleted. Functionality to clean up a PVC when a StatefulSet is removed is slated for release in Kubernetes v1.23.