@etherisc/amqp v1.3.0
DIP Platform
Documentation
Test coverage summary
Module | % Stmts | % Branch | % Funcs | % Lines |
---|---|---|---|---|
estore_api.v1.0.0 | 0% (0/135) | 0% (0/6) | 0% (0/29) | 0% (0/135) |
estore_contracts.v1.0.0 | - | - | - | - |
estore_ui.v1.0.0 | 1.92% (5/261) | 0% (0/81) | 2.33% (1/43) | 2.06% (5/243) |
@etherisc/etherisc_flight_delay_api.v0.1.1 | 2.38% (2/84) | 0% (0/20) | 3.33% (1/30) | 2.86% (2/70) |
@etherisc/etherisc_flight_delay_ui.v0.1.1 | 35.9% (14/39) | 0% (0/4) | 30.77% (4/13) | 40% (14/35) |
@etherisc/dip_artifacts_storage.v1.0.0 | 43.18% (19/44) | 50% (1/2) | 33.33% (3/9) | 48.72% (19/39) |
@etherisc/dip_contracts.v1.0.0 | - | - | - | - |
@etherisc/dip_ethereum_client.v0.1.1 | - | - | - | - |
@etherisc/dip_event_listener.v0.1.0 | 43.18% (57/132) | 42.86% (6/14) | 44.44% (12/27) | 45.9% (56/122) |
@etherisc/dip_event_logging.v0.2.0 | 45.71% (16/35) | 75% (3/4) | 54.55% (6/11) | 51.61% (16/31) |
@etherisc/dip_fiat_payment_gateway.v0.1.1 | - | - | - | - |
@etherisc/dip_fiat_payout_gateway.v0.1.1 | - | - | - | - |
@etherisc/dip_pdf_generator.v1.0.1 | - | - | - | - |
@etherisc/dip_policy_storage.v0.1.1 | 50.97% (79/155) | 62.5% (5/8) | 41.46% (17/41) | 55.24% (79/143) |
postgresql-service.v1.0.0 | - | - | - | - |
@etherisc/microservice.v0.4.3 | - | - | - | - |
Setup environments
A. Setup local development environment
1. Install Docker.
2. Install Node.js. The Node.js version should be >= 11, npm >= 6.
3. `npm ci` to install package dependencies.
4. `npm run bootstrap` to install dependencies for Lerna packages.
5. `npm run dev:services:run` to run Docker Compose with RabbitMQ and PostgreSQL.
6. `npm run migrate` to run migrations. Optionally, you can run `npm run seed` to fill the databases with test data, where applicable.
7. Many individual packages in `app_microservices` and `core_microservices` are configured by files called `.env` that contain values for environment variables a package expects to be present in the cloud environment. Where possible, the bootstrap script from step 4 fills in the defaults from `.env.sample`, but developers are free to modify the `.env` files as appropriate.
8. `npm run dev` to start the applications.
9. `npm login` to log in to an npm account with access to the @etherisc organization's private packages.
10. `npm run publish` to update npm packages.
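For repeatability, the install-and-run steps above can be wrapped in a small script. A minimal sketch; the script and its `APPLY` flag are illustrative, not part of the repo, and by default it only prints what it would run:

```shell
#!/usr/bin/env bash
# dev-bootstrap.sh (hypothetical helper): runs the local setup steps in order.
# Without APPLY=1 it only prints the commands instead of executing them.
set -euo pipefail

run() {
  if [ "${APPLY:-0}" = "1" ]; then
    "$@"
  else
    echo "would run: $*"
  fi
}

run npm ci                    # install package dependencies
run npm run bootstrap         # install dependencies for Lerna packages
run npm run dev:services:run  # start RabbitMQ and PostgreSQL via Docker Compose
run npm run migrate           # run database migrations
run npm run dev               # start the applications
```

Running it once with `APPLY=1` after reviewing the printed plan keeps the bootstrap sequence in one place.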
B. Setup local development e2e test environment
- Install Minikube. Make sure `kubectl` is the latest version.
- Run Minikube: `minikube start` will start Minikube. You may want to configure it for better performance:
  `minikube cache add nginx:stable`
  `minikube cache add postgres:10.5`
  `minikube cache add node:11.2.0`
  `minikube config set memory 4096`
- `minikube ip` will return the local Minikube IP.
- `minikube dashboard` will open the Minikube dashboard for the local Kubernetes cluster.
- `minikube delete` will delete the Minikube cluster.
- Note that the IP is new each time you restart Minikube. You can get it at any time by running `minikube ip`. Keep it handy for all the other ports we'll potentially expose later on in the process.
- `npm ci` to install package dependencies.
- `NPM_TOKEN=<token> npm run deploy:minikube` to deploy to Minikube. To get the token, sign in to npm and create a token of type Publish on https://www.npmjs.com/settings/etherisc_user/tokens/create.
Notes
- By navigating to `<minikubeip>:31672` in your browser you can open RabbitMQ's management plugin. The default administrative credentials are `admin/guest`.
- `fdd.web` is available on `<minikubeip>:80`.
- `postgresql` is available on `<minikubeip>:30032`. Connection string: `postgresql://dipadmin:dippassword@postgres:5432/dip`.
- To check whether the pods were created:
  `kubectl get pods --show-labels`
  `kubectl describe pod <pod name>`
  `kubectl logs <pod name>`
- For the front-end services, the deployments should ideally be accompanied by services exposing node-ports outward. But to forward the ports so that deployment port interfaces are available from your local environment, run:
  `kubectl port-forward deployment/<deployment name> 8080:8080 8081:8081`
  The final parameter is a list of space-delimited port pairs going local:minikube.
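A small wrapper can validate the local:minikube pairs before handing them to kubectl. This sketch is illustrative (the `pf` helper is not part of the repo) and only prints the command it would run:

```shell
#!/usr/bin/env bash
# pf DEPLOYMENT PAIR... -- check each port pair, then print the kubectl command.
pf() {
  local deployment=$1; shift
  local pair
  for pair in "$@"; do
    if ! [[ "$pair" =~ ^[0-9]+:[0-9]+$ ]]; then
      echo "bad port pair (want local:minikube): $pair" >&2
      return 1
    fi
  done
  echo kubectl port-forward "deployment/$deployment" "$@"
}
```

For example, `pf fdd.web 8080:8080 8081:8081` prints the full `kubectl port-forward` command; piping the output to `sh` would run it.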
B-2. Deploy to Minikube bundled with the local Docker (alternative to setting up Minikube)
This option applies if you are a Mac user and have Docker for Mac 17.12 CE Edge and higher, or 18.06 Stable and higher.
1. Configure Kubernetes for Docker.
2. `npm ci` to install package dependencies.
3. `NPM_TOKEN=<token> npm run deploy:docker` to deploy. The deploy script will prompt you for the values you'd like your environment to have configured in Secrets. Some of them have default values pre-configured.
Notes
All the notes for the Minikube deployment apply, but in the case of the local Docker setup, `<minikubeip>` will need to be replaced with `localhost`.
To connect to a cluster service with a local management / edit tool, you'd need to start a port-forwarding process:
`kubectl port-forward svc/<service name> <local port>:<service port>`
For example:
`kubectl port-forward svc/postgres 5432:5432`
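With the port-forward above in place, the PostgreSQL connection string from the Minikube notes applies with `localhost` substituted for the in-cluster host. A sketch composing it from its parts (credentials are the defaults quoted in the notes; verify them against your own environment):

```shell
#!/usr/bin/env bash
# Connection parameters for the port-forwarded PostgreSQL service.
PG_USER=dipadmin
PG_PASSWORD=dippassword
PG_HOST=localhost   # port-forwarded locally instead of the in-cluster host
PG_PORT=5432
PG_DB=dip

PG_URL="postgresql://${PG_USER}:${PG_PASSWORD}@${PG_HOST}:${PG_PORT}/${PG_DB}"
echo "$PG_URL"
```

Once the forward is running, a client such as `psql "$PG_URL"` can connect directly.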
C. Setup local development environment for deployment to GKE clusters
- Install and set up kubectl.
- Install and initialize the Google Cloud SDK.
- Create an account / log in to the Google Cloud Platform Console.
- In the GCP dashboard, navigate to Kubernetes Engine > Clusters and create a new cluster. Choose "Advanced options" and check the "Try the new Stackdriver beta Monitoring and Logging experience" checkbox. This will enable platform-wide logging.
- In the description of the newly created cluster, find and click the "Connect" button and run the generated command in the local console you are going to use for the deploy.
- `npm install` to install package dependencies.
- `gcloud auth configure-docker --quiet` to authorize to the Google Registry.
- `GCLOUD_PROJECT_ID=<project name> GCLOUD_CLUSTER=<cluster name> GCLOUD_ZONE=<cluster zone> NPM_TOKEN=<token> npm run deploy:gke` to deploy to the GKE cluster. To get the token, sign in to npm and create a token of type Publish on https://www.npmjs.com/settings/etherisc_user/tokens/create.
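Since `deploy:gke` reads its configuration from environment variables, a pre-flight check can fail fast when one is missing. A bash sketch (the `check_env` helper is illustrative, not part of the repo):

```shell
#!/usr/bin/env bash
# check_env NAME... -- report unset or empty variables; non-zero exit if any are missing.
check_env() {
  local missing=0 name
  for name in "$@"; do
    if [ -z "${!name:-}" ]; then
      echo "missing: $name"
      missing=1
    fi
  done
  return "$missing"
}

check_env GCLOUD_PROJECT_ID GCLOUD_CLUSTER GCLOUD_ZONE NPM_TOKEN \
  || echo "set the variables above before running: npm run deploy:gke"
```

Running it before the deploy command lists every variable still unset instead of failing partway through the deploy.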
D. Setup deployment to GKE clusters from Bitbucket Pipelines CI
Setup Google Cloud
- Create account / login to Google Cloud Platform Console
- Select or create a GCP project (manage resources page)
- Make sure that billing is enabled for your project (learn how)
- Enable the App Engine Admin API (enable APIs)
Create Kubernetes cluster
- In the GCP dashboard, navigate to Kubernetes Engine > Clusters.
- Create a new cluster.
- If you are deploying for the first time, answer `Y` when the deployment script asks whether you want to `Set Secret variables?` in general, as well as for each specific set of secrets later on.
- Choose "Advanced options" and check the "Try the new Stackdriver beta Monitoring and Logging experience" checkbox. This will enable platform-wide logging.
Create authorization credentials for Bitbucket
Create an App Engine service account and API key. Bitbucket needs this information to deploy to App Engine.
- In the Google Cloud Platform Console, go to the Credentials page.
- Click Create credentials > Service account key.
- On the next page, select the Compute Engine default service account in the Service account dropdown.
- Click the Create button. A copy of the JSON file downloads to your computer. (This is your JSON credential file.)
Configure the environment variables required by the pipeline script
- Open your terminal and browse to the location of your JSON credential file from earlier. Then run the command below to encode the file in base64 format, and copy the output of the command to your clipboard:
  `base64 <your-credentials-file.json>`
- Go to your repository settings in Bitbucket and navigate to Pipelines > Environment variables. Create a new variable named GCLOUD_API_KEYFILE and paste the encoded service account credentials into it.
- Add another variable called GCLOUD_PROJECT_ID and set its value to the ID of the Google Cloud project you created in the first step (your-project-name).
- Add GCLOUD_CLUSTER and GCLOUD_ZONE variables to specify your GKE cluster.
- Use the custom commands specified in bitbucket-pipelines.yml to deploy into the Kubernetes cluster.
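The GCLOUD_API_KEYFILE value is nothing more than the base64 encoding of the JSON credential file. A quick round-trip check (the file contents here are a stand-in, not real credentials) confirms the encoding step:

```shell
#!/usr/bin/env bash
# Create a stand-in credentials file, encode it as the pipeline variable would be,
# then decode it back and compare to make sure nothing was mangled along the way.
printf '{"type":"service_account"}' > creds.json
GCLOUD_API_KEYFILE=$(base64 < creds.json)
printf '%s\n' "$GCLOUD_API_KEYFILE" | base64 --decode > roundtrip.json
cmp creds.json roundtrip.json && echo "round-trip OK"
```

The same decode step is what the pipeline does with the variable before authenticating to GCP.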
Manually switching between kubectl contexts (existing deployments)
To add a context for an already existing cluster to your local kubectl, use this guide.
The following command will show all the configured contexts:
`kubectl config get-contexts`
To switch the kubectl context between environments:
`kubectl config use-context <contextname>`
Note: do not switch contexts during a deploy, since the next kubectl instruction will apply to the new active context instead of the one you started the deploy with.
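To enforce this in a deploy script, the expected context can be captured once at the start and re-checked before each kubectl step. A sketch, assuming bash; the `ensure_context` helper is hypothetical:

```shell
#!/usr/bin/env bash
# ensure_context EXPECTED -- fail if the active kubectl context has changed.
# A deploy script would capture EXPECTED once at start via `kubectl config current-context`.
ensure_context() {
  local expected=$1 current
  current=$(kubectl config current-context)
  if [ "$current" != "$expected" ]; then
    echo "kubectl context changed: expected '$expected', found '$current'" >&2
    return 1
  fi
}
```

Calling `ensure_context "$DEPLOY_CONTEXT"` immediately before each kubectl command aborts the deploy instead of silently applying it to the wrong cluster.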
View logs from Kubernetes
During cluster creation, choose "Advanced options" and check the "Try the new Stackdriver beta Monitoring and Logging experience" checkbox. This will enable platform-wide logging.
To view logs, navigate to "Logging > Logs" and select the filter "Kubernetes Container > YOUR-CLUSTER > default > All container_name".
In the advanced filter, add these lines to the query:
`NOT textPayload:(GET /ready)`
`NOT textPayload:(GET /live)`