pts-users-service v1.0.0
Microservice boilerplate code
Developer start guide
This repository contains starter boilerplate code for creating a new microservice.
Necessary tools
These tools are needed to build, deploy, run, and contribute to a new microservice:
- Node.js
- TypeScript
- npm
- Git
How do I use the boilerplate code to create a microservice?
Here are the steps:
- Go to Azure DevOps -> Repos -> find/search pts-microservice-boilerplate -> Clone -> copy the URL
- Click on Repositories and Import repository -> give your microservice a new name that describes what it does, e.g. pts-imaging-microservice
- Once the code is imported into the new repo, clone the service to your machine.
- To verify that the service runs in boilerplate hello-world mode, run npm run clean && npm run build && npm run start, or use the VS Code debugger to start the service.
- Once you browse to http://localhost:10010 you will see the Swagger UI with the hello-world APIs; they also work with Google authentication.
- Now you can start editing the boilerplate code to turn it into your microservice.
Start with swagger.yaml
- Change the description in the swagger, and change the x-swagger-router-controller field to point to your own controller, e.g. images
- Change the method names from the HelloWorld names to your custom names
- Change the definitions and parameters
Change the corresponding elements in openapi-spec.yaml as well; this helps with the developer portal release along with Apigee, and your review will reach production faster.
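As an illustration, the swagger.yaml edits described above might look roughly like this. The images path, controller name, and operationId are hypothetical examples, not names from the boilerplate:

```yaml
# Hypothetical fragment of swagger.yaml after replacing the hello-world pieces.
swagger: "2.0"
info:
  title: pts-imaging-microservice       # your new service name
  description: Image management APIs    # updated description
  version: 1.0.0
paths:
  /images:
    x-swagger-router-controller: images # must match your controller file name
    get:
      operationId: getImages            # replaces the HelloWorld method name
      responses:
        "200":
          description: List of images
```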
Controller
- Rename the controller from sample to the name defined in swagger.yaml under x-swagger-router-controller
- Change the method names and start your implementation :)
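A controller in this routing style exports one function per swagger operationId. Here is a minimal sketch, assuming the hypothetical images controller and getImages operation named above (these are illustrative names, not boilerplate code):

```typescript
// Hypothetical src/controllers/images.ts: one exported function per swagger
// operationId, using the (req, res) signature that swagger routers expect.
export function getImages(_req: any, res: any): void {
  // A real controller would read swagger-validated parameters from the request
  // and delegate to a service class; here the payload is stubbed.
  const result = { images: [] as string[] };
  res.setHeader("Content-Type", "application/json");
  res.end(JSON.stringify(result));
}
```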
Classes
- Add implementation classes under src\classes, e.g. Google, Azure
- You can write service classes and add to the existing Utils classes
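As a sketch of such a service class (the class and method names here are made up for illustration, not part of the boilerplate):

```typescript
// Hypothetical src/classes/ImageStore.ts: a small service class a controller
// can delegate to, keeping implementation logic out of the routing layer.
export class ImageStore {
  private readonly images = new Map<string, string>();

  // Store an image reference and return its id.
  add(id: string, uri: string): string {
    this.images.set(id, uri);
    return id;
  }

  // Look up a stored image URI, or undefined when absent.
  get(id: string): string | undefined {
    return this.images.get(id);
  }
}
```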
Security middlewares
- For swagger authentication, pre-built authentication middlewares are available, such as Google security using a bearer token; more are to be added, such as SAuth authentication
- For custom additional security middlewares, change the name and the file at src\Auth.ts; read up on swagger middlewares and implement them the same way as in Auth.ts
- Also change the import in App.ts, where it is integrated with the swagger middlewares node module
- Change package.json to point to the latest api-middlewares node module, available at https://slb-swt.visualstudio.com/psuite/_packaging?_a=package&feed=petrotechnical-suite&package=%40slb-pts%2Fapi-middlewares&protocolType=Npm, or just run npm install and check that it matches the latest version.
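For orientation, a swagger security middleware is a function that receives the request, the security definition, and the extracted token, and calls back with an error when authentication fails. The sketch below assumes that style; the token check is stubbed for illustration, and a real middleware (like the one in Auth.ts) would actually validate the token with Google:

```typescript
// Hypothetical bearer-token security handler, swagger-tools style:
// (request, securityDefinition, token, callback).
export function bearerAuth(
  _req: unknown,
  _securityDefinition: unknown,
  token: string | undefined,
  callback: (err?: Error) => void
): void {
  if (token && token.startsWith("Bearer ")) {
    // Token present and well-formed; a real middleware would validate it here.
    callback();
  } else {
    callback(new Error("Missing or malformed bearer token"));
  }
}
```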
Docker image
- Replace the port in EXPOSE with the port you configured.
- Run the following commands to run your code locally in Docker.
- For local Docker to run with our middleware artifact, the .npmrc must be consistent with how we deploy the App Engine app, by passing the registry password and registry token. Check the pts-proxy-token pipeline for reference: regpassword is your PAT token, and the registry token is obtained by running the command
npm install -g vsts-npm-auth, then vsts-npm-auth -config .npmrc; this generates the token in the npmrc file in your home directory
- docker build -t <username>/<microservice-name> .
- Check that your image appears when you run docker images; it should also contain the Node Alpine base image
- run docker run -p <port to expose>:<your app port> -d <username>/<microservice-name>
- NOTE: our swagger authentication only works with localhost:10010, as that is what we have registered in the GCloud URLs, so the port to expose on your dev machine should be 10010; alternatively, test directly with Postman or an equivalent tool.
- Run docker ps; if the container does not appear, something went wrong: run docker ps -a to find the failed container and docker logs <container-id> to check the errors.
- Go to Postman or any other client and access the service using http://localhost:<port to expose>/<api-path> with the token set.
- You should get a successful response :)
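For reference, the Dockerfile lines the steps above refer to look roughly like this. This is only an illustrative sketch; the boilerplate's actual Dockerfile may differ in base image tag and build steps:

```dockerfile
# Illustrative sketch only; see the boilerplate Dockerfile for the real content.
FROM node:alpine            # the Node Alpine base image mentioned above
WORKDIR /usr/src/app
COPY package*.json .npmrc ./
RUN npm install
COPY . .
RUN npm run build
EXPOSE 10010                # replace with the port your service is configured on
CMD ["npm", "run", "start"]
```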
Developer portal for Apigee
- When deploying a microservice on the developer portal you will need an OpenAPI spec; to generate one, I have created a utility called swagger2apigee_openapi.py
- It needs a YAML package, which can be installed using pip install -r requirements.txt
- After that, just run python swagger2apigee_openapi.py and voila: you have an Apigee-portal-supported OpenAPI spec, which should make your solution and microservice available in the portal with fewer approval hassles. You can edit the generated spec manually, or suggest changes to the script to incorporate new requirements.
K8S Template:
- Check the Azure pipeline YAML to see how the template is used; the template itself is in k8s-service-deployment.yaml
- The certificate (the SLB platforms certificate) is downloaded and used as a TLS secret. This step is easier outside the template, so the certificate is imported as a file; otherwise encoding it into base64 would be tough and cumbersome.
- The template variables are dynamic: change them in your microservice, and do not change the template itself.
- Wait for the ingress creation to report OK; only then will the service be available via the hostname in the template. You can check the cluster for ingress details.
- Check the cluster memory and adjust the resource details if your service does not fit, or move to a new cluster.
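A minimal sketch of what such a templated deployment fragment might look like. The variable names here are illustrative, not the ones in k8s-service-deployment.yaml:

```yaml
# Illustrative sketch only; see k8s-service-deployment.yaml for the real template.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: ${SERVICE_NAME}        # template variable set per microservice
spec:
  replicas: ${REPLICAS}
  template:
    spec:
      containers:
        - name: ${SERVICE_NAME}
          image: ${IMAGE}
          ports:
            - containerPort: 10010
```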
Deploy on PR :
- Keep ENG_SHARED_PROJECT_ID blank to enable Deploy on PR
- Set the variable only to run your branch manually
- Make sure not to set this to anything other than your ENG environment
QP integration:
- All variables that are not common, such as WHITESOURCE_PROJECT_TOKEN and SITE_URL, need to be defined in your pipeline; for the rest, check whether they are available in the ENG or NON PROD library. Configure VSTS_USER and PAT once you are confident, in order to check artifacts in to the QP repo.
- Check and run the pipeline before check-in to verify that the artifacts are correct
Apigee instructions
- Please go through the pts-apigee-proxies README for instructions on configuring new microservices
OpenShift Related Information
Getting Started
The instructions in this section will help you set up this project in your local environment.
Pre-requisites
- Node.js installed on local machine
- Google Cloud SDK installed on local machine
- Access to Google Cloud with SLB account
- Access to the GCP projects below. Please verify access by logging in to the GCP Console.
- pitc-shared-qa: the user service reads the metadata of this project during startup.
- sis-lift-n-shift-stage-new: one of the PTS tenant GCP projects. We will use this project to test the APIs exposed by this user service.
- Make sure that the "pitc-shared-qa" project is set as the current project in the gcloud CLI. Verify by running gcloud config list.
Setup
Clone the source code
Run the command below to clone the source code from the 'openshift' branch
git clone https://slb-swt@dev.azure.com/slb-swt/psuite/_git/pts-users-service -b openshift
Prepare Resources required to create secrets
The .npmrc file present in the Azure Git repo needs to be updated locally with auth details. Note: this is required only if you are building locally or outside an Azure DevOps pipeline.
- Generate a PAT (personal access token) with packaging read & write scopes.
- Copy the PAT generated in the last step and encode it with Base64
- on Mac/Linux run command
echo -n "YOUR_PAT_GOES_HERE" | base64
- on Windows, run Powershell command
[Convert]::ToBase64String([system.Text.Encoding]::UTF8.GetBytes("YOUR_PAT_GOES_HERE"))
For Docker build: the .npmrc file requires the base64-encoded PAT value from the environment variable TOKEN. Follow the steps below to configure the environment variable:
- Create an environment variable TOKEN with the value of the base64 token generated in the Generate PAT step.
- Add the lines below at the end of the .npmrc file, replacing <SLB EMAIL> with your email:
; begin auth token
//pkgs.dev.azure.com/slb-swt/_packaging/petrotechnical-suite/npm/registry/:username=slb-swt
//pkgs.dev.azure.com/slb-swt/_packaging/petrotechnical-suite/npm/registry/:_password=$TOKEN
//pkgs.dev.azure.com/slb-swt/_packaging/petrotechnical-suite/npm/registry/:email=<SLB EMAIL>
//pkgs.dev.azure.com/slb-swt/_packaging/petrotechnical-suite/npm/:username=slb-swt
//pkgs.dev.azure.com/slb-swt/_packaging/petrotechnical-suite/npm/:_password=$TOKEN
//pkgs.dev.azure.com/slb-swt/_packaging/petrotechnical-suite/npm/:email=<SLB EMAIL>
; end auth token
For local Node.js and local Kubernetes environment builds: add the lines below at the end of the .npmrc file, replacing <SLB EMAIL> with your email and <TOKEN> with the token generated above.
; begin auth token
//pkgs.dev.azure.com/slb-swt/_packaging/petrotechnical-suite/npm/registry/:username=slb-swt
//pkgs.dev.azure.com/slb-swt/_packaging/petrotechnical-suite/npm/registry/:_password=<TOKEN>
//pkgs.dev.azure.com/slb-swt/_packaging/petrotechnical-suite/npm/registry/:email=<SLB EMAIL>
//pkgs.dev.azure.com/slb-swt/_packaging/petrotechnical-suite/npm/:username=slb-swt
//pkgs.dev.azure.com/slb-swt/_packaging/petrotechnical-suite/npm/:_password=<TOKEN>
//pkgs.dev.azure.com/slb-swt/_packaging/petrotechnical-suite/npm/:email=<SLB EMAIL>
; end auth token
Get serviceaccount.json from the pitc-shared-qa GCP project (if you don't have access to download it, reach out to the PTS Shared Services team).
Running locally as NodeJs application
Open the command prompt and go to the application root directory. Follow the steps below to run the application locally:
- Run npm install to install the dependencies.
- Run npm run build to build the application.
- Run npm start to run the application locally.
This will start the application on url http://localhost:10010/
Running locally as Docker Container
The steps in this section will help you run this application locally inside a Docker container, the same way as it runs on OpenShift, using the Red Hat certified Node.js UBI image.
Building the Application docker image
Make sure that .npmrc is updated by following the steps in section "Prepare Resources required to create secrets". Run the command below to build the application image locally:
docker build -t pts-users-service .
Running the Application docker image
The image built in the last step requires the Google application credential to be supplied through a JSON file, which is mounted as a volume into the container at runtime. Execute the command below to run the image as a container locally.
docker run -p 10010:10010 -it --rm --name pts-users-service -v <local-machine-path-to-serviceaccount.json>:/etc/pts/secrets/serviceaccount.json -e GOOGLE_APPLICATION_CREDENTIALS=/etc/pts/secrets/serviceaccount.json pts-users-service:latest
Replace <local-machine-path-to-serviceaccount.json> with the absolute path of the serviceaccount.json file obtained in section "Prepare Resources required to create secrets". Please make sure that a parent directory of <local-machine-path-to-serviceaccount.json> is configured in Docker as an allowed mount location (Docker > Settings > Resources > File sharing).
This will start the application on port 10010 of the host machine. The application can be accessed at the URL http://localhost:10010
Building & Deploying on local Kubernetes Environment
Refer to following README.md for building service. Refer to following README.md for deploying service.
Building & Deploying on OpenShift Cluster
Everything is automated through the azure-pipelines.yml located in this repository. Simply executing the pipeline with the correct OpenShift cluster login token, which is set as a variable, will trigger the build and deployment of the service to the OpenShift cluster.
Register callback and OAuth Redirect url in GCP Project (Initial onetime setup)
- We need to register the callback, OAuth redirect, and SAuth URLs in GCP APIs & Services > Credentials to access the service deployed via the OCP route. (It is not yet certain which URLs have to be configured, so three probable URLs are listed.)
- Callback URL format: https://<service-hostname>/auth/google/callback
- OAuth client URL format: https://<service-hostname>/oauth2-redirect.html
- SAuth URL format: https://<service-hostname>/auth/sauth
References
- For information on how to add additional OpenShift resources to the deployment process, refer to the documentation
- For information on how to override default Helm chart values, refer to the documentation