# program_leader_portal_py v1.0.0

Spark: allowing our program leads to interact with our participants.
## Prerequisites

Install the pre-commit hook:

```sh
cd program_lead_portal_py
# if you do not have pre-commit already
brew install pre-commit
pre-commit install --install-hooks
```

Refer to the comments at the top of `.pre-commit-config.yaml` to learn how to use and update it.
## Release

Moving a code change from your computer to production requires two steps: building a Docker image, then deploying that image.

### Building

The first step is to branch off of `master`, make your changes, commit, push, and then open a PR back into `master`.

Note: We are no longer branching off of `dev`!

Once the tests pass, your code is up to date, and you have your +2, merge that branch back into `master`. That tells Circle to create a Docker image, run the tests, and let you know when it's ready to deploy.
### Protobufs

You'll need to install the protobuf library to generate the Python definition files from the `.proto` files in the `www/serializers` directory. To install the protobuf compiler, follow the instructions at https://github.com/protocolbuffers/protobuf#protocol-compiler-installation. In particular, note that it is not enough to download and extract the `.tar.gz` or `.zip` file; you will also need to build and install the project, as described in https://github.com/protocolbuffers/protobuf/blob/master/src/README.md. To install the `protoc` executable, once you've extracted the download, `cd` to the extracted directory and execute:

```sh
$ ./configure
$ make
$ make check
$ sudo make install
$ sudo ldconfig
```

The Python tutorial can be found at https://developers.google.com/protocol-buffers/docs/pythontutorial.

Once you've updated a `.proto` file, you'll need to regenerate the Python object definitions from it. E.g.:

```sh
$ protoc -I=www/serializers/protobufs --python_out=www/serializers/protobufs www/serializers/protobufs/celery_tasks.proto
```

This command resolves imports against the directory given by the `-I` flag and writes the resulting Python files to the location indicated by the `--python_out` flag. Note that the `.proto` file itself must also be passed as the final argument: `-I` only sets the import search path, while the positional arguments name the files to actually compile.
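If you regenerate definitions often, the invocation can be scripted. A minimal sketch; the `protoc_command` helper is ours for illustration, not part of the repo:

```python
def protoc_command(proto_dir, proto_file):
    """Build the protoc invocation for this repo's .proto layout.

    proto_dir serves as both the import root (-I) and the output
    directory; the .proto file itself must still be passed as the
    final positional argument.
    """
    return [
        "protoc",
        f"-I={proto_dir}",
        f"--python_out={proto_dir}",
        f"{proto_dir}/{proto_file}",
    ]
```

With `protoc` on your PATH, you could run the result via `subprocess.run(protoc_command(...), check=True)`.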
### Deploying

Deploys also happen in two steps. First you deploy to stage, where you have an opportunity to check the changes and/or circulate them. After that you are ready to deploy to production, which is done through the UI in Circle.

A deploy asks our Docker image repository for the corresponding image to release and tells Aptible to deploy it. We no longer have to rebuild the image at each step.
## Setup With Docker Compose

Check out the `docker_dev` repo and let Docker do the work:

```sh
cd docker_dev/docker
virtualenv venv
source venv/bin/activate
pip install -r requirements.txt
./start_apps.sh plp
dssh docker_plp_1
cd /plp
# start the specific service(s) you want (service is web, beat, or worker)
honcho -f Procfile.dev start <service> &
# or you can start all services
honcho -f Procfile.dev start
```
## Configuration

Do not, under any circumstances, store sensitive configuration variables (e.g. database URLs, passwords, API keys, account IDs, etc.) in config files. Use environment variables instead.

### Check Environment Variables on Production

Production has a superset of the env vars you need on dev. To get the current list:

```sh
aptible config --app ketothrive-plp-prod | cut -d'=' -f1
```

Do not use the production values for your dev environment!
| Variable | Value | Credstash name |
|---|---|---|
| APP_ENVIRONMENT | dev | |
| APP_TYPE | program_leader | |
| AUTHY_API_KEY | | plp.dev.AUTHY_API_KEY |
| AUTHY_API_PREFIX | | plp.dev.AUTHY_API_PREFIX |
| AWS_ACCESS_KEY | | plp.dev.AWS_ACCESS_KEY |
| AWS_SECRET_KEY | | plp.dev.AWS_SECRET_KEY |
| CELERY_DEFAULT_QUEUE | | plp.dev.CELERY_DEFAULT_QUEUE |
| DATABASE_URL | | plp.dev.DATABASE_URL |
| MESSAGE_BROKER_URL | | plp.dev.MESSAGE_BROKER_URL |
| NPM_TOKEN | | plp.dev.NPM_TOKEN |
| REDIS_URL | | plp.dev.REDIS_URL |
| SALESFORCE_API_KEY | | plp.dev.SALESFORCE_API_KEY |
| SENDGRID_API_KEY | | plp.dev.SENDGRID_API_KEY |
| TWILIO_ACCOUNT_SID | | plp.dev.TWILIO_ACCOUNT_SID |
| TWILIO_AUTH_TOKEN | | plp.dev.TWILIO_AUTH_TOKEN |
| TWILIO_NOTIFY_SERVICE_ID | | plp.dev.TWILIO_NOTIFY_SERVICE_ID |
| TWILIO_PHONE_NUMBER | | plp.dev.TWILIO_PHONE_NUMBER |
| service__identity_service | | plp.dev.service__identity_service |
| service__labs | | plp.dev.service__labs |
Prod also has these; you may ignore them:

- DISABLE_WEAK_CIPHER_SUITES
- FORCE_SSL
- PRIVATE_RSA_KEY
- PUBLIC_RSA_KEY
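When launching the app locally, it helps to fail fast if something is missing. A minimal sketch, assuming a hand-picked subset of the variable names above; the helper itself is illustrative, not part of the repo:

```python
import os

# Illustrative subset of the variables listed above.
REQUIRED_VARS = ["APP_ENVIRONMENT", "APP_TYPE", "DATABASE_URL", "REDIS_URL"]

def check_env(environ=os.environ):
    """Raise early if any required environment variable is missing."""
    missing = [name for name in REQUIRED_VARS if name not in environ]
    if missing:
        raise RuntimeError("Missing required env vars: " + ", ".join(missing))
```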
## Frontend Development

Frontend development and testing should be done in your local development environment, outside of the Docker container. It is recommended to use Node Version Manager (nvm) to make sure you are running the correct versions of Node and npm.

- Make sure you have the NPM_TOKEN environment variable:

  ```sh
  export NPM_TOKEN=$(credstash get plp.dev.NPM_TOKEN)
  ```

- Install Node Version Manager: https://github.com/creationix/nvm#installation

- Run `nvm install` in this directory. This will install the version of Node described in the `.nvmrc` file and update your local `node` command to point to that version of Node.

  a. If you activate a different version of Node at some point, run `nvm use` in this directory to switch back to the correct version for Spark development.

To start the dev server (watch mode for frontend development):

```sh
npm i # Install packages if they are missing or out of date
npm run start-dev-server
```

To build assets with the production build (useful if you want to run the app without developing the frontend):

```sh
npm i
npm run build-assets
```

If you just want to get things up and running, you can run:

```sh
make frontend-server
```

If for some reason your frontend changes do not take effect, restart Flask and refresh the page.
## Python Tests (Pytest)

Most tests in the server code base use the Pytest framework.

```sh
py.test .
```
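Pytest collects any function named `test_*` in files named `test_*.py`; a minimal test looks like this (the file and function names are hypothetical, not from this repo):

```python
# test_math_utils.py -- hypothetical example, not a file in this repo
def add(a, b):
    return a + b

def test_add():
    # Pytest reports a failure for any assert that is False.
    assert add(2, 3) == 5
```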
## Behavior Tests

We use Python Behave for testing our APIs. These tests can be found in the `features/` directory.

You'll need to bring up all services required for PLP and make sure nothing is running at the PLP port (2900) before running the Behave tests. Starting the app with `./start_apps.sh plp` should bring up all services required for Behave tests.
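To confirm nothing is already listening on port 2900 before a run, a small sketch (the helper name is ours):

```python
import socket

def port_free(port, host="127.0.0.1"):
    """Return True if nothing is accepting connections on host:port."""
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as sock:
        sock.settimeout(1)
        # connect_ex returns 0 when something accepted the connection
        return sock.connect_ex((host, port)) != 0
```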
If you are trying to run Behave tests on bare metal (not in a Docker container), just pass the `-l` flag to `start_apps.sh` (e.g., `./start_apps.sh plp -cl`) and it will bring up all services except the PLP container.

```sh
behave
```
If the Behave process gets stuck for some reason, it's most likely the Flask server that's stuck. Kill the `run_server.py` process and Behave should continue: `ps aux | grep runserver` for it and `kill <pid>` it.

If you kill Behave directly, you might end up with the test database still in your local Postgres instance. If you get some database cruft left over, you can see all databases from the psql terminal with `\l` and drop each database using `drop database <database name>;`. Postgres will prevent you from deleting a database if there is still an open connection to it, in this case from the `runserver.py` process.
### Behave CLI Tips

You can select specific directories, files, or tests (by the line number of the scenario) when executing `behave`:

```sh
behave features/login
behave features/login/identity_token.feature
behave features/login/identity_token.feature:45
```

If you want to run a single test or specific tests, you can tag it/them with a `@wip` tag (or any other tag) and then run only the tests with that tag:

```sh
behave --tags=wip
```

You can also pass the `--stop` flag to `behave` to have it stop on the first failure.
### Flaky Tag

We use the `@flaky` tag to mark flaky tests. These tests are run separately in CircleCI, which allows us to rerun fewer tests when one of the flaky tests fails. The flaky tests job also prints the full test output in Circle rather than just the dot-per-test format we use in the `backend_feature_test` job, and it prints out the explanations for all flaky tests, found in `features/flaky_test_explanations.txt`, at the end of the run. Furthermore, a failure in a flaky test will not cause the job to fail.

Adding the `@flaky` tag should be seen as a last resort. Please try to diagnose the flaky test or bring it within an acceptable success rate before adding the flaky tag. Doing so will also reduce the frequency with which the flaky tests job fails.

If you must add the `@flaky` tag to a test, please also add an explanation in the `features/flaky_test_explanations.txt` file.
## Frontend Tests

As mentioned above, frontend tests should be run in your local development environment.

```sh
npm i
npm test
```

To run Jest tests in watch mode (helpful for development):

```sh
npm run test:watch
```
### End-to-End Tests

To run the end-to-end Puppeteer tests that exercise the frontend and backend together, first make sure that the frontend assets are available. This means either having a webpack dev server running or running the production build with `npm run build-assets`.

The tests can then be run using the npm script:

```sh
npm run test:end-to-end
```

#### Debugging End-to-End Tests

An extra npm script is provided to run the tests in debug mode:

```sh
npm run test:end-to-end-debug
```

This disables headless mode for the Chromium instance and enables setting breakpoints in `.steps.js` files using `jestPuppeteer.debug()`. See the jest-puppeteer reference for more information.
#### Living Documentation

For full documentation on design as well as authoring and debugging end-to-end tests, refer to the End-to-End Testing Guide.
## Feature Flags

We use feature flags to allow only some users to experience a feature, or to retain the ability to turn something off for everybody. They are created and modified in the feature flag section of the admin app.
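A consumer of a flag typically combines those two uses, a per-user rollout and a global kill switch. A sketch with hypothetical names and storage (the real flags live in the admin app, not in code like this):

```python
def feature_enabled(flag_name, user_id, enabled_user_ids, kill_switch=False):
    """Hypothetical flag check: the kill switch beats any per-user rollout."""
    if kill_switch:
        return False  # feature turned off for everybody
    return user_id in enabled_user_ids  # partial rollout to selected users
```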
### Using Preview
## Local Backend Development

The following instructions are for setting up the app without the use of Docker (you should use Docker; see the instructions above).

Start all other necessary containers, except the PLP container. You can do so by using the `-l` flag with `start_apps.sh`:

```sh
$ ./start_apps.sh plp -cl
```

Then use the `export_vars` function provided in the `docker_dev` functions to export all environment variables for PLP:

```sh
$ export_vars plp
```

This will add all necessary variables to your shell, after modifying them to redirect all traffic to the local ports where the containers' ports are forwarded. You should now be able to install all requirements on your bare-metal machine, launch the server, and run tests.
## Environment

- Python 3.4
- Node 6.9.4 (`brew install node@6`)
- npm 3 (should be installed with Node above)
- Not required but useful: use virtualenv to set up a Python virtual environment
## Production Interventions and Fixes

### Set Up Elasticsearch User Index

Same as populating below, except you would run the `scripts/search/set_up_user_index.py` script instead. This script sets up the user profile index and makes sure tags are mapped properly. It will create the index if it does not already exist; if the index does exist, it will change the mappings so tags are properly handled.
### Populate Elasticsearch User Index

SSH to the production (or staging, if you're trying to update staging) container and run the `scripts/search/populate_user_index.py` script:

```sh
$ aptible login
$ aptible ssh --app <prod: ketothrive-plp-prod, stage: virta-plp-stage>
root@<aptible container>:/app# python scripts/search/populate_user_index.py
```
### Update Elasticsearch User Index

Same as populating, except you would run the `scripts/search/update_user_index.py` script instead.

This script also supports a dry-run option that will just display the stats (the number of matching, missing, and differing documents) and quit without pushing any changes to the Elasticsearch instance. You can pass `--dry-run`, `--dryrun`, `dry-run`, or `dryrun` to the script to do a dry run. You can pass `--print-details`, `--printdetails`, `print-details`, or `printdetails` to the script to print extra details (missing, differing, and extra keys) on differing documents.
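The dry-run stats amount to a three-way comparison between the database documents and what Elasticsearch holds. A pure-Python sketch of that comparison; the function and the doc-id-to-document dict shapes are ours, not the script's actual code:

```python
def index_diff_stats(db_docs, es_docs):
    """Compare {doc_id: document} mappings and return dry-run style stats."""
    matching = missing = differing = 0
    for doc_id, doc in db_docs.items():
        if doc_id not in es_docs:
            missing += 1          # in the database, absent from the index
        elif es_docs[doc_id] == doc:
            matching += 1
        else:
            differing += 1
    extra = len(set(es_docs) - set(db_docs))  # indexed but not in the database
    return {"matching": matching, "missing": missing,
            "differing": differing, "extra": extra}
```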
### Requeue Dropped Messages

Same as populating the Elasticsearch User Index above, except you would run the `scripts/messages/requeue_dropped_messages.py` script instead.

This script takes a start_time and an end_time to determine the window in which to look for dropped text messages in the database. It will then look through the database for messages that were meant to be sent between the start and end times but were dropped, and requeue them for delivery. You can also pass the `--dry-run` or `--print-details` flags to do a dry run and print the details of the operation.
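The selection step boils down to filtering on the time window and the message status. A sketch under assumed field names (`scheduled_at`, `status`), which may not match the real schema:

```python
from datetime import datetime

def find_dropped(messages, start_time, end_time):
    """Return messages scheduled within [start_time, end_time] that were dropped."""
    return [
        msg for msg in messages
        if start_time <= msg["scheduled_at"] <= end_time
        and msg["status"] == "dropped"
    ]
```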
## Additional scripts

The `cli/` module exposes an alternative entry point for running Pytest:

```sh
run test
```

If you get errors with hints to remove `__pycache__` dirs, or have stale `.pyc` files, you can remove those as follows:

```sh
run clean
```