# square-classification-pipeline v1.0.0
Finds and reports on geological features such as quartz and weathered zones.
## Architecture Diagram

## Database Schema Diagram
## Local Development

```shell
serverless invoke local --function squareGenerator --path tmp/sqs-payloads/square-generator.json
```

Where `tmp/sqs-payloads/square-generator.json` is of the form:
```json
{
  "Records": [
    {
      "body": "{ \"companyId\": \"1\", \"projectId\": \"1\", \"holeExternalId\": \"my_hole_name\", \"preprocessorOperations\": [\"coherent-only\"], \"runUuid\": \"9f0dc3bf-397b-45d0-92c2-3d5e966337c1\", \"pipelineName\": \"weathering\" }"
    }
  ]
}
```

## Local tests
Start your local DB:

```shell
npm run local:db
```

Then run the tests:

```shell
npm run test
```

## Install dependencies
If it's your first time installing, run

```shell
./scripts/github-package-init.sh
```

to install the private repo, then

```shell
npm install
```

## DB migrations
For local migrations, run

```shell
./scripts/local-db-migrate.sh
```

To undo local migrations, run (once per migration you wish to undo):

```shell
./scripts/local-db-migrate-undo.sh
```

To undo migrations in a non-prod environment, run (once per migration you wish to undo):

```shell
ENVIRONMENT={ENV} BASTION_KEY={KEY_LOCATION} ./scripts/nonprod-db-migrate-undo.sh
```

For example:

```shell
ENVIRONMENT=test BASTION_KEY=~/.ssh/id_ecs_keypair_test.pem ./scripts/nonprod-db-migrate-undo.sh
```

## DB performance
If you're experiencing timeouts when the post-processing lambda is writing to the API, these parameters may help:
### Post-processing lambda SQS trigger concurrency
This throttle allows us to control the maximum number of messages handled by the post-processing lambda at any one time. Note that this throttle is applied to the event source and not the function, so setting this has no impact on the function's reserved concurrency. Once the concurrency limit is reached, the Lambda will simply stop reading messages from the source queue until capacity is available.
See `functions.postModelProcessing.events[0].sqs.maximumConcurrency` in `serverless.yml`. For more information about limiting concurrency at the event source, see the AWS Lambda documentation.
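As a rough sketch, the relevant fragment of `serverless.yml` looks something like the following (the queue ARN placeholder and the value `5` are illustrative, not the repo's actual settings):

```yaml
functions:
  postModelProcessing:
    events:
      - sqs:
          arn: <queue ARN>          # placeholder; the real ARN lives in serverless.yml
          maximumConcurrency: 5     # illustrative value; caps concurrent message handling
```

Because the throttle lives on the event source mapping, raising or lowering it changes how fast messages drain from the queue without touching the function's reserved concurrency.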
### API Batch size
After a pipeline run completes, the post model processing lambda sends the results to the Square Classification API, which writes them to the database.

Writes to the API are batched so that results for larger holes can complete within the 30s limit enforced by API Gateway. If requests are timing out, lowering this batch size may help.
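The batching itself amounts to splitting the result set into fixed-size slices before posting. A minimal sketch (the `chunk` helper and batch size are illustrative, not the service's actual code):

```typescript
// Split an array of results into fixed-size batches so that each API
// request stays small enough to complete within the API Gateway timeout.
function chunk<T>(items: T[], size: number): T[][] {
  const batches: T[][] = [];
  for (let i = 0; i < items.length; i += size) {
    batches.push(items.slice(i, i + size));
  }
  return batches;
}

// e.g. chunk(results, 100) yields slices of at most 100 results,
// each of which would be sent as one request to the API.
```

Smaller batches mean more requests but a shorter per-request write, which is the trade-off to tune when requests approach the timeout.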
## Performance analysis

See `PERFORMANCE.md`.