cloud-stack v1.0.0
A CLI tool for creating a SaaS boilerplate, including signup, billing, an admin tool, and an affiliate program.
installation
npm i -g cloud-stack
usage
cloud-stack
Just answer all the prompts and the rest will be taken care of.
what will be generated
Depending on your configuration, your generated project folder should look something like this:
.
├── admin
├── affiliate
├── api
├── autoscaler
├── borgmatic
├── client
├── reverse-proxy
├── website
├── .env
├── .gitignore
└── docker-compose.yaml
.env
The .env file includes configuration for the database, including passwords (these have been generated automatically).
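If you ever need to rotate one of these passwords, a random value of the same kind can be produced manually. A minimal sketch — the generator the CLI actually uses is not specified here, so this is an assumption:

```shell
# Generate a random 32-character hex string as a replacement password
# (hypothetical approach; the CLI's own generator may differ)
openssl rand -hex 16
```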
It also includes the borg configuration, if you chose to generate a borg repository for backups. Note that you still need to enter the passphrase you used manually.
The configuration you entered for automatic mails is also found here, as well as the environment (production=false).
It also includes a password for a default admin account, which is generated when you start up the API. The email for that account is admin@domain by default.
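As an illustration, the file might look roughly like this. All variable names and values below are hypothetical, since the generator's exact naming is not documented here:

```shell
# Hypothetical .env sketch -- the actual variable names may differ
db_root_password=2f7a1c0d9b3e4f6a8c1d2e3f4a5b6c7d  # auto-generated
borg_repository=/path/to/borg/repo                 # only if borg was chosen
mail_host=smtp.example.com
mail_user=noreply@example.com
mail_password=changeme
production=false
admin_password=9e8d7c6b5a4f3e2d1c0b9a8f7e6d5c4b   # default admin account
```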
docker-compose.yaml
In here you will find a complete stack with all your different services (api, client, admin etc.).
We are using template strings (e.g. ${db_root_password}) to inject values from the .env file.
You shouldn't really have to bother with this file at all; just note that you cannot start the stack right away with docker-compose up, because the SSL certificates for your domain are required first.
Otherwise, basically everything is preconfigured, from a reverse proxy that lets you run your API at domain/api, your client at domain/client and so on, up to automatic scaling of your servers. If you chose to use borg for your backups, the mounts for files in the API's .files folder and the SSH files for remote repositories are also configured.
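For illustration, a service entry in the compose file follows this pattern; the service name and variable name here are hypothetical, not the generated file:

```yaml
# Hypothetical excerpt -- your generated docker-compose.yaml will differ
services:
  db:
    image: mariadb
    environment:
      MYSQL_ROOT_PASSWORD: ${db_root_password}  # injected from .env
```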
reverse-proxy
In here is just the configuration file for the proxy.
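The path-based routing described above follows the usual nginx reverse-proxy pattern. A hypothetical excerpt — the location paths and upstream names are assumptions, not the generated config:

```nginx
# Hypothetical nginx excerpt -- the generated config will differ
location /api/ {
    proxy_pass http://api:3000/;
}
location /client/ {
    proxy_pass http://client:80/;
}
```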
autoscaler
This folder contains everything regarding autoscaling. Information about how it gets started in production can be found in the deployment section.
admin, affiliate, client and website
These are all Angular projects; they can be started with
npm i
and then
npm run start
Note that this starts each of them on a specific port, so that they don't block each other on port 4200. Keep that in mind if you want to start them with ng serve.
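The port pinning typically lives in each project's package.json start script. A hypothetical sketch — the actual ports the generator assigns may differ:

```json
{
  "scripts": {
    "start": "ng serve --port 4201"
  }
}
```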
A default admin account (username: admin@domain, password: found in the .env file) is generated on startup of the API, so that you can start right away.
If the environment is not production, accounts are also generated for the client as well as the affiliate program (if you chose to generate it).
Defaults for them are:
- username: user@domain, password: 12345678 for the client, and
- username: affiliate@domain, password: 12345678 for the affiliate-client
All of these Angular projects are otherwise built pretty normally, the only differences being a Dockerfile and an nginx-custom.conf. These are required to run them with the docker-compose file.
Note that the client's Dockerfile is a bit different. This is because we want to be able to test the client's already configured PWA (progressive web app) capabilities. These only work in a production environment, so environment.prod.ts still contains localhost instead of the real domain. For production you deploy the project with Docker, which overwrites that with the real domain.
The client was also generated with cypress for automated testing.
special commands for the client:
- npm run test: runs Cypress tests
- npm run server:build: builds the project and runs it as a PWA
- npm run server: runs the already built project as a PWA
- npm run ci: runs Cypress tests only in the console (useful with CI/CD pipelines)
- npm run analyze: uses webpack-bundle-analyzer to analyze the bundle size
api
This is a LoopBack project. It exposes an OpenAPI spec on localhost:3000, which basically explains all of its functionality. It also has an exposed assets folder that includes e.g. images for automated emails.
Note that you need to check api/src/datasources/db.datasource.config.json for the password etc. of the database connection.
For production you have to make sure to keep that in line with the .env file; for local development there should be an external database server.
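The datasource file follows LoopBack's usual connector-config shape. A hypothetical sketch — the connector and all field values here are assumptions:

```json
{
  "name": "db",
  "connector": "mysql",
  "host": "localhost",
  "port": 3306,
  "user": "root",
  "password": "same-value-as-in-.env",
  "database": "app"
}
```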
You can start it just like the angular-projects with:
npm i
and then
npm run start
Deployment
All of these steps are performed on a production server.
requirements
Some files are not in the git-repository due to security concerns, these need to be copied/created manually:
- api/src/secrets.json
- api/src/datasources/db.datasource.config.json
- .env
After that you need to:
- create ssl-certificates for your domain:
Note that you need to add your domain as well as the domain with its ending (e.g. .com, .de, etc.). The order matters, because the reverse proxy will try to read the certificate from the folder named after the domain with the ending. If the folder name still turns out wrong, you can simply edit the config in the reverse-proxy folder to read the certificates from that folder name instead.
sudo certbot certonly --standalone
- have a local borg repository up and running (if you chose to generate one)
sudo borg init -e repokey path/to/a/folder/you/want/the/repository/in
- have your own docker-registry up and running
sudo docker service create --name registry --publish published=5000,target=5000 registry:2
Startup
- build the complete stack (this takes a while)
sudo docker-compose build
- push the stack to your registry
sudo docker-compose push
- start the autoscaling-service
sudo docker stack deploy -c autoscaler/docker-compose.yaml autoscaler
- finally start the stack (this command ensures that the .env injection in the compose-file works)
sudo bash -c 'docker stack deploy -c <(docker-compose -f docker-compose.yaml config) myCustomNameForThatStack'
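The bash -c wrapper is needed because docker stack deploy does not read the .env file itself: docker-compose config first resolves all ${...} template strings, and bash process substitution <(...) hands the resolved output to deploy as if it were a file. The substitution mechanism can be seen in isolation (a generic illustration, not part of the generated project):

```shell
# <(command) exposes the command's stdout as a readable file path;
# this is how the resolved compose config reaches `docker stack deploy`.
bash -c 'cat <(echo "resolved config")'
# prints: resolved config
```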