@npm-wharf/command-hub v1.1.0
command-hub
A service built to securely manage the continuous deployment to multiple Kubernetes clusters via their hikaru services.
Goals
command-hub exists to solve the challenge of managing deployments to multiple Kubernetes clusters using the installed hikaru service API. command-hub is designed to work with hikaru's asymmetric key encrypted token exchange to prevent an attacker from taking control of a cluster's installed software.
command-hub provides multiple ways to manage the list of endpoints it controls. By default, it keeps the list of clusters in a redis instance. Swapping this out for a custom storage back-end via a very simple plugin approach is explained later.
Library form
Installing command-hub as a module will allow you to call the API endpoints as simple function calls after providing some configuration: the URL, the token and the location of the key files needed to encrypt and sign the token.
CLI mode
The CLI version exists to support CI contexts in which you wish to notify your clusters of newly built image:tags so that they can perform upgrades as configured. As with the library form, arguments or environment variables are required to specify the URL, token and location of the key files.
Security
Both the hub API and hikaru API are designed to use API tokens and asymmetric encryption. This means that each endpoint needs its own private key plus the public key(s) of the systems it communicates with. A script included in the repo, ./create-keys.sh, creates three 4096-bit RSA key pairs under a ./certs folder: one pair each for hub clients, the hub itself, and the hikaru service endpoints.
Environment Variables
Service configuration:
- API_TOKEN - the token used by clients to authenticate with the hub API
- HIKARU_TOKEN - the token used to authenticate with hikaru endpoints
- HUB_PRIVATE_KEY - path to the hub's secret private key
- HIKARU_PUBLIC_KEY - path to the shared public key for hikaru installations
- CLIENT_PUBLIC_KEY - path to the shared public key for clients calling the hub API
- ETCD - the URL to etcd, required by kickerd (even if unused)
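A quick way to catch misconfiguration is to check for these variables at startup; a small illustrative sketch (not part of command-hub itself):

```javascript
// Illustrative only: report which required service variables are unset.
function missingVars (env, required) {
  return required.filter(name => !env[name])
}

const required = [
  'API_TOKEN', 'HIKARU_TOKEN', 'HUB_PRIVATE_KEY',
  'HIKARU_PUBLIC_KEY', 'CLIENT_PUBLIC_KEY', 'ETCD'
]

const missing = missingVars(process.env, required)
if (missing.length) {
  console.error(`missing environment variables: ${missing.join(', ')}`)
}
```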
If storing cluster endpoints in redis:
- REDIS_URL - the redis URL to connect to for storing hikaru endpoint information
If providing an API to fetch cluster URLs from:
- CLUSTER_API_HOST - the root HTTP URL - example: https://host:port/api
- CLUSTER_API_LIST - the URL used to fetch the cluster list - example: /cluster
- CLUSTER_API_CLUSTER - the URL to get cluster detail - example: /cluster/{id}
- CLUSTER_API_TOKEN - a bearer token to use for authenticating
- CLUSTER_API_USER - a username to use for authenticating
- CLUSTER_API_PASS - a password to use for authenticating
CLI and module specific configuration variables:
- HUB_URL - the URL where command-hub is hosted (for module and CLI configuration)
- CLIENT_PRIVATE_KEY - path to the client's secret private key
- HUB_PUBLIC_KEY - path to the shared public key for the hub
ETCD Keys (kicker.toml)
Like hikaru, command-hub uses kickerd to monitor etcd for environment variable values and changes to the keys. Here is the etcd key to environment variable mapping:
- comhub_api_token - API_TOKEN
- hikaru_api_token - HIKARU_TOKEN
- local_private_key - HUB_PRIVATE_KEY
- hikaru_public_key - HIKARU_PUBLIC_KEY
- client_public_key - CLIENT_PUBLIC_KEY
- redis_url - REDIS_URL
- cluster_api_host - CLUSTER_API_HOST
- cluster_api_list - CLUSTER_API_LIST
- cluster_api_cluster - CLUSTER_API_CLUSTER
- cluster_api_token - CLUSTER_API_TOKEN
- cluster_api_user - CLUSTER_API_USER
- cluster_api_pass - CLUSTER_API_PASS
HTTP API
List Clusters
Retrieves a list of clusters.
GET /api/cluster
Response
content-type: application/json
200 OK
{
"clusters": [
{ "name": "one", "url": "https://one.root.net" },
{ "name": "two", "url": "https://two.root.net" }
]
}

Add Cluster - optional
Adds a cluster endpoint. Only used if using a backing store instead of calling to another API for cluster listing/metadata.
POST /api/cluster
content-type: application/json
{
"name": "one",
"url": "https://one.root.net"
}

Response
201 Created
Remove Cluster - optional
Removes a cluster endpoint. Only used if using a backing store instead of calling to another API for cluster listing/metadata.
DELETE /api/cluster/{name}
Response
204 No Content
Note: this is not the same as calling the hikaru remove command. That command is not currently supported from this API as it's such an uncommon and destructive use case.
Get Upgrade Candidates
Returns a hash containing lists of workloads:
- upgrade has the list of workloads eligible for upgrade
- obsolete is the list of compatible workloads that have a newer version than the posted image
- equal is the list of compatible workloads that already have the image
- error is the list of workloads that were ignored, each including a diff property with a brief explanation of why it was ignored
GET /api/cluster/{name}/image/{image}?filter=
GET /api/cluster/{name}/image/{repo}/{image}?filter=
GET /api/cluster/{name}/image/{registry}/{repo}/{image}?filter=
The filter query parameter accepts a comma-delimited list of the fields you want used to determine upgrade eligibility. Valid fields are:
imageName, imageOwner, owner, repo, branch, fullVersion, version, build, commit
The reason for the multiple forms may not be obvious until you see examples:
GET /api/cluster/one/image/nginx:1.13-alpine
GET /api/cluster/one/image/arobson/hikaru:latest
GET /api/cluster/one/image/quay.io/coreos/etcd:v3.3.3
You could make the last form more permissive by telling it to only consider the imageOwner:
GET /api/cluster/one/image/quay.io/coreos/etcd:v3.3.3?filter=imageOwner
This would upgrade any workload using an etcd image, regardless of whether it is the coreos Docker image.
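The three route forms line up with the image spec itself; a small sketch of how a caller might build the request path (the helper name is illustrative, not part of command-hub's API):

```javascript
// Illustrative helper: the slashes in the image spec map directly onto
// the extra route segments, so the path is built the same way for all
// three forms.
function candidatesPath (cluster, imageSpec, filter = []) {
  const query = filter.length ? `?filter=${filter.join(',')}` : ''
  return `/api/cluster/${cluster}/image/${imageSpec}${query}`
}

candidatesPath('one', 'nginx:1.13-alpine')
// → /api/cluster/one/image/nginx:1.13-alpine
candidatesPath('one', 'quay.io/coreos/etcd:v3.3.3', ['imageOwner'])
// → /api/cluster/one/image/quay.io/coreos/etcd:v3.3.3?filter=imageOwner
```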
Response
content-type: application/json
200 OK
{
"upgrade": [
{
"namespace": "namespace-name",
"type": "workload-type",
"service": "workload-name",
"image": "docker-repo/docker-image:tag",
"container": "container-name",
"metadata": {
"imageName": "docker-image",
"imageOwner": "docker-repo",
"owner": "tag-owner or docker-repo",
"repo": "tag-repo or docker-repo",
"branch": "tag-master or 'master'",
"fullVersion": "tag-version-and-prerelease or 'latest'",
"version": "tag-version or 'latest'",
"prerelease": "tag-prerelease or null"
},
"labels": {
"name": "workload-name",
"namespace": "workload-namespace-name"
},
"diff": "upgrade|obsolete|equal|error",
"comparedTo": "full-image-spec-used-in-call"
}
],
"obsolete": [],
"equal": [],
"error": []
}

Upgrade Workloads With Image
Returns a hash containing lists of workloads.
- upgrade has the list of workloads upgraded
- obsolete is the list of compatible workloads that have a newer version than the posted image
- equal is the list of compatible workloads that already have the image
- error is the list of workloads that were ignored, each including a diff property with a brief explanation of why it was ignored
POST /api/cluster/{name}/image/{image}?filter=
POST /api/cluster/{name}/image/{repo}/{image}?filter=
POST /api/cluster/{name}/image/{registry}/{repo}/{image}?filter=
The filter query parameter accepts a comma-delimited list of the fields you want used to determine upgrade eligibility. Valid fields are:
imageName, imageOwner, owner, repo, branch, fullVersion, version, build, commit
The reason for the multiple forms may not be obvious until you see examples:
POST /api/cluster/one/image/nginx:1.13-alpine
POST /api/cluster/one/image/arobson/hikaru:latest
POST /api/cluster/one/image/quay.io/coreos/etcd:v3.3.3
You could make the last form more permissive by telling it to only consider the imageOwner:
POST /api/cluster/one/image/quay.io/coreos/etcd:v3.3.3?filter=imageOwner
This would upgrade any workload using an etcd image, regardless of whether it is the coreos Docker image.
Response
content-type: application/json
200 OK
{
"upgrade": [
{
"namespace": "namespace-name",
"type": "workload-type",
"service": "workload-name",
"image": "docker-repo/docker-image:tag",
"container": "container-name",
"metadata": {
"imageName": "docker-image",
"imageOwner": "docker-repo",
"owner": "tag-owner or docker-repo",
"repo": "tag-repo or docker-repo",
"branch": "tag-master or 'master'",
"fullVersion": "tag-version-and-prerelease or 'latest'",
"version": "tag-version or 'latest'",
"prerelease": "tag-prerelease or null"
},
"labels": {
"name": "workload-name",
"namespace": "workload-namespace-name"
},
"diff": "upgrade|obsolete|equal|error",
"comparedTo": "full-image-spec-used-in-call"
}
],
"obsolete": [],
"equal": [],
"error": []
}

Upgrade All Clusters' Workloads With Image
NOTE: this call will likely take some time to complete, and its route is very close to the cluster-specific upgrade call. Please be careful when issuing upgrades.
Returns a hash containing lists of workloads for each cluster upgraded. The top level key is the cluster alias.
- url is the url for the cluster's hikaru endpoint
- upgrade has the list of workloads upgraded
- obsolete is the list of compatible workloads that have a newer version than the posted image
- equal is the list of compatible workloads that already have the image
- error is the list of workloads that were ignored, each including a diff property with a brief explanation of why it was ignored
On failure, the properties returned change to:
- url is the url for the cluster's hikaru endpoint
- failed is true
- message is a simple explanation that the upgrade failed
- error is the stack trace containing details for the failure
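Based on the properties above, a failed cluster entry might look like the following (values are illustrative):

```json
{
  "one": {
    "url": "https://one.root.net",
    "failed": true,
    "message": "upgrade failed",
    "error": "..."
  }
}
```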
POST /api/cluster/image/{image}?filter=
POST /api/cluster/image/{repo}/{image}?filter=
POST /api/cluster/image/{registry}/{repo}/{image}?filter=
The filter query parameter accepts a comma-delimited list of the fields you want used to determine upgrade eligibility. Valid fields are:
imageName, imageOwner, owner, repo, branch, fullVersion, version, build, commit
The reason for the multiple forms may not be obvious until you see examples:
POST /api/cluster/image/nginx:1.13-alpine
POST /api/cluster/image/arobson/hikaru:latest
POST /api/cluster/image/quay.io/coreos/etcd:v3.3.3
You could make the last form more permissive by telling it to only consider the imageOwner:
POST /api/cluster/image/quay.io/coreos/etcd:v3.3.3?filter=imageOwner
This would upgrade any workload using an etcd image, regardless of whether it is the coreos Docker image.
Response
content-type: application/json
200 OK
{
"upgrade": [
{
"namespace": "namespace-name",
"type": "workload-type",
"service": "workload-name",
"image": "docker-repo/docker-image:tag",
"container": "container-name",
"metadata": {
"imageName": "docker-image",
"imageOwner": "docker-repo",
"owner": "tag-owner or docker-repo",
"repo": "tag-repo or docker-repo",
"branch": "tag-master or 'master'",
"fullVersion": "tag-version-and-prerelease or 'latest'",
"version": "tag-version or 'latest'",
"prerelease": "tag-prerelease or null"
},
"labels": {
"name": "workload-name",
"namespace": "workload-namespace-name"
},
"diff": "upgrade|obsolete|equal|error",
"comparedTo": "full-image-spec-used-in-call"
}
],
"obsolete": [],
"equal": [],
"error": []
}

Find Workloads By Image
Returns metadata for any workload that has an image matching the text supplied.
GET /api/cluster/one/workload/{image}
GET /api/cluster/one/workload/{repo}/{image}
GET /api/cluster/one/workload/{registry}/{repo}/{image}
The primary difference between this and the call for upgrade candidates is that this considers anything that matches whatever image segment is provided and returns a single list with no consideration given to upgrade eligibility.
It's just there to make it easy to:
- get a list of manifests using any nginx image
- find a list of manifests from a specific image owner
- find out if any manifests are using a particular version
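In effect this is a loose match against the image string; an illustrative sketch of the idea (substring matching is an assumption here, not command-hub's actual implementation):

```javascript
// Illustrative only: keep workloads whose image contains the fragment,
// so 'nginx', 'coreos' or 'v3.3.3' would each match accordingly.
function findByImage (workloads, fragment) {
  return workloads.filter(w => w.image.includes(fragment))
}

const workloads = [
  { service: 'proxy', image: 'nginx:1.13-alpine' },
  { service: 'store', image: 'quay.io/coreos/etcd:v3.3.3' }
]

findByImage(workloads, 'coreos') // matches only the etcd workload
```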
Result
200 OK
content-type: application/json
[
{
"namespace": "namespace-name",
"type": "workload-type",
"service": "workload-name",
"image": "docker-repo/docker-image:tag",
"container": "container-name",
"metadata": {
"imageName": "docker-image",
"imageOwner": "docker-repo",
"owner": "tag-owner or docker-repo",
"repo": "tag-repo or docker-repo",
"branch": "tag-master or 'master'",
"fullVersion": "tag-version-and-prerelease or 'latest'",
"version": "tag-version or 'latest'",
"prerelease": "tag-prerelease or null"
},
"labels": {
"name": "workload-name",
"namespace": "workload-namespace-name"
}
}
]

CLI
List Clusters
comhub cluster list

Add Cluster - optional
comhub cluster add {name} {url}

Remove Cluster - optional
comhub cluster remove {name}

Get Upgrade Candidates
comhub get candidates --cluster {name} --image {spec} --filter {properties}

The image argument accepts any valid Docker image specification:
- image:tag - official images in Docker Hub
- repo/image:tag - images in Docker Hub
- registry/repo/image:tag - images in other registries
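The three forms can be told apart by the number of slashes; an illustrative sketch of splitting a spec into its parts (not the module's own parser):

```javascript
// Illustrative only: split a Docker image spec into registry/repo/image:tag.
function parseImageSpec (spec) {
  const parts = spec.split('/')
  if (parts.length === 3) {
    return { registry: parts[0], repo: parts[1], image: parts[2] }
  }
  if (parts.length === 2) {
    return { repo: parts[0], image: parts[1] }
  }
  return { image: parts[0] } // an official Docker Hub image
}

parseImageSpec('quay.io/coreos/etcd:v3.3.3')
// → { registry: 'quay.io', repo: 'coreos', image: 'etcd:v3.3.3' }
```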
The filter argument accepts a comma-delimited list of the fields you want used to determine upgrade eligibility. Valid fields are:
imageName, imageOwner, owner, repo, branch, fullVersion, version, build, commit
Returns upgrade candidate workloads grouped by lists:
- workloads to be upgraded are in the upgrade list
- workloads with a newer version are in the obsolete list
- workloads with the supplied version are in the equal list
- ignored workloads are in the error list
Get Upgrade Candidates On All Clusters
Returns upgrade candidates but grouped by cluster along with count summaries:
comhub get-all candidates --image {spec} --filter {properties}

The image argument accepts any valid Docker image specification:
- image:tag - official images in Docker Hub
- repo/image:tag - images in Docker Hub
- registry/repo/image:tag - images in other registries
The filter argument accepts a comma-delimited list of the fields you want used to determine upgrade eligibility. Valid fields are:
imageName, imageOwner, owner, repo, branch, fullVersion, version, build, commit
Returns upgrade candidate workloads grouped by lists:
- workloads to be upgraded are in the upgrade list
- workloads with a newer version are in the obsolete list
- workloads with the supplied version are in the equal list
- ignored workloads are in the error list
Upgrade Workloads With Image
comhub upgrade --cluster {name} --image {spec} --filter {properties}

The image argument accepts any valid Docker image specification:
- image:tag - official images in Docker Hub
- repo/image:tag - images in Docker Hub
- registry/repo/image:tag - images in other registries
The filter argument accepts a comma-delimited list of the fields you want used to determine upgrade eligibility. Valid fields are:
imageName, imageOwner, owner, repo, branch, fullVersion, version, build, commit
Upgrades eligible workloads and returns them in the following lists:
- workloads that were upgraded are in the upgrade list
- workloads skipped because they have a newer version are in the obsolete list
- workloads skipped because they already have the supplied version are in the equal list
- ignored workloads are in the error list
Upgrade Workloads On All Clusters With Image
Upgrade workloads across all known clusters.
comhub upgrade-all --image {spec} --filter {properties}

The image argument accepts any valid Docker image specification:
- image:tag - official images in Docker Hub
- repo/image:tag - images in Docker Hub
- registry/repo/image:tag - images in other registries
The filter argument accepts a comma-delimited list of the fields you want used to determine upgrade eligibility. Valid fields are:
imageName, imageOwner, owner, repo, branch, fullVersion, version, build, commit
Upgrades eligible workloads and returns them in the following lists:
- workloads that were upgraded are in the upgrade list
- workloads skipped because they have a newer version are in the obsolete list
- workloads skipped because they already have the supplied version are in the equal list
- ignored workloads are in the error list
Find Workloads By Image
Returns metadata for any workload that has an image matching the text supplied.
comhub find --cluster {name} --image {fragment}

Where image can match any part of the image specification: registry, repo or image name.
Find Workloads By Image On All Clusters
Returns metadata for any workload that has an image matching the text supplied (across all clusters):
comhub find-all --image {fragment}

Results are presented grouped by cluster.
Where image can match any part of the image specification: registry, repo or image name.
If Providing A Cluster API
The list API is expected to return records where a name or id property matches the cluster identifier and a url property provides the endpoint where the cluster can be contacted. If the cluster's hikaru API is not reachable at a hikaru subdomain, the cluster record should also include a hikaru property specifying the URL at which hikaru's API can be reached.
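Under those expectations, a hypothetical record returned by the list API might look like this (the hikaru property appears only when the default subdomain does not apply):

```json
{
  "name": "one",
  "url": "https://one.root.net",
  "hikaru": "https://one.root.net/hikaru"
}
```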