
nkode

CLI to make it easy for data scientists to build Docker images, train their models, and serve them


Usage

$ npm install -g nkode
$ nkode COMMAND
running command...
$ nkode (-v|--version|version)
nkode/0.4.14 darwin-x64 node-v12.19.0
$ nkode --help [COMMAND]
USAGE
  $ nkode COMMAND
...

Commands

nkode create

USAGE
  $ nkode create

OPTIONS
  --boolean
  --build
  --enum
  --help
  --integer
  --option
  --string
  --version

See code: src/commands/create/index.js

nkode create:endpoint

Create a remote endpoint for your model

USAGE
  $ nkode create:endpoint

OPTIONS
  -b, --baseImage=baseImage    Base Docker Image to use
  -d, --dataDir=dataDir        Data Model file name
  -e, --entryPoint=entryPoint  Entry Point whether it is a function, python file or notebook
  -m, --method=method          Method type to use to build the Docker Image

DESCRIPTION
  ...
  This command will automatically build the Docker image, push it to the Docker registry, and deploy an endpoint.
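
  For example (the base image, entry point, and data path below are illustrative values, not defaults of the CLI):

    $ nkode create:endpoint --baseImage=tensorflow/tensorflow:2.3.0 --entryPoint=serve.py --dataDir=./model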

See code: src/commands/create/endpoint.js

nkode create:image

Create a Docker image with your model, required packages, and data

USAGE
  $ nkode create:image

OPTIONS
  -b, --baseImage=baseImage    Base Docker Image to use
  -d, --dataDir=dataDir        Path to test data directory
  -e, --entryPoint=entryPoint  Entry Point whether it is a function, python file or notebook
  -m, --method=method          Method type to use to build the Docker Image

DESCRIPTION
  ...
  This command will automatically build the Docker image, push it to the Docker registry, and deploy the image to a
  pod to run the training remotely.
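
  For example, to build and push an image for a training script (the base image, entry point, and data path below are
  illustrative, not CLI defaults):

    $ nkode create:image --baseImage=tensorflow/tensorflow:2.3.0 --entryPoint=train.py --dataDir=./test-data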

See code: src/commands/create/image.js

nkode create:notebook

Create a Jupyter notebook server to work on

USAGE
  $ nkode create:notebook

OPTIONS
  -b, --baseImage=baseImage                Base Docker Image to use for creating the Jupyter Server
  -c, --cpuQuota=cpuQuota                  Allocated vCPU for your notebook server
  -m, --memoryQuota=memoryQuota            Allocated Memory in GB for your notebook server
  -n, --notebookName=notebookName          Name of the notebook. (Optional)
  -s, --namespace=namespace                Your account namespace. By default this is set to your individual namespace
  -v, --persistentVolume=persistentVolume  Name of the persistent volume for your server

DESCRIPTION
  ...
  This command will automatically create a Jupyter notebook server in your account to get started with your data
  science work. You also get persistent storage so that all your data is retained; you can delete the server and
  recreate another one as needed, and your data is never lost. The defaults are:
    -  Allocated CPU : 1 vCPU
    -  Allocated Memory : 1 GB
    -  Base Image : nkode image with TensorFlow
    -  Persistent Storage : 10 GB

    You can change these settings using the flags, including access to GPU instances.
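
  For example, to request a 2 vCPU / 4 GB server from a custom base image (all values below are illustrative, not CLI
  defaults):

    $ nkode create:notebook --notebookName=my-notebook --cpuQuota=2 --memoryQuota=4 --baseImage=tensorflow/tensorflow:2.3.0-jupyter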

See code: src/commands/create/notebook.js

nkode help [COMMAND]

display help for nkode

USAGE
  $ nkode help [COMMAND]

ARGUMENTS
  COMMAND  command to show help for

OPTIONS
  --all  see all commands in CLI

See code: @oclif/plugin-help

nkode init

Checks and initializes any dependencies

USAGE
  $ nkode init

DESCRIPTION
  ...
  Run this once.

See code: src/commands/init.js

nkode train

USAGE
  $ nkode train

OPTIONS
  --boolean
  --build
  --enum
  --help
  --integer
  --option
  --string
  --version

See code: src/commands/train/index.js

nkode train:distributed

Builds Docker images and trains on remote resources as requested

USAGE
  $ nkode train:distributed

OPTIONS
  -b, --baseImage=baseImage      Base Docker Image to use
  -c, --cpu=cpu                  Specify CPU resources you would like to reserve
  -d, --dataDir=dataDir          Path to test data directory
  -e, --entryPoint=entryPoint    Entry Point whether it is a function, Python file or notebook
  -g, --gpu=gpu                  Specify GPU resources you would like to reserve
  -m, --method=method            Method type to use to build the Docker Image
  -o, --operator=operator        Framework Operator to use for distributed training
  -p, --psCount=psCount          PS Count for TensorFlow Distributed Training Job
  -t, --masterCount=masterCount  Master Count for PyTorch Distributed Training Job
  -v, --gpuvendor=gpuvendor      Specify GPU Vendor you would like to use. Supports NVIDIA (default) and AMD
  -w, --workerCount=workerCount  Worker Count for TFJob or PyTorch distributed training job
  -y, --memory=memory            Specify Memory resources you would like to reserve

DESCRIPTION
  ...
  This command will automatically create a Docker image and train on remote resources with the requested resource
  requirements. Use this if you have specific resource requirements for training. The request will fail if there are
  not enough resources.
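
  For example, to request a distributed run with one parameter server and two workers (the entry point, resource
  figures, and the <operator> placeholder below are illustrative, not documented defaults):

    $ nkode train:distributed --entryPoint=train.py --operator=<operator> --psCount=1 --workerCount=2 --gpu=1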

See code: src/commands/train/distributed.js

nkode train:hpt:init

Use this command to select the search algorithm to use; a configuration YAML file will be downloaded

USAGE
  $ nkode train:hpt:init

OPTIONS
  -s, --searchAlgorithm=searchAlgorithm  The search algorithm that you want the platform to use to find the best
                                         hyperparameters

DESCRIPTION
  A configuration YAML file will be downloaded to your current folder, which can be used to configure your experiment.
  Once you are ready with the configuration, run 'nkode train:hpt:startExperiment' to start the experiment.
  By default, the random search algorithm will be selected.
  ...
  Following are the search algorithms supported and flag values to use while running the command
  1. Grid search : grid
  2. Random search : random
  3. Bayesian optimization : bayesianoptimization
  4. Hyperband : hyperband
  5. Tree of Parzen Estimators (TPE) : tpe
  6. Covariance Matrix Adaptation Evolution Strategy (CMA-ES) : cmaes
  7. Neural Architecture Search based on ENAS : enas
  8. Differentiable Architecture Search (DARTS) : darts
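
  For example, to generate a configuration for Bayesian optimization using the flag value from the list above:

    $ nkode train:hpt:init -s bayesianoptimization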

See code: src/commands/train/hpt/init.js

nkode train:hpt:startExperiment

Start the Hyperparameter Tuning experiment.

USAGE
  $ nkode train:hpt:startExperiment

OPTIONS
  -f, --fileName=fileName  Provide the file name of the HPT configuration YAML

DESCRIPTION
  ...
  Start the experiment and let the platform do the work! You can monitor the status of the experiment
  by visiting the URL provided once the command has executed successfully.
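
  For example, assuming the generated configuration file was saved as hpt-config.yaml (a hypothetical file name):

    $ nkode train:hpt:startExperiment -f hpt-config.yaml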

See code: src/commands/train/hpt/startExperiment.js

nkode train:remoteResources

Builds Docker images and trains on remote resources as requested

USAGE
  $ nkode train:remoteResources

OPTIONS
  -b, --baseImage=baseImage    Base Docker Image to use
  -c, --cpu=cpu                Specify CPU resources you would like to reserve
  -d, --dataDir=dataDir        Path to test data directory
  -e, --entryPoint=entryPoint  Entry Point whether it is a function, Python file or notebook
  -g, --gpu=gpu                Specify GPU resources you would like to reserve
  -m, --method=method          Method type to use to build the Docker Image
  -v, --gpuvendor=gpuvendor    Specify GPU Vendor you would like to use. Supports NVIDIA (default) and AMD
  -y, --memory=memory          Specify Memory resources you would like to reserve

DESCRIPTION
  ...
  This command will automatically create a Docker image and train on remote resources with the requested resource
  requirements. Use this if you have specific resource requirements for training. The request will fail if there are
  not enough resources.
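
  For example, to train on a single remote GPU (the entry point, data path, and resource figures below are
  illustrative, not documented defaults):

    $ nkode train:remoteResources --entryPoint=train.py --dataDir=./test-data --cpu=2 --memory=4 --gpu=1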

See code: src/commands/train/remoteResources.js
