
deeplearning-js


Intention

deeplearning-js is an open-source JavaScript library for deep learning. It gives JavaScript developers a way to build and experiment with deep learning models without first having to learn Python, statistics, or calculus.

Getting started

npm install deeplearning-js
yarn add deeplearning-js

API

Normalization

Normalizes a 1D array data set.

Supported normalization methods:

  • minmax: (num - min) / (max - min)
  • zscore: (num - mean) / std
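
For example, for [1, 2, 3] the mean is 2 and the population standard deviation is √(2/3) ≈ 0.8165, so zscore yields roughly [-1.2247, 0, 1.2247], while minmax yields [0, 0.5, 1], matching the usage example below.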

Usage

import { Normalization } from 'deeplearning-js';

expect(Normalization.zscore([1, 2, 3])).toEqual([-1.224744871391589, 0, 1.224744871391589]);
expect(Normalization.minmax([1, 2, 3])).toEqual([0, 0.5, 1]);
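
Normalization operates on a single 1D array, so a 2D data set is typically normalized one feature at a time. A minimal sketch, assuming the data is laid out with one row per feature (rawInput is an illustrative name):

import { Normalization } from 'deeplearning-js';

const rawInput = [
  [1, 2, 3],      // feature 1 across all examples
  [10, 20, 30],   // feature 2 across all examples
];

// Normalize each feature row independently before feeding it to the model
const input = rawInput.map((featureRow) => Normalization.zscore(featureRow));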

initializeParameters

Returns initial parameters according to the model structure.

Supported activation functions:

  • linear
  • relu
  • sigmoid
  • softmax

Usage

const initialParameters = initializeParameters(
  [{
    size: trainingSet.input.shape[0],  // input layer neurons
  }, {
    size: 56,                          // hidden layer neurons
    activationFunc: 'relu',            // hidden layer activation function
  }, {
    ...                                // more hidden layers
  }, {
    size: trainingSet.output.shape[0], // output layer neurons
    activationFunc: 'softmax',         // output layer activation function
  }],
  0,                                   // mean (default: 0)
  1,                                   // variance (default: 1)
  0.01,                                // scale (default: 0.01)
);

Return

{
  W1: number[][],
  b1: number[][],
  ...
  Wl: number[][],
  bl: number[][],
}

train

Returns parameters and cost after training for one epoch.

Supported cost functions:

  • quadratic
  • cross-entropy

Usage

train(
  input: number[][],
  output: number[][],
  parameters: any,
  costFunc: 'quadratic' | 'cross-entropy',
  learningRate: number,
)

Return

{
  parameters: {
    W1: number[][],
    b1: number[][],
    ...
    Wl: number[][],
    bl: number[][],
  },
  cost: number,
}
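
A minimal end-to-end sketch that trains for multiple epochs by calling train repeatedly, assuming input and output are number[][] data laid out with one row per feature/class and one column per example; the layer sizes, epoch count, and learning rate below are illustrative:

import { initializeParameters, train } from 'deeplearning-js';

// input: number[][] with input.length features; output: number[][] with output.length classes (assumed layout)
let parameters = initializeParameters(
  [
    { size: input.length },                              // input layer
    { size: 16, activationFunc: 'relu' },                // hidden layer
    { size: output.length, activationFunc: 'softmax' },  // output layer
  ],
  0,    // mean
  1,    // variance
  0.01, // scale
);

for (let epoch = 0; epoch < 500; epoch += 1) {
  const { parameters: updated, cost } = train(input, output, parameters, 'cross-entropy', 0.01);
  parameters = updated;                                   // feed the updated parameters into the next epoch
  if (epoch % 100 === 0) {
    console.log(`epoch ${epoch}: cost ${cost}`);
  }
}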

batchTrain

Returns parameters and costs after training over multiple batches.

Usage

batchTrain(
  currentBatch: number,
  totalBatch: number,
  batchSize: number,
  input: number[][],
  output: number[][],
  parameters: any,
  learningRate: number,
  costFunc: 'quadratic' | 'cross-entropy',
  onBatchTrainEnd: (ro: {                    // invoked when each batch's training ends
    costs: number[],
    parameters: any
  }, currentBatch: number) => any,
  onTrainEnd: (ro: {                         // invoked when training of all batches ends
    costs: number[],
    parameters: any,
  }) => any,
  costs?: number[] = [],
  disableRaf?: boolean = false,
)

Return

batchTrain is a recursive function, so handle intermediate training results in the onBatchTrainEnd callback and final training results in the onTrainEnd callback.
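
A minimal sketch of a batchTrain call, reusing the initialParameters object from the initializeParameters example above; the batch counts, batch size, and learning rate are illustrative, and the input/output layout is assumed as before:

import { batchTrain } from 'deeplearning-js';

batchTrain(
  0,                // currentBatch: start with the first batch
  10,               // totalBatch (illustrative)
  128,              // batchSize (illustrative)
  input,
  output,
  initialParameters,
  0.01,             // learningRate (illustrative)
  'cross-entropy',
  ({ costs }, currentBatch) => {
    // intermediate results after each batch
    console.log(`batch ${currentBatch}: latest cost ${costs[costs.length - 1]}`);
  },
  ({ parameters, costs }) => {
    // final results once every batch has been trained
    console.log(`done after ${costs.length} recorded costs`);
  },
);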

forwardPropagation

Returns predicted values based on input data and model parameters.

Usage

const forwardResults = forwardPropagation(input, parameters);
const predict = forwardResults.yHat;

Return

{
  yHat: number[][],                          // predicted values
  caches: Cache[],                           // for backPropagation
  activationFuncs: string[],                 // for backPropagation
}
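
A minimal prediction sketch, assuming parameters come from a previous train or batchTrain call, a softmax output layer, and that each column of yHat holds one example's class probabilities (the column-per-example layout is an assumption):

import { forwardPropagation } from 'deeplearning-js';

const { yHat } = forwardPropagation(input, parameters);

// Pick, for each example (column), the class (row) with the highest probability
const predictions = yHat[0].map((_, col) => {
  let bestClass = 0;
  for (let row = 1; row < yHat.length; row += 1) {
    if (yHat[row][col] > yHat[bestClass][col]) {
      bestClass = row;
    }
  }
  return bestClass;
});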