
averaged-perceptron

A linear classifier with the averaged perceptron algorithm

Installing

npm install averaged-perceptron

Using

A simple (and unrealistic) example:

import averagedPerceptron from "averaged-perceptron";

const { predict, update } = averagedPerceptron();
const trainingDataset = [
  [{ height: 4, width: 2 }, "slim"],
  [{ height: 2, width: 4 }, "fat"],
  [{ height: 1, width: 4 }, "fat"],
  [{ height: 2, width: 2.1 }, "fat"],
  [{ height: 2.1, width: 2 }, "slim"],
  [{ height: 2, width: 1 }, "slim"],
  [{ height: 1, width: 2 }, "fat"],
  [{ height: 1, width: 1.1 }, "fat"],
  [{ height: 1.1, width: 1 }, "slim"],
  [{ height: 4, width: 1 }, "slim"],
];
const epochs = 1000;
// A Fisher–Yates shuffle returning a new array, so the training order
// varies between epochs without mutating the original dataset
const shuffle = (dataset) => {
  const shuffled = dataset.slice();
  for (let i = shuffled.length - 1; i > 0; i -= 1) {
    const j = Math.floor(Math.random() * (i + 1));
    [shuffled[i], shuffled[j]] = [shuffled[j], shuffled[i]];
  }
  return shuffled;
};
for (let epoch = 0; epoch < epochs; epoch += 1) {
  shuffle(trainingDataset).forEach(([features, label]) => update(features, label));
}

predict({ height: 8, width: 2 }); // => "slim"
predict({ height: 2.1, width: 2 }); // => "slim"
predict({ height: 2, width: 2.1 }); // => "fat"
predict({ height: 2, width: 8 }); // => "fat"

A slightly more realistic example using the Iris dataset can be found in the tests.

API

averagedPerceptron([weights [, iterations]])

Returns a perceptron object. It may be initialized with weights, an object of objects giving the weight of each feature-label pair. When initialized with weights, iterations is the number of update() calls that were used to obtain them, defaulting to 0.

import averagedPerceptron from "averaged-perceptron";

// Create a new perceptron
const { predict, update, weights } = averagedPerceptron();

If you want to train the model in multiple sessions, you may resume training by passing iterations, the number of times update() was called to obtain the given weights. That way, new update() calls are properly averaged against the pretrained weights.

import averagedPerceptron from "averaged-perceptron";

// Create a perceptron from pretrained weights to do further training
const weightsJSON = '{"x":{"a":0.4,"b":0.6},"y":{"a":0.8,"b":-0.4}}';
const pretrainedWeights = JSON.parse(weightsJSON);
const iterations = 1000; // weights obtained with 1000 update() calls
const { predict, update, weights } = averagedPerceptron(pretrainedWeights, iterations);
// Keep training by calling update()

predict(features)

Returns the label predicted from the values in features, or "" if none exists.

import averagedPerceptron from "averaged-perceptron";

const { predict } = averagedPerceptron({
  x: { a: 0.4, b: 0.6 },
  y: { a: 0.8, b: -0.4 },
});
predict({ x: 1, y: 1 }); // => "a"

update(features, label [, guess])

Returns the perceptron, updating its weights with the respective values in features if label does not equal guess. If guess is not given, it defaults to the output of predict(features).

import averagedPerceptron from "averaged-perceptron";

const { update } = averagedPerceptron();
update({ x: 1, y: 1 }, "a");

Note that update() may be given feature-label pairs whose weights have not been preinitialized, so the model may be used for online learning when the features or labels are unknown a priori.
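To illustrate what this implies for the weight table (a sketch of the idea only, not the library's actual internals), an implementation can create weight entries lazily the first time a feature-label pair is seen:

```javascript
// Sketch: lazily creating weight entries for unseen feature–label pairs,
// which is what makes online learning with unknown features possible.
// (Illustrative only — not the library's actual code.)
const weights = {};

const addWeight = (feature, label, delta) => {
  if (weights[feature] === undefined) weights[feature] = {};
  weights[feature][label] = (weights[feature][label] || 0) + delta;
};

addWeight("height", "slim", 0.5); // creates weights.height on first use
addWeight("height", "slim", 0.5);
addWeight("width", "fat", 1); // a brand-new feature and label
```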

weights()

Returns an object of objects with the weight of each feature-label pair.

import averagedPerceptron from "averaged-perceptron";

const { weights } = averagedPerceptron({
  x: { a: 0.4, b: 0.6 },
  y: { a: 0.8, b: -0.4 },
});
weights(); // => { x: { a: 0.4, b: 0.6 }, y: { a: 0.8, b: -0.4 } }

Note that the weights are stored as an object of objects, because this perceptron is optimized for sparse features.
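A sketch of why this representation suits sparse features (again illustrative, not the library's implementation): scoring a label only iterates over the features actually present in the input, so absent features are implicitly zero and cost nothing.

```javascript
// Score a label by summing weight × value over only the features
// present in the input object — absent features contribute nothing.
// (Illustrative sketch, not the library's actual code.)
const weights = {
  x: { a: 0.4, b: 0.6 },
  y: { a: 0.8, b: -0.4 },
};

const score = (features, label) =>
  Object.entries(features).reduce(
    (sum, [feature, value]) => sum + (weights[feature]?.[label] || 0) * value,
    0,
  );

score({ x: 1, y: 1 }, "a"); // 0.4 + 0.8 = 1.2
score({ x: 1 }, "a"); // only x contributes: 0.4
```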
