node-efficientnet v2.1.0

TensorflowJS EfficientNet

This repository contains a TensorFlow.js implementation of EfficientNet, an image recognition model trained on ImageNet that can classify 1,000 different object categories.

EfficientNet is a lightweight convolutional neural network architecture that achieves state-of-the-art accuracy with an order of magnitude fewer parameters and FLOPS, on both ImageNet and five other commonly used transfer-learning datasets.

The codebase is heavily inspired by the TensorFlow implementation.

👏 Supporters

Stargazers and forkers repo rosters for @ntedgi/node-efficientnet.

Multilingual status

  locale    translated by 👑
  en        -
  zh        @luoye-fe
  es        @h383r
  ar        @lamamyf
  ru        @Abhighyaa
  he        @jhonDoe15
  fr        @burmanp
  other     ⏩ (need help, PRs welcome)

Table of Contents

  1. Just Want to Play With The Model
  2. Installation
  3. API
  4. Examples
  5. Usage
  6. About EfficientNet Models
  7. Models
  8. Multilingual status

How Do I Run This Project Locally?

  • Clone this repository
  • Just want to play? See Just Want to Play With The Model above.

Usage:

EfficientNet ships with 8 different model checkpoints (B0..B7). Each checkpoint uses a different input-layer resolution: the larger the input resolution, the higher the accuracy, but the slower the inference.

For example, take the sample images from this repository (see the sketch below):
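
A minimal sketch of that speed/accuracy trade-off, assuming one of the sample images (panda.jpg, from the Examples section below) has already been downloaded to ./samples; only the documented create and inference calls are used:

const {
  EfficientNetCheckPointFactory,
  EfficientNetCheckPoint,
} = require("node-efficientnet");

(async () => {
  // B0: smallest input resolution, fastest; B7: largest input resolution, most accurate.
  const checkpoints = [
    ["B0", EfficientNetCheckPoint.B0],
    ["B7", EfficientNetCheckPoint.B7],
  ];
  for (const [name, checkPoint] of checkpoints) {
    const model = await EfficientNetCheckPointFactory.create(checkPoint);
    const start = Date.now();
    const result = await model.inference("./samples/panda.jpg", { topK: 3 });
    console.log(`${name} took ${Date.now() - start} ms`, result.result);
  }
})();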

Installation

npm i --save node-efficientnet

API

EfficientNetCheckPointFactory.create(checkPoint: EfficientNetCheckPoint, options?: EfficientNetCheckPointFactoryOptions): Promise<EfficientNetModel>

Example: to create an EfficientNet model, pass an EfficientNetCheckPoint (available checkpoints: B0..B7); each checkpoint corresponds to a different model:

const {
  EfficientNetCheckPointFactory,
  EfficientNetCheckPoint,
} = require("node-efficientnet");

(async () => {
  // Load the B7 checkpoint (downloaded from the remote binaries by default).
  const model = await EfficientNetCheckPointFactory.create(
    EfficientNetCheckPoint.B7
  );

  const path2image = "...";
  const topResults = 5;

  // Returns the topK predictions, with labels localized to the requested locale.
  const result = await model.inference(path2image, {
    topK: topResults,
    locale: "zh",
  });
})();

Of course, you can use a local copy of the model files to speed up loading.

Download the model files from efficientnet-tensorflowjs-binaries and keep the directory structure consistent, like this:

local_model
  └── B0
    ├── group1-shard1of6.bin
    ├── group1-shard2of6.bin
    ├── group1-shard3of6.bin
    ├── group1-shard4of6.bin
    ├── group1-shard5of6.bin
    ├── group1-shard6of6.bin
    └── model.json

const path = require("path");
const {
  EfficientNetCheckPointFactory,
  EfficientNetCheckPoint,
} = require("node-efficientnet");

(async () => {
  // The checkpoint you load must have a matching subdirectory under
  // localModelRootDirectory (B0 in the local_model layout shown above).
  const model = await EfficientNetCheckPointFactory.create(
    EfficientNetCheckPoint.B0,
    {
      localModelRootDirectory: path.join(__dirname, "local_model"),
    }
  );

  const path2image = "...";
  const topResults = 5;

  const result = await model.inference(path2image, {
    topK: topResults,
    locale: "zh",
  });
})();

Examples

Download sample images from the repository and run inference with the model:

const fs = require("fs");
const nodeFetch = require("node-fetch");

const {
  EfficientNetCheckPointFactory,
  EfficientNetCheckPoint,
} = require("node-efficientnet");

const images = ["car.jpg", "panda.jpg"];
const imageDir = "./samples";
const imageDirRemoteUri =
  "https://raw.githubusercontent.com/ntedgi/node-EfficientNet/main/samples";

if (!fs.existsSync(imageDir)) {
  fs.mkdirSync(imageDir);
}

// Fetch a sample image from the repository, save it locally,
// and invoke the callback once the file has been written.
async function download(image, cb) {
  const response = await nodeFetch.default(`${imageDirRemoteUri}/${image}`);
  const buffer = await response.buffer();
  fs.writeFile(`${imageDir}/${image}`, buffer, cb);
}

// Create the B2 model once, then classify each image as soon as it is downloaded.
EfficientNetCheckPointFactory.create(EfficientNetCheckPoint.B2)
  .then((model) => {
    images.forEach(async (image) => {
      await download(image, () => {
        model.inference(`${imageDir}/${image}`).then((result) => {
          console.log(result.result);
        });
      });
    });
  })
  .catch((e) => {
    console.error(e);
  });

Output:

[
  { label: "sports car, sport car", precision: 88.02440940394301 },
  { label: "racer, race car, racing car", precision: 6.647441678387659 },
  { label: "car wheel", precision: 5.3281489176693295 }
]
[
  { label: "giant panda, panda, panda bear, coon bear, Ailuropoda melanoleuca", precision: 83.60747593436018 },
  { label: "skunk, polecat", precision: 11.61300759424677 },
  { label: "hog, pig, grunter, squealer, Sus scrofa", precision: 4.779516471393051 }
]

About EfficientNet Models

EfficientNets rely on AutoML and compound scaling to achieve superior performance without compromising resource efficiency. The AutoML Mobile framework has helped develop a mobile-size baseline network, EfficientNet-B0, which is then improved by the compound scaling method to obtain EfficientNet-B1 to B7.
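
For intuition, compound scaling (as described in the original EfficientNet paper) grows network depth, width, and input resolution together with a single coefficient φ. A minimal sketch using the paper's constants; these numbers come from the paper, not from this package's API, and the B-variants only roughly follow this schedule:

// Compound scaling from Tan & Le (2019): depth ~ alpha^phi, width ~ beta^phi,
// resolution ~ gamma^phi, with alpha * beta^2 * gamma^2 ≈ 2.
const alpha = 1.2;  // depth base
const beta = 1.1;   // width base
const gamma = 1.15; // resolution base

function compoundScale(phi) {
  return {
    depth: Math.pow(alpha, phi),
    width: Math.pow(beta, phi),
    resolution: Math.pow(gamma, phi),
  };
}

// Larger phi => deeper, wider network on higher-resolution inputs (roughly B1..B7).
console.log(compoundScale(1));
console.log(compoundScale(7));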

EfficientNets achieve state-of-the-art accuracy on ImageNet with an order of magnitude better efficiency:

  • In the high-accuracy regime, EfficientNet-B7 achieves state-of-the-art 84.4% top-1 / 97.1% top-5 accuracy on ImageNet with 66M parameters and 37B FLOPS. At the same time, the model is 8.4x smaller and 6.1x faster on CPU inference than the previous leader, GPipe.

  • In the middle-accuracy regime, EfficientNet-B1 is 7.6x smaller and 5.7x faster on CPU inference than ResNet-152, with similar ImageNet accuracy.

  • Compared to the widely used ResNet-50, EfficientNet-B4 improves top-1 accuracy from 76.3% to 82.6% (+6.3%) under a similar FLOPS budget.

Models

The performance of each model variant using the pre-trained weights converted from checkpoints provided by the authors is as follows:

Architecture     @top1* ImageNet   @top1* Noisy-Student
EfficientNetB0   0.772              0.788
EfficientNetB1   0.791              0.815
EfficientNetB2   0.802              0.824
EfficientNetB3   0.816              0.841
EfficientNetB4   0.830              0.853
EfficientNetB5   0.837              0.861
EfficientNetB6   0.841              0.864
EfficientNetB7   0.844              0.869

* - top-1 accuracy score for converted models (ImageNet validation set)
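
If you want to choose a checkpoint programmatically, one option is to encode the table above and pick the smallest (fastest) model that meets an accuracy target. A minimal sketch; the accuracy numbers are copied from the table, and the helper function is illustrative, not part of the package:

const { EfficientNetCheckPoint } = require("node-efficientnet");

// ImageNet top-1 accuracy of the converted checkpoints (from the table above).
const top1 = {
  B0: 0.772, B1: 0.791, B2: 0.802, B3: 0.816,
  B4: 0.830, B5: 0.837, B6: 0.841, B7: 0.844,
};

// Smallest checkpoint whose reported accuracy meets the target,
// falling back to B7 when the target exceeds every entry.
function smallestCheckPointFor(targetTop1) {
  const name = Object.keys(top1).find((k) => top1[k] >= targetTop1);
  return name ? EfficientNetCheckPoint[name] : EfficientNetCheckPoint.B7;
}

console.log(smallestCheckPointFor(0.8)); // B2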


if (this.repo.isAwesome || this.repo.isHelpful) {
  Star(this.repo);
}