@vladmandic/piface v0.1.2

License: MIT • Repository: github • Last release: 4 years ago

PiFace: 3D Face Detection, Iris Tracking and Age & Gender Prediction

Using TensorFlow/JS in a browser

And it's Fast!

Results on a low-end GPU (Nvidia GTX 1050):

  • 50 FPS with face bounding box only
  • 25 FPS with face geometry prediction enabled
  • 20 FPS with iris geometry prediction enabled
  • 15 FPS with age & gender prediction enabled
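Each of the feature flags above corresponds to a configuration option, so the accuracy/speed trade-off is directly configurable. A minimal sketch, using the option names from the Configuration section below:

```js
// fastest mode: face bounding boxes only (roughly 50 FPS on a GTX 1050)
const fastOptions = { mesh: false, iris: false, ageGender: false };

// full mode: all predictions enabled (roughly 15 FPS on the same GPU)
const fullOptions = { mesh: true, iris: true, ageGender: true };
```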

URL: https://github.com/vladmandic/piface

Credits

This is an amalgamation of multiple existing models; judging by the bundled model files, these include BlazeFace (face detection), FaceMesh (face geometry), MediaPipe Iris (iris tracking) and SSR-Net (age & gender prediction).

Install

npm install @vladmandic/piface

All pre-trained models are included in the /models folder

Demo

Demo is included in /demo

Methods & Classes

Methods:

  • load(options): explicitly load all model weights; can be skipped, as weights are loaded implicitly on first use
  • detect(image, options): run detection on the given input and return an array of detected faces; input can be a canvas, image or video element

Classes

  • options: configuration can be set on the exported options object before calling load() or detect(), or passed directly to those methods
  • models: exported models for low-level access
  • results: last known detection results
  • triangulation: helper array used to establish the full facial grid from mesh points
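As noted above, configuration can be set on the exported options object ahead of time instead of being passed per call. A brief sketch of that style, assuming the options object is exported as piface.options:

```js
const piface = require('@vladmandic/piface');

// set options globally before any call to load() or detect()
piface.options.ageGender = false; // skip age & gender prediction
piface.options.maxFaces = 1;      // only track a single face
```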

Configuration

```js
const options = {
  mesh: true,                // detect face geometry
  iris: true,                // detect iris details
  ageGender: true,           // predict age and gender
  modelPathFaceMesh:     '/models/facemesh/model.json',
  modelPathBlazeFace:    '/models/blazeface/model.json',
  modelPathIris:         '/models/iris/model.json',
  modelPathSSRNetAge:    '/models/ssrnet-imdb-age/model.json',    // supports imdb, wiki and morph variations
  modelPathSSRNetGender: '/models/ssrnet-imdb-gender/model.json', // supports imdb, wiki and morph variations
  inputSize: 128,            // size of input for blazeface, 128 for front model or 256 for back model (TBD)
  maxContinuousChecks: 5,    // how many frames to go without running the bounding box detector
  detectionConfidence: 0.9,  // threshold for discarding a prediction
  maxFaces: 10,              // maximum number of faces detected in the input
  iouThreshold: 0.3,         // threshold for deciding whether boxes overlap too much
  scoreThreshold: 0.75,      // threshold for deciding when to remove boxes based on score
  cropSize: 128,             // size of face canvas in return object (TBD)
};
```

There is no need to call piface.load() explicitly, as weights are loaded on first use. The options object and all of its properties are optional; any property not provided falls back to the default value noted above.

The detect() method returns an array of faces, where each `face` object consists of:

```js
  face = {
    faceConfidence, // float between 0 and 1
    box,            // face bounding box in format of array [x, y, width, height]
    mesh,           // array of facial points [x, y, z]
    annotations,    // object consisting of logical groups of points such as lips, eye, ear, etc.
    age,            // float
    gender,         // 'male' or 'female'
    iris,           // estimated relative distance to the iris; multiply by your camera focal length to get actual distance
  }
```

  • If mesh = true in the configuration, the mesh and annotations results will include face geometry details.
  • If iris = true in the configuration, the mesh and annotations results will include additional points describing the eye iris, and the iris value will show the logical distance to the eye.
  • If ageGender = true in the configuration, the age and gender values will be available; otherwise they are undefined.
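Since the iris value is a relative distance, converting it to a physical distance is a single multiplication by the camera focal length. A small helper to illustrate the arithmetic; the function name and the example focal length are illustrative, not part of the library:

```js
// convert piface's relative iris distance to an absolute distance,
// given the camera focal length in the unit you want the result in
function irisToDistance(iris, focalLength) {
  return iris * focalLength;
}

// e.g. a relative iris value of 0.5 with a 35 mm focal length:
console.log(irisToDistance(0.5, 35)); // 17.5
```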

Additionally, there is a helper array piface.triangulation which can be used to link mesh points into a full facial grid.
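To make the indexing concrete: triangulation is a flat array where every three consecutive entries are the vertex indices of one triangle. A standalone sketch of grouping it into triangles of mesh points; the helper name and toy data are illustrative only:

```js
// group a flat triangulation index array into triangles of mesh points
function trianglesFromMesh(triangulation, mesh) {
  const triangles = [];
  for (let i = 0; i < triangulation.length / 3; i++) {
    triangles.push([
      mesh[triangulation[i * 3 + 0]], // first vertex as [x, y, z]
      mesh[triangulation[i * 3 + 1]], // second vertex
      mesh[triangulation[i * 3 + 2]], // third vertex
    ]);
  }
  return triangles;
}

// toy data: four mesh points forming two triangles
const toyMesh = [[0, 0, 0], [1, 0, 0], [0, 1, 0], [1, 1, 0]];
const toyTriangulation = [0, 1, 2, 1, 3, 2];
console.log(trianglesFromMesh(toyTriangulation, toyMesh).length); // 2
```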

Example

```js
const piface = require('@vladmandic/piface');

async function main() {
  // make sure you have a 'face-canvas' element in your html; input can be a canvas, image or video element
  const canvas = document.getElementById('face-canvas');
  const ctx = canvas.getContext('2d');
  const faces = await piface.detect(canvas, options);
  for (const face of faces) {
    const label = `${face.gender} ${face.age} ${face.iris}`;
    ctx.strokeStyle = 'rgba(255, 255, 255, 1)';
    ctx.fillStyle = 'rgba(255, 255, 255, 1)';
    ctx.beginPath();
    ctx.rect(face.box[0], face.box[1], face.box[2], face.box[3]); // draw box around detected face
    ctx.fillText(label, face.box[0] + 4, face.box[1] + 8); // print predicted age, gender and distance
    ctx.stroke();
    for (let i = 0; i < piface.triangulation.length / 3; i++) { // draw full face geometry as a 3D polygon mesh
      const points = [
        piface.triangulation[i * 3 + 0], // index of the triangle's first vertex
        piface.triangulation[i * 3 + 1], // index of the second vertex
        piface.triangulation[i * 3 + 2], // index of the third vertex
      ].map((index) => face.mesh[index]); // look up the [x, y, z] mesh point for each vertex
      const region = new Path2D();
      region.moveTo(points[0][0], points[0][1]); // move to first point of the triangle and draw its outline
      for (const point of points) region.lineTo(point[0], point[1]);
      region.closePath();
      ctx.strokeStyle = `rgba(${127.5 + (2 * points[0][2])}, ${127.5 - (2 * points[0][2])}, 255, 0.5)`; // color each segment according to depth
      ctx.stroke(region);
    }
  }
}
main();
```
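The depth-based coloring in the loop above is plain arithmetic: red shifts up and green shifts down by twice the z value around a mid-gray of 127.5, so nearer and farther triangles get different hues. As a standalone function (the name is illustrative):

```js
// map a mesh point's depth (z) to an rgba stroke color:
// red increases and green decreases with depth around mid-gray 127.5
function depthToColor(z) {
  return `rgba(${127.5 + (2 * z)}, ${127.5 - (2 * z)}, 255, 0.5)`;
}

console.log(depthToColor(0)); // rgba(127.5, 127.5, 255, 0.5)
```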

Todo

  • Improve detection of smaller faces, add BlazeFace back model