homography v1.8.1 · MIT License · Last release: 5 months ago

Homography.js

Homography.js is a lightweight, high-performance library for implementing homographies in JavaScript or Node.js. It is designed to be easy to use (even for developers who are not familiar with Computer Vision) and able to run in real-time applications (even on low-spec devices such as budget smartphones). It allows you to perform Affine, Projective or Piecewise Affine warpings over any Image or HTMLElement in your application by setting only a small set of reference points. Additionally, image warpings can be made persistent (independent of any CSS property), so they can easily be drawn on a canvas, mixed or downloaded. Homography.js is built in a way that frees the user from all the pain-in-the-ass details of homography operations, such as thinking about output dimensions or input coordinate ranges, dealing with unexpected shifts, pads, crops or unfilled pixels in the output image, or even knowing what a transform matrix is.

Install

To use as a module in the browser (Recommended):

<script type="module">
  import { Homography } from "https://cdn.jsdelivr.net/gh/Eric-Canas/Homography.js@1.4/Homography.js";
</script>

If you don't need to perform Piecewise Affine Transforms, you can also use a much lighter UMD build that exposes the homography global variable and loads faster:

<script src="https://cdn.jsdelivr.net/gh/Eric-Canas/Homography.js@1.4/HomographyLightweight.min.js"></script>
...
// And then in your script
const myHomography = new homography.Homography();
// Remember not to override the homography global variable by naming your own object "homography"

Via npm:

$ npm install homography
... 
import { Homography } from "homography";

Usage

In the Browser

Perform a basic Piecewise Affine Transform from four source points.

    // Select the image you want to warp
    const image = document.getElementById("myImage");
    
    // Define the reference points. In this case using normalized coordinates (from 0.0 to 1.0).
    const srcPoints = [[0, 0], [0, 1], [1, 0], [1, 1]];
    const dstPoints = [[1/5, 1/5], [0, 1/2], [1, 0], [6/8, 6/8]];
    
    // Create a Homography object for a "piecewiseaffine" transform (it could be reused later)
    const myHomography = new Homography("piecewiseaffine");
    // Set the reference points
    myHomography.setReferencePoints(srcPoints, dstPoints);
    // Warp your image
    const resultImage = myHomography.warp(image);
    ...

Perform a complex Piecewise Affine Transform from a large set of pointsInY * pointsInX reference points.

    ...
    // Define a set of reference points that follow a sinusoidal shape.
    // In this case in image coordinates (x: from 0 to w, y: from 0 to h) for convenience.
    let srcPoints = [], dstPoints = [];
    for (let y = 0; y <= h; y += h/pointsInY){
        for (let x = 0; x <= w; x += w/pointsInX){
            srcPoints.push([x, y]); // Add (x, y) as a source point
            dstPoints.push([x, amplitude + y + Math.sin((x*n)/Math.PI)*amplitude]); // Apply a sine function on y
        }
    }
    // Set the reference points (reuse the previous Homography object)
    myHomography.setReferencePoints(srcPoints, dstPoints);
    // Warp your image. As no image is given, it will reuse the one set in the previous example.
    const resultImage = myHomography.warp();
    ...
    

Perform a simple Affine Transform and apply it on a HTMLElement.

    ...
    // Set the reference points from which to estimate the transform
    const srcPoints = [[0, 0], [0, 1], [1, 0]];
    const dstPoints = [[0, 0], [1/2, 1], [1, 1/8]];
    
    // Don't specify the type of transform to apply, so let the library decide it by itself. 
    const myHomography = new Homography(); // Default transform value is "auto".
    // Apply the transform over an HTMLElement from the DOM.
    myHomography.transformHTMLElement(document.getElementById("inputText"), srcPoints, dstPoints);
    ...
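Presumably, the "auto" mode picks the transform type from the number of reference point pairs it receives. The sketch below is a hypothetical illustration of that heuristic (the function name `detectTransform` is invented here, not part of the library's API):

```javascript
// Hypothetical sketch of how an "auto" mode can choose a transform type
// from the number of reference point pairs (not the library's actual code).
function detectTransform(srcPoints) {
  const n = srcPoints.length;
  if (n === 3) return "affine";         // 3 correspondences determine an affine map
  if (n === 4) return "projective";     // 4 correspondences determine a projective map
  if (n > 4) return "piecewiseaffine";  // more points require a piecewise mesh
  throw new Error("At least 3 reference points are required");
}

console.log(detectTransform([[0, 0], [0, 1], [1, 0]]));         // "affine"
console.log(detectTransform([[0, 0], [0, 1], [1, 0], [1, 1]])); // "projective"
```

This is why the previous example, with only three point pairs, ends up applying an Affine transform.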

Calculate 250 different Projective Transforms, apply them over the same input Image and draw them on a canvas.

const ctx = document.getElementById("exampleCanvas").getContext("2d");

// Build the initial reference points (in this case, in image coordinates just for convenience)
const srcPoints = [[0, 0], [0, h], [w, 0], [w, h]];
let dstPoints = [[0, 0], [0, h], [w, 0], [w, h]];
// Create the homography object (it is not necessary to set transform as "projective" as it will be automatically detected)
const myHomography = new Homography(); 
// Set the static parameters of all the transforms sequence (it will improve the performance of subsequent warpings)
myHomography.setSourcePoints(srcPoints);
myHomography.setImage(inputImg);

// Set the parameters for building the future dstPoints at each frame (5 movements of 50 frames each one)
const framesPerMovement = 50;
const movements = [[[0, h/5], [0, -h/5], [0, 0], [0, 0]],
                   [[w, 0], [w, 0], [-w, 0], [-w, 0]],
                   [[0, -h/5], [0, h/5], [0, h/5], [0, -h/5]],
                   [[-w, 0], [-w, 0], [w, 0], [w, 0]],
                   [[0, 0], [0, 0], [0, -h/5], [0, h/5]]];

for(let movement = 0; movement<movements.length; movement++){
    for (let step = 0; step<framesPerMovement; step++){
        // Create the new dstPoints (in Computer Vision applications these points will usually come from webcam detections)
        for (let point = 0; point<srcPoints.length; point++){
            dstPoints[point][0] += movements[movement][point][0]/framesPerMovement;
            dstPoints[point][1] += movements[movement][point][1]/framesPerMovement;
        }
        
        // Update the destiny points and calculate the new warping. 
        myHomography.setDestinyPoints(dstPoints);
        const img = myHomography.warp(); // A warp() with no parameters reuses the previously set image
        // Clear the canvas and draw the new image (using putImageData instead of drawImage for performance reasons)
        ctx.clearRect(0, 0, w, h);
        ctx.putImageData(img, Math.min(dstPoints[0][0], dstPoints[2][0]), Math.min(dstPoints[0][1], dstPoints[2][1]));
        await new Promise(resolve => setTimeout(resolve, 0.1)); // Just a trick for forcing canvas to refresh
    }
}

*Just pay attention to the use of setSourcePoints(srcPoints), setImage(inputImg), setDestinyPoints(dstPoints) and warp(). The rest of the code just generates a coherent sequence of destiny points and draws the results.

API Reference

new Homography(transform = "auto", width, height)

Main class for performing geometrical transformations over images.
Homography is in charge of applying Affine, Projective or Piecewise Affine transformations over images, in a way that is as transparent and simple for the user as possible. It is especially intended for real-time applications; for this reason, the class keeps an internal state to avoid redundant operations when reused, so the best performance is obtained when multiple transformations are applied to the same image.

Homography.setSourcePoints(points, image, width, height, pointsAreNormalized)

Sets the source reference points ([x1, y1, x2, y2, ..., xn, yn]) of the transform and, optionally, the image that will be transformed.
Source reference points are a set of 2-D coordinates in the input image that will map exactly to the corresponding destiny point coordinates (set through setDestinyPoints()) in the output image. The rest of the image coordinates will be interpolated through the geometrical transform estimated from these correspondences.
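To illustrate what "estimating the transform from the correspondences" means, here is a self-contained sketch of the affine case (the function names are invented for illustration; this is not the library's internal code). With exactly 3 source→destiny pairs, the six affine parameters can be solved with Cramer's rule:

```javascript
// Illustrative sketch (not the library's internal code): estimate the six
// affine parameters from exactly 3 source→destiny correspondences by solving
//   x' = a·x + b·y + c ,  y' = d·x + e·y + f
// with Cramer's rule on the shared 3×3 system matrix [[x, y, 1], ...].
function estimateAffine(src, dst) {
  const [[x0, y0], [x1, y1], [x2, y2]] = src;
  const det = x0 * (y1 - y2) - y0 * (x1 - x2) + (x1 * y2 - x2 * y1);
  const solve = (v0, v1, v2) => [
    (v0 * (y1 - y2) - y0 * (v1 - v2) + (v1 * y2 - v2 * y1)) / det,
    (x0 * (v1 - v2) - v0 * (x1 - x2) + (x1 * v2 - x2 * v1)) / det,
    (x0 * (y1 * v2 - y2 * v1) - y0 * (x1 * v2 - x2 * v1) + v0 * (x1 * y2 - x2 * y1)) / det,
  ];
  const [a, b, c] = solve(dst[0][0], dst[1][0], dst[2][0]); // x-equation
  const [d, e, f] = solve(dst[0][1], dst[1][1], dst[2][1]); // y-equation
  return [a, b, c, d, e, f];
}

// Any other coordinate of the image is then interpolated through the estimated transform
const applyAffine = ([a, b, c, d, e, f], [x, y]) => [a * x + b * y + c, d * x + e * y + f];

// A pure translation by (2, 3): every point should be shifted by (2, 3)
const t = estimateAffine([[0, 0], [0, 1], [1, 0]], [[2, 3], [2, 4], [3, 3]]);
console.log(applyAffine(t, [5, 7])); // → [ 7, 10 ]
```

The projective case works analogously but needs 4 correspondences and an 8-parameter system.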

Homography.setDestinyPoints(points, pointsAreNormalized)

Sets the destiny reference points ([x1, y1, x2, y2, ..., xn, yn]) of the transform.
Destiny reference points are a set of 2-D coordinates for the output image. They must match the source points one-to-one, as each source point of the input image will be transformed to land exactly on its corresponding destiny point in the output image. The rest of the image coordinates will be interpolated through the geometrical transform estimated from these correspondences.
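The pointsAreNormalized parameter suggests that points given in the [0.0, 1.0] range are scaled to pixel coordinates once the image dimensions are known. A minimal sketch of that conversion (the function name `denormalizePoints` is an assumption for illustration, not the library's API):

```javascript
// Sketch (assumption, not the library's code): normalized reference points
// in the [0.0, 1.0] range can be scaled to pixel coordinates once the
// image width and height are known.
function denormalizePoints(points, width, height) {
  return points.map(([x, y]) => [x * width, y * height]);
}

console.log(denormalizePoints([[0, 0], [1/2, 1], [1, 1/8]], 200, 80));
// → [ [ 0, 0 ], [ 100, 80 ], [ 200, 10 ] ]
```

This is why normalized coordinates are convenient: the same reference points work for any image size.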

Homography.setReferencePoints(srcPoints, dstPoints, image, width, height, srcPointsAreNormalized, dstPointsAreNormalized)

This function simply calls Homography.setSourcePoints(srcPoints[, image, width, height, srcPointsAreNormalized]) and then Homography.setDestinyPoints(dstPoints[, dstPointsAreNormalized]). It is convenient when setting the reference points for the first time, but should be replaced by Homography.setSourcePoints() or Homography.setDestinyPoints() alone when performing multiple transforms where one of srcPoints or dstPoints remains unchanged, as the redundant calls would decrease overall performance.

Homography.setImage(image, width, height)

Sets the image that will be transformed when warping.
Setting the image before the destiny points (the call to setDestinyPoints()) and the warping (the call to warp()) allows some calculations to be performed in advance, and avoids redundant operations when successive setDestinyPoints() → warp() calls occur later.

Homography.warp(image, asHTMLPromise = false)

Applies the set transform to an image. It applies the homography to the given image, or to the previously set one, and returns it as ImageData or as a Promise. The output image will have enough width and height to enclose the whole input image without any crop or pad once transformed; any void section of the output image will be transparent. If an image is given, it is set internally, so any future call to warp() with no image parameter will apply the transformation over this image again. Remember that "affine" and "projective" transforms warp the whole input image, while "piecewiseaffine" transforms only warp the parts of the image that can be connected through the given source points. This happens because "piecewiseaffine" transforms define a different Affine transform for each section of the input image, so transforms cannot be calculated for undefined sections. If you want the whole output image in a Piecewise Affine transform, set a source reference point on each corner of the input image ([x1, y1, x2, y2, ..., 0, 0, 0, height, width, 0, width, height]).
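The claim that the output "has enough width and height to enclose the whole input image" can be sketched as follows: transform the four corners of the input, then take the bounding box of the results and shift it so no pixel falls at a negative coordinate. This is an assumed mechanism for illustration, not the library's actual code:

```javascript
// Sketch (assumption, not the library's code): derive the output size and
// shift that enclose the whole warped image from its transformed corners.
function outputBounds(transformedCorners) {
  const xs = transformedCorners.map(p => p[0]);
  const ys = transformedCorners.map(p => p[1]);
  const minX = Math.min(...xs), maxX = Math.max(...xs);
  const minY = Math.min(...ys), maxY = Math.max(...ys);
  // Shifting by (-minX, -minY) keeps every pixel at a non-negative coordinate
  return { width: maxX - minX, height: maxY - minY, shiftX: -minX, shiftY: -minY };
}

console.log(outputBounds([[-10, 0], [0, 110], [90, -5], [100, 100]]));
// → { width: 110, height: 115, shiftX: 10, shiftY: 5 }
```

This also explains why the user never has to reason about output dimensions or unexpected shifts: the library can absorb them into the output size.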

Homography.transformHTMLElement(element, srcPoints, dstPoints)

Applies the current Affine or Projective transform over an HTMLElement. Applying the transform to any HTMLElement is extremely fast.
If srcPoints and dstPoints are given, a new transform is estimated from them. Take into account that this function works by modifying the CSS transform property, so it will not work with the "piecewiseaffine" option, as CSS does not support Piecewise Affine transforms.
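Since it works through the CSS transform property, an affine transform presumably ends up as a CSS matrix() value. A sketch of that mapping (the helper name and the row-major parameter order are assumptions for illustration; the CSS matrix(a, b, c, d, e, f) semantics, x' = a·x + c·y + e and y' = b·x + d·y + f, are standard):

```javascript
// Sketch: convert six affine parameters in row-major order
// (x' = a·x + b·y + c, y' = d·x + e·y + f) into a CSS matrix() string.
// CSS matrix() takes the coefficients column by column, so they interleave.
function toCSSMatrix([a, b, c, d, e, f]) {
  return `matrix(${a}, ${d}, ${b}, ${e}, ${c}, ${f})`;
}

// A pure translation by (10px, 20px):
console.log(toCSSMatrix([1, 0, 10, 0, 1, 20])); // "matrix(1, 0, 0, 1, 10, 20)"
```

This also makes the speed claim plausible: the browser's compositor applies the CSS transform, so no per-pixel work happens in JavaScript.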

Performance tests on a (somewhat battered) budget smartphone.