react-native-vision-object-detection v0.0.8
React Native Vision Object Detection
Get started
Install the package:
npm i react-native-vision-object-detection
or
yarn add react-native-vision-object-detection
Then, link the library's native components with your Android project (on React Native 0.60 and newer, autolinking handles this step automatically):
react-native link react-native-vision-object-detection
Firebase and Google Services
To use the Firebase components, first create a project in the Firebase console. Once the project is created, download the google-services.json file and place it in your android/app directory.
Next, update the Gradle build files so the Google services plugin is available. Add classpath 'com.google.gms:google-services:4.3.3' to your android/build.gradle:
buildscript {
    ...
    repositories {
        google()
        ...
    }
    dependencies {
        ...
        classpath 'com.google.gms:google-services:4.3.3'
        ...
    }
}
Now that the classpath for the Google services Gradle plugin is declared, we can tell Gradle to apply it when building the app. Append the following line to your android/app/build.gradle: apply plugin: 'com.google.gms.google-services'.
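For context, here is a sketch of where that line sits in android/app/build.gradle (the rest of the file is your existing configuration and will vary by project):

```groovy
// android/app/build.gradle
apply plugin: 'com.android.application'

// ... your existing android {} and dependencies {} blocks ...

// Appended at the end of the file for Firebase / Google services:
apply plugin: 'com.google.gms.google-services'
```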
That's it!
How to use
Simple Firebase call
You can use Firebase ML Kit for generic detection. Each detected object is assigned a category index from 0 to 5, each corresponding to a coarse category of objects (unknown, fashion, plants, ...). The callback also provides the tracking ID associated with each detection.
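To make the callback output readable, you can map the category index to a name. The index-to-name mapping below follows the coarse categories documented for Firebase ML Kit object detection; the exact shape of the objects passed to the callback is not specified here, so treat this as a sketch:

```javascript
// Human-readable names for Firebase ML Kit's coarse object categories
// (indices 0-5 per the ML Kit object detection documentation).
const CATEGORY_NAMES = [
  "Unknown",       // 0
  "Home goods",    // 1
  "Fashion goods", // 2
  "Food",          // 3
  "Places",        // 4
  "Plants",        // 5
];

// Turn a category index from the detection callback into a readable
// label; out-of-range indices fall back to "Unknown".
function categoryName(index) {
  return CATEGORY_NAMES[index] ?? CATEGORY_NAMES[0];
}

console.log(categoryName(2)); // "Fashion goods"
```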
import {VisionFirebase} from "react-native-vision-object-detection";
import * as React from "react";
import {View} from "react-native";
class MyDetector extends React.Component {
  constructor(props: any) {
    super(props);
  }

  onObjectDetected(objects) {
    console.log(objects);
  }

  render() {
    return (
      <View style={{flex: 1}}>
        <VisionFirebase
          showBoxes={true}
          onObjectDetected={(object) => this.onObjectDetected(object)}
          classification={true}
        />
      </View>
    );
  }
}
Simple TensorFlow call
If you already have a TensorFlow Lite model, you can use it directly to detect objects.
/!\ Warning
Currently, only SSD MobileNet models are supported.
Create an assets
directory under android/app/src/main/
and place your model.tflite
and labels.txt
files there. The labels file contains the detection classes.
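For illustration, a labels.txt for an SSD MobileNet model trained on COCO typically lists one class per line, starting like this (your file must match the classes your model was trained on):

```
person
bicycle
car
motorcycle
airplane
...
```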
Then, use the component much as we did with Firebase:
import {VisionTensorflow} from "react-native-vision-object-detection";
import * as React from "react";
import {View} from "react-native";
class MyDetector extends React.Component {
  constructor(props: any) {
    super(props);
  }

  onObjectDetected(objects) {
    console.log(objects);
  }

  render() {
    return (
      <View style={{flex: 1}}>
        <VisionTensorflow
          showBoxes={true}
          onObjectDetected={(object) => this.onObjectDetected(object)}
          modelPath={"model.tflite"}
          labelsPath={"labels.txt"}
        />
      </View>
    );
  }
}
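SSD-style models emit many low-scoring detections, so you will usually want to filter the callback payload by confidence. The `confidence` and `label` field names below are assumptions about what onObjectDetected receives, not part of this library's documented API; adapt them to the actual payload you observe:

```javascript
// Keep only detections at or above a confidence threshold.
// NOTE: the `confidence` field name is an assumption about the
// callback payload; check what onObjectDetected actually receives.
function filterByConfidence(objects, threshold = 0.5) {
  return objects.filter((obj) => obj.confidence >= threshold);
}

// Hypothetical payload for illustration only.
const sample = [
  { label: "person", confidence: 0.91 },
  { label: "car", confidence: 0.32 },
];

console.log(filterByConfidence(sample)); // [{ label: "person", confidence: 0.91 }]
```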