This tutorial explains how to build a Face Recognition based Device Control Block that controls the Buzzer Block in the Temperature & Fire Monitoring & Alerting Tinker Block application. Specifically, you will train an AI model to recognize your face and control an actuator block whenever the model detects it.
Before building the Face Recognition based Device Control Block, make sure you have built the following:
Buzzer Block, Processor Block and Power Block
IoT-based Server Block
IoT-based Device Control Block
Device Integration Block
1. Open the Teachable Machine tool at https://teachablemachine.withgoogle.com
2. Click "Get Started" and create a new "Image project".
3. Select "Standard Image model".
4. Click the "Webcam" button under the "Class-1" class and record at least 20-30 photos of your face.
5. Click the "Webcam" button under the "Class-2" class and record at least 20-30 photos of the background with your face out of view.
6. Rename "Class-1" to "Present" and "Class-2" to "Absent".
7. Click the "Train Model" button.
8. Once training is complete, click the "Export Model" button.
9. In the pop-up window that appears, click the "Upload Model" button.
10. Once uploading is complete, copy the "shareable link" that appears and save it somewhere.
11. Copy the JavaScript code given below on this page (not the code shown in Teachable Machine) into a text editor and save it as "My AI Project.html". (Note: the file must have the ".html" extension.)
The complete JavaScript code is given below:
<div>Face Recognition based Device Control Block</div>
<button type="button" onclick="init()">Start</button>
<div id="webcam-container"></div>
<div id="label-container"></div>
<script src="https://cdn.jsdelivr.net/npm/@tensorflow/tfjs@latest/dist/tf.min.js"></script>
<script src="https://cdn.jsdelivr.net/npm/@teachablemachine/image@latest/dist/teachablemachine-image.min.js"></script>
<script src="https://code.jquery.com/jquery-1.9.1.js"></script>
<script type="text/javascript">
// More API functions here:
// https://github.com/googlecreativelab/teachablemachine-community/tree/master/libraries/image
// The link to your model provided by Teachable Machine export panel
const URL = "https://teachablemachine.withgoogle.com/models/0A687Lx2F/";
var found1 = false; // set to true once TURN_ON has been sent, to avoid repeat commands
var found2 = false; // set to true once TURN_OFF has been sent
let model, webcam, labelContainer, maxPredictions;
// Load the image model and setup the webcam
async function init() {
    const modelURL = URL + "model.json";
    const metadataURL = URL + "metadata.json";

    // Load the model and metadata.
    // Refer to tmImage.loadFromFiles() in the API to support files from a file picker
    // or files from your local hard drive.
    // Note: the image library adds a "tmImage" object to your window (window.tmImage)
    model = await tmImage.load(modelURL, metadataURL);
    maxPredictions = model.getTotalClasses();

    // Set up the webcam
    const flip = true; // whether to flip the webcam
    webcam = new tmImage.Webcam(200, 200, flip); // width, height, flip
    await webcam.setup(); // request access to the webcam
    await webcam.play();
    window.requestAnimationFrame(loop);

    // Append elements to the DOM
    document.getElementById("webcam-container").appendChild(webcam.canvas);
    labelContainer = document.getElementById("label-container");
    for (let i = 0; i < maxPredictions; i++) { // and class labels
        labelContainer.appendChild(document.createElement("div"));
    }
}
async function loop() {
    webcam.update(); // update the webcam frame
    await predict();
    window.requestAnimationFrame(loop);
}

// Run the webcam image through the image model
async function predict() {
    // predict can take in an image, video or canvas html element
    const prediction = await model.predict(webcam.canvas);
    for (let i = 0; i < maxPredictions; i++) {
        var classPrediction = prediction[i].className;
        console.log("Predicted: ");
        console.log(classPrediction);
        console.log(classPrediction.localeCompare("Present"));
        if ((classPrediction.localeCompare("Present") == 0) && ((prediction[i].probability.toFixed(2) * 100) > 80)) {
            console.log("Present > 80 %");
            if (found1 == false) {
                console.log("posting turn on");
                $.post('https://api.thingspeak.com/talkbacks/99999/commands.json?api_key=XXXXXXXXXXXXXXXX&command_string=TURN_ON&position=1');
                found1 = true; // once sent, don't send the TALKBACK command again
                found2 = false;
                console.log("found1 is true");
                return;
            }
        }
        else if ((classPrediction.localeCompare("Absent") == 0) && ((prediction[i].probability.toFixed(2) * 100) > 80)) {
            console.log("Absent > 80 %");
            if (found2 == false) {
                console.log("posting turn off");
                $.post('https://api.thingspeak.com/talkbacks/99999/commands.json?api_key=XXXXXXXXXXXXXXXX&command_string=TURN_OFF&position=1');
                found2 = true; // once sent, don't send the TALKBACK command again
                found1 = false;
                console.log("found1 is false");
                return;
            }
        }
        classPrediction = prediction[i].className + ": " + prediction[i].probability.toFixed(2);
        labelContainer.childNodes[i].innerHTML = classPrediction;
    }
}
</script>
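The two $.post() calls above target the ThingSpeak TalkBack API, with the TalkBack ID ("99999") and API key ("XXXXXXXXXXXXXXXX") as placeholders. As a minimal sketch of how that command URL is assembled (the helper name and the credentials below are illustrative, not part of the tutorial code):

```javascript
// Hypothetical helper: builds the ThingSpeak TalkBack command URL used in the
// two $.post() calls. Replace talkbackId and apiKey with your own values.
function buildTalkbackUrl(talkbackId, apiKey, command) {
  return "https://api.thingspeak.com/talkbacks/" + talkbackId +
         "/commands.json?api_key=" + apiKey +
         "&command_string=" + command + "&position=1";
}

// Example with placeholder credentials:
console.log(buildTalkbackUrl("99999", "XXXXXXXXXXXXXXXX", "TURN_ON"));
```

Keeping the URL construction in one place like this makes steps 13 and 14 below a single edit instead of two.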
12. In the JavaScript code you saved, replace the value of the URL constant (the line beginning const URL =) with the shareable link you copied earlier from Teachable Machine.
13. In the same JavaScript, replace the number "99999" with your TalkBack ID from ThingSpeak. It appears in two places in the code; replace both.
14. In the same JavaScript, replace "XXXXXXXXXXXXXXXX" with the TalkBack API Key you noted earlier in ThingSpeak. This also appears in two places; replace both.
15. We are now ready to test the AI module. Make sure your laptop or desktop computer has a built-in camera or a webcam connected to it.
16. Open the "My AI Project.html" file in a web-browser.
17. When the browser asks for permission to use your camera, allow it.
18. You will see the camera's feed displayed in your web browser. Make sure the camera can see your face.
19. As soon as the AI module detects your face, it turns on the buzzer.
20. Move your face away from the camera. You will notice the buzzer stops buzzing.
21. Move your face in and out of the camera's view and notice the buzzer start and stop accordingly. In summary, your face turns the buzzer on, and the absence of your face turns it off.
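The on/off behaviour you just observed comes from the found1/found2 latching in predict(): a TalkBack command is sent only when the detected state changes, not on every camera frame. That logic can be sketched as a small pure function (the names here are illustrative, not part of the tutorial code):

```javascript
// Minimal sketch of the latching logic: returns the command to send when the
// detected state changes, or null when nothing should be sent. Assumes the
// same 80% confidence threshold and the class names "Present" and "Absent".
function createCommandGate() {
  let lastCommand = null; // last command actually sent
  return function (className, probability) {
    const command = probability > 0.80
      ? (className === "Present" ? "TURN_ON"
        : className === "Absent" ? "TURN_OFF" : null)
      : null;
    if (command !== null && command !== lastCommand) {
      lastCommand = command;
      return command; // state changed: send this TalkBack command
    }
    return null; // below threshold or no change: send nothing
  };
}

const gate = createCommandGate();
console.log(gate("Present", 0.95)); // "TURN_ON"  (face appeared)
console.log(gate("Present", 0.97)); // null       (already on, no repeat)
console.log(gate("Absent", 0.90));  // "TURN_OFF" (face left the frame)
```

Sending a command only on a state change matters here because predict() runs on every animation frame; without the latch, the browser would flood ThingSpeak with identical TalkBack commands.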