Blogs

Virtual Assistant @Drone

RESEARCH PROJECT

My quadcopter is much like the others in the field, but it can be controlled through voice commands and responds with voice feedback, driven by an AI assistant.

The goal is to develop an open-source quadcopter virtual assistant for private use and to make its development open to everyone.

This assistant has to be able to understand and communicate with the speaker; current virtual assistants are quite limited and unable to hold a proper conversation.

Good drones are also very expensive and not practical for use at home or in school.

An open-source, printable drone can be built, used, and developed by everyone.

Controlling a robot with voice commands and making it reason on its own is still a challenge, and this project is a way to take part in solving those problems.

The assistant should handle tasks such as reminders, health advice, reading and replying to WhatsApp messages, and returning to its charging station automatically.

Project Components:

1. Raspberry Pi 3 (single-board computer with wireless LAN and Bluetooth connectivity)
2. Power module
3. Flight controller (Pixhawk)
4. Transmitter/receiver (PPM enabled)
5. ESC, 30 A
6. UBEC (5 V)
7. Jumper wires (for connecting the Pixhawk to the Raspberry Pi)
8. BLDC motors, 1400 KV
9. Propellers, 6 inch (2 clockwise, 2 counter-clockwise, for smooth flight)
10. GPS module (M8N)
11. LiPo battery, 3000 mAh (Orange)
12. Pi Camera module

My main focus is on a central AI that controls all the devices along with the drone. That system will be cloud-based, so it can be accessed from any device, anywhere in the world.
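As a rough illustration of the voice-control loop I am aiming for, here is a minimal Python sketch that pairs the SpeechRecognition library with DroneKit over the Pi-to-Pixhawk serial link. The serial port, baud rate, and command phrases are assumptions for this sketch, not final project code.

# Minimal voice-to-flight sketch (port, baud rate, and phrases are assumptions).
import time
import speech_recognition as sr
from dronekit import connect, VehicleMode

vehicle = connect("/dev/serial0", baud=57600, wait_ready=True)  # Pixhawk via the Pi's UART
recognizer = sr.Recognizer()

def takeoff(target_altitude_m):
    """Arm in GUIDED mode and climb to the requested altitude."""
    vehicle.mode = VehicleMode("GUIDED")
    vehicle.armed = True
    while not vehicle.armed:              # wait for the flight controller to confirm arming
        time.sleep(1)
    vehicle.simple_takeoff(target_altitude_m)

while True:
    with sr.Microphone() as source:
        audio = recognizer.listen(source)
    try:
        command = recognizer.recognize_google(audio).lower()   # cloud speech-to-text
    except sr.UnknownValueError:
        continue                           # speech was unintelligible, listen again
    if "take off" in command:
        takeoff(2)
    elif "land" in command:
        vehicle.mode = VehicleMode("LAND")

The cloud AI and the voice feedback described above would sit on top of a loop like this.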

A future version of the drone will include a robotic arm.

Emotion Checking ML | AI

RESEARCH PROJECT

Description:

A human face usually shows a mix of emotions, so this project demonstrates the probabilities of the emotions it detects.

What does Emotion Recognition mean?

Emotion recognition is a technique used in software that allows a program to "read" the emotions on a human face using advanced image processing. Companies have been experimenting with combining sophisticated algorithms with image processing techniques that have emerged in the past ten years to understand more about what an image or a video of a person's face tells us about how he or she is feeling, and not only that, but also to show the probabilities of the mixed emotions a face can express.

Installations:

Install dependencies using requirements.txt

pip install -r requirements.txt

Usage:

The program will create one window displaying the scene captured by the webcam and another window showing the probabilities of the detected emotions.

Demo

python real_time_video.py

You can simply use the pretrained model I have included at the path written in the code file; I chose it specifically because it scores the best accuracy. Feel free to choose another model, but in that case you have to run the train_emotion_classifier file described later.
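For reference, the core of real_time_video.py looks roughly like the sketch below: a Haar cascade finds faces, the pretrained Keras model scores the emotion probabilities, and OpenCV draws the result. The file paths and label order here are my assumptions, so check them against the repository before running.

# Sketch of the real-time inference loop (paths and label order are assumptions).
import cv2
import numpy as np
from keras.models import load_model

EMOTIONS = ["angry", "disgust", "scared", "happy", "sad", "surprised", "neutral"]

face_detector = cv2.CascadeClassifier("haarcascade_files/haarcascade_frontalface_default.xml")
emotion_model = load_model("models/_mini_XCEPTION.102-0.66.hdf5", compile=False)

camera = cv2.VideoCapture(0)
while True:
    ok, frame = camera.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    faces = face_detector.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    for (x, y, w, h) in faces:
        roi = cv2.resize(gray[y:y + h, x:x + w], (64, 64)).astype("float32") / 255.0
        roi = np.expand_dims(np.expand_dims(roi, -1), 0)      # shape (1, 64, 64, 1)
        probabilities = emotion_model.predict(roi)[0]          # one probability per emotion
        label = EMOTIONS[int(np.argmax(probabilities))]
        cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 255, 0), 2)
        cv2.putText(frame, label, (x, y - 10), cv2.FONT_HERSHEY_SIMPLEX, 0.6, (0, 255, 0), 2)
    cv2.imshow("Emotion probabilities", frame)   # the real script also plots a probability bar chart in a second window
    if cv2.waitKey(1) & 0xFF == ord("q"):
        break
camera.release()
cv2.destroyAllWindows()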

If you just want to run this demo, the following content can be skipped

  • Train

  • python train_emotion_classifier.py

Dataset:

I have used the FER2013 dataset.

Download it and put the csv in fer2013/fer2013/

FER2013 emotion classification test accuracy: 66%

Credits

This work is inspired by this great work, and Adrian Rosebrock's GitHub code and resources helped me a lot!

Ongoing

Draw emotion faces next to the detected face.

Issues & Suggestions

If you have any issues or suggestions, you can create an issue.

If you like this work, please support it by giving it some stars.


Pose Animation

RESEARCH PROJECT

Pose Animator takes a 2D vector illustration and animates its containing curves in real-time based on the recognition result from PoseNet and FaceMesh. It borrows the idea of skeleton-based animation from computer graphics and applies it to vector characters.

In skeletal animation a character is represented in two parts:

  1. a surface used to draw the character, and

  2. a hierarchical set of interconnected bones used to animate the surface.

In Pose Animator, the surface is defined by the 2D vector paths in the input SVG files. For the bone structure, Pose Animator provides a predefined rig (bone hierarchy) representation, designed based on the keypoints from PoseNet and FaceMesh. This bone structure’s initial pose is specified in the input SVG file, along with the character illustration, while the real time bone positions are updated by the recognition result from ML models.
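To make the skinning idea concrete, here is a tiny NumPy sketch of 2D linear blend skinning, one common formulation of the skeleton-based animation described above. It is only a conceptual illustration of how weighted bone transforms move a surface point; it is not Pose Animator's actual TypeScript code.

# Conceptual 2D linear blend skinning (illustration only, not Pose Animator code).
import numpy as np

def bone_transform(rest_joint, current_joint, rest_angle, current_angle):
    """2x2 rotation + translation that carries points rigged to a bone
    from its rest pose to its current pose."""
    d = current_angle - rest_angle
    c, s = np.cos(d), np.sin(d)
    R = np.array([[c, -s], [s, c]])
    t = current_joint - R @ rest_joint        # keeps the joint pinned to its current position
    return R, t

def skin_vertex(vertex, bones, weights):
    """Blend each bone's rigid transform of the vertex by its skinning weight."""
    out = np.zeros(2)
    for (R, t), w in zip(bones, weights):
        out += w * (R @ vertex + t)
    return out

# Example: a vertex influenced by two bones, with weights summing to 1
rest_a, cur_a = np.array([0.0, 0.0]), np.array([0.0, 0.0])
rest_b, cur_b = np.array([1.0, 0.0]), np.array([1.0, 0.2])
bones = [bone_transform(rest_a, cur_a, 0.0, 0.3),
         bone_transform(rest_b, cur_b, 0.0, -0.1)]
print(skin_vertex(np.array([0.5, 0.0]), bones, weights=[0.6, 0.4]))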

For more details on its technical design please check out this blog post.

Demo 1: Camera feed

The camera demo animates a 2D avatar in real-time from a webcam video stream.

Demo 2: Static image

The static image demo shows the avatar positioned from a single image.

Build And Run

Install dependencies and prepare the build directory:

yarn

To watch files for changes, and launch a dev server:

yarn watch

Platform support

Demos are supported on Desktop Chrome and iOS Safari.

It should also run on Chrome on Android and potentially more Android mobile browsers though support has not been tested yet.

Animate your own design

  1. Download the sample skeleton SVG here.

  2. Create a new file in your vector graphics editor of choice. Copy the group named ‘skeleton’ from the above file into your working file. Note:

    • Do not add, remove or rename the joints (circles) in this group. Pose Animator relies on these named paths to read the skeleton’s initial position. Missing joints will cause errors.

    • However you can move the joints around to embed them into your illustration. See step 4.

  3. Create a new group and name it ‘illustration’, next to the ‘skeleton’ group. This is the group where you can put all the paths for your illustration.

    • Flatten all subgroups so that ‘illustration’ only contains path elements.

    • Composite paths are not supported at the moment.

    • The working file structure should look like this:

[Layer 1]
|---- skeleton
|---- illustration
      |---- path 1
      |---- path 2
      |---- path 3

  4. Embed the sample skeleton in the ‘skeleton’ group into your illustration by moving the joints around.

  5. Export the file as an SVG file.

  6. Open the Pose Animator camera demo. Once everything loads, drop your SVG file into the browser tab. You should be able to see it come to life :D


Tiny GPS Tracker

COMPLETED PROJECT

This project uses TinyCircuits to create a tiny GPS tracking and data logging device. This tutorial is for any skill level; no coding, programming, or soldering is required. Just follow the steps below and you can have your device working in minutes. It is really easy to make a tiny GPS tracking device that doesn't rely on any smart devices and works perfectly fine on its own. A tiny GPS tracker is useful in many aspects of our lives; people can apply it anywhere they want, and we can't yet imagine all the ways it will make life easier, because people are always doing cool things of their own, so it is impossible to predict all those outcomes. Keep playing, learning, and developing things; that is what we are supposed to do on hackaday.io.

Step 2: Software

This project uses the Arduino IDE.

You will also need to install the most recent version of the SPIFlash library. In your Arduino IDE, open your Library Manager (under the 'Sketch' tab) and search for 'SPIFlash'. Click 'Install' to install the library to your IDE, as shown above.

Step 3: Assembling the Boards

Start with the TinyDuino on the bottom of the stack. Add the USB TinyShield, then the Flash Memory TinyShield. The GPS TinyShield goes on top.

With your Mounting Kit, add spacers between the boards on the side opposite the connector. This will make sure your stack stays rigid and prevent your connectors from coming apart if pressure is placed on that side. If you're having difficulty placing them with bare fingers, I recommend a pair of tweezers to make the job easier.

Drop the screws through the holes and the placed spacers, then screw on the bolts on the other side to seal the deal. Finger tightening will be just fine to hold everything in place.

Plug in your Lithium Battery and your assembly is finished!

Step 4: Uploading the Sketch

Opening the .zip file downloaded from GitHub.

Click here to download the .zip file containing the sketch. Save the 'GPS_Tracker' sketch folder as pictured above to any destination you like. Double click on 'GPS_Tracker.ino' to open the IDE.

Ensure that the connection to your TinyDuino is configured properly, turn on your TinyDuino, and hit upload. Open the Serial Monitor to ensure that your device is outputting properly.

What the Serial Monitor should display after upload is complete.

Step 5: Transporting Your Device

For optimal GPS data readings, the sensor at the end of the antenna on top of the stack should be parallel with the ground. (Note that the coiling or bending of the antenna wire will not affect your readings.) This is best achieved by carrying the stack upright in some sort of containment. Pictured above is a potential setup that I took for a test drive - I have the stack upright in an anti-static plastic bag, which could be pinned to a backpack strap or the shoulder of your coat. You can accomplish this same effect in a number of creative ways. Maybe there's a tiny box that can accompany you on your journey!

Step 6: Device Operation

When you power on your device, it will take ten seconds for the GPS module to wake and begin configuring. It usually takes a few minutes for the module to precisely determine your position, so it is recommended that you stay still with the device for a minute or two to obtain the best accuracy. Note that factors such as cloud cover, large buildings, and large land or rock masses in close proximity can affect GPS readings.

While the Summit Metroparks Gorge Trail is beautiful, the large rock structures did have an effect on my GPS data.

The sketch for the device currently specifies that a data point is taken every ten seconds. If you wish, you can change the delay value in the sketch to adjust that.

The value to adjust is the number of seconds, expressed in milliseconds.

To check if your device still has power, watch to see if the LED labeled 'P13' on your TinyDuino is blinking every ten seconds (or whatever delay you set it for). This signals that the device is writing to your Flash Memory TinyShield. If you're concerned about your device losing power on long trips, you can use a standard 1 Amp charging block or battery with a micro-USB cable and connect it to the USB TinyShield to recharge the lithium battery or power the device altogether.

Before powering down your device, make sure you've left enough time for your TinyDuino to write your last data point to memory!

Step 7: Reading the Data

When you first opened the Serial Monitor to verify the operation of your TinyDuino, you saw the following dialog pop up:

The dialog that appears upon turning your device on.

We will now interact with this dialog to retrieve the data from your Flash Memory TinyShield. Send 'y' as shown below to begin read mode:

You can also hit the "enter" key to accomplish this.

We will read the data from our device by sending '1'.

The data displayed here is for example only - your data will look much different.

We can copy and paste the block of strings into Notepad or another plain text editor like so:

Use the shortcut CTRL + C or CMD + C to copy this data.

Save that file as a .txt file. It's ready for the next step! As for your device, if you wish to erase that trip's data from its memory, send '2' to clear all data:

Make sure you've saved all your data before taking this action!

Now we get to the best part - visualizing your GPS data!

Step 8: Converting to Google Maps

For this task, we will be using the software on GPS-Visualizer.com. Huge thanks to their team for keeping this software free of charge, and for making the mapping process simple. (If you want to help keep the software free forever, you can donate here.)

On the home page, use the 'Choose File' button in 'Get started now!' box to select the .txt file that you saved earlier. You can select any output format from the drop-down menu, but I recommend Google Maps, as it is the most dynamic and informative format. Click "Map it" and watch the magic happen!

Yes, it is that easy. There are some advanced settings that you can use to smooth out your data points that they cover in a tutorial here on their site. Otherwise, you can save your results for later, and even share them on social media!

Home Automation


RESEARCH PROJECT

A home automation module that runs on voice commands. The virtual assistant will run the house; it reminds me of J.A.R.V.I.S.

This is a home automation project in which the user gives voice commands to run the house, sometimes more reliably and securely than a single person could.

Voice commands also include scheduling a date and time for turning on a light, a fan, or whatever electronics you want. I have built the virtual assistant with larger datasets and a lot of sensors, so when the camera detects that no one is in the room it can automatically turn off the lights or whatever devices are set as defaults, and the user can give a command before leaving the room not to turn off the AC, and so on.

Computer vision and cloud computing play a huge part in this project, because we cannot run those programs on a small Arduino board. Camera feedback keeps the system constantly updated on which tasks it should perform.
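As a sketch of that Pi-plus-Arduino split, the Python snippet below uses OpenCV's built-in person detector and sends a simple serial command to the Arduino when the room has been empty for a while. The serial port, baud rate, and the LIGHT_ON/LIGHT_OFF protocol are assumptions I made for illustration, not the project's final code.

# Pi-side loop: detect people, tell the Arduino over serial when to switch the lights.
# Port, baud rate, and command strings are assumptions for this sketch.
import time
import cv2
import serial

arduino = serial.Serial("/dev/ttyACM0", 9600, timeout=1)

hog = cv2.HOGDescriptor()
hog.setSVMDetector(cv2.HOGDescriptor_getDefaultPeopleDetector())

camera = cv2.VideoCapture(0)
empty_since = None

while True:
    ok, frame = camera.read()
    if not ok:
        break
    people, _ = hog.detectMultiScale(frame, winStride=(8, 8))
    if len(people) > 0:
        empty_since = None
        arduino.write(b"LIGHT_ON\n")
    else:
        empty_since = empty_since or time.time()
        if time.time() - empty_since > 60:    # room has been empty for a minute
            arduino.write(b"LIGHT_OFF\n")
    time.sleep(1)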

The coolest part, and the one I like the most, is the completely different alarm system: using a basic motor, the assistant will open the curtains and turn on all the lights, along with as loud a sound as you need, just to make sure you wake up.

Home automation or domotics is building automation for a home, called a smart home or smart house.

But this project changes the way that used to be done.

With a central AI that has access to all of the user's devices, the system will understand more, which means it could act as if it has common sense.

Our current project just uses an Arduino Uno board, which is easier to work with. Later on, a Raspberry Pi will be used in place of the Arduino, which will boost the speed and allow it to run multiple tasks at a time.

Everything is possible with the proper knowledge. Working with the Arduino and the other components may be challenging, but if you have time to spend, you're welcome to join this project.

The project requires knowledge of the following:

  • Python

  • Computer Vision (OpenCV)

  • Artificial Intelligence

  • Cloud Computing

  • API

If you're new to this type of project, start by learning Python and then move on to the others. And if you don't want to know how it works, you can just copy the source code and paste it to get the work done.

ML on Raspberry Pi

ONGOING PROJECT

3 Frameworks for Machine Learning on the Raspberry Pi

The revolution of AI is reaching new heights through new mediums. We’re all enjoying new tools on the edge, but what are they? What products and frameworks will fuel the inventions of tomorrow?

If you’re unfamiliar with why Machine Learning is changing our lives, have a read here.

If you’re already excited about Machine Learning and you’re interested in utilizing it on devices like the Raspberry Pi, enjoy!

Simple object detection on the Raspberry Pi

I’ve implemented three different tools for detection on the Pi camera. While it’s a modern miracle that all three work, it’s important for creators to know “how well” because of #perfmatters.

Our three contenders are as follows:

  1. Vanilla Raspberry Pi 3 B+ — No optimizations, just a TensorFlow framework on the device for simple recognition.

  2. Intel’s Neural Compute Stick 2 — Intel’s latest USB interface device for Neural Networks, boasting 8x perf over the first stick! Around $80 USD.

  3. Xnor.ai — A proprietary framework that reconfigures your model to run efficiently on smaller hardware. Xnor’s binary logic shrinks 32-bit floats to 1-bit operations, allowing you to optimize deep learning models for simple devices.

Let’s evaluate all three with simple object detection on a camera!

Vanilla Raspberry Pi 3 B+

A Raspberry Pi is like a small, wimpy Linux machine for $40. It lets you run high-level applications and code on IoT-class devices with ease. Though it sounds like I can basically do laptop machine learning on the device, there's one big gotcha: the RPi has an ARM processor, and that means we'll need to recompile our framework, i.e. TensorFlow, to get everything running.

⚠️ While this is not hard, this is SLOW. Expect this to take a very… very… long time. This is pretty much the fate of anything compiled on the Raspberry Pi.

Setup

Here are all the steps I did, including setting up the Pi camera for object detection. I'm simply including this for posterity. Feel free to skip reading it.

Set up the Pi, then the camera, then edit /boot/config.txt

Add disable_camera_led=1 to the bottom of the file and reboot.

It's best to disable screensaver mode, as some follow-up commands may take hours:

sudo apt-get install xscreensaver

xscreensaver


Then disable screen saver in the “Display Mode” tab.

Now get TensorFlow installed

sudo apt-get update

sudo apt-get dist-upgrade

sudo apt-get update

sudo apt-get install libatlas-base-dev

sudo apt-get install libjasper-dev libqtgui4 python3-pyqt5

pip3 install tensorflow

sudo apt-get install libjpeg-dev zlib1g-dev libxml2-dev libxslt1-dev

pip3 install pillow jupyter matplotlib cython

pip3 install lxml # this one takes a long time

sudo apt-get install python3-tk


OpenCV

sudo apt-get install libtiff5-dev libjasper-dev libpng12-dev

sudo apt-get install libavcodec-dev libavformat-dev libswscale-dev libv4l-dev

sudo apt-get install libxvidcore-dev libx264-dev

sudo apt-get install qt4-dev-tools

pip3 install opencv-python


Install Protobuf

sudo apt-get install autoconf automake libtool curl


Then pull down protobuf and untar it:

https://github.com/protocolbuffers/protobuf/releases

Then cd in and run the following command, which might make the computer unusable for the next 2+ hours. Use Ctrl + Alt + F1 to switch to a terminal-only session and release all the RAM used by the UI. Close the X process with Ctrl + C if needed. You can then run the long-running command. The default username is “pi” and the password is “raspberry”.

make && make check


You can then install simply with

sudo make install

cd python

export LD_LIBRARY_PATH=../src/.libs

python3 setup.py build --cpp_implementation

python3 setup.py test --cpp_implementation

sudo python3 setup.py install --cpp_implementation

export PROTOCOL_BUFFERS_PYTHON_IMPLEMENTATION=cpp

export PROTOCOL_BUFFERS_PYTHON_IMPLEMENTATION_VERSION=3

sudo ldconfig


Once this is done, you can clean up some install crud with sudo apt-get autoremove, delete the tar.gz download, and then finally reboot with sudo reboot now, which will return you to a windowed interface.

Set up TensorFlow

mkdir tensorflow1 && cd tensorflow1

git clone --recurse-submodules https://github.com/tensorflow/models.git

Modify ~/.bashrc to contain a new environment variable named PYTHONPATH, as follows:

export PYTHONPATH=$PYTHONPATH:/home/pi/tensorflow1/models/research:/home/pi/tensorflow1/models/research/slim

Now go to the zoo: https://github.com/tensorflow/models/blob/master/research/object_detection/g3doc/detection_model_zoo.md

We’ll take the ssdlite_mobilenet, which is the fastest! Wget the file and then tar -xzvf the tar.gz result and delete the archive once untarred. Do this in the object_detection folder in your local tensorflow1 folder. Now cd up to the research dir. Then run:

protoc object_detection/protos/*.proto --python_out=.


This converts the object detection .proto files into Python files in the protos folder.

Done Installing!!

Special thanks to Edje Electronics for sharing their wisdom on setup, an indispensable resource for my own setup and code.

Once I got TensorFlow running, I was able to run object recognition (with the provided sample code) on MobileNet at 1 to 3 frames per second.
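For context, the heart of that sample code is a TF1-style frozen-graph inference loop along these lines. The model path is an assumption based on the ssdlite_mobilenet download above, and I'm grabbing frames from a USB webcam via OpenCV here rather than the PiCam, just to keep the sketch short.

# Sketch of frozen-graph inference with the TF1 Object Detection API (model path assumed).
import cv2
import numpy as np
import tensorflow as tf

MODEL_PATH = "ssdlite_mobilenet_v2_coco_2018_05_09/frozen_inference_graph.pb"

graph = tf.Graph()
with graph.as_default():
    graph_def = tf.GraphDef()
    with tf.gfile.GFile(MODEL_PATH, "rb") as f:
        graph_def.ParseFromString(f.read())
    tf.import_graph_def(graph_def, name="")

with tf.Session(graph=graph) as sess:
    image_tensor = graph.get_tensor_by_name("image_tensor:0")
    boxes = graph.get_tensor_by_name("detection_boxes:0")
    scores = graph.get_tensor_by_name("detection_scores:0")

    camera = cv2.VideoCapture(0)
    while True:
        ok, frame = camera.read()
        if not ok:
            break
        rgb = cv2.cvtColor(frame, cv2.COLOR_BGR2RGB)
        b, s = sess.run([boxes, scores],
                        feed_dict={image_tensor: np.expand_dims(rgb, 0)})
        h, w = frame.shape[:2]
        for box, score in zip(b[0], s[0]):
            if score < 0.5:                      # draw only confident detections
                continue
            y1, x1, y2, x2 = box                 # normalized coordinates
            cv2.rectangle(frame, (int(x1 * w), int(y1 * h)),
                          (int(x2 * w), int(y2 * h)), (0, 255, 0), 2)
        cv2.imshow("detections", frame)
        if cv2.waitKey(1) & 0xFF == ord("q"):
            break
    camera.release()
    cv2.destroyAllWindows()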

Vanilla Pi Results

For basic detection, 1 to 3 frames per second isn't bad. Removing the GUI or lowering the camera input quality speeds up detection. This means the tool could be perfectly adequate for simple detection. What a great baseline! Let's see if we can make it better with the tools available.

Intel’s Neural Compute Stick 2

This concept excites me. For those of us without GPUs readily available, training on the edge instead of the cloud, and moving that intense speed to the Raspberry Pi is just exciting. I missed the original stick, the “Movidius”, but from this graph, it looks like I chose a great time to buy!

Setup

My Intel NCS2 arrived quickly and I enjoyed unboxing actual hardware for accelerating my training. That was probably the last moment I was excited.

Firstly, the USB stick takes up a lot of space. You'll want to get a cable to keep it away from the base.

That’s a little annoying but fine. The really annoying part was trying to get my NCS 2 working.

There are lots of tutorials for the NCS by third parties, and following them got me to a point where I thought the USB stick might be broken!

Everything I found on the NCS didn’t work (telling me the stick wasn’t plugged in!), and everything I found on NCS2 was pretty confusing. For a while, NCS2 didn’t even work on ARM processors!

After a lot of false-trails, I finally found and began compiling C++ examples (sorry Python) that only understood USB cameras (sorry PiCam). Compiling the examples was painful. Often the entire Raspberry Pi would become unusable, and I’d have to reboot.

locked up at 81% for 24 hours

The whole onboarding experience was more painful than recompiling Tensorflow on the raw Pi. Fortunately, I got everything working!

The result!?

NCS2 Stick Results

6 to 8 frames per second… ARE YOU SERIOUS!? After all that?

It must be a mistake; let me run the perfcheck project.

10 frames per second…

From videos of the original NCS running Python, I saw around 10 fps… where's the 8x boost? Where's the justification for $80 of hardware attached to a $40 device? To say I was let down by Intel's NCS2 is an understatement. The user experience and the final results were frustrating, to put it lightly.

Xnor.ai

Xnor.ai is a self-contained software solution for deploying fast and accurate deep learning models to low-cost devices. As many discrete logic enthusiasts might have noticed, Xnor is the logical complement of the bitwise XOR operator. If that doesn’t mean anything to you, that’s fine. Just know that the people who created the YOLO algorithm are alluding to the use of this logical operator to compress complex 32-bit computations down to 1-bit by utilizing this inexpensive operation and keeping track of the CPU stack.
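To make the 1-bit idea concrete, here is a toy NumPy illustration of the XNOR trick (my own example, not Xnor.ai's code): once weights and activations are binarized to ±1, a dot product collapses to an XNOR plus a bit count.

# Toy illustration of XNOR-based binary dot products (not Xnor.ai's implementation).
import numpy as np

x = np.sign(np.random.randn(64))        # binarized activations in {-1, +1}
w = np.sign(np.random.randn(64))        # binarized weights in {-1, +1}

dot = int(np.dot(x, w))                 # full-precision dot product

# Same result via bitwise logic: encode +1 -> True, -1 -> False
xb = x > 0
wb = w > 0
matches = int(np.sum(~np.logical_xor(xb, wb)))   # XNOR counts agreeing bits
dot_xnor = 2 * matches - len(x)                  # agreements minus disagreements

assert dot == dot_xnor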

In theory, avoiding such complex calculations required by GPUs should speed up execution on edge devices. Let’s see if it works!

Setup

Setup was insanely easy. I had an object detection demo up and running in 5 minutes. 5 MINUTES!

The trick with Xnor.ai is that, much like the NCS2 Stick, the model is modified and optimized for the underlying hardware fabric. Unlike Intel’s haphazard setup, everything is wrapped in friendly Python (or C) code.

model = xnornet.Model.load_built_in()

That’s nice and simple.

But it means nothing if the performance isn’t there. Let’s load their object detection model.

Again, no complexity: they have one model with no overlay and one with an overlay. Since the others (except perfcheck on the NCS2) ran with overlays, let's use the overlay version.

Xnor.ai Results

JAW… DROPPING… PERFORMANCE. I not only get a stat on how fast inference could work, but I also get an overall FPS with my overlay that blew everything else out of the water.

OVER 12FPS and an inference speed over 34FPS!?

This amazing throughput is achieved with no extra hardware purchase!? I’d call Xnor the winner at this point, but it seems a little too obvious.

I was able to heat up my device and open a browser in the background to get it down to 8+ FPS, but even then, it’s a clear winner!

Xnor hype is real

The only negative I can give you on Xnor.ai is that I have no idea how much it costs. The Evaluation model has a limit of 13,500 inferences per startup.

When I emailed them to ask about pricing, I learned they are just breaking into non-commercial use, so they haven't created a pricing system yet. Fortunately, the evaluation model should be fine for most hobbyists and prototypes.

In Summary:

If you need to take a variety of models into account, you might be just fine setting up your Raspberry Pi from scratch. That makes it a great resource for testing new models and really customizing your experience.

When you’re ready to ship, it’s no doubt that both the NCS2 and the Xnor.ai frameworks speed things up. It’s also no doubt that Xnor.ai outperformed the NCS2 in both onboarding and performance. I’m not sure what Xnor.ai’s pricing model is, but that would be the final factor in what is clearly a superior framework.