This is my walkthrough of this week's assignment. Grab a cup of coffee!
AI was my main goal for the diploma. I have always been fascinated by the concept of AI and machine learning, by the idea of making a robot or software that can interact with you and understand your needs, and, honestly, always a little afraid of the idea of robots overcoming and replacing humans.
I always wanted to learn AI but never had the guts to start it on my own!
With the help of our instructor Mohamed Askar, I finally took the first step.
The first step of AI is how it gets its information. In this case we will use image and video detection: when a specific situation is detected, the program will perform a certain task.
We are using Python. Fortunately, we have OpenCV.
OpenCV is a library of programming functions mainly aimed at real-time computer vision, originally developed by Intel.
It has some very useful tools that we can use.
First, we have to install a Python IDE. I chose Thonny; it is quite easy and very handy.
Download Thonny, install it, then open it in standard mode.
From Tools > Manage packages, search for and install these three packages:
OpenCV-python
numpy
matplotlib
After you have installed them, it is time to write some simple code.
We use the numpy library because it contains the randint function, which generates random numbers.
So this code generates a new random black-and-white image every second and displays it with OpenCV's imshow function.
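Here is a minimal sketch of such a demo (my own reconstruction, not necessarily the exact code from class). The random-frame part is wrapped in a small function; the display loop only runs when the script is executed directly:

```python
import numpy as np

def random_frame(height=480, width=640):
    """Return a random black-and-white image: every pixel is 0 or 255."""
    return np.random.randint(0, 2, (height, width), dtype=np.uint8) * 255

if __name__ == "__main__":
    import cv2
    while True:
        cv2.imshow("Random noise", random_frame())
        # waitKey(1000) refreshes roughly every second; press 'q' to quit
        if cv2.waitKey(1000) & 0xFF == ord("q"):
            break
    cv2.destroyAllWindows()
```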
Using more OpenCV functions:
imread to load an image from your computer (preferably in the same folder as the Thonny .py file)
Canny to detect edges
GaussianBlur to smooth the picture
We will be using a Haar cascade, as it is the easiest among the alternatives (CNN, HOG, and others).
We need to put the Haar cascade files in our Thonny .py directory.
We will use some new functions such as :
cv2.CascadeClassifier to load a cascade file
cv2.VideoCapture to start streaming from the camera
cap.read to grab the current frame
cv2.cvtColor to change the color space of the frame
classifier.detectMultiScale to apply the classifier to the frame
cv2.circle to draw a circle on the frame
cv2.rectangle to draw a rectangle on the frame
cv2.imwrite to save the frame to your computer
Askar added a simple piece of code that changes the rectangle color when you move to a different quarter of the frame, which means you can switch between four different colors.
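The quarter trick boils down to comparing the detection center with the frame's midpoint, which is (320, 240) for a 640x480 frame. A hypothetical helper (my naming, not Askar's code) could look like this:

```python
def quadrant_color(center, width=640, height=480):
    """Return a BGR color depending on which quarter of the frame
    the detection center falls in (image y grows downward)."""
    x, y = center
    if x > width // 2 and y > height // 2:
        return (255, 0, 0)      # bottom-right: blue
    elif x > width // 2:
        return (0, 255, 0)      # top-right: green
    elif y < height // 2:
        return (0, 0, 255)      # top-left: red
    return (255, 0, 255)        # bottom-left: magenta
```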
The Raspberry Pi is quite different; if you try to run the previous code on it, it may run once in a hundred tries!
For the Raspberry Pi, we will follow a different process and install more libraries.
We use the picamera and picamera.array libraries.
Using new functions such as:
PiCamera to use the Raspberry Pi camera
PiRGBArray to capture the frame
camera.capture_continuous, which makes the code run forever: it requests a new frame each time it finishes with the previous one
truncate(0) to clear the stream and keep the process clean
Now open the terminal and run these lines one by one:
sudo apt-get install libhdf5-dev libhdf5-serial-dev libhdf5-100
sudo apt-get install libqtgui4 libqtwebkit4 libqt4-test
sudo apt-get install libatlas-base-dev
sudo apt-get install libjasper-dev
wget https://bootstrap.pypa.io/get-pip.py
sudo python3 get-pip.py
sudo pip3 install opencv-contrib-python
sudo pip3 install opencv-python
Now, combining week 8 with what we have learned so far:
We can make a smart emotional assistant that looks at you when you move.
The code is as follows:
from picamera.array import PiRGBArray
from picamera import PiCamera
import time
import cv2
import RPi.GPIO as GPIO

# Camera setup
camera = PiCamera()
camera.resolution = (640, 480)
camera.framerate = 15
rawCapture = PiRGBArray(camera, size=(640, 480))

# The frontal-face Haar cascade file must sit next to this script
face_cascade = cv2.CascadeClassifier('haarcascade_frontalface_default.xml')

# Servo setup: two servos on pins 11 and 13, driven at 50 Hz
GPIO.setmode(GPIO.BOARD)
GPIO.setup(11, GPIO.OUT)
GPIO.setup(13, GPIO.OUT)
servo1 = GPIO.PWM(11, 50)
servo2 = GPIO.PWM(13, 50)
servo1.start(0)
servo2.start(0)
time.sleep(0.1)

for frame in camera.capture_continuous(rawCapture, format="bgr", use_video_port=True):
    frame = frame.array
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    faces = face_cascade.detectMultiScale(gray, 1.3, 8)
    for (x, y, w, h) in faces:
        center = (int(x + w / 2), int(y + h / 2))
        # Pick the rectangle color from the quarter of the frame the face is in
        if center[0] > 320 and center[1] > 240:
            cv2.rectangle(frame, (x, y), (x + w, y + h), (255, 0, 0), 2)
        elif center[0] > 320 and center[1] < 240:
            cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 255, 0), 2)
        elif center[0] < 320 and center[1] < 240:
            cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 0, 255), 2)
        else:
            cv2.rectangle(frame, (x, y), (x + w, y + h), (255, 0, 255), 2)
        print(center)
        cv2.circle(frame, center, 1, (0, 255, 0), 2)
        # Map the face center from pixels to servo angles (0-180 degrees)
        angle1 = (center[0] / 640) * 180
        angle2 = (center[1] / 480) * 180
        # A 2-12 % duty cycle at 50 Hz sweeps a hobby servo over 0-180 degrees
        servo1.ChangeDutyCycle(2 + (angle1 / 18))
        servo2.ChangeDutyCycle(2 + (angle2 / 18))
        time.sleep(0.2)
        # Stop sending pulses so the servos do not jitter
        servo1.ChangeDutyCycle(0)
        servo2.ChangeDutyCycle(0)
        time.sleep(0.3)
        print(angle1, angle2)
    cv2.imshow("Frame", frame)
    key = cv2.waitKey(1) & 0xFF
    # Clear the stream in preparation for the next frame
    rawCapture.truncate(0)
    if key == ord("q"):
        break
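The servo math in that loop is worth unpacking: the face center is scaled from pixels to 0-180 degrees, and each angle is mapped to a 2-12 % duty cycle, which at 50 Hz corresponds to pulses of roughly 0.4-2.4 ms. Isolated into hypothetical helpers (my naming, for illustration):

```python
def center_to_angles(center, width=640, height=480):
    """Scale an (x, y) pixel position to a pair of 0-180 degree angles."""
    return (center[0] / width * 180, center[1] / height * 180)

def angle_to_duty(angle):
    """Map 0-180 degrees to a 2-12 % duty cycle (0.4-2.4 ms pulse at 50 Hz)."""
    return 2 + angle / 18
```

So a face dead center at (320, 240) gives (90, 90), which both servos receive as a 7 % duty cycle.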
A video
What I learned this week is...
The first steps of AI: how a program gets visual information
How to use the Raspberry Pi camera
How to use Haar cascade classifiers