This is my walk-through for this week's assignment. Grab a cup of coffee!
Let's start my last journey (week ten).
It deals with several new concepts:
Components and connections: Raspberry Pi, camera, and servo
Software, coding, and library: Python programming (in the Thonny IDE) and the OpenCV library. In Thonny: Tools > Manage packages > search for opencv-python (cv2) > Install. A quick import check is sketched right after this list.
Detection: face center detection
Testing:
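Before anything else, here is a minimal sanity check (my own suggestion, not part of the assignment code) to confirm that the OpenCV package installed through Thonny actually imports:

import cv2                 # if the Thonny install worked, this import succeeds
print(cv2.__version__)     # prints the installed OpenCV version, e.g. 4.x.x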
As they say, a simple explanation comes with a simple example. That is the goal today.
Short intro:
I will design and program a device to be artificially intelligent (visually intelligent) using a computer vision technique. My Raspberry Pi takes a video input through the camera, the processing then detects, locates, and tracks the center of my face, and based on the changing location of that center the servo motor points toward it.
Points covered in this documentation:
Processor (the brain): a Raspberry Pi 4 board runs my computer vision code.
Input & output: the input is a live video feed from the camera connected to the Raspberry Pi board, and the output component is a servo motor.
Wiring diagram of my components is provided.
Programming: the code is written in Python with the OpenCV library, and it should do the following:
Detection: identify the face and its center.
Localization: calculate and store the center point of the face.
Tracking: follow the center as it changes its position in space.
Actuation: control an output device based on the tracking data, i.e. change the angle of a servo motor.
https://www.dexterindustries.com/howto/installing-the-raspberry-pi-camera/
Controlling my laptop camera: capture video from the camera and convert the image to gray. Detect the eyes, find the center of each eye, and color each eye center with a different color. A sketch of this test is shown below.
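This is a minimal sketch of that laptop-camera test (my reconstruction, not the exact code I ran). It assumes the opencv-python pip package, which ships the Haar XML files and exposes their folder as cv2.data.haarcascades.

import cv2

eye_cascade = cv2.CascadeClassifier(cv2.data.haarcascades + 'haarcascade_eye.xml')
cap = cv2.VideoCapture(0)                          # 0 = default laptop webcam

while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)  # gray image is easier to process
    eyes = eye_cascade.detectMultiScale(gray, 1.3, 8)
    colors = [(0, 0, 255), (255, 0, 0)]             # red for one eye, blue for the other (BGR)
    for i, (x, y, w, h) in enumerate(eyes[:2]):
        center = (x + w // 2, y + h // 2)
        cv2.circle(frame, center, 3, colors[i], 2)  # mark each eye center with its own color
    cv2.imshow("eyes", frame)
    if cv2.waitKey(1) & 0xFF == ord('q'):
        break

cap.release()
cv2.destroyAllWindows()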
Effects on the images: the frames are converted to black and white (grayscale) at a size of 640 x 480, the edges are detected, and the center of the face is detected.
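Here is a small sketch of the image-effects part only (grayscale plus edge detection on one captured frame); the Canny thresholds are my assumption, and the face-center part is covered by the main code further down.

import cv2

cap = cv2.VideoCapture(0)
ok, frame = cap.read()
cap.release()
if ok:
    frame = cv2.resize(frame, (640, 480))            # fixed working size
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)   # black-and-white version
    edges = cv2.Canny(gray, 100, 200)                # edge map (threshold values are a guess)
    cv2.imshow("gray", gray)
    cv2.imshow("edges", edges)
    cv2.waitKey(0)
    cv2.destroyAllWindows()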
Problem: the main problem was with the cv2 library and its cvtColor function (the transform of a color image into gray) together with the Haar cascade file.
Solved the problem: don't forget to extract the haar_cascade XML file and put it in the same folder as the code. I restarted my laptop, tried again, and it worked.
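A small check I would suggest for catching this cascade-file problem early: if the XML file is not next to the script, the classifier loads empty and detection silently fails, so test for that explicitly.

import cv2

face_cascade = cv2.CascadeClassifier('haarcascade_frontalface_default.xml')
if face_cascade.empty():
    raise FileNotFoundError("haarcascade_frontalface_default.xml must be in the same folder as this script")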
Eye detection, centering, tracking, and localization in video: see the following photo. I divided the camera frame into two halves, and the eye detected in each half is drawn in a different color (red and blue).
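The color-by-half logic behind that photo can be sketched as a small helper (draw_eye_center is a hypothetical name I use here for illustration): with a 640-pixel-wide frame, a center left of x = 320 is drawn red and one to the right is drawn blue.

import cv2

def draw_eye_center(frame, center, frame_width=640):
    # Red (BGR 0,0,255) if the center lies in the left half of the frame, blue otherwise.
    color = (0, 0, 255) if center[0] < frame_width // 2 else (255, 0, 0)
    cv2.circle(frame, center, 3, color, 2)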
Last point: detecting and tracking the center of the face with the servo. The following figure, taken from the servo's data sheet, shows the PWM period. We need some libraries; see the first eleven lines of the code below (the imports and setup). I added comments in the code to make it easy to understand. Warning: don't forget to put the haar_cascade file in the same folder. See the code at the last link below. Each time the rectangle color changes, it is because the center point moved as a result of my movement.
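To make the duty-cycle formula in the code concrete, here is a small worked sketch of the mapping (the servo runs on 50 Hz PWM, so a 20 ms period; the 2-12 % duty range for 0-180 degrees is the usual hobby-servo convention and my assumption for this servo; center_to_duty is just an illustrative name):

# Map a face-center x coordinate (0..640) to a servo duty cycle at 50 Hz PWM.
# duty = 2 + angle / 18 maps 0 deg -> 2 % (0.4 ms pulse) and 180 deg -> 12 % (2.4 ms pulse).
def center_to_duty(x, frame_width=640):
    angle = (x / frame_width) * 180          # face in the middle (x = 320) gives 90 degrees
    return 2 + angle / 18                    # 90 degrees -> 7 % duty, i.e. a 1.4 ms pulse

print(center_to_duty(320))                   # 7.0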
I communicated with some of my users to get more details and their ideas. I am a good listener.
Problem: installing the cv2 library (during the installation the program kept loading and then stopped responding), and the same happened with the cvtColor step.
Potential solution: the common solution is to restart the program; then it works and the library installs.
Face detection, centering, and tracking with the servo (the code):
from picamera.array import PiRGBArray
from picamera import PiCamera
import time
import cv2
import RPi.GPIO as GPIO

camera = PiCamera()                        # set up the Pi camera for capturing the video
camera.resolution = (640, 480)
camera.framerate = 15
rawCapture = PiRGBArray(camera, size=(640, 480))

face_cascade = cv2.CascadeClassifier('haarcascade_frontalface_default.xml')  # Haar classifier used to detect the face (keep the XML in the same folder)

GPIO.setmode(GPIO.BOARD)
GPIO.setup(11, GPIO.OUT)                   # identify the pins: pin 11 drives the servo
GPIO.setup(13, GPIO.OUT)                   # pin 13 is set up for a second servo (not used below)
servo1 = GPIO.PWM(11, 50)                  # 50 Hz PWM for the hobby servo
servo1.start(0)                            # start servo one at angle zero
time.sleep(0.1)

for frame in camera.capture_continuous(rawCapture, format="bgr", use_video_port=True):
    frame = frame.array
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)       # convert the image to gray to make processing easier
    faces = face_cascade.detectMultiScale(gray, 1.3, 8)   # detect faces in the frame; tuning the two values can increase or decrease the accuracy
    for (x, y, w, h) in faces:
        center = (int(x + w / 2), int(y + h / 2))         # (x, y) is the corner and (w, h) the size of the rectangle drawn around the face; this gives the face center
        # change the rectangle color according to the position of the face on the screen (the screen is divided into 4 quarters)
        if center[0] > 320 and center[1] > 240:
            cv2.rectangle(frame, (x, y), (x + w, y + h), (255, 0, 0), 2)
        elif center[0] > 320 and center[1] < 240:
            cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 255, 0), 2)
        elif center[0] < 320 and center[1] < 240:
            cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 0, 255), 2)
        else:
            cv2.rectangle(frame, (x, y), (x + w, y + h), (255, 0, 255), 2)
        print(center)                                      # print the center; the values are shown under the code
        cv2.circle(frame, center, 1, (0, 255, 0), 2)       # mark the center with a small green circle

        # control the movement of the servo from the center position
        cx = center[0]
        cy = center[1]
        angle1 = (cx / 640) * 180
        angle2 = (cy / 480) * 180
        servo1.ChangeDutyCycle(2 + (angle1 / 18))
        time.sleep(0.2)
        servo1.ChangeDutyCycle(0)                          # stop sending pulses so the servo does not jitter
        time.sleep(0.3)
        print(angle1, angle2)                              # show the angles of the servo (see the orange arrow)

    cv2.imshow("Frame", frame)
    key = cv2.waitKey(1) & 0xFF
    rawCapture.truncate(0)                                 # clear the buffer for the next frame
    if key == ord("q"):                                    # press q to quit cleanly
        break

servo1.stop()
GPIO.cleanup()
cv2.destroyAllWindows()
What I learned this week is...
I learned to deal with the Raspberry Pi for the second time and to reach this step. I thank Allah, and I want to thank our instructors.
Learning the Python language.
Learning many functions and the importance of the Raspberry Pi, which can apply computer vision to tasks such as quality inspection in factories (checking size, center, and defects in the product).