Attach the USB camera to the meArm gripper. Make a copy of the FaceTracker demo program from OpenCV. Modify the OpenCV program so that the images are reduced in size by at least a factor of 2 in both dimensions right after they are acquired from the camera. If the code finds a face, it draws rectangles around the face and around the eyes. Take a look at the rectangles the code finds and draws on the screen. You are interested in the largest rectangle that is drawn. It is described by x1,y1,x2,y2, which are the coordinates of opposite corners. The center of the face is therefore located at ((x1+x2)/2, (y1+y2)/2). If a face is found, len(rects) is not zero. x1 = rects[0,0]; y1 = rects[0,1]; x2 = rects[0,2]; y2 = rects[0,3]; or similar.
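The two steps above (shrinking the frame and picking the largest rectangle) can be sketched as follows. This is only an illustration: `downscale2` uses plain NumPy slicing as a stand-in for the cv.resize call the assignment asks for, and `largest_rect` is a hypothetical helper name, not something from the OpenCV demo.

```python
import numpy as np

def downscale2(img):
    # Factor-of-2 reduction in both dimensions via slicing
    # (the assignment uses cv.resize; this is an equivalent sketch)
    return img[::2, ::2]

def largest_rect(rects):
    # rects rows are (x1, y1, x2, y2); pick the one with the largest area
    rects = np.asarray(rects)
    areas = (rects[:, 2] - rects[:, 0]) * (rects[:, 3] - rects[:, 1])
    return rects[np.argmax(areas)]
```

Note that the demo's detect() already converts (x, y, w, h) rectangles to corner coordinates, so the area computation above works directly on its output.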
You will want the meArm to move the camera until the face is in the center of the image (size(image) gives you the dimensions). You will need to map the x,y image coordinates to the x,y,z coordinates of the meArm.
You compute the difference between the center of the face and the center of the image; that gives the offset you would like the meArm to move.
Center = (x1+x2)/2 ... or similar
Offset = size(img)/2 - Center ... or similar
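The center and offset formulas above can be written as a small function. A minimal sketch, assuming the rectangle is in corner form (x1, y1, x2, y2) and the image is a NumPy array whose shape is (rows, cols), i.e. (height, width):

```python
def face_offset(rect, img_shape):
    x1, y1, x2, y2 = rect
    cx = (x1 + x2) / 2          # face center, x
    cy = (y1 + y2) / 2          # face center, y
    h, w = img_shape[:2]        # NumPy shape is (rows, cols) = (height, width)
    # Positive offset means the face is left of / above the image center
    return (w / 2 - cx, h / 2 - cy)
```

Watch the axis order: a common bug is treating shape[0] as the width when it is actually the height.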
Before you adjust the servos you should make sure that the new position is within the reachable range of the meArm; otherwise your algorithm might drive the arm into its end positions. If the code does not work, there might be simple mistakes: you want to move to a larger value in the x direction but you move to a smaller one, or you move the arm in the y direction when you want to move along z. You can solve those errors by trial and error. It is important to make sure you do not position the meArm outside the range it can reach by testing the desired position first (check meArm.py for the isreachable function).
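One way to guard the servos is to reject any target the arm cannot reach before commanding it. The sketch below uses a hypothetical `is_reachable` box check with assumed millimeter limits as a stand-in for the real test in meArm.py; substitute the actual isreachable function and your arm's true workspace.

```python
def is_reachable(x, y, z):
    # Stand-in for meArm.py's reachability test; the limits below are
    # assumed placeholder values, NOT the real meArm workspace
    return 0 <= x <= 200 and -150 <= y <= 150 and 0 <= z <= 180

def safe_target(x, y, z, current):
    # Only accept the new target if the arm can actually reach it;
    # otherwise stay where we are instead of driving into an end stop
    if is_reachable(x, y, z):
        return (x, y, z)
    return current
```

Calling safe_target each loop iteration means an out-of-range request simply leaves the arm at its current position.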
The pixel offset in the image is not equivalent to the distance in mm that the meArm moves. Since the program runs in a loop, it will simply keep adjusting the position until the offset is zero, at which point it no longer moves the arm. You can multiply the offset by a gain if the arm does not move fast enough.
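This is a simple proportional controller: each frame, move a fraction of the remaining offset, and the arm converges on the face without ever needing an exact pixels-to-mm conversion. A minimal one-axis sketch (the gain value is an assumption to be tuned by trial and error):

```python
GAIN = 0.2  # assumed starting value; increase if the arm is sluggish

def update_position(pos, offset, gain=GAIN):
    # Move a fraction of the remaining offset each frame; when the face
    # is centered the offset is zero and the arm stops moving
    return pos + gain * offset

# Example: the position closes in on a target 100 units away
pos, target = 0.0, 100.0
for _ in range(50):
    pos = update_position(pos, target - pos)
```

Too large a gain makes the arm overshoot and oscillate around the face; too small a gain makes it crawl.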
Below is Pete's pseudo-code for the Poor Man's FaceTracker:
#!/usr/bin/env python
'''
face detection using haar cascades
USAGE:
facedetect.py [--cascade <cascade_fn>] [--nested-cascade <cascade_fn>] [<video_source>]
'''
# Python 2/3 compatibility
from __future__ import print_function
import numpy as np
import cv2 as cv
# YOU NEED TO IMPORT YOUR PWMTEST LIBRARIES HERE
# things like time, Adafruit_MotorHAT, Adafruit_DCMotor
# If you use mearm functions you need to make sure you copied all mearm support files such as mearm.py and kinematics.py to the current project directory.
# local modules
# you will need to have video.py and common.py from the python examples directory copied to your working directory.
from video import create_capture
from common import clock, draw_str
# Lines starting with def are defining a 'function()'
# This is a good place to put the parts of your code from the simple PWMTest
## mh = Adafruit_MotorHAT(addr=0x60)
## mh._pwm.setPWMFreq(50)
# Insert any necessary functions to run the PWMTest
def detect(img, cascade): # This is the function that detects the rects
    rects = cascade.detectMultiScale(img, scaleFactor=1.3, minNeighbors=4, minSize=(30, 30),
                                     flags=cv.CASCADE_SCALE_IMAGE)
    if len(rects) == 0:
        return []
    rects[:,2:] += rects[:,:2]  # convert (x, y, w, h) to corner form (x1, y1, x2, y2)
    return rects
def draw_rects(img, rects, color): # This just draws the rectangles
    for x1, y1, x2, y2 in rects:
        cv.rectangle(img, (x1, y1), (x2, y2), color, 2)
if __name__ == '__main__': # This is the main program that is run
    import sys, getopt
    print(__doc__)
    args, video_src = getopt.getopt(sys.argv[1:], '', ['cascade=', 'nested-cascade='])
    try:
        video_src = video_src[0]
    except:
        video_src = 0
    args = dict(args) # This is to check if you're reading the comments, please delete
    cascade_fn = args.get('--cascade', "data/haarcascades/haarcascade_frontalface_alt.xml")
    nested_fn = args.get('--nested-cascade', "data/haarcascades/haarcascade_eye.xml")
    cascade = cv.CascadeClassifier(cascade_fn)
    nested = cv.CascadeClassifier(nested_fn)
    cam = create_capture(video_src, fallback='synth:bg=samples/data/lena.jpg:noise=0.05')
    # This is a good place to put the start position of your motor pulse width
    ## wx = 1500 # this is the variable pulse width for the x direction
    ## wy = 1500 # this is the variable pulse width for the y direction
    # This would be a great place to set the gripper and shoulder servos in
    # place if you need them engaged to hold the camera and shoulder up
    while True: # This is the main loop
        ret, img = cam.read()
        # You need to insert your cv.resize here
        # I would strongly recommend setting the output to a 300x300px-ish array
        # cv.resize works something like this:
        # img = cv.resize(img, None, fx=0.5, fy=0.5)  # halve both dimensions
        gray = cv.cvtColor(img, cv.COLOR_BGR2GRAY)
        gray = cv.equalizeHist(gray)
        t = clock()
        # this is the main face detection function
        rects = detect(gray, cascade)
        vis = img.copy()
        draw_rects(vis, rects, (0, 255, 0))
        wxmax = np.size(img, 1)  # image width (axis 1 is columns)
        wymax = np.size(img, 0)  # image height (axis 0 is rows)
        if len(rects) >= 1:
            x1 = rects[0, 0]
            y1 = rects[0, 1]
            x2 = rects[0, 2]
            y2 = rects[0, 3]
            # If you add or subtract pixels from the image center you make
            # a buffer region in the center to stop the meArm from continuously
            # trying to move the rectangle to the center
            # Here is a good place to put a boundary condition for the motor:
            # keep the pulse width "wx" between the servo's minimum and maximum,
            # then just use wx = wx +/- a number
            if (x1 + x2)/2 < wxmax/2:
                wx = wx + 10  # or wx - 10, depending on how your servo is mounted
                if wx > 2500:  # assumed upper pulse-width limit; adjust for your servo
                    wx = 2500
            if (x1 + x2)/2 > wxmax/2:
                wx = wx - 10  # opposite sign of the case above
                if wx < 500:  # assumed lower pulse-width limit; adjust for your servo
                    wx = 500
            # here you should do the same thing for the y direction with "wy":
            # the same two if-statements, comparing (y1 + y2)/2 against wymax/2,
            # with the opposite sign and a boundary condition on the minimum
            # and maximum of that servo
            # ...
            # Last you must set the servo pulses
            set_servo_pulse(channel_base, wx)   # channel_base: your base-servo channel number
            set_servo_pulse(channel_elbow, wy)  # channel_elbow: your elbow-servo channel number
        dt = clock() - t
        draw_str(vis, (20, 20), 'time: %.1f ms' % (dt*1000))
        cv.imshow('facedetect', vis)
        # if you want snapchat filters on facedetect you need more libraries installed
        if cv.waitKey(5) == 27:  # ESC key quits
            break
    cv.destroyAllWindows()