Objective: Design a product that can autonomously plug in an electric vehicle without any user intervention (potentially turning the final product into a kit or open source project)
Goals:
Low cost
Reliable
As this is a rather challenging project, one of the core ideas is to always solve the hardest problem first. This reduces wasted cost and time, both of which are extremely valuable in engineering. For this project, the two most challenging components are charge door detection and kinematic control of a SCARA arm. For this reason, each subsequent prototype focuses on tackling only one of these problems at a time. Once one problem is close to being solved, the other is addressed, back and forth, until a robust solution is found.
The first prototype (C-1) was a cartesian robot, with the goal of building a quick and cheap frame to test rudimentary image processing. The second prototype (S-1) was a SCARA arm in order to begin testing methods of inverse kinematics and basic control schemes. The third prototype (C-2) was another cartesian robot, used to test a more unique magnet-based door detection method as a fallback technique in case optical solutions proved to be unsuitable. The fourth and current prototype (S-2) takes the learning from all previous designs and combines it into a robot that is nearing a fully functional prototype.
The initial thought was to use two ball screws/threaded rods with stepper motors to accurately maneuver the arm. An important consideration was that the device should not block the passenger doors from opening, so it has to remain in a "parked" position when not in use. For this reason, a nontraditional approach was taken in which the horizontal axis itself extends and retracts: instead of the horizontal motor being stationary and the gantry moving, the motor moves with the rest of the assembly while the gantry is stationary. This can be better understood from the images below:
The most challenging part of this project was determining a method by which the device is able to locate and maneuver to the vehicle charge door, which this basic arm was used to test. The initial thought was to utilize image recognition. In order to quickly test the viability of this method, a readily available solution, the PixyCam, was purchased. This device has an on-board processor that uses a hue-based color tracking algorithm to determine the location of an object in a frame (in this case, a distinct marker on the charge door). The PixyCam is then able to provide the coordinates of the object to a microcontroller over I2C.
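As a rough illustration, polling the PixyCam from an Arduino looks like the sketch below. This is a minimal example assuming the standard Pixy Arduino library's I2C variant (PixyI2C.h); the marker's color signature is taught separately in PixyMon, and the exact code used on the prototype differed.
//Minimal sketch, assuming the standard Pixy Arduino library (PixyI2C.h)
#include <Wire.h>
#include <PixyI2C.h>

PixyI2C pixy;

void setup() {
  Serial.begin(115200);
  pixy.init();
}

void loop() {
  uint16_t blocks = pixy.getBlocks(); //number of tracked color blocks
  if (blocks > 0) {
    //Block 0 is the largest detection; x and y are frame coordinates
    Serial.print("x: ");
    Serial.print(pixy.blocks[0].x);
    Serial.print(" y: ");
    Serial.println(pixy.blocks[0].y);
  }
  delay(50);
}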
Upon testing in environments with different brightness and color temperatures, it was evident that the camera struggled to consistently track objects. To remedy this, a light color and brightness sensor, along with a neutral white LED and an RGB LED (both COB), were added to the system to stabilize the lighting conditions in the camera's frame. Although this mitigated the issues significantly, it still did not meet the minimum reliability metrics.
In terms of the scraped-together mechanical build, the bushings used had too much play, which caused the system to bind at full extension due to the large cantilever (the center of mass of the horizontal axis sits far from the pivot). Furthermore, given the large cantilever, there were high stresses on the piece connecting the two stages. This part was prototyped in PLA and cracked very quickly. A more ductile plastic such as ABS could have been used, but quick FEA showed it would still not be strong enough given this geometry. Manufacturing a part like this out of metal would have required less accessible manufacturing methods such as SLS 3D printing or CNC milling.
Since this entire system was far more constrained in its usage (i.e., designed for a very specific purpose) than industrial robotic arms, an unconventional approach could be used. Instead of using a motor at the pivot, linear actuators (with feedback from an encoder or potentiometer at the arm pivots) could be used to manipulate the position of the arm. The benefit of linear actuators is their very high force capability due to the internal leadscrew (which provides a high gear ratio and prevents back-driving).
To inform the design of the arm, SolidWorks was used in conjunction with MATLAB to determine how to get the most arm range from a given linear actuator stroke (taken from the linear actuator's datasheet).
MATLAB Code
close all;
%----------------------------------------------
%define all constants
h = 293;
ya = 450;
za = 300;
la = 150;
xa = 428:1:732;
a = za + la;
yb = 400;
zb = 350;
lb = 50;
xb = 428:1:732;
eb = 100;
b = zb + eb;
%----------------------------------------------
%Calculate all thetaA and thetaB given constraints of linear actuator
thetaA = acosd((xa.^2-ya^2-za^2)/(-2*ya*za));
thetaB = acosd((xb.^2-yb^2-zb^2)/(-2*yb*zb));
%----------------------------------------------
%Calculate the height of the second pivot (due to actuator 1)
ha = (za + la)*sind(thetaA);
%----------------------------------------------
%Calculate which values of thetaB would be theoretically required to keep
%the end effector at constant height
thetaCheck = asind((h - ha)/b) + thetaA;
%----------------------------------------------
%Find the overlap between the achievable thetaB values (thetaB) and the
%required thetaB values (thetaCheck)
if(min(thetaCheck) < min(thetaB))
    lower = thetaCheck;
    upper = thetaB;
else
    lower = thetaB;
    upper = thetaCheck;
end
minFind = min(upper);
for i = 1 : length(lower)
    if(lower(1, i) > minFind)
        lowerBound = lower(1, i);
        break;
    end
end
if(max(thetaCheck) < max(thetaB))
    lower = thetaCheck;
    upper = thetaB;
else
    lower = thetaB;
    upper = thetaCheck;
end
maxFind = max(lower);
for j = length(upper) : -1 : 1
    if(upper(1, j) < maxFind)
        upperBound = upper(1, j);
        break;
    end
end
%----------------------------------------------
%Find the maximum and minimum values for thetaA and thetaB that would
%result in the two extremes (most extended and most contracted)
thetaAmin = thetaA(1, i);
thetaAmax = thetaA(1, j);
thetaBmin = thetaCheck(1, i);
thetaBmax = thetaCheck(1, j);
%----------------------------------------------
%Calculate the total range that the arm is able to reach with the given
%constraints
extmax = b * cosd(thetaAmax - thetaBmax) - a * cosd(thetaAmax);
extmin = b * cosd(thetaAmin - thetaBmin) - a * cosd(thetaAmin);
totalext = extmax - extmin;
disp(totalext);
The same variables were used in SolidWorks (via equations) so that design changes could be made easily.
All of the static parts were then manufactured using a waterjet and assembled.
Controlling this arm was challenging. The goal was to give the arm an (x, y) coordinate for the endpoint in mm (with respect to the lower pivot) and have the arm move to that position in a straight line from its initial position. To do this, a MATLAB script was created that generates two surfaces each for the position and velocity of the linear actuators needed to get from point A to point B. This approach was selected because the kinematic relationship between the two actuator extensions and the x, y position of the endpoint is a system of nonlinear equations that could not be solved analytically. Each surface represents the analog value from a potentiometer mounted at a pivot, corresponding to that joint's angle, while the x and y axes of the plot represent the x and y axes of the endpoint. If, for example, the arm had to get from (x0, y0) to (x, y), a line would be drawn on the xy plane connecting those two points. This line would then be projected onto each of the position and velocity surfaces, producing the position and velocity profiles that both actuators would have to follow for the endpoint to move in a straight line.
MATLAB Code for Position
clear;
load vars.mat;
syms thetaA thetaB x y;
A = 450;
B = 550;
x_eq = A*sind(thetaA) - B*sind(thetaA - thetaB);
y_eq = B*cosd(thetaA - thetaB) - A*cosd(thetaA);
xstart = 356;
xend = 400; %881
ystart = 467-5;
yend = 467+5;
zval_A = zeros(xend-xstart+1,yend-ystart+1);
zval_B = zeros(xend-xstart+1,yend-ystart+1);
xval = zeros(xend-xstart+1,1);
yval = zeros(yend-ystart+1,1);
for x = xstart:xend
    for y = ystart:yend
        [solA, solB] = vpasolve([x_eq == x, y_eq == y], [thetaA, thetaB]);
        zval_A(x-xstart+1,y-ystart+1) = double(solA);
        zval_B(x-xstart+1,y-ystart+1) = double(solB);
        xval(x-xstart+1,1) = x;
        yval(y-ystart+1,1) = y;
    end
end
zval_A = zval_A + 360;
s = surf(yval,xval,zval_A);
hold on
t = surf(yval,xval,zval_B);
s.EdgeColor = 'none';
t.EdgeColor = 'none';
Upon inspection of the MATLAB plot, it was clear that to get millimeter-level precision at the endpoint, 0.1 degree precision would be needed from the potentiometer. Unfortunately, this was impossible to achieve with the Arduino's built-in ADC. The Arduino Uno has a 10-bit ADC, which can read values from 0 to 1023, but with the way the potentiometers were connected, they did not sweep over the entire range. The usable ADC range ended up being from 402 to 794, only about 390 counts over the joint's travel, which could not provide the 0.1 degree precision needed.
The two alternatives were moving to an encoder on the motor (using interrupts in the software) or using a higher resolution external ADC. After weighing the pros and cons of both, switching over to an encoder made the most sense: it allows for far higher precision given the heavy gearing between the motor and the linear actuator. This meant the MATLAB code needed to be changed to generate surfaces representing encoder counts rather than angle.
Since an encoder was never part of the original design, one had to be retrofitted to the linear actuators' gearboxes. This was done with a small wheel of alternating magnetic poles and a digital Hall effect sensor, with an interrupt on the Arduino counting encoder steps. At random actuation lengths, both the encoder counts and the physical extension (measured with a tape measure) were recorded. Then, the least squares method was used to generate a first order equation relating encoder counts to actuation length (least squares determines the coefficients of a polynomial of any degree, given more data points than unknowns, by minimizing the error between the polynomial and the data points).
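As a sketch of this calibration step, a first order fit reduces to solving the normal equations directly. The data points below are placeholders standing in for the recorded measurements.
//Minimal sketch of the first order least squares fit relating encoder
//counts to actuator extension; calibration pairs are placeholder data
#include <cstdio>

int main() {
  //(encoder counts, measured extension in mm) -- hypothetical data
  double clicks[] = {0, 210, 405, 598, 820};
  double ext[]    = {0, 25.5, 49.8, 74.0, 101.2};
  int n = 5;

  double sx = 0, sy = 0, sxx = 0, sxy = 0;
  for (int i = 0; i < n; i++) {
    sx  += clicks[i];
    sy  += ext[i];
    sxx += clicks[i] * clicks[i];
    sxy += clicks[i] * ext[i];
  }
  //Normal equations for y = m*x + c, minimizing total squared error
  double m = (n * sxy - sx * sy) / (n * sxx - sx * sx);
  double c = (sy - m * sx) / n;
  printf("extension = %.5f * clicks + %.3f\n", m, c);
  return 0;
}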
Another problem arose at this point in the build: the Arduino Uno has a very limited amount of SRAM, the memory where all variables are stored, which is limited to 2 KB on the Uno. This meant that storing a large array of values from the MATLAB script was not feasible. External RAM was an option, but it would add unnecessary complexity to the system. Instead, code was written to allow the Arduino itself to determine the required extensions of both actuators at a given x, y position. Once again, this was very challenging given the nonlinear relationship between these parameters.
Eqs 1 and 2 defined the relationship between the joint angles and x, y position (used in the MATLAB code to generate surfaces), while Eqs 3 and 4 related these angles to actuator extensions A and B. Once the angles were found, determining extension was trivial. Since Eqs 1 and 2 form a system of nonlinear equations in which neither angle can be isolated, calculating the angles from a given x, y position was challenging. After a number of attempts, Newton's method for solving a system of nonlinear equations was utilized (a snippet of which is provided below).
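For reference, the four equations can be written out from the constants that appear in the code (A = 450 mm and B = 550 mm are the link lengths; the remaining constants follow the law of cosines across each actuator's mounting triangle, with angles in degrees and lengths in mm):

$$x = 450\sin\theta_A - 550\sin(\theta_A - \theta_B) \quad (1)$$
$$y = 550\cos(\theta_A - \theta_B) - 450\cos\theta_A \quad (2)$$
$$e_A = \sqrt{459^2 + 300^2 - 2(459)(300)\cos\theta_A} \quad (3)$$
$$e_B = \sqrt{400^2 + 332^2 - 2(400)(332)\cos\theta_B} \quad (4)$$

where e_A and e_B are the extensions of actuators A and B.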
This was then implemented into the Arduino code along with the conversion from angle to actuator extension.
Arduino Code for Newton's Method of Solving a System of Nonlinear Equations
void sys(float w, float h, float *APointer, float *BPointer){ //processes final value for extensions in mm
  int iterations = 3; //more iterations means a more accurate answer
  float A = 1.57; //current best guess in rad
  float B = 1.57;
  float J11; //inverse Jacobian matrix values
  float J12;
  float J21;
  float J22;
  float F1; //solution to functions
  float F2;
  //Generate answer in terms of angle (rad)
  for(int i = 0; i < iterations; i++){
    J11 = sin(A - B)/(450*(cos(A)*sin(A - B) - cos(A - B)*sin(A))); //Populate inverse Jacobian matrix
    J12 = -cos(A - B)/(450*(cos(A)*sin(A - B) - cos(A - B)*sin(A)));
    J21 = -(9*sin(A) - 11*sin(A - B))/(4950*(cos(A)*sin(A - B) - cos(A - B)*sin(A)));
    J22 = (9*cos(A) - 11*cos(A - B))/(4950*(cos(A)*sin(A - B) - cos(A - B)*sin(A)));
    F1 = 450*sin(A) - 550*sin(A - B) - h; //Evaluate the residuals
    F2 = 550*cos(A - B) - 450*cos(A) - w;
    A = A - (J11*F1 + J12*F2); //Newton step: subtract inverse Jacobian times residual
    B = B - (J21*F1 + J22*F2);
  }
  //Value is now in terms of angle, convert this to extension (law of cosines):
  A = sqrt(-275400*cos(A) + 210681 + 90000);
  B = sqrt(-265600*cos(B) + 160000 + 110224);
  *APointer = A;
  *BPointer = B;
}
Now that there was code to calculate the extensions at a single x, y position, code for generating a path had to be written. The input parameters are the initial x, y position and the desired x, y position. The Arduino discretizes the line from (x0, y0) to (x, y) into smaller pieces, the size of which depends on a predefined precision value. Then, for each segmented x, y point, the extension of both actuators is calculated using Newton's method.
Arduino Code for Actuator Extension Path Generation
void path(int x0, int y0, int x, int y){ //generate path
  //BREAK DOWN LINE INTO SEGMENTS
  float prec = 1; //Greater precision means more granularity (each step is 1/prec)
  float distanceX = x - x0; //Find distance of change in x
  float distanceY = y - y0;
  if(abs(distanceX) > abs(distanceY)){ //larger absolute distance used to determine step size
    numSteps = ceil(abs(distanceX)*prec);
  }
  else{
    numSteps = ceil(abs(distanceY)*prec);
  }
  float stepSizeX = distanceX/(float)numSteps; //Step between points in X dir
  float stepSizeY = distanceY/(float)numSteps; //Step between points in Y dir
  //CONVERT FROM SEGMENTS ALONG LINE TO EXTENSION OF A AND B
  float xInc = x0;
  float yInc = y0;
  float A;
  float B;
  sys(xInc, yInc, &A, &B);
  A = convertCounts(A);
  B = convertCounts(B);
  Apos[0] = A; //Apos, Bpos, and numSteps are globals declared elsewhere
  Bpos[0] = B;
  for(int i = 1; i <= numSteps; i++){
    xInc = xInc + stepSizeX; //Determine current position of interest
    yInc = yInc + stepSizeY;
    sys(xInc, yInc, &A, &B); //Calculate A and B extensions for that position
    A = convertCounts(A);
    B = convertCounts(B); //THIS NEEDS TO BE UPDATED LATER WITH THE B FORMULA
    Apos[i] = A; //Store them to a variable
    Bpos[i] = B;
  }
}
The idea is that each of these points would be sent to another function that handles the movement of the actuators at a predetermined time interval such that each point sent is slightly beyond the current extension of the actuator; this would ensure that the actuator is constantly moving forward. Both actuators would receive the next point at the same time so that they would always be in sync with each other, allowing for a linear motion from point (x0, y0) to (x,y). In order to determine how fast each actuator should move, a PID controller is implemented.
As a brief explanation: a PID controller is a control system that adjusts the input to a plant (the device being controlled) to achieve a desired value. It operates by trying to minimize error, labeled e(t) in the diagram above, where error is the difference between the desired state and the feedback signal. Three gains, kp, ki, and kd, determine how much emphasis the controller puts on the value of the error, the integral of the error, and the derivative of the error, respectively (note that the error, integral, and derivative are discrete for a real system). After multiplying each of these terms by its respective gain, the values are summed and the resulting signal is sent to the plant.
In the case of this system, each actuator got its own PID controller. The desired state was the extension value calculated above (converted to encoder counts), the feedback signal was the actual encoder count, and the plant was the linear actuator motor (driven by a PWM signal through an H-bridge motor driver).
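Below is a minimal sketch of one actuator's discrete PID loop. The gain values are placeholders, and readEncoder/setMotorPWM are hypothetical stand-ins for the actual encoder interrupt routine and H-bridge driver code.
//Minimal discrete PID sketch; gains and hardware helpers are placeholders
float kp = 2.0, ki = 0.05, kd = 0.5;
float integral = 0, prevError = 0;

long readEncoder() { return 0; }   //stub: real count comes from the encoder ISR
void setMotorPWM(int out) { }      //stub: signed PWM through the H-bridge

void pidStep(long target, float dt) {
  long actual = readEncoder();                 //feedback signal in encoder counts
  float error = (float)(target - actual);     //e(t): desired minus feedback
  integral += error * dt;                     //discrete integral of the error
  float derivative = (error - prevError)/dt;  //discrete derivative of the error
  prevError = error;
  float out = kp*error + ki*integral + kd*derivative;
  if (out > 255) out = 255;                   //clamp to the 8-bit PWM range
  if (out < -255) out = -255;
  setMotorPWM((int)out);                      //sign selects H-bridge direction
}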
The values of kp, ki, and kd were determined through physical testing, since creating a model accurate enough to simulate the system in Simulink would have been a more challenging undertaking. A step response (a step change in the desired value) was used to tune the controller, and the Ziegler–Nichols method was used to find starting values for the three gains. This process entails increasing kp (keeping the other two gains at 0) until steady oscillation is achieved. Then, by analyzing the response and using the table below, starting values were determined.
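For reference, the classic Ziegler–Nichols starting values, where Ku is the ultimate gain at which steady oscillation first occurs and Tu is the period of that oscillation, are:

Controller | kp      | ki         | kd
P          | 0.50 Ku | -          | -
PI         | 0.45 Ku | 0.54 Ku/Tu | -
PID        | 0.60 Ku | 1.2 Ku/Tu  | 0.075 Ku Tu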
Then, the values were tuned using the table below.
This entire system would be challenging to test all at once, so the first step was to test it given just an initial and final x, y position (i.e., without path discretization). This meant that the two linear actuators would not move in tandem and the motion of the endpoint would not be linear. Note that the PID controller was still used in this case. The results of this test are shown below:
Checking the serial monitor output from the Arduino, both actuators successfully reached the calculated encoder counts. Additionally, a tape measure confirmed that the arm ended up close to the desired position. The system achieved +/- 2 mm which, although not precise enough for this application, helped prove the functionality of the inverse kinematics method and control scheme. For these reasons, the goal of this prototype was achieved and it was time to put the focus back on door detection.
The binding linear motion problems from the original design were addressed here. In order to reduce play in the bearings, precision linear motion rails and linear bearings were used. To fix the binding issue, the 2:1 rule was applied.
The 2:1 rule describes the idea that the distance from the bearing to the cantilevered load should be no more than two times the distance between bearings along the rail. In addition, it is important to ensure that the bearings closest to the load are fixed while the remaining two are floating; this ensures that any misalignment in the rails (in parallelism or flatness) does not cause binding. With these ideas in mind, the following design was created with three degrees of freedom (rotation would be added later in the design process).
This design was then manufactured for testing. The goal was to confirm that the charge door could be located and opened; the success of this operation proved that the new method used to locate the charge door was effective.
Instead of relying on optical inputs, a magnet could be embedded into the vehicle charge door and a Hall effect sensor connected to a microcontroller. The range at which this system can detect the charge door is significantly shorter than the camera's, so an ultrasonic (distance) sensor is needed to approach the door. In this configuration, the device sweeps across the body of the vehicle, keeping a constant distance away using the ultrasonic sensor. Once a magnetic field is detected, the Hall effect sensor data is used to pinpoint the magnet's location. To make this more effective, the Infineon TLV493D was used. This sensor provides the magnitude of the magnetic field along all three axes, allowing the microcontroller to determine which direction to move the device in.
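In sketch form, the sweep-and-home behavior looks like the following. Every helper function here is a hypothetical stand-in (in practice the TLV493D is read through its I2C driver, and the thresholds and step sizes would be tuned on the hardware).
//Sketch of the sweep-and-home logic; all helpers and constants are placeholders
float readFieldX() { return 0; }          //stubs: would read the TLV493D over I2C
float readFieldY() { return 0; }
float readFieldZ() { return 0; }
float readUltrasonicMM() { return 100; }  //stub: distance to the vehicle body
void moveAxis(char axis, float mm) { }    //stub: issue a relative move

float fieldMagnitude() {
  float x = readFieldX(), y = readFieldY(), z = readFieldZ();
  return sqrt(x*x + y*y + z*z);
}

void sweepForMagnet() {
  const float THRESHOLD = 1.5;  //mT, placeholder detection threshold
  const float STANDOFF = 100.0; //mm, placeholder gap to the body panel
  while (fieldMagnitude() < THRESHOLD) {
    moveAxis('Z', readUltrasonicMM() - STANDOFF); //hold constant standoff
    moveAxis('X', 5.0);                           //step along the vehicle
  }
  //Field detected: use the per-axis components to home in on the magnet
  for (int i = 0; i < 50 && fieldMagnitude() < 10*THRESHOLD; i++) {
    moveAxis('X', 0.5*readFieldX()); //move in the direction of increasing field
    moveAxis('Y', 0.5*readFieldY());
  }
}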
The method of door detection proved to be effective, as shown in the video below.
This prototype took the learnings from all previous designs and combined them together to build a nearly fully functional prototype.
The arm is capable of reaching a charge door within a 1 meter radius of the shoulder joint, giving drivers a considerable amount of allowable error when parking the vehicle.
Mechanically, the robot has a number of improvements over previous designs. First, stepper motors are used, making control much easier. Stepper motors require a controller to command each step, which allows them to operate in an open loop configuration. Therefore, if 30 degrees of rotation is desired, for example, the corresponding number of steps is issued instead of applying power to the motor and constantly measuring its position (as is the case with DC motors). This calculation is also trivial since stepper motors have a set number of steps per revolution; at 200 steps per revolution (1.8 degrees per step), 30 degrees corresponds to about 17 steps at the motor shaft.
Gearboxes were added to the motors as well in order to support the theoretical worst case loading on the arm. Although there was conflicting information regarding the maximum insertion force for the J1772 standard, 80 N was found to be the worst case. This force was assumed to be applied 1 meter away, the full reach of the arm outstretched to one side, giving 80 Nm as the design requirement.
These motor-gearbox assemblies were placed on the main body of the frame rather than on the arm itself, both to reduce the strain on the arm (allowing for a lighter arm with fewer machined metal parts) and to move the robot's center of mass closer to the base.
In order to move each joint, T5 timing belts were used. Timing belts allow for precise motion while being lightweight, cost effective, and easy to implement; T5 was chosen for its wide availability and maximum force characteristics. The shoulder joint was the most challenging to design, as it required multiple concentric shafts to let the elbow and wrist joints rotate while also being rigid enough (since the entire arm's weight pivots at this point).
Most of the arm was designed using only 3D printed PLA parts, but a number of parts were machined at a local makerspace, which required a few weeks of learning about CNC machining.
The Z axis is placed on the end effector instead of moving the entire arm up and down from the base frame. Placed on the end effector, the Z axis motor can be very small and light (it drives a lead screw and therefore has a large mechanical advantage), whereas it would need to be much larger to move the entire arm. Furthermore, this axis is not belt driven like the other three joints: that would have required yet another concentric shaft at the shoulder, a cost that outweighed the benefit over simply direct-driving the Z axis on the end effector.
Electrically, the majority of the computing is done on a Raspberry Pi, while motor control is handled by an Arduino. This is because the Pi has a much more powerful processor and more advanced libraries, while the Arduino is far better suited for interfacing with hardware (especially since it already has mature libraries for stepper control, whereas the Pi only has rudimentary options). The Arduino and Pi communicate over I2C through a logic level converter, since the two devices operate at different voltages (5 V and 3.3 V, respectively). The three main motors use DM556T drivers, whereas the smaller motor uses an A4988 driver. The larger motors run from a 48 V power supply, and the smaller motor from a 12 V supply.
An important note with this system is the use of twisted pairs. Any time a signal wire is used on this robot (other than short runs on the breadboard), it is twisted with its respective ground wire. This is critical since there is a lot of electrical noise from the AC power supply and inductive loads. Shielded cable could have been used instead, but twisted pair was more readily available. Twisting works by essentially letting the noise "cancel" out, since both wires in the pair are affected nearly equally by the surrounding electromagnetic interference.
The software is arguably the most complicated part of this build. First, the path between the given initial and final cartesian points is discretized. Then, at each point, the Raspberry Pi performs inverse kinematics using the Levenberg–Marquardt algorithm, an improvement over the numerical solver used in previous prototypes. When the current guess is far from the solution, this method behaves like gradient descent, and when the guess is close it behaves like the Gauss–Newton method, giving it the benefits of both. This method is especially suitable since the code relies on reference frames and rotation/translation matrices for kinematics. The result is a 4×n matrix giving the angle of each of the four motors at each of the n discrete points along the path. After this, a reference frame correction is applied due to the position of the limit switches on the arm: each joint has its own reference frame, which must be converted into the global reference frame.
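A minimal sketch of a Levenberg–Marquardt iteration is shown below, reduced to a planar two-link arm for clarity. The link lengths reuse the earlier 450/550 mm values, and the target point and damping schedule are placeholders; the actual implementation works on the full four-joint chain with homogeneous transforms.
//Levenberg-Marquardt IK sketch for a planar 2-link arm (all values placeholders)
#include <cmath>
#include <cstdio>

const double L1 = 450.0, L2 = 550.0; //link lengths in mm

void fk(double t1, double t2, double &x, double &y) { //forward kinematics
  x = L1*cos(t1) + L2*cos(t1 + t2);
  y = L1*sin(t1) + L2*sin(t1 + t2);
}

int main() {
  double xt = 600.0, yt = 300.0; //target endpoint
  double t1 = 0.5, t2 = 0.5;     //initial guess in rad
  double lambda = 1e-3;          //LM damping factor

  for (int it = 0; it < 50; it++) {
    double x, y;
    fk(t1, t2, x, y);
    double r1 = xt - x, r2 = yt - y; //residuals
    //Jacobian of (x, y) with respect to (t1, t2)
    double j11 = -L1*sin(t1) - L2*sin(t1+t2), j12 = -L2*sin(t1+t2);
    double j21 =  L1*cos(t1) + L2*cos(t1+t2), j22 =  L2*cos(t1+t2);
    //Damped normal equations: (J'J + lambda*I) * dt = J' * r
    double a11 = j11*j11 + j21*j21 + lambda;
    double a12 = j11*j12 + j21*j22;
    double a22 = j12*j12 + j22*j22 + lambda;
    double b1  = j11*r1 + j21*r2;
    double b2  = j12*r1 + j22*r2;
    double det = a11*a22 - a12*a12;
    double dt1 = ( a22*b1 - a12*b2) / det;
    double dt2 = (-a12*b1 + a11*b2) / det;
    //Accept the step only if it reduces the error; otherwise raise damping
    double nx, ny;
    fk(t1 + dt1, t2 + dt2, nx, ny);
    double newErr = (xt-nx)*(xt-nx) + (yt-ny)*(yt-ny);
    double oldErr = r1*r1 + r2*r2;
    if (newErr < oldErr) { t1 += dt1; t2 += dt2; lambda *= 0.5; }
    else                 { lambda *= 2.0; }
    if (oldErr < 1e-10) break;
  }
  printf("theta1 = %.4f rad, theta2 = %.4f rad\n", t1, t2);
  return 0;
}
When far from the solution, the large lambda makes the step resemble gradient descent; when close, lambda shrinks and the step approaches the Gauss–Newton update, which is exactly the behavior described above.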
Once the inverse kinematics is calculated, it must be converted into something useful for the Arduino. Since the joints are coupled through the concentric shafts and the timing belt configuration, the required output angle of each gearbox shaft is calculated at each step. Then, the angle at each stepper motor shaft is found by multiplying each row of the matrix by the respective gearbox ratio. Finally, the stepper motor shaft angle is converted to steps.
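As a simplified sketch of that final conversion (with hypothetical gearbox ratios, and omitting the cross-joint coupling terms for clarity):
//Simplified angle-to-steps conversion; ratios are placeholders
const float STEPS_PER_REV = 200.0;              //1.8 degrees per full step
const float ratio[4] = {50.0, 30.0, 15.0, 1.0}; //hypothetical gearbox ratios

long angleToSteps(float jointAngleDeg, int joint) {
  float motorAngle = jointAngleDeg * ratio[joint]; //angle at the motor shaft
  return lround(motorAngle / 360.0 * STEPS_PER_REV);
}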
Now that the steps for each motor are calculated, the Pi sends the data to the Arduino over I2C. Since these values are longs and I2C only supports transmission of bytes, each long is deconstructed into bytes by the Pi, sent to the Arduino, and reconstructed into a long by the Arduino.
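Concretely, each 32-bit value can be split into four bytes and reassembled as sketched below. The function names are illustrative, and the MSB-first byte order is just a convention choice; both sides only need to agree on it. On the robot the packing side runs on the Pi in its own code, shown here in the same C style for symmetry.
//Sketch of splitting a 32-bit long into bytes and reassembling it
void packLong(long v, unsigned char out[4]) { //sender side
  out[0] = (v >> 24) & 0xFF; //most significant byte first
  out[1] = (v >> 16) & 0xFF;
  out[2] = (v >> 8)  & 0xFF;
  out[3] =  v        & 0xFF;
}

long unpackLong(const unsigned char in[4]) { //receiver side (Wire onReceive)
  return ((long)in[0] << 24) | ((long)in[1] << 16) |
         ((long)in[2] << 8)  |  (long)in[3];
}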
Finally, the Pi sends a "run" command over I2C to request that the Arduino move the arm. Similarly, the Arduino handles homing the robot in response to a homing command from the Pi.
The next challenging portion of the build was the use of a camera. First, a phone camera was used to capture images of the charge door for testing image recognition methods. Although methods that work on these images wouldn't necessarily work with the camera on the robot, this allowed for testing without the vehicle present.
The most successful method of finding the charge door involved applying various filters, then using Canny edge detection, followed by contour detection, and finally a bounding box. The bounding box determines the center of the charge door, and this approach proved successful in practice.
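A condensed version of this pipeline in OpenCV might look like the following. The thresholds and largest-contour heuristic are placeholders to be tuned, and the filters applied before edge detection are represented here by a single Gaussian blur.
//Sketch of the blur -> Canny -> contours -> bounding box pipeline (OpenCV)
#include <opencv2/opencv.hpp>
#include <cstdio>

int main() {
  cv::Mat img = cv::imread("charge_door.jpg", cv::IMREAD_GRAYSCALE);
  cv::Mat blurred, edges;
  cv::GaussianBlur(img, blurred, cv::Size(5, 5), 0); //suppress sensor noise
  cv::Canny(blurred, edges, 50, 150);                //placeholder thresholds

  std::vector<std::vector<cv::Point>> contours;
  cv::findContours(edges, contours, cv::RETR_EXTERNAL, cv::CHAIN_APPROX_SIMPLE);

  //Take the largest contour as the door candidate and box it
  double bestArea = 0;
  cv::Rect box;
  for (const auto &c : contours) {
    double a = cv::contourArea(c);
    if (a > bestArea) { bestArea = a; box = cv::boundingRect(c); }
  }
  cv::Point center(box.x + box.width/2, box.y + box.height/2);
  printf("door center: (%d, %d)\n", center.x, center.y);
  return 0;
}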
In order to integrate a camera into the system, a camera calibration first had to be performed. This is the process of estimating the camera matrix (the intrinsic parameters) and distortion coefficients of the pinhole camera model, along with the extrinsics of each view, in order to correct inaccuracies introduced by the camera sensor and lens. It is done using a black and white checkerboard grid with known dimensions, since the corners are easy for the camera to pick up. The known grid dimensions are compared against images of the grid captured by the camera at multiple angles to perform the calibration.
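A sketch of this calibration with OpenCV is shown below; the board dimensions, square size, view count, and file names are placeholders for the actual target and captures used.
//Sketch of checkerboard camera calibration with OpenCV
#include <opencv2/opencv.hpp>
#include <iostream>
#include <string>
#include <vector>

int main() {
  cv::Size pattern(9, 6); //inner corner count of the checkerboard (placeholder)
  float square = 25.0f;   //mm per square (placeholder)

  //One known 3D corner grid (z = 0 plane), reused for every view
  std::vector<cv::Point3f> grid;
  for (int r = 0; r < pattern.height; r++)
    for (int c = 0; c < pattern.width; c++)
      grid.push_back(cv::Point3f(c * square, r * square, 0));

  std::vector<std::vector<cv::Point3f>> objPoints;
  std::vector<std::vector<cv::Point2f>> imgPoints;
  cv::Size imgSize;
  for (int i = 0; i < 15; i++) { //multiple views at different angles
    cv::Mat img = cv::imread("calib_" + std::to_string(i) + ".jpg",
                             cv::IMREAD_GRAYSCALE);
    if (img.empty()) continue;
    imgSize = img.size();
    std::vector<cv::Point2f> corners;
    if (cv::findChessboardCorners(img, pattern, corners)) {
      cv::cornerSubPix(img, corners, cv::Size(11, 11), cv::Size(-1, -1),
          cv::TermCriteria(cv::TermCriteria::EPS + cv::TermCriteria::COUNT,
                           30, 0.001)); //refine corner locations
      imgPoints.push_back(corners);
      objPoints.push_back(grid);
    }
  }

  cv::Mat K, dist;                   //camera matrix and distortion coefficients
  std::vector<cv::Mat> rvecs, tvecs; //per-view extrinsics
  double rms = cv::calibrateCamera(objPoints, imgPoints, imgSize,
                                   K, dist, rvecs, tvecs);
  std::cout << "RMS error: " << rms << "\nK =\n" << K << std::endl;
  return 0;
}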
For initial testing with both the camera and image processing, it was assumed that the charge door is perpendicular to the robot, as this simplifies the calculations; this will be improved later in the project. The camera was set up at varying distances from the charge door, and the height and width of the bounding box (in pixels) were recorded along with the actual distance, measured with a tape measure from the end effector's reference frame. Then, a best fit line was created to relate bounding box size to distance from the charge door.
Once the distance from the charge door is known, a similar calculation can be done to determine the physical distance a pixel represents in the frame at that distance. This is critical for determining how far to actually move the arm in the plane perpendicular to the vehicle. At this point, the x, y, and z position of the center of the charge door is known to reasonable accuracy, and a simple coordinate system transformation gives the charge door's location in the robot's global reference frame.
Putting all of this work together allows the robot to then move to the charge door. In the videos below, the arm is moving to the point at which it needs to press the charge door to open it.
The next steps include reducing play in the arm by replacing the plastic timing pulleys with metal ones (the keyway cutouts in the 3D printed pulleys deform over time due to cyclic loading) and increasing the robustness of the charge door detection system using a complementary time-of-flight sensor.
Starting with the reduction of play in the arm, an evident problem with the original design is the belt system. The original intent was for the motors to remain stationary in the base, and for the motion to be carried out to the end effector through a series of timing belts on concentric shafts. Given the forces on the end effector, however, this ended up not being an ideal solution. Since the belt system was integral to the design, it made sense to start the mechanical design from scratch, carrying forward the lessons learned from previous iterations.
This version of the design is mechanically far simpler, but comes with the drawback that the motors on the final two joints move with the arm, and therefore center of mass calculations are far more critical to ensure that the robot does not tip over. With this iteration also came an opportunity to address wire management. The previous version of the robot did not include harnesses or wire restraints, which resulted in a lot of tangles. This version employs wire chains, tiedown points, and is fully connectorized for modular repairs. With the addition of connectors, it made sense to introduce a custom PCB.
This PCB includes four motor controllers and an array of MOSFETs acting as H-bridges, one full bridge per motor. The board is currently being manufactured for testing.