I began my first summer internship at Komatsu Mining Corp. shortly after completing my freshman year at Rose-Hulman (2017). I worked as an intern in the Advanced Automation section of the engineering department, where I was assigned a project I found genuinely interesting.
My main project over the summer of 2017 was a research effort using an IFM 2D/3D LiDAR (Light Detection and Ranging) sensor and infrared (IR) illuminator pair (left). The goal was to combine the sensor's 2D camera feed with its 3D point cloud to determine how many objects the sensor could see and how large each of those objects was.
The project relied heavily on vision processing. Without filtering out the background, the objects I wanted to detect were difficult to distinguish in the 2D camera feed, so I used a vision processing package I was familiar with, RoboRealm, to isolate exactly the objects I wanted to count and measure. Those objects were never perfectly parallel to the camera, so when calculating each object's size I had to account not only for its distance from the sensor but also for its angle relative to the sensor.
The steps themselves were relatively straightforward, but they took a great deal of time to execute. First, I applied a series of filters, edge detection modules, and digital lighting adjustments to make the objects I wanted to measure more defined, turning them into trackable, measurable blobs. These blobs could then be counted to find the number of objects detected. I also wrote a JavaScript program to find the sizes of these objects based on their blob and bounding box dimensions; an example is shown on the right. The lengths and widths I measured were in pixels, so I used a least-squares regression line to find a calibration factor that converts pixels directly to centimeters based on the object's distance from the sensor. Finding that distance was simply a matter of pulling a range measurement from the LiDAR's 3D point cloud.
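The calibration idea works roughly like this (a minimal JavaScript sketch with made-up calibration samples and variable names, not my original RoboRealm script): fit a least-squares line relating distance to the centimeters-per-pixel scale factor, then evaluate that line at the measured range to convert a pixel length into centimeters.

```javascript
// Hypothetical calibration samples: [distance from sensor in cm, measured cm per pixel]
const samples = [
  [100, 0.12],
  [150, 0.18],
  [200, 0.24],
  [250, 0.31],
];

// Ordinary least-squares fit: cmPerPixel ≈ slope * distance + intercept
function leastSquares(points) {
  const n = points.length;
  const sumX = points.reduce((s, [x]) => s + x, 0);
  const sumY = points.reduce((s, [, y]) => s + y, 0);
  const sumXY = points.reduce((s, [x, y]) => s + x * y, 0);
  const sumXX = points.reduce((s, [x]) => s + x * x, 0);
  const slope = (n * sumXY - sumX * sumY) / (n * sumXX - sumX * sumX);
  const intercept = (sumY - slope * sumX) / n;
  return { slope, intercept };
}

const { slope, intercept } = leastSquares(samples);

// Convert a length in pixels to centimeters, given the LiDAR range to the object
function pixelsToCm(pixels, distanceCm) {
  const cmPerPixel = slope * distanceCm + intercept;
  return pixels * cmPerPixel;
}

console.log(pixelsToCm(85, 175).toFixed(1)); // estimated length in cm for an 85-pixel blob at 175 cm range
```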
If an object happened to sit directly parallel to the sensor, this measurement alone was sufficient. More often than not, however, the object was tilted relative to the sensor, and its size had to be corrected using the object's angle of inclination relative to the sensor.
To measure that angle, I used the LiDAR's 3D data to sample distances at several x- and y-coordinates across the object and fit a line to them, giving the object's slope relative to the sensor. Taking two arbitrary points on that fitted line, I could find the object's angle to the horizontal and then use that angle and some trigonometry to compensate for the object's tilt.
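The correction itself can be sketched as follows (again in JavaScript, with illustrative sample data rather than real LiDAR readings; I fit the slope directly instead of picking two points on the fitted line, which gives the same angle): the measured length appears foreshortened by the cosine of the tilt angle, so dividing by that cosine recovers the true length.

```javascript
const xs = [10, 20, 30, 40, 50];          // sample positions across the object (cm)
const ranges = [200, 204, 208, 212, 216]; // LiDAR range at each position (cm)

// Least-squares slope of range vs. position gives the object's tilt relative to the sensor
function fitSlope(x, y) {
  const n = x.length;
  const meanX = x.reduce((s, v) => s + v, 0) / n;
  const meanY = y.reduce((s, v) => s + v, 0) / n;
  let num = 0, den = 0;
  for (let i = 0; i < n; i++) {
    num += (x[i] - meanX) * (y[i] - meanY);
    den += (x[i] - meanX) ** 2;
  }
  return num / den;
}

const tilt = Math.atan(fitSlope(xs, ranges)); // angle of the object to the horizontal, in radians

// The camera sees the object's length foreshortened by cos(tilt),
// so dividing the measured length by cos(tilt) recovers the true length.
function compensate(measuredCm, angleRad) {
  return measuredCm / Math.cos(angleRad);
}

console.log(compensate(18.1, tilt).toFixed(1)); // length corrected for the object's tilt
```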
To the left is the front panel of a LabVIEW program I wrote to convert pixel sizes to centimeters almost instantly. The Pixel Sizes array is taken straight from RoboRealm, and the Distance, X-Inputs, and Y-Inputs are grabbed from the LiDAR's 3D data; the full calculation is hard-coded into the software. As soon as the user hits "Run," the program counts the detected objects from the length of the Pixel Sizes array and converts every size measurement from pixels to centimeters, compensating both for distance from and angle to the sensor.
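Putting the two corrections together, the overall calculation the LabVIEW front panel performs looks roughly like this (a JavaScript sketch; the calibration slope and intercept and the sample inputs are purely illustrative, and the real VI derives the angle from the X-Inputs and Y-Inputs as described above):

```javascript
// pixelSizes comes from RoboRealm, distanceCm and angleRad from the LiDAR 3D data
function measureObjects(pixelSizes, distanceCm, angleRad, cal = { slope: 0.00126, intercept: -0.008 }) {
  const cmPerPixel = cal.slope * distanceCm + cal.intercept;                // distance-based calibration
  return {
    count: pixelSizes.length,                                               // object count = array length
    sizesCm: pixelSizes.map(px => (px * cmPerPixel) / Math.cos(angleRad)),  // tilt-corrected sizes
  };
}

console.log(measureObjects([85, 120, 64], 175, 0.38));
```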