Working in Professor Kenji Shimada's CERLAB, I built an automated inspection routine for industrial window frames. My work involved creating a software stack that processes point cloud and image data using computer vision to precisely identify defects. The project's goal was to streamline manufacturing by replacing manual inspections with a faster, more reliable system that minimizes human error.
The images below show the window frame samples used to develop the defect detection algorithm.
Window Frame Samples
Window Frame Samples with Defects Highlighted
These images show the defects that occur on the foam insulation in the center region of each sample. The objective is to detect these defects automatically using computer vision algorithms.
Surface Cavity Defects
Surface Irregularities
Raw data was captured with a ZIVID 2+ MR60 camera mounted on a UR5e robot arm, producing point clouds with significant background noise. To prepare this data for analysis, I developed a pre-processing pipeline that removes outliers, segments out the background, and separates the two samples (one a defect-free "pass" sample, the other a "fail" sample with defects).
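The outlier-removal step can be sketched as a statistical filter: points whose mean distance to their nearest neighbors is unusually large are treated as noise. This is a minimal NumPy re-implementation for illustration; the function name, neighbor count, and threshold are my assumptions, not the project's actual code.

```python
import numpy as np

def remove_statistical_outliers(points, k=8, std_ratio=2.0):
    """Drop points whose mean distance to their k nearest neighbors
    is more than std_ratio standard deviations above the cloud-wide
    average. Brute-force O(n^2); real pipelines use a k-d tree."""
    # Pairwise distances between all points.
    d = np.linalg.norm(points[:, None, :] - points[None, :, :], axis=-1)
    # Mean distance to the k nearest neighbors (column 0 is the point itself).
    knn_mean = np.sort(d, axis=1)[:, 1:k + 1].mean(axis=1)
    threshold = knn_mean.mean() + std_ratio * knn_mean.std()
    return points[knn_mean <= threshold]

# Dense cluster plus one far-away noise point.
rng = np.random.default_rng(0)
cloud = np.vstack([rng.normal(0.0, 0.01, (200, 3)), [[5.0, 5.0, 5.0]]])
clean = remove_statistical_outliers(cloud)
```

The same idea underlies off-the-shelf filters such as Open3D's statistical outlier removal, which would replace the brute-force distance matrix in practice.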
Point Cloud Pre-Processing Pipeline
The pre-processing pipeline is shown in the adjacent flowchart. Using K-means clustering, it exploits the distinct planes of the samples and the background to isolate them efficiently. After segmentation, the point clouds are reoriented and ready for defect identification.
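To illustrate how K-means can separate the distinct planes, here is a minimal 1-D K-means on a height feature (distance along the plane normal). The feature choice, cluster count, and quantile initialization are illustrative assumptions; the actual pipeline's features are not specified here.

```python
import numpy as np

def kmeans_1d(values, k=3, iters=50):
    """Minimal K-means on a 1-D feature (here: height above the
    table plane), used to split background from sample planes.
    Quantile initialization keeps the result deterministic."""
    centers = np.quantile(values, np.linspace(0.0, 1.0, k))
    for _ in range(iters):
        # Assign each point to its nearest center, then re-center.
        labels = np.argmin(np.abs(values[:, None] - centers[None, :]), axis=1)
        for j in range(k):
            if np.any(labels == j):
                centers[j] = values[labels == j].mean()
    return labels, centers

# Synthetic scene: background plane at z~0, two samples at z~0.05 and z~0.10.
rng = np.random.default_rng(1)
z = np.concatenate([rng.normal(0.00, 0.002, 500),
                    rng.normal(0.05, 0.002, 300),
                    rng.normal(0.10, 0.002, 300)])
labels, centers = kmeans_1d(z, k=3)
```

With well-separated planes, each cluster center converges to one plane's height, so the labels directly segment background from samples.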
Defect Identification using Curvature - Flowchart
After segmenting the samples from the raw point cloud, the next step is to locate the defective regions on their surfaces. My system computes the curvature at each point, based on the principle that defective regions exhibit high curvature while undamaged surfaces are relatively flat.
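A common way to estimate per-point curvature, and the one sketched below, is the "surface variation" measure: the smallest eigenvalue of the local neighborhood's covariance divided by the eigenvalue sum. Whether the project uses exactly this measure is my assumption; it matches the described behavior (near zero on flat surfaces, large at pits and edges).

```python
import numpy as np

def pca_curvature(points, k=10):
    """Surface-variation curvature per point: lambda_min / sum(lambda)
    of the local covariance. ~0 on planes, larger at cavities/edges.
    Brute-force neighbor search for clarity."""
    d = np.linalg.norm(points[:, None, :] - points[None, :, :], axis=-1)
    nn = np.argsort(d, axis=1)[:, :k + 1]      # each point plus k neighbors
    curv = np.empty(len(points))
    for i, idx in enumerate(nn):
        cov = np.cov(points[idx].T)
        eig = np.linalg.eigvalsh(cov)          # ascending eigenvalues
        curv[i] = eig[0] / eig.sum()
    return curv

# Flat 15x15 grid with one point pushed down to mimic a surface cavity.
xs, ys = np.meshgrid(np.linspace(0, 1, 15), np.linspace(0, 1, 15))
grid = np.column_stack([xs.ravel(), ys.ravel(), np.zeros(xs.size)])
grid[112, 2] = -0.2                            # depress the center point
curv = pca_curvature(grid)
```

On this toy cloud, curvature is essentially zero everywhere except around the simulated cavity, which is exactly the separation the pipeline relies on.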
The final output visualizes these findings: red indicates a high-curvature, defective area, and blue signifies a flat, low-curvature surface. The sample on top is clearly defective, while the one below is a clean "pass."
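The red/blue visualization and the pass/fail call reduce to thresholding the curvature values. The specific thresholds below are illustrative assumptions, not tuned values from the project.

```python
import numpy as np

def classify_sample(curvatures, curv_thresh=0.05, defect_frac=0.01):
    """Color each point red (high curvature, defective) or blue
    (flat), then call the sample "fail" if more than defect_frac
    of its points are defective. Thresholds are illustrative."""
    defective = curvatures > curv_thresh
    colors = np.where(defective[:, None],
                      [1.0, 0.0, 0.0],   # red: high curvature
                      [0.0, 0.0, 1.0])   # blue: flat
    verdict = "fail" if defective.mean() > defect_frac else "pass"
    return colors, verdict

# Mostly-flat sample with a small cluster of high-curvature points.
curv = np.zeros(1000)
curv[:30] = 0.3
colors, verdict = classify_sample(curv)
```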
Output Defect Visualization I
Output Defect Visualization II
Building on the point cloud analysis, I am now developing a parallel pipeline for RGB data to identify defects. My next steps will focus on sensor fusion, integrating data from both point cloud and RGB sources. This approach will leverage the strengths of each data type to generate more accurate and comprehensive defect maps, ultimately leading to a more robust and reliable inspection system.
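Since the fusion stage is still in progress, the following is only a sketch of one plausible approach (late fusion): normalize each modality's per-point defect score, then combine them with a weighted sum. The weights and score ranges are entirely hypothetical.

```python
import numpy as np

def fuse_defect_scores(curv_score, rgb_score, w_cloud=0.6, w_rgb=0.4):
    """Late-fusion sketch: min-max normalize each modality's
    per-point defect score, then take a weighted sum. Weights
    are illustrative, not tuned values from the project."""
    def norm(s):
        span = s.max() - s.min()
        return (s - s.min()) / span if span > 0 else np.zeros_like(s)
    return w_cloud * norm(curv_score) + w_rgb * norm(rgb_score)

# Toy per-point scores from the curvature and RGB pipelines.
curv = np.array([0.0, 0.1, 0.8, 0.05])
rgb = np.array([0.2, 0.9, 0.7, 0.1])
fused = fuse_defect_scores(curv, rgb)
```

The appeal of a scheme like this is that a defect faint in one modality (e.g., a shallow cavity with strong color contrast) can still score highly in the fused map.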