PhD Thesis

Surveying Goal Extraction from 3D Point Clouds to Support Bridge Inspection

I am testing the technical feasibility of using laser scanners to collect geometric data of bridges to support geometric defect detection and efficient bridge management. Laser scanners are 3D imaging systems that can collect highly accurate and dense 3D point clouds of a bridge in minutes. Combining computer vision with a Bridge Information Model (BIM) depicting the as-designed condition of a bridge, I am trying to automatically recognize objects such as beams and columns in the 3D point clouds, and to develop reasoning mechanisms that support automated 3D data interpretation. The basic idea is to develop computer-interpretable representations of "surveying goals", such as "the minimum vertical distance between the superstructure and the road under the bridge is the minimum vertical under clearance of the bridge", together with algorithms that transform computer-interpretable surveying goal queries into a sequence of operations, such as "find all points belonging to the superstructure bottom surface and the road surface", "extract the road surface and the superstructure bottom surface", and "calculate the minimum vertical distance between them". Various existing techniques, such as object recognition, geometric reasoning mechanisms, and geometric feature extraction, can then execute this generated sequence of operations and produce surveying goal values automatically.
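To make the last operation in that sequence concrete, here is a minimal sketch of computing the minimum vertical under clearance from two already-segmented point sets. The function name, the brute-force pairing strategy, and the `xy_tol` horizontal-matching tolerance are my illustrative assumptions, not part of the thesis method; a real implementation would use spatial indexing and fitted surfaces rather than raw point pairs.

```python
import math

def min_vertical_clearance(road_pts, soffit_pts, xy_tol=0.5):
    """Brute-force minimum vertical under clearance (illustrative sketch).

    road_pts, soffit_pts: lists of (x, y, z) tuples, assumed already
    segmented out of the point cloud ("road surface" and "superstructure
    bottom surface" points). xy_tol: maximum horizontal offset (same
    units as the cloud) for a soffit point to count as "above" a road
    point. Returns (clearance, road_point, soffit_point), or None if no
    soffit point lies above any road point.
    """
    best = None
    for rx, ry, rz in road_pts:
        for sx, sy, sz in soffit_pts:
            # Only consider soffit points roughly above this road point.
            if math.hypot(sx - rx, sy - ry) > xy_tol or sz <= rz:
                continue
            gap = sz - rz
            if best is None or gap < best[0]:
                best = (gap, (rx, ry, rz), (sx, sy, sz))
    return best

# Tiny synthetic example: three road points under three soffit points.
road = [(0.0, 0.0, 0.0), (1.0, 0.0, 0.1), (2.0, 0.0, 0.0)]
soffit = [(0.0, 0.0, 5.0), (1.0, 0.0, 4.8), (2.0, 0.0, 5.2)]
result = min_vertical_clearance(road, soffit)  # clearance of about 4.7, at x = 1.0
```

The brute-force pairing is O(n*m) and only there to show the geometry; the point is that once the two surfaces are extracted, the surveying goal value falls out of a purely mechanical computation.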

What does that mean? Why is that important? Currently, bridge inspectors have to manually process point clouds to extract surfaces, edges, etc., and calculate each piece of information they are interested in (how much is the minimum vertical under clearance? Where does it occur?). That manual approach takes a lot of time and is error-prone. With my approach, bridge inspectors only interact with components and features in a bridge information model to define what they are interested in; the computer then reports the surveying goal result. The representations I develop act like a macro language for bridge inspectors, and my algorithms serve as compilers that translate them into operations which can be executed automatically, but which today are executed manually by bridge inspectors. Please look at these two pictures. The left one is a mesh model of a bridge composed of just points and triangles; how tedious is manually selecting individual points and triangles to extract geometric features for calculating a surveying goal? Look at the bridge information model on the right side: bridge inspectors can interact with components rather than points and surfaces, so they can specify things like "select all columns" while defining what they are looking for in laser-scanned data! BIM (building information modeling) is maturing these days. I expect that such bridge information models will be widely used in practice, and using them for querying laser-scanned data can be one great benefit brought by BIM!
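The "macro language plus compiler" analogy can be sketched in a few lines. The dictionary layout, the component names, and the five-step operation list below are all hypothetical placeholders of mine, just to show the shape of the idea: the inspector states a goal against BIM components, and a compiler expands it into the point-cloud operations that are currently performed by hand.

```python
# Hypothetical surveying goal, stated against BIM components rather than
# raw points and triangles. The field names are illustrative only.
SURVEYING_GOAL = {
    "name": "minimum_vertical_under_clearance",
    "between": ("SuperstructureBottomSurface", "RoadSurface"),
    "measure": "min_vertical_distance",
}

def compile_goal(goal):
    """Expand a surveying goal into an ordered list of point-cloud
    operations (an illustrative "compiler" for the macro language)."""
    upper, lower = goal["between"]
    return [
        f"segment points belonging to {upper}",
        f"segment points belonging to {lower}",
        f"fit surface for {upper}",
        f"fit surface for {lower}",
        f"compute {goal['measure']} between {upper} and {lower}",
    ]

operations = compile_goal(SURVEYING_GOAL)  # five operations, ready for automatic execution
```

Each string in the output stands in for a call into existing object recognition, feature extraction, or geometric reasoning routines; the compiler's job is only the translation, not the execution.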

Automated, BIM-supported bridge inspection: that is the future!