We are actively recruiting PhD students; see details in the Join Us tab.
A major branch of our lab's research involves Structure from Motion (SfM). SfM is a photogrammetric method for reconstructing a 3D scene from a series of 2D images. In our lab, we use SfM to analyze civil infrastructure such as roads, buildings, and pipes, using the 3D data to make more informed decisions about maintenance, site scouting, and labor allocation. Our research also integrates thermal cameras, allowing infrared data to be fused with our 3D models.
Our lab is currently equipped with both a mobile platform (drone) and a stationary camera array. The drone-based system enables flexible data collection from various perspectives and is particularly useful for capturing dynamic objects or scenes that require movement. In contrast, the stationary camera array is designed for high-precision 3D reconstruction of static objects or environments.
Together, these two systems allow us to perform comprehensive 3D reconstruction for a wide range of applications, accommodating both moving and non-moving targets. This versatility enhances our ability to conduct advanced research in areas such as dynamic scene analysis, structural monitoring, and object modeling.
Moving object reconstruction provides additional possibilities beyond static scene modeling. This system employs three synchronized FLIR Grasshopper 3 monochrome cameras to capture moving objects from multiple viewpoints with high temporal precision. By applying multi-view geometry and calibrated camera models, it supports accurate 3D reconstruction, object tracking, and motion analysis in dynamic environments.
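As a rough illustration of the multi-view triangulation step, the sketch below shows a standard linear (DLT) triangulation of a single point observed by several calibrated cameras. The projection matrices and pixel coordinates are placeholders, not our actual calibration.

```python
import numpy as np

def triangulate_point(projection_matrices, pixel_points):
    """Linear (DLT) triangulation of one 3D point from N calibrated views.

    projection_matrices: list of 3x4 camera matrices P = K [R | t]
    pixel_points: list of (u, v) observations of the same point, one per camera
    Returns the 3D point in the common world frame.
    """
    rows = []
    for P, (u, v) in zip(projection_matrices, pixel_points):
        # Each observation contributes two linear constraints on the homogeneous point X
        rows.append(u * P[2] - P[0])
        rows.append(v * P[2] - P[1])
    A = np.stack(rows)
    # The solution is the right singular vector with the smallest singular value
    _, _, vt = np.linalg.svd(A)
    X = vt[-1]
    return X[:3] / X[3]
```

With three synchronized views, each frame contributes six equations per point, which makes the triangulation well overdetermined and more robust to noise in any single camera.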
The continuous 3D point cloud provides a foundation for AI-based perception pipelines, where machine learning models perform 3D point cloud segmentation and object tracking across time. The temporal consistency of the reconstructed point clouds allows AI algorithms to learn motion and structural patterns, supporting reliable tracking, behavior analysis, and dynamic scene understanding.
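As a minimal illustration of the segmentation step, the sketch below clusters a single reconstructed frame into object candidates using simple geometric clustering (DBSCAN). In our pipeline learned models take this role; the parameter values here are assumptions for illustration only.

```python
import numpy as np
from sklearn.cluster import DBSCAN

def segment_frame(points, eps=0.02, min_points=20):
    """Group a point cloud (N x 3 array, metres) into object candidates.

    eps controls the neighbourhood radius; min_points sets the minimum cluster size.
    Returns one label per point (-1 marks unclustered noise).
    """
    return DBSCAN(eps=eps, min_samples=min_points).fit_predict(points)

# Hypothetical usage: segment one reconstructed frame
frame = np.random.rand(1000, 3)        # placeholder point cloud
labels = segment_frame(frame)
print("object candidates found:", labels.max() + 1)
```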
3D point cloud reconstruction of a stone drop event, illustrating the spatial distribution and geometry of the stone captured during motion.
After moving objects are captured and reconstructed using synchronized multi-view imaging, it becomes possible to track individual particles across continuous 3D point cloud frames. This capability transforms the point cloud from a static geometric representation into a dynamic, time-aware model of the scene.
Building on this foundation, the project uses temporally consistent 3D point clouds to track top-surface sand movement over time. By following particle trajectories directly in 3D space, the system enables detailed analysis of displacement, velocity, and flow patterns on the sand surface. This approach provides insight into granular motion, erosion processes, and surface deformation that are difficult to quantify using conventional 2D image analysis.
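A minimal sketch of the particle-tracking idea, assuming reconstructed point cloud frames at a known frame interval: each point in one frame is matched to its nearest neighbour in the next frame, giving per-point displacement and velocity. The matching radius and frame interval below are placeholder values.

```python
import numpy as np
from scipy.spatial import cKDTree

def track_particles(frame_a, frame_b, dt, max_dist=0.01):
    """Match points in frame_a to their nearest neighbours in frame_b.

    frame_a, frame_b: (N x 3) and (M x 3) point clouds in metres
    dt: time between frames in seconds
    Returns displacement vectors and speeds for the matched points.
    """
    tree = cKDTree(frame_b)
    dist, idx = tree.query(frame_a, distance_upper_bound=max_dist)
    matched = np.isfinite(dist)                     # points with a neighbour within max_dist
    displacement = frame_b[idx[matched]] - frame_a[matched]
    speed = np.linalg.norm(displacement, axis=1) / dt
    return displacement, speed
```

In practice the matching must also handle particles that appear, disappear, or move farther than the search radius between frames; the sketch above only covers the simple one-to-one case.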
Furthermore, the continuous 3D tracking framework creates new opportunities for advanced analysis. The reconstructed motion data can be used to study interactions between particles, identify regions of instability, and support AI-driven learning of motion behaviors. These capabilities open pathways toward predictive modeling of surface evolution, integration with physics-based simulations, and intelligent monitoring of dynamic environments.
Real camera setup using FLIR Grasshopper 3 cameras and a Jetson Orin Nano
Image motion analysis results: (a) original image; (b) point tracking between two images; (c) displacement visualization.
3D point-based displacement (time progresses from left to right)
Accurate deformation monitoring requires not only high-resolution 3D reconstruction, but also a clear understanding of the uncertainty associated with each reconstructed point. In image-based 3D reconstruction systems, uncertainty arises from multiple sources, including camera sensor noise, feature detection variability, illumination changes, and geometric sensitivity during triangulation. If these uncertainties are not explicitly quantified, small but critical deformations may be obscured or misinterpreted during time-series point cloud comparison.
In this work, uncertainty quantification is integrated directly into the 3D reconstruction pipeline to support reliable time-domain deformation analysis. By repeatedly reconstructing identical scenes under fixed camera configurations, the system isolates intrinsic reconstruction variability caused by sensing and processing noise. Point-wise variance is then computed across hundreds to thousands of repeated reconstructions, providing a quantitative measure of spatial uncertainty in all three dimensions. This approach enables the distinction between true physical deformation and reconstruction-induced variation, which is essential for precision-focused monitoring applications such as retaining wall and slope stability assessment.
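A minimal sketch of the point-wise uncertainty computation, assuming the repeated reconstructions are stored as an array of shape (runs, points, 3) with a consistent point ordering across runs (how that correspondence is established is omitted here):

```python
import numpy as np

def pointwise_uncertainty(reconstructions):
    """Per-point spread across repeated reconstructions of the same static scene.

    reconstructions: array of shape (n_runs, n_points, 3), same point order in every run
    Returns the mean point cloud and the per-axis standard deviation of each point.
    """
    mean_cloud = reconstructions.mean(axis=0)        # (n_points, 3) best estimate
    sigma = reconstructions.std(axis=0, ddof=1)      # (n_points, 3) reconstruction noise
    return mean_cloud, sigma
```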
Furthermore, uncertainty analysis is performed within a consistent global coordinate frame, avoiding post-registration techniques such as ICP that can mask real deformations. By maintaining fixed camera parameters and applying robust triangulation methods, the resulting point clouds remain directly comparable across time. The quantified uncertainty is subsequently leveraged in point cloud comparison and deformation analysis, ensuring that detected changes exceed the inherent noise level of the reconstruction system. This uncertainty-aware framework significantly enhances the reliability and interpretability of time-series 3D point cloud analysis in both laboratory and real-world field environments.
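A minimal sketch of the uncertainty-aware comparison, assuming two epochs are reconstructed in the same global frame with matching point order; a change is flagged only where the displacement exceeds a multiple of the combined per-point noise (the factor k = 3 is an assumption, not a value reported here).

```python
import numpy as np

def detect_deformation(epoch0, epoch1, sigma0, sigma1, k=3.0):
    """Flag points whose displacement exceeds the reconstruction noise level.

    epoch0, epoch1: (N x 3) mean point clouds from two acquisition times
    sigma0, sigma1: (N x 3) per-point standard deviations from repeated reconstructions
    k: significance factor; larger values reduce false detections
    Returns displacement magnitudes and a boolean mask of significant changes.
    """
    displacement = np.linalg.norm(epoch1 - epoch0, axis=1)
    # Combine the noise of both epochs and reduce the per-axis sigma to one value per point
    noise = np.linalg.norm(np.sqrt(sigma0**2 + sigma1**2), axis=1)
    significant = displacement > k * noise
    return displacement, significant
```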
Reference image of the retaining wall
Reconstructed 3D point cloud of the retaining wall
In a real-world field experiment, a synchronized camera array was deployed approximately 8.5 meters from a retaining wall, with a one-meter baseline between cameras. This setup enabled the reconstruction of a surface area of about five square meters.
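For intuition on how the 8.5-meter range and one-meter baseline influence achievable depth precision, the sketch below evaluates the standard stereo depth-error relation δZ ≈ Z²·δd / (f·B). The focal length and disparity precision used are purely hypothetical placeholders, not measured properties of our system.

```python
def depth_precision(range_m, baseline_m, focal_px, disparity_sigma_px):
    """Approximate 1-sigma depth error for a stereo pair (standard relation)."""
    return range_m**2 * disparity_sigma_px / (focal_px * baseline_m)

# Hypothetical parameters for illustration only
print(depth_precision(range_m=8.5, baseline_m=1.0,
                      focal_px=4000.0, disparity_sigma_px=0.05))  # metres
```

The relation shows why precision degrades quadratically with distance and improves with a wider baseline, which motivates the one-meter camera spacing used in this field setup.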
Despite the increased sensing distance and outdoor environmental variability, the system maintained sub-millimeter precision across 1,000 reconstructions. Slightly higher variance compared to controlled indoor experiments was primarily caused by sensor noise, lighting changes, and minor matching inconsistencies. The central region of the reconstruction exhibited higher stability due to greater image overlap, while depth precision remained spatially consistent across the surface. Overall, the results demonstrate that the camera array system delivers robust and reliable 3D reconstruction performance under real-world field conditions.