Joint Kinematics through Image Matching

Fluoroscopic Analysis of Knee Joint Kinematics

Participants

  • Faculty: Dr. Elisa Barney Smith, Dr. Michele Sabick, Dr. Amit Jain

  • Students: Renu Ramanatha, Nazia Sarang, Charles Scott

Collaborators

  • Center for Orthopedic & Biomechanics Research

Description

Female athletes sustain anterior cruciate ligament (ACL) injuries at rates three to seven times higher than male athletes in the same sports. Several studies have recently suggested that differences between the genders in the mechanics of landing from jumps may result in increased ACL loads in female athletes. To date, no studies have quantified the internal kinematics of the knee joint during landings in athletes of either gender. Because the ACL connects the femur and tibia at the knee joint, relative motion between these two bones during landing may predispose the ligament to injury. Accurate bony motion data cannot be collected using standard non-invasive motion capture techniques. By combining computer vision and image processing algorithms, we can develop a minimally invasive technique to quantify joint motion in live human subjects. This technique matches 3-D images of human joints with 2-D video fluoroscopy (video X-ray) image sequences to track the motions of bones at a joint in 3-D very accurately. It will enable researchers to collect accurate, three-dimensional kinematic data of bones and joints in vivo; to quantify how the bones in a joint move relative to one another during dynamic activities such as running, jumping, and cutting, which are of particular interest in the study of ACL injury mechanisms in athletes; to quantify joint motions in people with movement or skeletal abnormalities; and to study both normal and pathologic motion in a wide range of skeletal joints. Knowledge of the exact spatial position of the two joint bones will allow biomedical researchers to develop techniques to diagnose the extent of joint injuries.

The data for this project consists of sets of CT images representing a 3-D volume, and 2-D fluoroscopy image sequences of the same joint. The CT image has already been processed to extract a 3-D solid model of the bones. The procedure is to:

  • Segment the 3-D CT volume to separate the two bones into two separate 3-D CT volumes

  • Match the 3-D solid models to the bones in the segmented CT ‘images’

  • Use projection software developed for this research to produce 2-D simulated fluoroscopy images through a ray-tracing process.

  • Use simulated annealing or another optimization procedure to iteratively adjust the pose of each bone model in 6 degrees of freedom (3 position and 3 orientation) until features extracted from a projection of the bone model match features detected in the fluoroscopy image. From this, the exact spatial position of the two joint bones is known.
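The pose-search step above can be sketched as a generic simulated-annealing loop. The cost function, which would compare features of the projected bone model against features of the fluoroscopy frame, is left abstract here, and all parameter values are illustrative rather than those used in the project:

```python
import math
import random

def simulated_annealing(cost, pose0, step=1.0, t0=1.0, cooling=0.95, iters=500, seed=0):
    """Minimize `cost` over a 6-DOF pose [tx, ty, tz, rx, ry, rz].

    `cost` stands in for a feature-match score between a projection of
    the bone model at the candidate pose and the fluoroscopy image
    (lower is better); here it is just an arbitrary callable.
    """
    rng = random.Random(seed)
    pose, best = list(pose0), list(pose0)
    c = best_c = cost(pose)
    t = t0
    for _ in range(iters):
        # Perturb one randomly chosen degree of freedom.
        cand = list(pose)
        i = rng.randrange(6)
        cand[i] += rng.gauss(0.0, step)
        cc = cost(cand)
        # Metropolis criterion: always accept improvements,
        # occasionally accept worse poses to escape local minima.
        if cc < c or rng.random() < math.exp((c - cc) / max(t, 1e-12)):
            pose, c = cand, cc
            if cc < best_c:
                best, best_c = list(cand), cc
        t *= cooling  # geometric cooling schedule
    return best, best_c
```

With a toy quadratic cost standing in for the image-match score, the loop converges toward the target pose; in the real system each cost evaluation would require one ray-traced projection.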

Parts of all these steps have been developed through MS and undergraduate research projects. They still need to be expanded for robustness and integrated to form a complete system. Some other portions of the project that remain include:

Choosing better features for image matching – Currently, edges of the fluoroscopic and projected CT images are matched in both grayscale and bilevel form to determine quality of fit. This leaves out a large portion of the image information. It is thought that filled bone images or region features may help steer the search into the right portion of the feature space; the exact match could then be refined with edges. Obtaining a model of the overall orientation of these long bones in 2-D, and making preliminary matches to the bone orientation in 3-D, would also assist the fine matching search.
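As an illustration of the current edge-based approach, the following is a minimal sketch (not the project's actual implementation) of a Sobel edge map and a normalized cross-correlation score between two edge maps:

```python
import math

def sobel_edges(img):
    """Gradient-magnitude edge map of a 2-D grayscale image (list of lists)."""
    h, w = len(img), len(img[0])
    out = [[0.0] * w for _ in range(h)]
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            gx = (img[y-1][x+1] + 2*img[y][x+1] + img[y+1][x+1]
                  - img[y-1][x-1] - 2*img[y][x-1] - img[y+1][x-1])
            gy = (img[y+1][x-1] + 2*img[y+1][x] + img[y+1][x+1]
                  - img[y-1][x-1] - 2*img[y-1][x] - img[y-1][x+1])
            out[y][x] = math.hypot(gx, gy)
    return out

def ncc(a, b):
    """Normalized cross-correlation between two equal-size images;
    1.0 means a perfect match, near 0 means no correlation."""
    fa = [v for row in a for v in row]
    fb = [v for row in b for v in row]
    ma, mb = sum(fa) / len(fa), sum(fb) / len(fb)
    num = sum((x - ma) * (y - mb) for x, y in zip(fa, fb))
    da = math.sqrt(sum((x - ma) ** 2 for x in fa))
    db = math.sqrt(sum((y - mb) ** 2 for y in fb))
    return num / (da * db) if da and db else 0.0
```

A filled-region or orientation feature would simply be an additional term in the overall fit score, computed before the edge-level comparison narrows the search.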

Separating overlapping X-ray components – There is interest in using this methodology for canine ACL research. Canines are less cooperative about moving through a prescribed motion, but images can be captured while the dog walks on a treadmill in front of the camera. The image quality is good, but the two legs pass in front of each other, and they must be segmented from one another to be useful. This involves image segmentation and tracking.
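A simple starting point for keeping the two legs apart, sketched here under the assumption that a binary mask of the limbs has already been produced, is connected-component labeling followed by centroid extraction for nearest-centroid tracking across frames:

```python
from collections import deque

def label_components(mask):
    """4-connected component labeling of a binary image (lists of 0/1).
    Returns a label image and the number of components found."""
    h, w = len(mask), len(mask[0])
    labels = [[0] * w for _ in range(h)]
    n = 0
    for y in range(h):
        for x in range(w):
            if mask[y][x] and not labels[y][x]:
                n += 1
                q = deque([(y, x)])
                labels[y][x] = n
                while q:  # breadth-first flood fill
                    cy, cx = q.popleft()
                    for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        ny, nx = cy + dy, cx + dx
                        if (0 <= ny < h and 0 <= nx < w
                                and mask[ny][nx] and not labels[ny][nx]):
                            labels[ny][nx] = n
                            q.append((ny, nx))
    return labels, n

def centroids(labels, n):
    """Centroid (y, x) of each labeled region; matching centroids between
    consecutive frames gives a crude per-leg track."""
    sums = {i: [0.0, 0.0, 0] for i in range(1, n + 1)}
    for y, row in enumerate(labels):
        for x, l in enumerate(row):
            if l:
                s = sums[l]
                s[0] += y; s[1] += x; s[2] += 1
    return {i: (s[0] / s[2], s[1] / s[2]) for i, s in sums.items()}
```

When the legs actually overlap, the components merge and a motion model (see the tracking discussion below) is needed to keep the identities apart; this sketch covers only the separated case.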

Bone coordinate system matching – Once the 2-D and 3-D images are matched, the resulting data must be interpreted. For this to happen, the 3-D bones need to be given a coordinate system, and that coordinate system has to be mapped to real space. A coordinate system has been defined. Features on the 3-D bone image need to be extracted to map the image to the coordinate system. Often the coordinate system is defined using features of the bone that will not be available within the field of view of our 3-D images; this too must be compensated for.
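Once corresponding landmark features have been extracted, mapping the bone image into the defined coordinate system amounts to a rigid (rotation plus translation) fit. A standard Kabsch/Procrustes sketch, assuming paired 3-D landmarks are available (the landmark choice itself is outside this snippet):

```python
import numpy as np

def rigid_transform(src, dst):
    """Least-squares rigid transform (R, t) mapping landmark set src -> dst.

    src and dst are (N, 3) arrays of corresponding 3-D landmarks, e.g.
    anatomical features on the bone image and their positions in the
    defined bone coordinate system. Returns R (3x3) and t (3,) such
    that dst ~= src @ R.T + t.
    """
    src = np.asarray(src, float)
    dst = np.asarray(dst, float)
    cs, cd = src.mean(axis=0), dst.mean(axis=0)
    # Kabsch: SVD of the cross-covariance of the centered point sets.
    H = (src - cs).T @ (dst - cd)
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))   # guard against a reflection
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = cd - R @ cs
    return R, t
```

When the defining landmarks fall outside the imaged field of view, surrogate features would have to be substituted and this fit computed against them instead, which is part of the compensation mentioned above.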

[Figure: the 3-D coordinate system of a knee]

Tracking – Once the 2-D images can be reliably matched to the 3-D image, the outputs must be tracked. This has two components. The first is tracking in imaging coordinate space: if the elevation, azimuth, and rotation are determined in one frame, they should not change much by the next frame, so the hypothesis about where the search algorithm should begin should be informed by the previous frames. Then, once the bones are mapped to bone coordinate space, those poses must be tracked so the dynamics of the motion can be analyzed.
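The first tracking component, seeding each frame's search from the previous frames, can be sketched as a constant-velocity motion prior. This is a deliberately simple assumption; the project may eventually use a richer motion model:

```python
class PoseTracker:
    """Seeds each frame's pose search with a constant-velocity
    extrapolation of the previous two recovered 6-DOF poses."""

    def __init__(self):
        self.history = []  # list of 6-element poses, one per frame

    def update(self, pose):
        """Record the pose recovered for the latest frame."""
        self.history.append(list(pose))

    def predict(self):
        """Initial guess for the next frame: last pose plus the last
        frame-to-frame change. Falls back to the last pose (or None)
        when fewer than two frames have been seen."""
        if len(self.history) < 2:
            return self.history[-1] if self.history else None
        prev, curr = self.history[-2], self.history[-1]
        return [2 * c - p for p, c in zip(prev, curr)]
```

The prediction would be handed to the optimizer as its starting pose, shrinking the region of the 6-DOF space that must be searched in each new frame.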

Magnetic Resonance Image-Matching System

This project is similar to the fluoroscopic matching problem: it aims to migrate the fluoroscopy/CT approach to images from magnetic resonance imaging (MRI). MRI is preferable because it is less dangerous to the subjects than the CT and fluoroscopy modalities, which use X-rays.

For MRI image registration, a sequence of MR images is taken at the same location in 3-D space while the subject moves the joint in the frame. Each MR image is essentially a slice of the 3-D space. Instead of matching this "2D" image to the 3-D volume through a projective model, a model using slices is used. A similar search algorithm can be used. Open project components include:
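The slice-based model can be illustrated by resampling a 2-D plane out of the 3-D volume at a candidate pose; the optimizer would then compare this synthetic slice against the acquired MR image. A nearest-neighbor sketch, assuming the slice plane is expressed in voxel coordinates:

```python
import numpy as np

def extract_slice(volume, origin, u, v, shape):
    """Resample a 2-D slice from a 3-D volume by nearest-neighbor lookup.

    The plane is defined by an origin voxel and two in-plane axis
    vectors u, v (assumed roughly unit-length, in voxel units);
    `shape` is (rows, cols) of the output slice. Voxels sampled
    outside the volume are left at 0.
    """
    vol = np.asarray(volume, float)
    origin, u, v = (np.asarray(a, float) for a in (origin, u, v))
    rows, cols = shape
    out = np.zeros((rows, cols))
    for r in range(rows):
        for c in range(cols):
            p = np.rint(origin + r * u + c * v).astype(int)
            if all(0 <= p[i] < vol.shape[i] for i in range(3)):
                out[r, c] = vol[tuple(p)]
    return out
```

In the search loop, the six pose parameters would determine `origin`, `u`, and `v`, replacing the ray-traced projection used in the fluoroscopy/CT system.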

Segmenting MR images – Both the "2D" slice and the 3-D volume need to be segmented to extract just the bone portion. The flesh is expected to deform significantly during the motion activity and cannot be used for matching. MR imaging is very good at producing high contrast in soft tissue, but less well suited to hard tissue. Some imaging pulse parameters have been found that produce adequate contrast, but segmentation is still needed. The resolution of these images is also lower than that of fluoroscopic images.
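One candidate starting point for this segmentation is Otsu's global threshold, sketched below; whether a single global threshold suffices given MR bone contrast is an open question, and the 0–255 intensity range is an assumption:

```python
def otsu_threshold(pixels, bins=256):
    """Otsu's method: pick the threshold maximizing between-class variance.

    `pixels` is a flat list of intensities assumed scaled to 0..bins-1.
    Returns the threshold value; intensities above it would be taken
    as one class (e.g. candidate bone) and the rest as the other.
    """
    hist = [0] * bins
    for p in pixels:
        hist[int(p)] += 1
    total = len(pixels)
    sum_all = sum(i * h for i, h in enumerate(hist))
    w0 = sum0 = 0
    best_t, best_var = 0, -1.0
    for t in range(bins):
        w0 += hist[t]          # pixels at or below t
        if w0 == 0:
            continue
        w1 = total - w0        # pixels above t
        if w1 == 0:
            break
        sum0 += t * hist[t]
        m0 = sum0 / w0
        m1 = (sum_all - sum0) / w1
        var = w0 * w1 * (m0 - m1) ** 2   # between-class variance
        if var > best_var:
            best_var, best_t = var, t
    return best_t
```

On real MR slices this would likely need to be applied per region, or replaced by a method that exploits the pulse-parameter contrast mentioned above.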

Selecting features – The slices resampled from the 3-D MRI and the acquired 2-D MR images need to be compared to determine when a suitable match is present. Features, and metrics for the quality of match, need to be determined.
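One candidate match metric, common in multimodal image registration though not established for this project, is histogram-based mutual information; a minimal sketch:

```python
import math
from collections import Counter

def mutual_information(a, b, bins=8, lo=0.0, hi=255.0):
    """Mutual information between two images given as flat intensity lists.

    Intensities (assumed in [lo, hi]) are quantized into joint histogram
    bins; higher MI means the images explain each other better, which is
    the sense in which it can score slice-to-volume match quality.
    """
    def q(v):
        return min(bins - 1, int((v - lo) / (hi - lo) * bins))

    pairs = Counter((q(x), q(y)) for x, y in zip(a, b))
    n = sum(pairs.values())
    pa = Counter(k[0] for k in pairs.elements())  # marginal of image a
    pb = Counter(k[1] for k in pairs.elements())  # marginal of image b
    mi = 0.0
    for (i, j), c in pairs.items():
        pij = c / n
        mi += pij * math.log(pij * n * n / (pa[i] * pb[j]))
    return mi
```

Identical images score at their entropy, while statistically independent images score near zero, giving a scale against which candidate features and poses can be ranked.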

Publications

  • Nazia Sarang, “Biomedical matching GUI development,” Master’s Project, Computer Science, Boise State University, May 2012.

  • Renu Ramanatha, “A Parallel Implementation for Analysis of CT/Fluoroscopy Image Registration,” Master’s Thesis, Computer Science, Boise State University, December 2009.

  • Charles Scott and Elisa H. Barney Smith, “An Unsupervised Fluoroscopic Analysis of Knee Joint Kinematics,” Proc. 19th IEEE Symposium on Computer-Based Medical Systems, Salt Lake City, Utah, June 2006, pp. 377-380.