The current SIGMA prototype consists of three subsystems, each responsible for a stage of the image acquisition process. The subsystems are as follows:
SPAD Focusing
Rastering and Data Acquisition
Image Processing
Each subsystem is controlled by the Raspberry Pi 4 (RPi) using Python, and motion is implemented with a repurposed 3D printer, chosen for its precise, repeatable positioning. The current functional component list is shown below. As a whole, automating the system through the RPi allows for intuitive and consistent operation, significantly improving ease of use for clinical staff.
Figure: Imagivo biopsy imager concept. Focusing on the head of the printer, the fiber optic cable is hard-mounted and secured in place via lens tubing from ThorLabs. The photon collection and computational components are physically separated from the printer to allow for isolation and weight reduction. The stage and head move in tandem to achieve translation in the xy-plane, while the head is able to move independently in the z-axis.
To ensure that the SPAD achieves a consistently focused image despite large variation in specimen size, the distance between the end of the fiber optic cable and the top of the specimen is held constant. The ideal sample-to-fiber distance for maintaining focus was calculated to be 5.8 mm.
The focal distance is set using the z-axis movement of the 3D printer head, coupled with a laser diode and photoresistor, all controlled by the RPi. The laser diode and photoresistor hang from the printer head at a set separation determined through phantom testing, with the beam between them acting as a “tripwire”. When a specimen is placed on the stage, the surgeon initializes the system: the diode is turned on, producing a steady signal from the photoresistor, and the printer head is lowered until the specimen interrupts the beam, at which point the head is stopped. The device is then ready for imaging.
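The lowering-until-interrupted logic can be sketched as follows. This is a minimal simulation of the control loop, not the deployed code: `beam_intact` is a hypothetical sensor callback standing in for a GPIO read of the photoresistor, and on the real system each downward step would be a G-code move sent to the printer.

```python
def find_focus_height(beam_intact, z_start, z_min, step=0.1):
    """Lower the head until the tripwire beam is broken by the specimen.

    beam_intact(z) -> bool: True while the laser still reaches the
    photoresistor at head height z (hypothetical callback; the real
    system would poll a GPIO pin here).
    Returns the head height at which the beam was first interrupted,
    or None if the head reaches its lower travel limit first.
    """
    z = z_start
    while z >= z_min:
        if not beam_intact(z):
            return z            # specimen detected: stop the head here
        z = round(z - step, 6)  # step the head down
    return None                 # no specimen found within travel range

# Simulated specimen whose top surface sits at z = 42.5 mm:
hit = find_focus_height(lambda z: z > 42.5, z_start=100.0, z_min=0.0, step=0.5)
```

Once `hit` is found, the head would be offset so the fiber-to-sample gap equals the calculated 5.8 mm focal distance.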
The light source is a red LED fitted with an excitation filter that narrows its output to the 770–790 nm wavelength band, matching the excitation wavelength (~780 nm) of the Cetuximab IRDye-800CW fluorophore already in routine clinical use. Light from the source is absorbed by the fluorophore in the sample and re-emitted toward the camera at approximately 800 nm. The SPAD is equipped with an emission filter that blocks all light above a wavelength of 810 nm. A motorized aperture placed in front of the SPAD switches between the optimized numerical apertures for the closed- and open-aperture images used for normalization. The signal output by the SPAD via its TTL port consists of pulses at a voltage higher than the Raspberry Pi's GPIO pins can accept. A voltage comparator circuit was designed to mitigate this issue and convert the signal to a compatible binary digital signal, counted via GPIO edge detection. The resulting counts are stored in an image array on the onboard SD card for later analysis.
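The edge-detection counting step amounts to counting low-to-high transitions of the comparator output. A minimal stand-in for the GPIO interrupt logic is sketched below; on the real system each rising edge on a GPIO pin would trigger an interrupt that increments a counter, whereas here we count transitions in a list of sampled logic levels.

```python
def count_rising_edges(samples, threshold=0.5):
    """Count photon pulses in a sampled comparator output.

    Illustrative stand-in for GPIO edge-detection counting: each
    low-to-high transition in `samples` is treated as one detected
    photon. `threshold` digitizes any analog sample values.
    """
    count = 0
    prev = 0
    for s in samples:
        level = 1 if s >= threshold else 0
        if level == 1 and prev == 0:  # rising edge = one detected photon
            count += 1
        prev = level
    return count

# Three pulses of varying width in an otherwise-low signal:
pulses = [0, 1, 1, 0, 0, 1, 0, 1, 1, 1, 0]
n = count_rising_edges(pulses)
```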
To image each pixel, the system rasters the printer head across the specimen while performing photon-based imaging. In this scheme, a fixed dwell time is set per pixel: the printer head sits over each pixel for that duration (25 milliseconds) while photons are accumulated, then moves to the next pixel and repeats the process. The full raster is then repeated for the closed-aperture image. Before rastering begins, the printer head is moved to a corner of the imaging area.
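The visit order for such a raster is commonly a serpentine (boustrophedon) path, which minimizes travel between pixels. The sketch below generates that ordering under the assumption of a rectangular pixel grid starting at a corner; on the real system each coordinate would become a G-code move followed by the 25 ms counting dwell.

```python
def raster_path(n_cols, n_rows):
    """Generate a serpentine raster path over an n_cols x n_rows pixel
    grid, starting from corner (0, 0).

    Even rows are scanned left-to-right, odd rows right-to-left, so the
    head never makes a long return sweep between rows.
    """
    path = []
    for row in range(n_rows):
        cols = range(n_cols) if row % 2 == 0 else range(n_cols - 1, -1, -1)
        for col in cols:
            path.append((col, row))  # visit pixel (col, row), dwell 25 ms
    return path

# 3 x 2 grid: left-to-right on row 0, then right-to-left on row 1.
p = raster_path(3, 2)
```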
After pixel data from the specimen has been collected and stored on the SD card, the RPi runs the analysis portion of the code. This code was originally written in MATLAB, so the MATLAB Compiler Runtime is used to allow the compiled routines to be called from Python without requiring a full MATLAB installation.
The surgeon using the system will be able to alternate between a probability heat map and a depth map. The fluorescence signal may also be integrated with the white-light image as an overlay to aid data visualization, as mentioned previously. The depth map visualizes how the obtained signal varies with margin width: the wider the margin, the weaker the visualized fluorescence. The probability heat map provides the surgeon with a way to visualize the chance that a margin is positive, close, or clear.
The ratiometric-measure thresholds for the depth and heat maps will be obtained using lookup tables defined by functions. These functions are characterized by the signal obtained when the dual-aperture normalization technique is applied to phantoms. Because all the fluorophore information is known when the phantoms are made, and their optical properties are set to match those expected of the excised tissue, the trend of the ratiometric measure with depth can be determined. To produce the depth map, the matrix representing the color channels of the fluorescence image is modified according to this relationship between the ratiometric measure and depth. For the heat map, the threshold and color scaling of the fluorescence signal are set using the same relationship.
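A lookup table of phantom calibration points could be applied per pixel roughly as follows. This is a sketch under stated assumptions: the calibration pairs below are hypothetical placeholders, not real phantom data, and linear interpolation between table entries is one reasonable choice, not necessarily the method used.

```python
def depth_from_ratio(ratio, table):
    """Estimate margin depth from the ratiometric measure via a lookup
    table, linearly interpolating between calibration points.

    `table` is a list of (ratio, depth_mm) pairs measured on phantoms,
    sorted by ascending ratio. Ratios outside the table are clamped to
    its endpoints.
    """
    if ratio <= table[0][0]:
        return table[0][1]
    if ratio >= table[-1][0]:
        return table[-1][1]
    for (r0, d0), (r1, d1) in zip(table, table[1:]):
        if r0 <= ratio <= r1:
            t = (ratio - r0) / (r1 - r0)  # interpolation fraction
            return d0 + t * (d1 - d0)

# Hypothetical phantom calibration: the ratio falls as the margin deepens.
calib = [(0.2, 10.0), (0.5, 5.0), (0.8, 1.0)]
d = depth_from_ratio(0.65, calib)
```

Applying this function to every pixel's normalized ratio yields the depth map; thresholding the same values yields the positive/close/clear bins of the heat map.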