My background includes more than a decade of interdisciplinary academic research in radiation therapy, medical imaging, computer vision, and control systems. My research has mainly focused on uncertainty quantification, robust planning, machine learning, high-performance computing, and treatment planning optimization.
Prior to my medical physics residency at the Emily Couric Clinical Cancer Center, I worked as professional research staff in the Department of Radiation Oncology at the University of Virginia. The research focused on modeling and simulating uncertainties and on robust planning techniques in external beam radiation therapy. In 2018, this project was awarded an NIH R01 grant.
Since simulating different uncertainty sources is computationally demanding, we developed a stand-alone GPU-accelerated software package, the radiation therapy robustness analyzer (RTRA), that quickly simulates the prevalent uncertainty sources. The software works in two modes: a graphical user interface (GUI) mode and a scripted mode. In the GUI mode, a simulation is run after selecting the pertinent DICOM/Pinnacle files, the ROIs to analyze, the uncertainty model, and the virtual treatment simulation parameters. In the scripted mode, the software interfaces with the Pinnacle treatment planning system and automatically generates a plan robustness report for a given treatment plan and simulation parameters.
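As a rough illustration of the kind of virtual treatment simulation RTRA performs (this is not RTRA's actual interface, and the dose grid, uncertainty magnitude, and coverage criterion below are placeholder assumptions), the sketch samples rigid setup errors, shifts a dose grid accordingly, and summarizes target coverage across scenarios:

```python
import numpy as np
from scipy.ndimage import shift as nd_shift

rng = np.random.default_rng(0)

# Placeholder 3D dose grid (Gy) and a binary target mask on the same voxel grid.
dose = np.zeros((50, 50, 50))
dose[15:35, 15:35, 15:35] = 60.0            # idealized uniform 60 Gy in the high-dose region
target = np.zeros(dose.shape, dtype=bool)
target[17:33, 17:33, 17:33] = True

# Sample systematic setup errors (in voxels) and recompute a coverage metric per scenario.
n_scenarios, sigma_voxels = 200, 1.5
d95 = np.empty(n_scenarios)
for s in range(n_scenarios):
    offset = rng.normal(0.0, sigma_voxels, size=3)   # rigid setup error for this scenario
    perturbed = nd_shift(dose, offset, order=1)      # dose "seen" by the shifted patient
    d95[s] = np.percentile(perturbed[target], 5)     # D95: dose covering 95% of the target

print(f"D95 over scenarios: {d95.mean():.1f} +/- {d95.std():.1f} Gy")
print(f"P(D95 >= 57 Gy) = {(d95 >= 57.0).mean():.2f}")
```

The actual software simulates many more uncertainty sources on the GPU; this toy version only conveys the scenario-sampling idea behind a robustness report.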
Detecting delineation errors is a tedious process, yet such errors propagate through the entire radiation treatment and can affect the treatment outcome. KBQC extensively employs machine learning techniques that learn from historical data of previously treated patients in order to identify gross errors in regions of interest (ROIs). Our group has applied two methods: one employs a statistical anomaly detection method, and the other utilizes a classification-based anomaly detection scheme.
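A minimal sketch of the statistical flavor of this idea, using hypothetical geometric contour features and a Mahalanobis-distance outlier test as a stand-in for the models our group actually used:

```python
import numpy as np
from scipy.stats import chi2

# Hypothetical geometric features extracted per ROI from historical plans:
# [volume_cc, surface_area_cm2, centroid_offset_mm, sphericity]
rng = np.random.default_rng(1)
historical = rng.normal([35.0, 55.0, 4.0, 0.8], [5.0, 6.0, 1.5, 0.05], size=(300, 4))

mean = historical.mean(axis=0)
cov_inv = np.linalg.inv(np.cov(historical, rowvar=False))

def is_gross_error(features, alpha=0.001):
    """Flag an ROI whose features are statistical outliers w.r.t. the historical cohort."""
    d = features - mean
    m2 = float(d @ cov_inv @ d)                       # squared Mahalanobis distance
    return m2 > chi2.ppf(1.0 - alpha, df=len(features)), m2

# A plausible contour vs. one with grossly wrong volume (e.g., the wrong organ selected).
print(is_gross_error(np.array([36.0, 54.0, 3.5, 0.79])))
print(is_gross_error(np.array([120.0, 95.0, 15.0, 0.55])))
```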
The radiation therapy community needs more evidence to fully trust auto-segmentation methods. To address this need, we developed a novel method to inter-compare manual and auto-segmented delineations in the presence of the other uncertainties that occur in the radiation treatment process. For a population of prostate cancer patients, our method quantifies the effect of delineation uncertainty due to auto-segmentation on probabilistic dosimetric and biological indices. The method helps build confidence in auto-segmentation algorithms, which ultimately saves hours of tedious and error-prone manual contouring in the clinic.
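A toy example of the kind of probabilistic index such a comparison relies on, using the generalized equivalent uniform dose (gEUD) as the biological index; the per-scenario doses below are entirely synthetic placeholders, whereas in the actual study they come from virtual treatment simulations of each uncertainty scenario:

```python
import numpy as np

def geud(dose_voxels, a):
    """Generalized equivalent uniform dose: (mean(d_i^a))^(1/a)."""
    return float(np.mean(np.power(dose_voxels, a)) ** (1.0 / a))

rng = np.random.default_rng(2)

# Placeholder per-scenario target doses (Gy) for the manual and auto-segmented ROI.
n_scenarios, n_voxels = 200, 5000
dose_manual = rng.normal(78.0, 1.0, size=(n_scenarios, n_voxels))
dose_auto = rng.normal(77.5, 1.2, size=(n_scenarios, n_voxels))

a_tumor = -10          # negative 'a' emphasizes cold spots, as is typical for targets
geud_manual = np.array([geud(d, a_tumor) for d in dose_manual])
geud_auto = np.array([geud(d, a_tumor) for d in dose_auto])

# Probabilistic index: chance that each contour variant meets a (placeholder) gEUD goal.
goal = 76.0
print(f"P(gEUD >= {goal} Gy | manual) = {(geud_manual >= goal).mean():.2f}")
print(f"P(gEUD >= {goal} Gy | auto)   = {(geud_auto >= goal).mean():.2f}")
```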
Numerous authors have investigated differences between auto-segmented and manually drawn contours, mainly assessing accuracy through contour-based similarity coefficients. These similarity analyses reveal how the auto-segmented ROIs compare with the manual gold standard; however, they do not assess the adequacy of the auto-segmented ROIs for radiation therapy treatment. The Dice similarity coefficient (DSC) is one of the most commonly used metrics for assessing the quality of segmented ROIs, but it lacks spatial information: infinitely many configurations with different dosimetric outcomes can yield the same DSC. In our paper, the intention was to analyze the auto-segmentation algorithm based on dosimetric and biological metrics without relying on shape similarity metrics; for reference, a DSC analysis is provided in the supplemental material.
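The following toy example illustrates the spatial blindness of the DSC: two candidate contours shifted in opposite directions have identical DSC values against the reference, even though they would overlap a nearby organ at risk very differently (the geometry is made up purely for illustration):

```python
import numpy as np

def dsc(a, b):
    """Dice similarity coefficient between two binary masks."""
    return 2.0 * np.logical_and(a, b).sum() / (a.sum() + b.sum())

grid = np.zeros((100, 100), dtype=bool)
reference = grid.copy()
reference[40:60, 40:60] = True                      # reference (manual) contour

cand_left = np.roll(reference, -5, axis=1)          # candidate shifted 5 px toward an OAR
cand_right = np.roll(reference, +5, axis=1)         # candidate shifted 5 px away from it

oar = grid.copy()
oar[40:60, 25:38] = True                            # hypothetical organ at risk to the left

print(f"DSC(left-shift)  = {dsc(reference, cand_left):.3f}")
print(f"DSC(right-shift) = {dsc(reference, cand_right):.3f}")   # identical DSC
print(f"OAR overlap: left = {np.logical_and(cand_left, oar).sum()} voxels, "
      f"right = {np.logical_and(cand_right, oar).sum()} voxels")  # very different dosimetrically
```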
This research focuses on enhancing existing external beam inverse treatment planning techniques, often referred to as automated treatment planning techniques, used as part of the medical procedure of radiotherapy. A radiation treatment plan is designed collaboratively by a team of radiation oncologists, radiation therapists, medical physicists, and medical dosimetrists. The procedure generally starts with imaging the patient's organs and tumor(s) using Computed Tomography (CT), possibly complemented by other modalities through image registration. After the target volumes and organs at risk are delineated, a radiation oncologist chooses an appropriate state-of-the-art technique such as Intensity Modulated Radiation Therapy (IMRT) or Volumetric Modulated Arc Therapy (VMAT) to treat the patient. The selected technique determines the settings of a sophisticated computer-controlled device called a linear accelerator (Linac) that delivers high-energy X-rays to the region of the patient's tumor in order to destroy cancer cells while sparing the surrounding normal tissue.
In prevailing inverse treatment planning systems (TPS), it can take dosimetrists several hours to find a viable set of parameters that enforces the prescribed clinical protocol for difficult cases. To expedite the process and make it more intuitive for planners, the Reduced Order Constrained Optimization (ROCO) paradigm was developed for IMRT at Rensselaer Polytechnic Institute (RPI) in collaboration with researchers at Memorial Sloan-Kettering Cancer Center (MSKCC). Using dimensionality reduction techniques from machine learning, ROCO enables dosimetrists to quickly devise clinically relevant IMRT treatment plans with less intervention. In this work, we pursued the following objectives:
We validated the process in the newly adopted TPS at MSKCC, namely Eclipse from Varian Medical Systems. The goal was to make IMRT ROCO available to dosimetrists through a user-friendly graphical user interface that facilitates the planner's interaction with the TPS via the Eclipse Application Programming Interface. To harness all the available computational resources of multi-core CPUs, highly vectorized and threaded linear algebra routines were added to IMRT ROCO. The faster and more robust design made it possible to verify the hypothesis that IMRT ROCO significantly enhances planning speed in a clinical setting. [Talk and poster on ROCO's integration in the Eclipse TPS.]
We developed a new iterative voxel-based ROCO method that automatically accommodates clinical constraints in an IMRT planning problem. Control system theory was used to design a PID controller that automatically navigates the solution space until convergence; a simplified sketch of this feedback idea is shown below.
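The sketch below is only an illustration of the feedback principle, not the actual voxel-based ROCO implementation: a PID loop adjusts a hypothetical penalty weight until a simulated OAR dose metric converges to its clinical limit, with a toy surrogate standing in for the plan re-optimization step.

```python
# Illustrative PID-driven constraint tuning; the "plant" (how the OAR mean dose responds
# to the penalty weight) is an assumed toy surrogate, not the actual ROCO optimizer.

def oar_mean_dose(weight):
    """Toy stand-in for re-optimizing the plan with a given OAR penalty weight (Gy)."""
    return 40.0 / (1.0 + 0.05 * weight)      # dose decreases as the penalty weight grows

def tune_weight(limit_gy=20.0, kp=0.8, ki=0.2, kd=0.1, max_iters=50, tol=0.05):
    weight, integral, prev_err = 1.0, 0.0, 0.0
    for it in range(max_iters):
        dose = oar_mean_dose(weight)
        err = dose - limit_gy                # positive error means the limit is violated
        if abs(err) < tol:
            break
        integral += err
        derivative = err - prev_err
        prev_err = err
        weight = max(0.0, weight + kp * err + ki * integral + kd * derivative)
    return weight, dose, it

weight, dose, iters = tune_weight()
print(f"converged after {iters} iterations: weight = {weight:.1f}, "
      f"OAR mean dose = {dose:.2f} Gy")
```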
From May 2009 to September 2013, I worked on interdisciplinary research projects involving agent-based frameworks for robotics and space situational awareness applications in the ECE department of the University of Wyoming under the supervision of Prof. John McInroy. Each project sought to harness the strength of multi-agent systems to enhance the quality of sensory information collaboratively collected by the system components. The improvement is mainly achieved by designing planning schemes that account for the key elements contributing to overall system performance. Path planning, patrolling, collaborative control schemes, combinatorial optimization, and robust sensor allocation were indispensable parts of this work.
In this work, we proposed a new sensor planning scheme (ILP-Greedy) to distribute measurement tasks among a set of moving cameras. The scheme employs a new systematic method that finds a near-optimal solution to the corresponding sensor allocation problem, expressed as a max-min combinatorial optimization program. It combines convex and greedy optimization methods to reach solutions quickly, with a certificate of optimality, even when the number of unknowns is in the millions. The most distinct advantage of the proposed method over other methods is therefore that it can be applied to large-scale problems with many objects and cameras. The following animation illustrates the result of the planning algorithm applied to a fairly small Space Situational Awareness application in low Earth orbit (LEO). In this simulation, eight Resident Space Objects (RSOs) are collectively characterized by three observer satellites. The Qjk matrix denotes the cumulative observation qualities of two different sides of the RSOs.
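A highly simplified sketch of the max-min flavor of this allocation problem (not the actual ILP-Greedy formulation): camera observation slots are greedily assigned to whichever object currently has the lowest accumulated quality, and an LP relaxation solved with scipy provides an upper bound that certifies how far the greedy solution can be from optimal. The quality matrix is random placeholder data.

```python
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(3)
n_slots, n_objects = 30, 8                 # camera/time "slots" and objects to observe
q = rng.uniform(0.1, 1.0, size=(n_slots, n_objects))   # placeholder observation qualities

# --- Greedy: always serve the currently worst-observed object with its best free slot.
accumulated = np.zeros(n_objects)
free = set(range(n_slots))
while free:
    k = int(np.argmin(accumulated))                     # worst-off object
    j = max(free, key=lambda s: q[s, k])                # best remaining slot for it
    accumulated[k] += q[j, k]
    free.remove(j)
greedy_value = accumulated.min()

# --- LP relaxation: maximize z s.t. each object's (fractional) accumulated quality >= z.
n_x = n_slots * n_objects
c = np.zeros(n_x + 1)
c[-1] = -1.0                                            # minimize -z
A, b = [], []
for k in range(n_objects):                              # z - sum_j q[j,k] x[j,k] <= 0
    row = np.zeros(n_x + 1)
    row[[j * n_objects + k for j in range(n_slots)]] = -q[:, k]
    row[-1] = 1.0
    A.append(row)
    b.append(0.0)
for j in range(n_slots):                                # each slot used at most once
    row = np.zeros(n_x + 1)
    row[j * n_objects:(j + 1) * n_objects] = 1.0
    A.append(row)
    b.append(1.0)
res = linprog(c, A_ub=np.array(A), b_ub=np.array(b),
              bounds=[(0, 1)] * n_x + [(0, None)], method="highs")
upper_bound = -res.fun

print(f"greedy min quality = {greedy_value:.3f}, LP upper bound = {upper_bound:.3f}")
print(f"optimality gap certificate <= {upper_bound - greedy_value:.3f}")
```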
We have developed a robust optimization-based method to design orbits on which the sensory perception of the desired physical quantities is maximized. The method provides a convenient way to incorporate various constraints imposed by many spacecraft missions, such as collision avoidance, co-orbital configuration, altitude, and frozen-orbit constraints, as well as Sun-synchronous orbit requirements. The study specifically investigated designing orbits for constrained visual sensor planning applications as a case study. For this purpose, the key elements of image formation in such vision systems are considered, and the relevant factors are used to define a perception quality metric. The effectiveness of the proposed method is confirmed for several scenarios in low and medium Earth orbits, as well as a challenging Space-Based Space Surveillance program application.
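A toy, single-variable version of the idea (nothing like the actual robust formulation): pick a circular orbit altitude that maximizes a made-up perception-quality proxy trading off coverage against resolution, subject to altitude bounds, and report the Sun-synchronous inclination implied by the standard J2 nodal-precession relation. The proxy metric and the altitude bounds are assumptions for illustration only.

```python
import numpy as np
from scipy.optimize import minimize_scalar

MU = 398600.4418       # km^3/s^2, Earth's gravitational parameter
RE = 6378.137          # km, Earth equatorial radius
J2 = 1.08263e-3
SS_RATE = 2.0 * np.pi / (365.2422 * 86400.0)    # rad/s, required Sun-synchronous nodal rate

def sun_sync_inclination(alt_km):
    """Inclination making a circular orbit Sun-synchronous (standard J2 relation)."""
    a = RE + alt_km
    n = np.sqrt(MU / a**3)
    cos_i = -SS_RATE / (1.5 * n * J2 * (RE / a) ** 2)
    return np.degrees(np.arccos(cos_i))

def perception_quality(alt_km):
    """Made-up proxy: swath coverage grows with altitude, resolution degrades with it."""
    coverage = alt_km                              # ~ swath width for a fixed field of view
    resolution_penalty = 1.0 + (alt_km / 700.0) ** 2
    return coverage / resolution_penalty

res = minimize_scalar(lambda h: -perception_quality(h),
                      bounds=(500.0, 900.0), method="bounded")
best_alt = res.x
print(f"best altitude ~ {best_alt:.0f} km, "
      f"Sun-synchronous inclination ~ {sun_sync_inclination(best_alt):.1f} deg")
```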
We developed a new multi-purpose planning scheme that effectively solves patrolling and constrained sensor planning problems for large-scale multi-robot systems. This need arises when many mobile robots perform a patrolling mission while simultaneously gathering visual data from predefined objects. The proposed technique is a sequential scheme that allows compromises between trajectory length and observation quality: all the robots' trajectories are iteratively modified until quality images of all sides of all objects can be collectively obtained. The robots can move in full six-dimensional space, and both visual diffraction and occlusion are considered. Each robot is equipped with a pan/tilt camera, and all the camera angles needed to capture these images as the robots move along their trajectories are calculated with a new, fast algorithm. Using this method as part of the proposed multi-objective planning scheme substantially improves the perception quality. Part of this work was published in a paper at CASE 2013.
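As a small illustration of the camera-pointing subproblem (a much simplified stand-in for the fast angle-computation algorithm described above), the sketch below computes the pan and tilt angles that aim a pan/tilt camera at a target point from positions along a robot's trajectory; the frame conventions and the trajectory are assumptions.

```python
import numpy as np

def pan_tilt_to_target(cam_pos, target_pos):
    """Pan/tilt angles (rad) that point a camera at a target.

    Assumed convention: pan about the world z-axis measured from the x-axis,
    tilt measured up from the horizontal (x-y) plane.
    """
    d = np.asarray(target_pos, float) - np.asarray(cam_pos, float)
    pan = np.arctan2(d[1], d[0])
    tilt = np.arctan2(d[2], np.hypot(d[0], d[1]))
    return pan, tilt

# Camera angles along a (placeholder) straight-line robot trajectory watching one object.
target = np.array([10.0, 5.0, 2.0])
for t in np.linspace(0.0, 1.0, 5):
    cam = np.array([0.0, 0.0, 1.0]) + t * np.array([8.0, 0.0, 0.0])
    pan, tilt = pan_tilt_to_target(cam, target)
    print(f"t={t:.2f}: pan={np.degrees(pan):6.1f} deg, tilt={np.degrees(tilt):5.1f} deg")
```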
We also developed a non-myopic planning scheme that robustly maximizes the quality of the acquired information in an uncertain multi-camera, multi-target vision system. To devise the robust plan, the probabilistic uncertainties associated with the system states are propagated through a nonlinear quality metric using the Unscented Transform. The metric accounts for the factors that affect observation quality for Pan-Tilt-Zoom cameras, such as resolving ability as a function of distance, occlusion, and the observation quality of different sides of the targets. The robust planning algorithm is formulated as a Mixed Integer Second Order Cone Program that employs the propagated statistics of the perception qualities at different time samples. With this formulation, the trade-off between robustness and performance can be controlled through a confidence-value parameter, which makes it possible to reach suitable compromises that maximize observation quality despite the system uncertainties. The analysis develops strategies to determine the best possible robust plans even for large-scale, complex systems. Part of this work was published in a paper at CASE 2013.
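A compact sketch of the uncertainty-propagation step only: sigma points of the target-position uncertainty are pushed through a nonlinear perception-quality metric to estimate the metric's mean and variance. The distance-based metric, the camera position, and the covariance values are simple stand-ins for the full PTZ quality model.

```python
import numpy as np

def unscented_transform(mean, cov, f, alpha=1.0, beta=2.0, kappa=0.0):
    """Propagate (mean, cov) of x through a scalar nonlinear f via the Unscented Transform."""
    n = len(mean)
    lam = alpha**2 * (n + kappa) - n
    sqrt_mat = np.linalg.cholesky((n + lam) * cov)
    sigma = np.vstack([mean, mean + sqrt_mat.T, mean - sqrt_mat.T])   # 2n+1 sigma points
    wm = np.full(2 * n + 1, 1.0 / (2.0 * (n + lam)))
    wc = wm.copy()
    wm[0] = lam / (n + lam)
    wc[0] = wm[0] + (1.0 - alpha**2 + beta)
    y = np.array([f(x) for x in sigma])
    y_mean = wm @ y
    y_var = wc @ (y - y_mean) ** 2
    return y_mean, y_var

# Assumed scalar quality metric: resolving ability decays with camera-target distance.
camera = np.array([0.0, 0.0, 2.0])
def quality(target_pos):
    return 1.0 / (1.0 + np.linalg.norm(target_pos - camera) ** 2)

# Uncertain target position (mean and covariance are placeholder values).
m = np.array([5.0, 1.0, 0.5])
P = np.diag([0.5, 0.5, 0.1])
q_mean, q_var = unscented_transform(m, P, quality)
print(f"expected quality = {q_mean:.4f}, std = {np.sqrt(q_var):.4f}")
```

In the full scheme, statistics like these are computed for each candidate camera-target assignment at each time sample and then fed into the mixed-integer second-order cone program as the data of the robust objective.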