S. M. Kamrul Hasan

Imaging Scientist


PhD Candidate

Research Assistant

Biomedical Modeling, Visualization & Image-guided Navigation (BMVIGN) Lab
Center for Imaging Science, Rochester Institute of Technology, Rochester, New York, USA

OFFICE LOCATION


Center for Imaging Science, Rochester Institute of Technology, Building 76, 54 Lomb Memorial Drive, Rochester, NY 14623

CONTACT


Phone: (585) 764-2570
E-mail: sh3190@rit.edu

That's me, as seen through a Sobel edge-detection filter

News

  • [10/13/2020] Our paper was accepted at SPIE Medical Imaging (MI) 2021, San Diego, California

  • [09/22/2020] Invited to serve as a reviewer for a NeurIPS 2020 workshop

  • [07/20/2020] Presented our paper at EMBC 2020, Montreal, Canada

  • [04/10/2020] Our paper was accepted at EMBC 2020

  • [04/04/2020] Presented our paper at ISBI 2020, Iowa City, Iowa

  • [02/19/2020] Invited to serve as a reviewer for MICCAI 2020

  • [02/19/2020] Gave an oral presentation at SPIE Medical Imaging (MI) 2020, Houston, Texas

  • [02/17/2020] Our paper was accepted at ISBI 2020

  • [11/22/2019] Our U-NetPlus paper was selected for an oral presentation at the RIT Graduate Showcase 2019

  • [10/15/2019] Our paper was accepted at SPIE Medical Imaging (MI) 2020

  • [08/21/2019] Received a MICCAI travel award as part of an NSF grant from the University of Georgia

  • [08/17/2019] Started a research internship at the IBM Almaden Research Center, San Jose, California

  • [04/10/2019] Our paper was accepted at EMBC 2019


Biography

I received my PhD from the Chester F. Carlson Center for Imaging Science at Rochester Institute of Technology (RIT), Rochester, NY, under the direction of my advisor, Dr. Cristian Linte, with funding from both NSF and NIH grants. My PhD thesis was titled “From Fully-Supervised Single-Task to Semi-Supervised Multi-Task Deep Learning Architectures for Segmentation in Medical Imaging Applications”. I worked as an AI Research Intern at Philips Research in Cambridge, Massachusetts, where I designed a highly optimized framework for detecting COVID-19 features in lung ultrasound scans captured by the Lumify portable ultrasound probe. I also worked as a Machine Learning Research Intern at IBM Research in Almaden, California, where I worked on deep neural network pruning and optimization for more explainable AI.

Research Interests

My research focuses broadly on developing and optimizing machine learning models for analyzing multi-modal images, enabling more accurate automatic semantic and instance segmentation, 4D deformable registration, object detection, video object motion estimation, out-of-distribution (uncertainty) estimation, and video inpainting. I have extensive experience with label-efficient machine learning for imaging problems, along with strong hands-on expertise in semi-/self-/un-supervised learning, representation learning, deep generative models, probabilistic Bayesian Monte Carlo methods, and posterior estimation models.

Particular Research Interests

  • Semi-Supervised Learning

  • Self-Training

  • Multi-Task Learning

  • Disentangled Representation Learning

  • Deep Learning

  • Machine Learning

  • Augmented Reality

Education

Collaborators:

  • Dr. Suzanne M. Shontz (University of Kansas)

  • Dr. Niels Otani (Rochester Institute of Technology)

Research and Publications

Segmentation and removal of surgical instruments for background scene visualization from Endoscopic / Laparoscopic video


S. M. Kamrul Hasan, Richard A. Simon and Cristian A. Linte

Surgical tool segmentation is becoming imperative for providing detailed information during intra-operative execution. These tools can obscure surgeons’ dexterity control due to the narrow working space and visual field-of-view, which increases the risk of complications resulting from tissue injuries (e.g., tissue scars and tears). This paper demonstrates a novel application that segments and removes surgical instruments from laparoscopic/endoscopic video using digital inpainting algorithms. To segment the surgical instruments, we use a modified U-Net architecture (U-NetPlus) composed of a pre-trained VGG11 or VGG16 encoder and a redesigned decoder. The decoder is modified by replacing the transposed convolution operation with an up-sampling operation based on nearest-neighbor (NN) interpolation. This modification removes the artifacts generated by the transposed convolution and, furthermore, the new interpolation weights require no learning for the upsampling operation. The tool removal algorithms use the tool segmentation mask and either instrument-free reference frames or previous instrument-containing frames to fill in (inpaint) the instrument segmentation mask. We demonstrate the performance of the proposed surgical tool segmentation/removal algorithms on a robotic instrument dataset from the MICCAI 2015 EndoVis Challenge. We also show successful performance of the tool removal algorithm on synthetically generated videos obtained by embedding a moving surgical tool into tool-free surgical videos. Our application successfully segments and removes the surgical tool to unveil the background tissue view otherwise obstructed by the tool, producing results visually comparable to the ground truth.
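For illustration, here is a minimal single-frame sketch of the mask-based removal idea using OpenCV's built-in Telea inpainting. The file names are hypothetical, and this is only a stand-in for the paper's method, which inpaints from reference or previous video frames rather than from a single image:

```python
# Minimal single-frame sketch of mask-based tool removal using OpenCV's
# Telea inpainting; a stand-in for the video inpainting described above,
# not the authors' implementation. File names are hypothetical.
import cv2
import numpy as np

frame = cv2.imread("frame.png")                            # laparoscopic frame
mask = cv2.imread("tool_mask.png", cv2.IMREAD_GRAYSCALE)   # binary tool mask from the segmenter

# Dilate the mask slightly so inpainting also covers tool boundaries/shadows.
mask = cv2.dilate(mask, np.ones((7, 7), np.uint8), iterations=1)

# Fill the masked region from the surrounding tissue pixels.
restored = cv2.inpaint(frame, mask, inpaintRadius=5, flags=cv2.INPAINT_TELEA)
cv2.imwrite("restored.png", restored)
```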

IEEE Engineering in Medicine and Biology Society (EMBC 2020)

L-CO-Net: Learned Condensation-Optimization Network for Clinical Parameter Estimation from Cardiac Cine MRI

S. M. Kamrul Hasan and Cristian A. Linte

In this work, we implement a fully convolutional segmenter featuring both a learned group structure and a regularized weight-pruner to reduce the high computational cost of volumetric image segmentation. We validated our framework on the ACDC dataset, which features one healthy and four pathology groups imaged throughout the cardiac cycle. Our technique achieved Dice scores of 96.80% (LV blood-pool), 93.33% (RV blood-pool) and 90.0% (LV myocardium) with five-fold cross-validation and yielded clinical parameters similar to those estimated from the ground-truth segmentation data. Based on these results, this technique has the potential to become an efficient and competitive cardiac image segmentation tool for computer-aided cardiac diagnosis, planning and guidance applications.
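As a rough illustration of the regularized weight-pruning idea (not the exact L-CO-Net condensation-optimization procedure), the sketch below adds a group-lasso style penalty during training and then zeroes the lowest-magnitude filters; all layer sizes and coefficients are illustrative:

```python
# Generic sketch of regularized filter pruning: train with a group-lasso
# penalty, then zero out the smallest-magnitude output filters.
import torch
import torch.nn as nn

def group_lasso_penalty(conv: nn.Conv2d) -> torch.Tensor:
    # L2 norm per output filter, summed: encourages whole filters toward zero.
    return conv.weight.flatten(1).norm(dim=1).sum()

def prune_filters(conv: nn.Conv2d, keep_ratio: float = 0.5) -> None:
    # Zero the filters with the smallest L2 norms (magnitude pruning).
    norms = conv.weight.detach().flatten(1).norm(dim=1)
    keep = norms.argsort(descending=True)[: int(len(norms) * keep_ratio)]
    mask = torch.zeros_like(norms, dtype=torch.bool)
    mask[keep] = True
    with torch.no_grad():
        conv.weight[~mask] = 0.0

conv = nn.Conv2d(16, 32, kernel_size=3, padding=1)
penalty = group_lasso_penalty(conv) * 1e-4   # added to the task loss during training
prune_filters(conv, keep_ratio=0.5)          # applied after (or during) training
```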




International Symposium on Biomedical Imaging (ISBI 2020)

A Regularized Network for Improved Cardiac Ventricles Segmentation on Breath-Hold Cine MRI


S. M. Kamrul Hasan and Cristian A. Linte

In this work, we implement a fully convolutional segmenter featuring both a learned group structure and a regularized weight-pruner to reduce the high computational cost of volumetric image segmentation. We validated the framework on the ACDC dataset and achieved accurate segmentation, with mean Dice scores of 96.80% (LV blood-pool), 93.33% (RV blood-pool) and 90.0% (LV myocardium), and yielded clinical parameters similar to those estimated from the ground-truth segmentation data.

SPIE Medical Imaging (2020)


CondenseUNet: a memory-efficient condensely-connected architecture for bi-ventricular blood pool and myocardium segmentation


S. M. Kamrul Hasan and Cristian A. Linte

With the advent of cardiac cine Magnetic Resonance (CMR) imaging, there has been a paradigm shift in medical technology, thanks to its capability of imaging different structures within the heart without ionizing radiation. However, it is very challenging to conduct pre-operative planning of minimally invasive cardiac procedures without accurate segmentation and identification of the left ventricle (LV) and right ventricle (RV) blood-pools and the LV myocardium. Manual segmentation of these structures, nevertheless, is time-consuming and often prone to error and biased outcomes. Hence, automatic and computationally efficient segmentation techniques are paramount. In this work, we propose a novel memory-efficient Convolutional Neural Network (CNN) architecture that modifies both CondenseNet and DenseNet for ventricular blood-pool segmentation by introducing a bottleneck block and an upsampling path. Our experiments show that the proposed architecture runs on the Automated Cardiac Diagnosis Challenge (ACDC) dataset using half (50%) the memory requirement of DenseNet and one-twelfth (~8%) of the memory requirement of U-Net, while still maintaining excellent cardiac segmentation accuracy. We validated the framework on the ACDC dataset, which features one healthy and four pathology groups whose heart images were acquired throughout the cardiac cycle, and achieved mean Dice scores of 96.78% (LV blood-pool), 93.46% (RV blood-pool) and 90.1% (LV myocardium). These results are promising and promote the proposed method as a competitive tool for cardiac image segmentation and clinical parameter estimation, with the potential to provide the fast and accurate results needed for pre-procedural planning and/or pre-operative applications.
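The memory savings can be illustrated with the parameter arithmetic of grouped convolutions. Note that CondenseNet-style models learn their group structure during training; this sketch uses fixed groups purely to show the count reduction:

```python
# Back-of-the-envelope illustration of why grouped (condensed) convolutions
# save memory: parameter counts of a standard vs. a grouped 3x3 conv layer.
import torch.nn as nn

def n_params(m: nn.Module) -> int:
    return sum(p.numel() for p in m.parameters())

standard = nn.Conv2d(128, 128, kernel_size=3, padding=1)
grouped = nn.Conv2d(128, 128, kernel_size=3, padding=1, groups=4)

print(n_params(standard))  # 147584 (128*128*9 weights + 128 biases)
print(n_params(grouped))   # 36992 (~1/4: each group sees only 1/4 of the input channels)
```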

Computing in Cardiology 2019

Toward Quantification and Visualization of Active Stress Waves for Myocardial Biomechanical Function Assessment


Niels F. Otani, Dylan Dang, Christopher Beam, Fariba Mohammadi, Brian Wentz, S. M. Kamrul Hasan, Suzanne M. Shontz, Karl Q. Schwarz, Sabu Thomas and Cristian A. Linte

Estimating and visualizing myocardial active stress wave patterns is crucial to understanding the mechanical activity of the heart and provides a potential non-invasive method to assess myocardial function. These patterns can be reconstructed by analyzing 2D and/or 3D tissue displacement data acquired using medical imaging. Here we describe an application that utilizes a 3D finite element formulation to reconstruct active stress from displacement data. As a proof of concept, a simple cubic mesh was used to represent a myocardial tissue “sample” consisting of a 10 x 10 x 10 lattice of nodes featuring different fiber directions that rotate with depth, mimicking cardiac transverse isotropy. In the forward model, tissue deformation was generated using a test wave with active stresses that mimic the myocardial contractile forces. The generated deformation field was used as input to an inverse model designed to reconstruct the original active stress distribution. We numerically simulated malfunctioning tissue regions (experiencing limited contractility and hence reduced active stress) within the healthy tissue. We also assessed model sensitivity by adding noise to the deformation field generated using the forward model. The difference image between the original and reconstructed active stress distribution suggests that the model accurately estimates active stress from tissue deformation data with a high signal-to-noise ratio.





RIT Graduate Showcase 2019

U-NetPlus: A Fully Convolutional Architecture for Semantic and Instance Segmentation of Surgical Instruments


S. M. Kamrul Hasan and Cristian A. Linte

Conventional therapy approaches limit surgeons' dexterity control due to the limited field-of-view. With the advent of robot-assisted surgery, there has been a paradigm shift in medical technology toward more precise control over surgery. However, it is very challenging to track the position of surgical instruments in a surgical scene, so accurate detection and identification of surgical tools is paramount. Deep learning-based semantic segmentation of surgical video frames has the potential to facilitate this task. We modify the U-Net architecture by introducing a pre-trained encoder and redesigning the decoder, replacing the transposed convolution operation with an upsampling operation based on nearest-neighbor (NN) interpolation. To further improve performance, we also employ a very fast and flexible data augmentation technique. We trained the framework on 8 x 225-frame sequences of robotic surgical videos available through the MICCAI 2017 EndoVis Challenge dataset and tested it on 8 x 75-frame and 2 x 300-frame videos. Using our U-NetPlus architecture, we report 90.20% DICE for binary segmentation, 76.26% DICE for instrument part segmentation, and 46.07% for instrument recognition (i.e., all instruments), outperforming previous techniques implemented and tested on these data.
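A minimal PyTorch sketch of the decoder modification described above: transposed convolution is replaced with fixed nearest-neighbor upsampling followed by a standard convolution, a common remedy for checkerboard artifacts. Channel sizes are illustrative, not the exact U-NetPlus configuration:

```python
# Decoder block with fixed nearest-neighbor upsampling instead of a
# learned transposed convolution (illustrative sizes, not U-NetPlus exactly).
import torch
import torch.nn as nn

class NNUpsampleBlock(nn.Module):
    def __init__(self, in_ch: int, out_ch: int):
        super().__init__()
        self.up = nn.Upsample(scale_factor=2, mode="nearest")  # no learned weights
        self.conv = nn.Sequential(
            nn.Conv2d(in_ch, out_ch, kernel_size=3, padding=1),
            nn.BatchNorm2d(out_ch),
            nn.ReLU(inplace=True),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.conv(self.up(x))

x = torch.randn(1, 256, 28, 28)
print(NNUpsampleBlock(256, 128)(x).shape)  # torch.Size([1, 128, 56, 56])
```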

IEEE Engineering in Medicine and Biology Society (EMBC 2019)


U-NetPlus: A Modified Encoder-Decoder U-Net Architecture for Semantic and Instance Segmentation of Surgical Instruments from Laparoscopic Images


S. M. Kamrul Hasan and Cristian A. Linte

With the advent of robot-assisted surgery, there has been a paradigm shift in medical technology for minimally invasive surgery. However, it is very challenging to track the position of surgical instruments in a surgical scene, so accurate detection and identification of surgical tools is paramount. Deep learning-based semantic segmentation of surgical video frames has the potential to facilitate this task. In this work, we modify the U-Net architecture by introducing a pre-trained encoder and redesigning the decoder, replacing the transposed convolution operation with an upsampling operation based on nearest-neighbor (NN) interpolation. To further improve performance, we also employ a very fast and flexible data augmentation technique. We trained the framework on 8 x 225-frame sequences of robotic surgical videos available through the MICCAI 2017 EndoVis Challenge dataset and tested it on 8 x 75-frame and 2 x 300-frame videos. Using our U-NetPlus architecture, we report 90.20% DICE for binary segmentation, 76.26% DICE for instrument part segmentation, and 46.07% for instrument type (i.e., all instruments) segmentation, outperforming previous techniques implemented and tested on these data.

Proc IEEE Western NY Image Signal Process Workshop (WNYISPW 2018)


A Modified U-Net Convolutional Network Featuring a Nearest-neighbor Re-sampling-based Elastic-Transformation for Brain Tissue Characterization and Segmentation


S. M. Kamrul Hasan and Cristian A. Linte

Brain tumor detection through Magnetic Resonance Imaging (MRI) remains a very challenging task in modern medical image processing research. Expert neuro-radiologists still diagnose even deadly brain cancers such as glioblastoma using manual segmentation, which is tedious and often inaccurate. Deep learning models such as the U-Net convolutional neural network have been widely used in biomedical image segmentation. Although this model performs well on the BRATS 2015 dataset by producing a pixel-wise segmentation map of the input image in an auto-encoder fashion, its accuracy degrades in some cases. We therefore improve the U-Net model by replacing the de-convolution stage with nearest-neighbor upsampling and by applying elastic transformations to enlarge the training dataset, making the model more robust to low-grade tumors. We trained our NNRET U-Net model on the BRATS 2017 dataset and obtained better performance than the state-of-the-art classic U-Net model.
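A minimal sketch of elastic-transformation augmentation with nearest-neighbor resampling, in the spirit of Simard et al.; the alpha/sigma parameters are illustrative, not the values used in this work:

```python
# Elastic deformation of a 2D image via smoothed random displacement fields,
# resampled with nearest-neighbor interpolation (order=0), which is also
# safe for label masks. Parameters alpha/sigma are illustrative.
import numpy as np
from scipy.ndimage import gaussian_filter, map_coordinates

def elastic_transform(image: np.ndarray, alpha: float = 34.0,
                      sigma: float = 4.0, seed: int | None = None) -> np.ndarray:
    rng = np.random.default_rng(seed)
    # Smooth random displacement fields, scaled by alpha.
    dx = gaussian_filter(rng.uniform(-1, 1, image.shape), sigma) * alpha
    dy = gaussian_filter(rng.uniform(-1, 1, image.shape), sigma) * alpha
    y, x = np.meshgrid(np.arange(image.shape[0]), np.arange(image.shape[1]),
                       indexing="ij")
    coords = np.array([y + dy, x + dx])
    return map_coordinates(image, coords, order=0, mode="reflect")

slice_2d = np.random.rand(240, 240)  # stand-in for an MRI slice
warped = elastic_transform(slice_2d, seed=0)
```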

Selected Publications

1. S. M. Kamrul Hasan and C. A. Linte, "L-CO-Net: Learned Condensation-Optimization Network for Clinical Parameter Estimation from Cardiac Cine MR", 42nd International Conference of the IEEE Engineering in Medicine and Biology Society (EMBC), 2020.

2. S. M. Kamrul Hasan and C. A. Linte, "A Regularized Network for Improved Cardiac Ventricles Segmentation on Breath-Hold Cine MRI," International Symposium on Biomedical Imaging (ISBI), 2020.

3. S. M. Kamrul Hasan and C. A. Linte, "Condense-UNet: A Memory Efficient Encoder-Decoder Architecture for Bi-Ventricle and LV-Myocardium Segmentation and Quantification," SPIE Medical Imaging 2020.

4. Niels F. Otani, Dylan Dang, Christopher Beam, Fariba Mohammadi, Brian Wentz, S. M. Kamrul Hasan, Suzanne M. Shontz, Karl Q. Schwarz, Sabu Thomas and Cristian A. Linte, "Toward Quantification and Visualization of Active Stress Waves for Myocardial Biomechanical Function Assessment," Computing in Cardiology (CinC), 2019.

5. S. M. Kamrul Hasan and C. A. Linte, "U-NetPlus: A Modified Encoder-Decoder U-Net Architecture for Semantic and Instance Segmentation of Surgical Instruments from Laparoscopic Images," 41st International Conference of the IEEE Engineering in Medicine and Biology Society (EMBC), 2019.

6. S. M. Kamrul Hasan and C. A. Linte, "A Modified U-Net Convolutional Network Featuring a Nearest-neighbor Re-sampling-based Elastic-Transformation for Brain Tissue Characterization and Segmentation," IEEE Western New York Image and Signal Processing Workshop (WNYISPW), 2018.

Projects

1. Finding the cortical thickness using the FreeSurfer tool.


2. A Fully Automated Touchless Brain Tumor Segmentation with a Fusion of Three Distinct Algorithms

Abstract:

Brain tumor detection through Magnetic Resonance Imaging (MRI) is a very challenging task, even in today's medical image processing research. Surgeons use MRI to image the soft tissue of the human body, and they typically segment the images manually by partitioning them into distinct regions, which is time-consuming and error-prone. Accurate segmentation of MRI images is therefore essential. Many researchers have applied a variety of algorithms to segment MRI images. This proposal is motivated by the current need for an improved watershed algorithm comprising three stages: de-noising filtering, a support vector machine classifier to classify the tumor, and a scale-invariant feature transform (SIFT) step in which optimized features are selected; the area of the segmented region is then computed to provide a status check. Our improved algorithm reduces the traditional over-segmentation problem, and the status check can serve as a guide for surgeons to take the necessary steps based on the severity of the tumor. Our proposed effort has two main thrusts: 1) segmenting the brain tumor with an accurate calculation of the tumor area for pre-diagnosis purposes, evaluating the system on the BRATS 2012 and BRATS 2017 datasets; and 2) implementing a complete, touchless package that requires no human interaction, in which another algorithm matches the segmented portion against the input image.
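A generic sketch of a marker-based watershed step of the kind proposed above, using scikit-image; it omits the de-noising, SVM, and SIFT stages, and all parameters are illustrative:

```python
# Marker-based watershed on a binary mask: seeding from distance-map maxima
# reduces the classic over-segmentation problem. Illustrative only; not the
# full proposed pipeline.
import numpy as np
from scipy import ndimage as ndi
from skimage.feature import peak_local_max
from skimage.filters import threshold_otsu
from skimage.segmentation import watershed

def watershed_segment(image: np.ndarray) -> np.ndarray:
    binary = image > threshold_otsu(image)         # rough foreground mask
    distance = ndi.distance_transform_edt(binary)  # distance to background
    # Seeds at local maxima of the distance map.
    peaks = peak_local_max(distance, min_distance=10, labels=binary)
    markers = np.zeros_like(image, dtype=np.int32)
    markers[tuple(peaks.T)] = np.arange(1, len(peaks) + 1)
    labels = watershed(-distance, markers, mask=binary)
    return labels

labels = watershed_segment(np.random.rand(128, 128))
area = (labels > 0).sum()  # segmented-region area, e.g. for a tumor status check
```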


3. RITCAM: A Smart Baby Monitor with Biometric Data [GitHub]

Abstract:

RITCAM is a "smart" baby monitor that goes beyond traditional live audio and video feeds to provide biometric information such as the infant's heart rate, breathing, and temperature. RITCAM also uses pose recognition algorithms to identify the infant's position, so that an alert can be sent if the device detects an unsafe sleeping position. This additional information provides parents with peace of mind and enables them to keep better track of their infant's health. RITCAM does not require any physical contact with the infant to gather this information. All data is extracted from imagery and audio collected from the device's RGB camera, infrared camera, infrared depth sensor, and microphone.
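The contact-free heart-rate idea can be illustrated with remote photoplethysmography: the mean green-channel intensity over a skin region fluctuates with the pulse, so the dominant FFT frequency in a plausible band approximates heart rate. This sketch shows the principle only and is not the RITCAM implementation:

```python
# Estimate heart rate (bpm) from a time series of mean green-channel values
# sampled from a skin region -- the basic remote-photoplethysmography idea.
import numpy as np

def heart_rate_bpm(green_means: np.ndarray, fps: float) -> float:
    signal = green_means - green_means.mean()      # remove the DC component
    spectrum = np.abs(np.fft.rfft(signal))
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fps)
    band = (freqs > 0.7) & (freqs < 3.5)           # plausible 42-210 bpm range
    return freqs[band][np.argmax(spectrum[band])] * 60.0

# Synthetic 1.2 Hz (72 bpm) pulse sampled at 30 fps for 20 s:
t = np.arange(0, 20, 1 / 30)
demo = 0.5 * np.sin(2 * np.pi * 1.2 * t) + np.random.normal(0, 0.1, t.size)
print(heart_rate_bpm(demo, fps=30))  # ~72
```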

4. Generation of Synthetic Remote-Sensing Scenes Using DIRSIG [GitHub]

Abstract:

In this experiment, I used the FOXBAT scene to create a synthetic scene with DIRSIG. I logged in remotely to the CIS DIRS servers and opened the scene in the DIRSIG4 simulator, which presents a window with six groups of parameters to configure. To build the scene, I selected a FOXBAT scene that already exists on the server. I then used the “atmosphere condition” settings to load an .atm file from its path, choosing an atmosphere collected on June 23, 1993; I set the temperature to 240 K and saved the file under a new name in my own directory. Under “Imaging Platform,” I adjusted the camera and pixel-pitch settings and set the channels to RGB; the GSD (ground sample distance) can be changed here by adjusting the altitude and pixel pitch. In the “Platform Motion” section, the azimuth angle, zenith angle, and even the target location and rotation can be changed. After making these changes, the file must be saved again. Finally, in the “Data Collection” section, I set the time and date of the collection to match the atmospheric data and ran the simulator to render the scene.

5. Change Detection of High-Resolution Landsat Images Using Both Supervised & Unsupervised Classifiers [GitHub]

6. Apparent Temperature Measurement from the Sensor-Reaching Radiance in the LWIR Spectral Region [GitHub]

Abstract:

I used the emissive portion of the governing (“big”) radiance equation to find the sensor-reaching radiance. Because the terrain is flat, I omitted the background and downwelling radiance terms. Using the Planck equation, I integrated the spectral radiance over the 8-14 μm bandpass to determine the observed radiances, and from these I computed the integrated sensor-reaching radiance. The apparent temperature was then recovered by inverting this relationship, and the resulting temperature error was compared with the given sensitivity of the sensor.
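A small numerical sketch of this calculation, assuming a blackbody (unit-emissivity) target: integrate the Planck spectral radiance over the 8-14 μm bandpass, then numerically invert for the apparent temperature that reproduces a measured band radiance:

```python
# Integrate Planck spectral radiance over 8-14 um, then invert for the
# apparent (blackbody-equivalent) temperature. Assumes unit emissivity and
# ignores atmospheric, background, and downwelling terms.
import numpy as np
from scipy.integrate import quad
from scipy.optimize import brentq

H, C, K = 6.626e-34, 2.998e8, 1.381e-23  # Planck, speed of light, Boltzmann (SI)

def planck(lam: float, T: float) -> float:
    # Spectral radiance [W / (m^2 sr m)] at wavelength lam [m], temperature T [K].
    return (2 * H * C**2 / lam**5) / np.expm1(H * C / (lam * K * T))

def band_radiance(T: float, lo: float = 8e-6, hi: float = 14e-6) -> float:
    return quad(planck, lo, hi, args=(T,))[0]

def apparent_temperature(L_measured: float) -> float:
    # Find T whose integrated 8-14 um radiance matches the measurement.
    return brentq(lambda T: band_radiance(T) - L_measured, 150.0, 500.0)

L = band_radiance(300.0)        # simulate a sensor-reaching band radiance at 300 K
print(apparent_temperature(L))  # ~300.0
```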

Courses Completed

  • MATH.782 Mathematics for Deep Learning (Spring 2020)

  • IMGS.682 Deep Learning for Vision (Fall 2018)

  • IMGS.730 Magnetic Resonance Imaging (Spring 2019)

  • IMGS.682 Image Processing & Computer Vision (Spring 2018)

  • IMGS.722 Remote Sensing (Spring 2018)

  • IMGS.633 Optics for Imaging (Spring 2018)

  • IMGS.620 Human Visual System (Fall 2017)

  • IMGS.619 Radiometry (Fall 2017)

  • IMGS.616 Fourier Method in Imaging (Fall 2017)

  • IMGS.613 Probability, Noise, & System Modeling (Spring 2019)

Awards & Honors

KUET Dean’s Honors List

    • Second Year: 1st and 2nd semesters

    • Third Year: 1st and 2nd semesters

    • Fourth Year: 1st and 2nd semesters


Professional Memberships

  • Authorized IAENG Member (International Association of Engineers), May 2016 – Present

  • Associate Member of IEB (Institute of Engineers, Bangladesh), June 2016 – Present

  • IEEE Member (Institute of Electrical and Electronics Engineers), January – December 2012

RIT Campus Activities

  • Administrative Officer, RIT Doctoral Student Association (DSA), August 2018 – Present

Awards

Runner-up Best Paper Award at the 2018 Western New York Image & Signal Processing Workshop

Topics of interest, discussed in detail:

Basic shell commands:
1. Create a script: touch filename
2. Open the script: open filename
3. Open it in vim: vim filename
4. vim basics: "i" to insert text; ":wq" to save and exit; ":q" to quit; ":q!" to discard changes; "0" to jump to the start of the line
5. cat: cat test.txt prints the file; cat >> test.txt << EOF appends what you type to the file, ending at EOF
6. Run a bash script: ./filename
7. echo: echo "$varname" prints the variable's value
Git workflow:
1. Every time I change any file: git commit -a
2. git push
Run a Jupyter Notebook on a remote server from localhost:
  1. Run on the `remote-machine`:

  • tmux

  • cd //path/to/my/code

  • jupyter notebook --no-browser --port=8898


  2. Run on the `local-machine`:

  • ssh -N -f -L 127.0.0.1:8898:127.0.0.1:8898 jer@remote-machine


  3. Type this in the browser on your `local-machine`:

  • http://127.0.0.1:8898 (append the token shown by `jupyter notebook list` if prompted)


  4. How to kill a port on localhost from the remote server:

  • jupyter notebook list

  • fuser 8900/tcp   # e.g. for port 8900; prints the PID using the port

  • kill <PID>