Publications

1. Published Paper: SPIE 2020

https://www.spiedigitallibrary.org/conference-proceedings-of-spie/11423/1142313/Electromyography-signal-analysis-with-real-time-support-vector-machine-%7c/10.1117/12.2558962.short?tab=ArticleLink


Mohammad Manzur Murshid, Hassan S. Salehi

ABSTRACT

Modern prosthetics for the rehabilitation of hand amputees have improved recently, but their value for freedom of movement and adoption in the daily life of amputees is still substandard. Electromyography (EMG) signals are generated by the human muscular system during movements and muscular activities. These signals are detected over different areas of the skin surface, and each movement corresponds to a specific activation pattern of several muscles. In this research, multi-channel EMG measurements were performed with electrodes placed on the involved arm muscles. Since the deltoid, biceps brachii, pectoralis major, and flexor digitorum muscles can move the human arm almost independently with adjustable contraction forces, the surface EMG signals from these muscles were utilized to recognize different arm movements. The EMG signals were digitally recorded and processed using digital filters, feature extraction methods, and classification algorithms. For feature extraction, envelopes were extracted from the signal waveforms. To reflect the moving-average activity, a root mean square (RMS) operation and normalization were applied successively as the initial signal processing steps. Afterward, an activation vector containing the normalized RMS signals was obtained in real time. For machine learning, the activation vectors were used to train a real-time support vector machine (SVM) classifier to recognize different muscle EMG signals and their respective motion commands. A detailed analysis using the SVM reveals more than 98% accuracy in recognizing and classifying different motion commands after training. The effectiveness of the proposed method was verified through several experiments.
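The RMS-envelope, normalization, and SVM steps described above can be sketched roughly as follows. This is a minimal illustration, not the paper's implementation: the window size, channel count, and synthetic activation data are all assumptions.

```python
import numpy as np
from sklearn.svm import SVC

def rms_envelope(emg, window=200):
    """Moving-window RMS envelope of a multi-channel EMG signal
    (rows = samples, columns = channels)."""
    kernel = np.ones(window) / window
    # Convolve each channel's squared signal with a flat window, then sqrt.
    return np.sqrt(np.apply_along_axis(
        lambda ch: np.convolve(ch, kernel, mode="same"), 0, emg ** 2))

def activation_vector(emg):
    """Normalized RMS activation vector, one value per channel."""
    act = rms_envelope(emg).mean(axis=0)
    return act / (np.linalg.norm(act) + 1e-12)

# Synthetic 4-channel recordings: each "motion command" is simulated by
# making a different muscle channel dominant (purely illustrative data).
rng = np.random.default_rng(0)

def synth_recording(active_channel):
    emg = rng.normal(scale=0.2, size=(1000, 4))
    emg[:, active_channel] += rng.normal(scale=2.0, size=1000)
    return activation_vector(emg)

X = np.vstack([synth_recording(c) for c in range(4) for _ in range(20)])
y = np.repeat([0, 1, 2, 3], 20)
clf = SVC(kernel="rbf").fit(X, y)
```

Here each activation vector summarizes one windowed recording; in the real-time setting described in the abstract, the same vector would be recomputed on a sliding window and fed to the trained classifier.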

2. Published Paper: SPIE 2019

https://www.spiedigitallibrary.org/conference-proceedings-of-spie/10857/108570H/Deep-learning-based-quantitative-analysis-of-dental-caries-using-optical/10.1117/12.2510076.short?SSO=1


Hassan S. Salehi, Mina Mahdian, Mohammad M. Murshid, Stefan Judex, Aditya Tadinada

ABSTRACT

The conventional approach for diagnosing dental caries is clinical examination supplemented by radiographs. However, studies based on clinical and radiographic examination methods often show low sensitivity and high specificity. Machine learning and deep learning techniques can be used to enhance optical coherence tomography (OCT) to more accurately identify diseased and damaged tissue. In this paper, we present a novel approach combining the OCT imaging modality and a deep convolutional neural network (CNN) for the detection of occlusal carious lesions. A total of 51 extracted human permanent teeth were collected and categorized into three groups: non-carious teeth, caries extending into enamel, and caries extending into dentin. For data acquisition and ex-vivo OCT imaging, the samples were imaged using a spectral-domain OCT system operating at a 1300 nm center wavelength, with a scan rate of 5.5-76 kHz and an axial resolution of 5.5 μm in air. To acquire images with minimum inhomogeneity, imaging was performed multiple times at different points. For deep learning, OCT images of the extracted human carious and non-carious teeth were input to a CNN classifier to determine variations in tissue densities resembling the demineralization process. The CNN model employs two convolutional and pooling layers to extract features, and then classifies each patch based on the probabilities from the softmax classification layer. The sensitivity and specificity of distinguishing between carious and non-carious lesions were found to be ~99% and 100%, respectively. The proposed deep learning-based OCT method can reliably classify oral tissues of various densities and could be extremely valuable in early dental caries detection.
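The two-conv-and-pool architecture with a softmax classification layer described above can be sketched in PyTorch roughly as below. The patch size (64×64 grayscale), layer widths, and three output classes (matching the three tooth groups) are assumptions for illustration, not the paper's exact configuration.

```python
import torch
import torch.nn as nn

class CariesCNN(nn.Module):
    """Two convolution + pooling blocks followed by a softmax
    classification layer over patch probabilities (sizes assumed)."""
    def __init__(self, n_classes=3):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),                      # 64x64 -> 32x32
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),                      # 32x32 -> 16x16
        )
        self.classifier = nn.Linear(32 * 16 * 16, n_classes)

    def forward(self, x):
        # x: (batch, 1, 64, 64) OCT patches
        z = self.features(x).flatten(1)
        return torch.softmax(self.classifier(z), dim=1)

model = CariesCNN()
probs = model(torch.randn(4, 1, 64, 64))  # 4 hypothetical OCT patches
```

Each row of `probs` is a per-patch probability distribution over the three groups; a full pipeline would train this with a cross-entropy loss on labeled OCT patches.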

3. Published Paper: ICUAS 2019

Perceptual Ability Advancement of a Humanoid with Limited Sensors via Data Transmission from an Aerial Robot


Kiwon Sohn, Mohammad Manzur Murshid.

ABSTRACT

This paper presents an approach to increase the perceptual space of a ground robot (a humanoid with limited sensory abilities) via data transmission from an accompanying aerial robot. First, the robot, which has only a 2D camera sensor, uses a hip-sway motion to predict the target's 3D position and orientation in its sensor coordinate frame. The pose data is refined with an iterative method using the ego-motion estimation of the humanoid. Then, a neighboring drone equipped with a 3D camera system is controlled to detect the same target object. The measured pose of the target object in the aerial robot's perceptual space is combined with the object's calculated coordinates (which are constructed from the humanoid's 2D sensor). This fusion process enables the localization of the humanoid in the drone's collected 3D point cloud data of the task environment. The combined coordinates are then used for motion planning of the humanoid for safe and effective task manipulation in the field. Through experiments with a small-sized humanoid robot and a drone, the presented approach is tested and evaluated in a mock-up of the task field.
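The core of the fusion step above is a frame-composition identity: if both robots measure the pose of the same target, the humanoid can be localized in the drone's frame. A minimal sketch with homogeneous transforms (the pose values are made up for illustration):

```python
import numpy as np

def to_hom(R, t):
    """Build a 4x4 homogeneous transform from rotation R and translation t."""
    T = np.eye(4)
    T[:3, :3], T[:3, 3] = R, t
    return T

# Hypothetical measurements of the SAME target in each robot's sensor frame.
T_drone_target = to_hom(np.eye(3), np.array([1.0, 0.5, 2.0]))
T_humanoid_target = to_hom(np.eye(3), np.array([0.2, 0.0, 1.0]))

# Composing the two measurements localizes the humanoid in the drone's frame:
#   T_drone_humanoid = T_drone_target @ inv(T_humanoid_target)
T_drone_humanoid = T_drone_target @ np.linalg.inv(T_humanoid_target)
```

In the paper's setting, the humanoid-side pose comes from the iteratively refined 2D-camera estimate, so in practice this composition would be recomputed as that estimate converges.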

4. Published Paper: ICUAS 2019


Kiwon Sohn, Mohammad Manzur Murshid.

ABSTRACT

In this paper, an approach to increase the perception space of a ground robot via data transmission from an aerial robot is presented. Since the point cloud data collected by the two robots have different coordinate frames, a rigid-registration-based approach is applied to align the coordinate frame of the aerial robot's perception space to that of the ground robot. First, static markers attached to the base of the ground robot are used to compute the transformation. However, this approach does not work properly when the markers become invisible due to the robot's kinematic limits. To solve this issue, dynamic markers attached to both fixed (base) and movable links of the ground robot are also studied. To account for the kinematic changes of the dynamic markers, the forward kinematics of the ground robot's limb is iteratively updated and applied to the rigid transformation computation. Both approaches were tested and evaluated through experiments with a full-sized armed robot and a drone in a mock-up of the task field.
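The rigid transformation from marker correspondences can be computed in closed form with the standard SVD-based (Kabsch) method; the abstract does not state which solver the paper uses, so the sketch below and its synthetic marker positions are illustrative assumptions.

```python
import numpy as np

def rigid_transform(P, Q):
    """Least-squares rigid transform (R, t) such that Q ~= P @ R.T + t,
    via the SVD-based Kabsch method on point correspondences."""
    cp, cq = P.mean(axis=0), Q.mean(axis=0)
    H = (P - cp).T @ (Q - cq)          # cross-covariance of centered points
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T   # guard against reflections
    t = cq - R @ cp
    return R, t

# Hypothetical marker positions in the ground robot's frame ...
P = np.array([[0, 0, 0], [1, 0, 0], [0, 1, 0], [0, 0, 1]], dtype=float)
# ... and the same markers as seen by the drone (rotated and shifted).
theta = np.pi / 6
Rz = np.array([[np.cos(theta), -np.sin(theta), 0],
               [np.sin(theta),  np.cos(theta), 0],
               [0, 0, 1]])
Q = P @ Rz.T + np.array([2.0, -1.0, 0.5])

R, t = rigid_transform(P, Q)
```

For the dynamic-marker case described above, `P` would be recomputed each cycle from the ground robot's forward kinematics before re-solving for `R` and `t`.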