The Speakers

 

JaSai M. Kimsey

JaSai is a student at Eastern Michigan University studying computer science and statistics. He will deliver the talk Introduction Into Randomness in Visual Classification Neural Networks.

Abstract: The talk discusses the foundations of computing and machine learning, particularly the use of mathematical algorithms to solve complex problems with computers. It introduces neural networks as a subset of machine learning and describes how they work, including how a model is trained to recognize patterns in data. The talk then introduces the concept of randomness in visual classification neural networks and the study of the probability that a machine correctly recognizes visual stimuli such as images and videos.
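As a minimal illustration of the idea, the chance that a classifier recognizes an image correctly can be treated as a Bernoulli probability and estimated by repeated random trials. The sketch below uses an assumed success probability for illustration, not figures from the talk:

```python
import random

def estimate_accuracy(p_correct, trials=100_000, seed=0):
    """Monte Carlo estimate of how often a classifier labels an image
    correctly, modeling each prediction as a Bernoulli trial that
    succeeds with probability p_correct."""
    rng = random.Random(seed)
    hits = sum(rng.random() < p_correct for _ in range(trials))
    return hits / trials

# With enough trials, the empirical rate converges to p_correct.
est = estimate_accuracy(0.9)
```

The same Monte Carlo viewpoint extends to real networks: run the classifier on many randomly drawn or randomly perturbed inputs and count the fraction recognized correctly.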

 

 

Andrew Ekstrom

Andrew is an Adjunct Lecturer at OCC, MCC, and Oakland University, and a student at EMU and OU.

The title of his talk is How we fool ourselves with the models we make.

Abstract: Whenever we create a statistical model, we usually have many variables to sift through. Traditional “feature engineering” can help us remove unneeded variables. However, these methods may find a spurious correlation and label an unimportant variable as important. Since spurious correlations are figments of the algorithm’s imagination, we have to worry about chasing after, or using, “ghosts” in our models. We can help minimize the number of ghosts we chase by reusing the same data many times while changing how we partition it into training and testing data sets. What we will see is that by changing the random seed, we can get models with quite different values of the metrics we want to optimize, and the same variables may not be selected every time. This gives us multiple “opinions” about what is really important to the model. It also suggests reasons why experiments and data analyses are not as reproducible as we hope.

 

In this talk, we will look at how we can help ensure that the variables we use are, in fact, the “most important” while minimizing the chance that spurious correlations fool us into chasing ghosts. We will use a “Bayesian” idea to choose which variables we should be looking at. And we will see that “tuning” an algorithm too quickly may only make things worse.
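The seed-sensitivity experiment described above can be sketched in plain Python. The synthetic data, the correlation-based selection rule, and the threshold below are illustrative assumptions, not the speaker’s actual method:

```python
import random

def make_data(n=200, n_vars=8, seed=42):
    """Synthetic data: only variable 0 truly drives y; the rest are noise."""
    rng = random.Random(seed)
    X = [[rng.gauss(0, 1) for _ in range(n_vars)] for _ in range(n)]
    y = [row[0] + rng.gauss(0, 1) for row in X]
    return X, y

def corr(xs, ys):
    """Pearson correlation of two equal-length sequences."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((v - my) ** 2 for v in ys) ** 0.5
    return sum((x - mx) * (v - my) for x, v in zip(xs, ys)) / (sx * sy)

def selected_vars(X, y, seed, train_frac=0.5, threshold=0.15):
    """Re-partition with the given seed, then keep variables whose
    training-set correlation with y exceeds the threshold."""
    rng = random.Random(seed)
    idx = list(range(len(X)))
    rng.shuffle(idx)
    train = idx[: int(len(idx) * train_frac)]
    picked = set()
    for j in range(len(X[0])):
        col = [X[i][j] for i in train]
        tgt = [y[i] for i in train]
        if abs(corr(col, tgt)) > threshold:
            picked.add(j)
    return picked

X, y = make_data()
choices = [selected_vars(X, y, seed=s) for s in range(10)]
# The truly informative variable 0 is picked under every partition;
# noise variables may drift in and out as the seed changes -- the "ghosts".
```

Variables that survive nearly every re-partition are good candidates for being genuinely important; variables that appear only occasionally are likely ghosts.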



 

Rron Tenezhdolli



Rron, who is currently pursuing a degree in computer science at EMU, will be presenting a talk titled Identifying the presence of pneumonia in X-ray images.


Abstract: Rron utilized various neural network structures, including MobileNet, CNNs, and dense layers, to achieve an optimal classification of X-ray images as indicative of either pneumonia or a healthy patient. The accuracy of his networks' results consistently exceeded 96%.


By installing the deep learning software onto a mobile phone, he has made it possible for doctors to use it to diagnose patients' X-ray images.
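A hedged sketch of how such a classifier might be assembled in Keras is below; the input size, layer widths, and use of a frozen pretrained MobileNet backbone are assumptions for illustration, not details from the talk:

```python
# Assumed architecture: MobileNet backbone + small dense head for
# binary pneumonia/healthy classification of chest X-ray images.
from tensorflow import keras
from tensorflow.keras import layers

base = keras.applications.MobileNet(
    input_shape=(224, 224, 3), include_top=False, weights="imagenet")
base.trainable = False  # transfer learning: freeze pretrained features

model = keras.Sequential([
    base,
    layers.GlobalAveragePooling2D(),
    layers.Dense(128, activation="relu"),   # dense head (width assumed)
    layers.Dense(1, activation="sigmoid"),  # pneumonia vs. healthy
])
model.compile(optimizer="adam", loss="binary_crossentropy",
              metrics=["accuracy"])
```

For phone deployment, a model like this would typically be converted to a mobile runtime format before being bundled into an app.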



 

 

Will Reece


Will is a graduate student in the Math & Stats Department at EMU.

Abstract: The talk Machine Learning Tools for Interpreting DART Data presents an innovative project that aims to use novel machine learning techniques to address the large experimental datasets generated by a mass spectrometer. The data consist of mass spectra generated by the analysis of numerous materials from faculty and student researchers in Chemistry. The spectra contain additional information that cannot be observed directly, and processing and analyzing these data sets might provide additional insight into many questions about the samples. The talk highlights the challenges of interpreting raw mass spectrometry data using classical statistical methods and the need for a more efficient approach. The project seeks to leverage machine learning techniques to extract valuable information from these large .txt files in a short amount of time.
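One plausible first step, sketched here under assumed conventions (whitespace-separated "m/z intensity" lines and unit-width bins, neither confirmed by the talk), is to parse each .txt spectrum and bin it into a fixed-length feature vector that machine learning models can consume:

```python
import io

def parse_spectrum(text):
    """Parse whitespace-separated 'm/z intensity' lines into float pairs."""
    pairs = []
    for line in io.StringIO(text):
        parts = line.split()
        if len(parts) == 2:
            pairs.append((float(parts[0]), float(parts[1])))
    return pairs

def bin_spectrum(pairs, mz_min=50.0, mz_max=1000.0, width=1.0):
    """Sum intensities into unit-m/z bins to get a fixed-length vector,
    so many spectra become comparable rows of one feature matrix."""
    n_bins = int((mz_max - mz_min) / width)
    vec = [0.0] * n_bins
    for mz, inten in pairs:
        if mz_min <= mz < mz_max:
            vec[int((mz - mz_min) / width)] += inten
    return vec

raw = "100.1 5.0\n100.7 2.0\n250.4 9.0\n"
vec = bin_spectrum(parse_spectrum(raw))
# vec[50] == 7.0 (both peaks near m/z 100); vec[200] == 9.0
```

Once every spectrum is a fixed-length vector, standard clustering or classification tools can be applied across the whole collection of files.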

 

 

Tareq Khan



Tareq is a professor in the Engineering Department at EMU. His talk is titled An Intelligent Baby Monitor with Automatic Sleeping Posture Detection and Notification.

Abstract: Artificial intelligence (AI) has brought lots of excitement to our day-to-day lives; examples include spam email detection and language translation. Baby monitoring devices are used to send video of the baby to the caregiver’s smartphone, but most of these devices do not automatically interpret that data. In this research, AI and image processing techniques were developed to automatically recognize unwanted situations involving the baby. The monitoring device automatically detected: (a) whether the baby’s face was covered due to sleeping on the stomach; (b) whether the baby threw off the blanket; (c) whether the baby was moving frequently; and (d) whether the baby’s eyes were open due to awakening. The device sent notifications and generated alerts on the caregiver’s smartphone whenever one or more of these situations occurred. Thus, caregivers were not required to monitor the baby at regular intervals; they were notified when their attention was required. The device was developed using NVIDIA’s Jetson Nano microcontroller, with a night vision camera and Wi-Fi connectivity interfaced. Deep learning models for pose detection and for face and landmark detection were implemented on the microcontroller. A prototype of the monitoring device and the smartphone app were developed and tested successfully in different scenarios. Compared with general baby monitors, the proposed device gives caregivers more peace of mind by automatically detecting unwanted situations.
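The alerting logic layered on top of the detectors can be illustrated with a small rule function; the detector output names and the movement threshold below are assumptions, since the actual interfaces are not given in the abstract:

```python
def baby_alerts(state):
    """Map assumed detector outputs (a dict) to caregiver alerts (a)-(d)."""
    alerts = []
    if state.get("face_covered"):               # (a) from face detection
        alerts.append("face covered (possible stomach sleeping)")
    if not state.get("blanket_on", True):       # (b) from image processing
        alerts.append("blanket thrown off")
    if state.get("movement_rate", 0.0) > 0.5:   # (c) threshold assumed
        alerts.append("frequent movement")
    if state.get("eyes_open"):                  # (d) from landmark detection
        alerts.append("baby awake")
    return alerts

alerts = baby_alerts({"face_covered": True, "blanket_on": False,
                      "movement_rate": 0.1, "eyes_open": False})
# two alerts: face covered, and blanket thrown off
```

In the actual device, a function like this would run on each frame's detector outputs and push any resulting alerts to the smartphone app.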


 

Ovidiu Calin

Ovidiu is a Professor in the Mathematics & Statistics Department at EMU. His talk is titled Pre-chat GPT engine: Using ML to construct Byron-type poetry.

Abstract: The approach used in this study, employing a neural network to generate poetry, is similar to the approach used in the development of language models such as GPT (Generative Pre-trained Transformer) by OpenAI. Both methods rely on training a model on a large corpus of text and using it to generate new text that resembles the input text. However, the specific architecture and training methodology used in this study differ from those used in GPT. 

The presented architecture is a neural network model that uses an RNN with 70 LSTM cells followed by a dense layer with 50 cells. The input consists of 70 input variables, and the output layer uses a softmax activation function. The activation function in the dense layer is ReLU, and the learning algorithm is the Adam minimization algorithm. Training is done in batches of 80 at a time and runs for 150 epochs, taking about 7 hours in total. The loss function is categorical cross-entropy, since this is a multi-class classification problem with 63 classes, each class corresponding to a character. During training, the network minimizes this cross-entropy cost function.
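The described architecture can be written down as a Keras configuration sketch; the one-hot input encoding is an assumption consistent with the 63 character classes, and is not stated explicitly in the abstract:

```python
# Sketch of the described model: 70-step character input,
# LSTM(70) -> Dense(50, ReLU) -> Dense(63, softmax),
# trained with Adam on categorical cross-entropy.
from tensorflow import keras
from tensorflow.keras import layers

SEQ_LEN, NUM_CLASSES = 70, 63   # 70 input characters, 63 distinct characters

model = keras.Sequential([
    layers.Input(shape=(SEQ_LEN, NUM_CLASSES)),   # one-hot characters (assumed)
    layers.LSTM(70),                              # RNN with 70 LSTM cells
    layers.Dense(50, activation="relu"),          # dense layer with 50 cells
    layers.Dense(NUM_CLASSES, activation="softmax"),
])
model.compile(optimizer="adam", loss="categorical_crossentropy")
# model.fit(X, y, batch_size=80, epochs=150)  # ~7 hours reported
```

Text is then generated character by character: feed the last 70 characters in, sample the next character from the softmax distribution, and repeat.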