Machine Learning Applications in Planetary Sciences

Planetary surfaces, such as that of Mars, have recently received much attention thanks to missions like the Mars Reconnaissance Orbiter (MRO), which provide stunning high-resolution images of the surface. One of the instruments currently aboard the MRO is the High Resolution Imaging Science Experiment (HiRISE) camera, which produces images on the order of 2 GB each.

These images are constantly used to monitor different areas of Mars; however, most of this monitoring is still done by human experts. As the mission continues and more data accumulates, we need automatic algorithms capable of detecting both dynamic changes on the surface (change detection) and significant features (feature detection).

For feature detection, we are currently using Convolutional Neural Networks (CNNs), state-of-the-art classifiers particularly well suited to images.
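To give a concrete (if toy) picture of what a CNN layer computes, the sketch below implements a single convolution, ReLU, and max-pooling pass in plain NumPy on a synthetic 8x8 "image"; the kernel and sizes are invented for illustration and are not taken from the actual HiRISE pipeline.

```python
import numpy as np

def conv2d(image, kernel):
    """Valid 2-D cross-correlation of a single-channel image with a kernel."""
    kh, kw = kernel.shape
    h, w = image.shape
    out = np.zeros((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

def relu(x):
    """Zero out negative responses."""
    return np.maximum(x, 0.0)

def max_pool(x, size=2):
    """Downsample by keeping the maximum of each size x size block."""
    h2, w2 = x.shape[0] // size, x.shape[1] // size
    return x[:h2 * size, :w2 * size].reshape(h2, size, w2, size).max(axis=(1, 3))

# A toy 8x8 "image" with a bright 3x3 blob, and a 3x3 averaging kernel.
image = np.zeros((8, 8))
image[2:5, 2:5] = 1.0
kernel = np.ones((3, 3)) / 9.0

feature_map = max_pool(relu(conv2d(image, kernel)))
print(feature_map.shape)  # (3, 3)
```

In a real CNN, many such kernels are learned from labeled examples rather than fixed by hand, and several convolution/pooling layers are stacked before a final classifier.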

For change detection, we are using generative models, which treat an image pair as a probability distribution. With the use of priors and meta-information about the images (lighting, camera inclination, sun orientation), we can model the different changes the surface can undergo over any given period.
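A heavily simplified sketch of the idea, assuming (purely for illustration) that "no change" means the pixel-wise difference is zero-mean Gaussian noise; a real model would set the noise level and decision threshold from the lighting and viewing-geometry metadata rather than the constants used here.

```python
import numpy as np

def change_mask(img_before, img_after, noise_sigma=0.05, threshold=3.0):
    """Flag pixels whose difference is improbable under a zero-mean Gaussian
    'no change' model with standard deviation noise_sigma (an assumed prior
    on illumination/sensor noise)."""
    z = np.abs(img_after - img_before) / noise_sigma  # standardized residual
    return z > threshold

rng = np.random.default_rng(0)
before = rng.normal(0.5, 0.01, size=(16, 16))
after = before + rng.normal(0.0, 0.01, size=(16, 16))
after[4:8, 4:8] += 0.5  # inject a genuine 4x4 surface change

mask = change_mask(before, after)
print(mask.sum())  # the 16 pixels of the injected change
```

The generative framing pays off precisely where this toy breaks down: with priors conditioned on metadata, apparent changes due to lighting or viewing angle can be explained away instead of flagged.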

Figure: A deep network detecting rootless cones in the image; red squares mark correctly classified cones.

Use of Machine Learning and Statistical Learning for Human Activity Recognition

Recognizing human activities in an enclosed environment is nowadays one of the most interesting and profitable tasks in modern Machine Learning. Companies like Amazon, Microsoft, and Google are devoting ever more resources to bringing the so-called "Internet of Things" into the home.

The availability of robust, high-quality information about people's activities, gathered with minimal sensor intrusion, would allow us to monitor disabled and elderly persons and to improve their quality of life through intelligent appliances and an interconnected network of transportation and grocery supply.

Someone confined to their bed would otherwise need the constant attention of a human caregiver, limiting their independence. By monitoring them and their preferences, we can let them interact more fully with the world beyond their bed: recommending TV shows, enabling online shopping, or building smart wheelchairs that approach the bed when the user intends to get up. In essence, we can make everyday tasks feel as seamless for them as they are for everyone else.

To detect all of these variables, intelligent systems rely on a combination of Machine Learning techniques and dedicated hardware: computer vision to detect whether a person is currently in the room, for example, or a microphone to make their communication with the environment feel as natural as possible.

During my research, I focused on the 4W1H paradigm, which gathers only minimal information about the environment (keeping intrusion as small as possible) while still providing enough information to infer what the user is doing in the room. 4W1H is essentially a data-gathering paradigm for small enclosed rooms.

Once we have 4W1H feature sets with labeled activity information, we can apply a plethora of Machine Learning tools to classify, group, and detect human activity.
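As a concrete, purely illustrative sketch of what such a labeled feature set might look like in code, each 4W1H sample can be treated as a record and flattened into a numeric vector; every field name and encoding choice below is my own assumption, not part of the paradigm's published definition.

```python
from dataclasses import dataclass
from datetime import datetime

@dataclass
class Observation4W1H:
    """One 4W1H observation (illustrative fields, not the real spec)."""
    who: str        # which person was sensed
    what: str       # the activity label (when annotated)
    when: datetime  # timestamp of the observation
    where: tuple    # (x, y) position inside the room
    how: float      # e.g. a motion-intensity reading

def to_feature_vector(obs, who_ids, what_ids):
    """Encode one observation as a plain numeric vector for a classifier."""
    return [
        who_ids[obs.who],                        # categorical -> integer id
        what_ids[obs.what],
        obs.when.hour + obs.when.minute / 60.0,  # time of day in hours
        obs.where[0], obs.where[1],
        obs.how,
    ]

obs = Observation4W1H("resident", "sitting",
                      datetime(2013, 5, 1, 14, 30), (1.2, 0.8), 0.05)
vec = to_feature_vector(obs, {"resident": 0}, {"sitting": 2})
print(vec)  # [0, 2, 14.5, 1.2, 0.8, 0.05]
```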

In my case, I used Self-Organizing Maps along with clustering techniques to discover structure in each of the tasks; this way, we can group tasks by similarity rather than by hard labeling. This approach is justified because many hard task labels are ambiguous and in many cases very similar to one another.
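The sketch below is a minimal 1-D self-organizing map written from scratch (not the code used in the research): each sample pulls its best-matching unit, and a neighbourhood that shrinks over training, toward itself, so that similar "activities" end up on nearby units.

```python
import numpy as np

def train_som(data, n_units=4, epochs=50, lr=0.5, seed=0):
    """Train a tiny 1-D self-organizing map over row-vector samples."""
    rng = np.random.default_rng(seed)
    weights = rng.normal(size=(n_units, data.shape[1]))
    for epoch in range(epochs):
        sigma = max(1.0 * (1 - epoch / epochs), 0.1)   # shrinking neighbourhood
        alpha = lr * (1 - epoch / epochs) + 0.01       # decaying learning rate
        for x in data:
            # best-matching unit: the unit closest to this sample
            bmu = np.argmin(np.linalg.norm(weights - x, axis=1))
            # Gaussian influence over the 1-D grid of units
            influence = np.exp(-((np.arange(n_units) - bmu) ** 2)
                               / (2 * sigma ** 2))
            weights += alpha * influence[:, None] * (x - weights)
    return weights

# Two tight synthetic "activity" clusters in a 2-D feature space.
rng = np.random.default_rng(1)
data = np.vstack([rng.normal(0.0, 0.1, (30, 2)),
                  rng.normal(3.0, 0.1, (30, 2))])
som = train_som(data)
# Assign each sample to its best-matching unit (a soft grouping by similarity).
labels = np.array([np.argmin(np.linalg.norm(som - x, axis=1)) for x in data])
```

After training, samples from the two clusters land on different units, which is the "grouping by similarity rather than hard labels" effect described above.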

I have also explored ways to correlate pose and activities so that simple, cheap sensors can yield more information about the environment.

Relevant Papers
  1. Leon Palafox and Hideki Hashimoto. 4W1H and particle swarm optimization for human activity recognition. Journal of Advanced Computational Intelligence and Intelligent Informatics, 15(7):793-799, 2011.
  2. Wee-Hong Ong, Leon Palafox, and Takafumi Koseki. Investigation of Feature Extraction for Unsupervised Learning in Human Activity Detection. Bulletin of Networking, Computing, Systems, and Software, 2(1):30-35, 2013.
  3. Wee-Hong Ong, Takafumi Koseki, and Leon Palafox. An Unsupervised Approach for Human Activity Detection and Recognition. International Journal of Simulation-Systems, Science & Technology, 14(5), 2013.
  4. Wee-Hong Ong, Leon Palafox, and Takafumi Koseki. An Incremental Approach of Clustering for Human Activity Discovery. IEEJ Transactions on Electronics, Information and Systems, 134(11):1-6, 2014.
  5. Leon Palafox and Hideki Hashimoto. A movement profile detection system using self organized maps in the intelligent space. In Proceedings of IEEE Workshop on Advanced Robotics and its Social Impacts, ARSO, pages 114-118. IEEE, 2009.
  6. Leon Palafox and Hideki Hashimoto. A compressive sensing approach to the 4W1H architecture. In Proceedings of the IEEE International Conference on Industrial Technology, pages 1599-1603. IEEE, 2010.
  7. Leon Palafox and Hideki Hashimoto. Human Action Recognition using 4W1H and Particle Swarm Optimization Clustering. In Proceedings of IEEE Sensors, pages 369-373. IEEE, 2010.
  8. Leon Palafox and Hideki Hashimoto. Human action recognition using wavelet signal analysis as an input in 4W1H. In 2010 8th IEEE International Conference on Industrial Informatics (INDIN), pages 679-684. IEEE, 2010.
  9. Peshala G. Jayasekara, Leon Palafox, T. Sasaki, H. Hashimoto, and Beom H. Lee. Simultaneous localization assistance for multiple mobile robots using particle filter based target tracking. In 2010 Fifth International Conference on Information and Automation for Sustainability, pages 469-474. IEEE, 2010.
  10. Leon Palafox, Laszlo A. Jeni, Hideki Hashimoto, and Beom H. Lee. 5W1H as a human activity recognition paradigm in the iSpace. In 2011 8th Asian Control Conference (ASCC), pages 712-718. IEEE, 2011.
  11. Leon Palafox, Laszlo A. Jeni, and Hideki Hashimoto. Using conditional random fields to validate observations in a 4W1H paradigm. In 4th International Conference on Human System Interaction, HSI 2011, pages 80-84. IEEE, 2011.
  12. Wee-Hong Ong, Takafumi Koseki, and Leon Palafox. Unsupervised human activity detection with skeleton data from RGB-D sensor. In Proceedings of the 5th International Conference on Computational Intelligence, Communication Systems, and Networks, CICSyN 2013, pages 30-35. IEEE, 2013.

Discovery of Gene Interactions using Machine Learning

Genes in the human body are responsible for creating proteins, and thus every mechanism in the body is governed by genes. Genes are activated or suppressed by other genes. When you get hurt and start bleeding, your kidneys generate signaling proteins that stimulate the creation of new blood cells; people with certain blood diseases have a hard time regulating those genes.

We still do not know many of these gene interactions; if we did, treatments for many diseases, including cancer, could advance dramatically. To underscore the problem's importance, multiple organizations have launched efforts like the DREAM challenges, which benchmark the state of the art in gene network inference.

During my research, I focused mostly on small networks, since they are easier to model and allow more sophisticated inference techniques, whose viability can then be tested against real-world scenarios of millions of genes interacting at the same time.

Among the tools I used were clustering techniques such as Mixture Models, improved upon with the recently developed infinite Mixture Models, in which Dirichlet Processes determine the number of clusters needed to find the best candidates.
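The mechanism behind such infinite mixtures can be illustrated with the stick-breaking construction of a Dirichlet Process, where the mixture weights come from repeatedly breaking off Beta-distributed fractions of a unit-length stick; the concentration value and truncation below are arbitrary examples, not parameters from the research.

```python
import numpy as np

def stick_breaking(alpha, n_sticks, rng):
    """Stick-breaking weights of a Dirichlet Process:
    beta_k ~ Beta(1, alpha), w_k = beta_k * prod_{j<k} (1 - beta_j)."""
    betas = rng.beta(1.0, alpha, size=n_sticks)
    # length of stick remaining before each break
    remaining = np.concatenate(([1.0], np.cumprod(1.0 - betas)[:-1]))
    return betas * remaining

rng = np.random.default_rng(42)
weights = stick_breaking(alpha=2.0, n_sticks=100, rng=rng)
# The weights sum to (almost) 1; a mixture "uses" only however many
# components receive non-negligible mass, so the number of clusters is
# inferred rather than fixed in advance.
n_effective = int((weights > 0.005).sum())
```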

Relevant Papers

  1. Leon Palafox, H. Iba, and N. Noman. Reverse Engineering of Gene Regulatory Networks using Dissipative Particle Swarm Optimization. IEEE Transactions on Evolutionary Computation, 17(4):1-1, 2012.
  2. Nasimul Noman, Leon Palafox, and Hitoshi Iba. Evolving genetic networks for synthetic biology. New Generation Computing, 31(2):71-88, 2013.
  3. Vishal Soam, Leon Palafox, and Hitoshi Iba. Multi-objective portfolio optimization and rebalancing using genetic algorithms with local search. In 2012 IEEE Congress on Evolutionary Computation, CEC 2012, 2012.
  4. Leon Palafox and Hitoshi Iba. On the use of Population Based Incremental Learning to do Reverse Engineering on Gene Regulatory Networks. In Evolutionary Computation (CEC), 2012 IEEE Congress on, 2012.
  5. Leon Palafox and Hitoshi Iba. Gene Regulatory Network Reverse Engineering using Population Based Incremental Learning and K-means. In Genetic and Evolutionary Computation Conference, 2012.
  6. N. Noman, Leon Palafox, and Hitoshi Iba. On Model Selection Criteria in Reverse Engineering Gene Networks Using RNN Model. In Convergence and Hybrid Information Technology, pages 155-164. Springer Berlin/Heidelberg, 2012.
  7. Leon Palafox, Nasimul Noman, and Hitoshi Iba. Study on the Use of Evolutionary Techniques for Inference in Gene Regulatory Networks. In Natural Computing and Beyond, pages 82-92. Springer Japan, 2013.
  8. Leon Palafox. Extending Population Based Incremental Learning using Dirichlet Processes. In Information Sciences, pages 1686-1693. IEEE, 2013.
  9. Nasimul Noman, Leon Palafox, and Hitoshi Iba. Reconstruction of Gene Regulatory Networks from Gene Expression Data Using Decoupled Recurrent Neural Network Model. In Natural Computing and Beyond SE - 8, volume 6, pages 93-103. Springer Japan, 2013.
  10. Nasimul Noman, Leon Palafox, and Hitoshi Iba. Inferring Genetic Networks with a Recurrent Neural Network Model Using Differential Evolution. In Nikola Kasabov, editor, Springer Handbook of Bio-/Neuroinformatics SE - 22, pages 355-373. Springer Berlin Heidelberg, 2014.

Detection of human evoked potentials on EEG using LASSO for use in a Brain-Machine Interface

The way we make inferences about the workings of the brain is by reading the electric impulses from its neurons. There are more complex ways to do so, but the most straightforward is the use of EEG systems, which work by placing electrodes on the scalp and reading the electric impulses. This is a noisy and unreliable technique, since the SNR is very low, and finding good descriptors in the raw signal is often difficult. The EEG signal is also non-stationary by its very nature, which makes it hard to use conventional classification techniques when we want to infer particular tasks.

To work around this, we used post-processing techniques, such as spectrogram analysis, to find stationary features that correlate in some fashion with different tasks. By training the subjects over an extended period of time, they were able to modulate particular frequency ranges of their EEG.
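As an illustrative sketch of this kind of stationary descriptor (not the exact processing used in the study), the function below computes the average power in the alpha band (8-12 Hz) over sliding FFT windows, on synthetic sinusoidal "signals"; the sampling rate and window sizes are invented for the example.

```python
import numpy as np

def band_power(signal, fs, window=256, step=128, band=(8.0, 12.0)):
    """Mean power inside `band` (Hz) for each sliding FFT window --
    a stationary descriptor extracted from a non-stationary signal."""
    freqs = np.fft.rfftfreq(window, d=1.0 / fs)
    in_band = (freqs >= band[0]) & (freqs <= band[1])
    powers = []
    for start in range(0, len(signal) - window + 1, step):
        seg = signal[start:start + window] * np.hanning(window)  # taper edges
        powers.append((np.abs(np.fft.rfft(seg)) ** 2)[in_band].mean())
    return np.array(powers)

fs = 128.0                                # assumed sampling rate (Hz)
t = np.arange(0, 4, 1 / fs)               # 4 seconds of samples
alpha_like = np.sin(2 * np.pi * 10 * t)   # 10 Hz: inside the alpha band
beta_like = np.sin(2 * np.pi * 25 * t)    # 25 Hz: outside it

alpha_power = band_power(alpha_like, fs)
beta_power = band_power(beta_like, fs)
```

A subject modulating their alpha rhythm shows up as a change in `alpha_power` over time, which is exactly the kind of feature a classifier can then be trained on.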

We used this modulation capability to train a system that would enhance and support them in BMI tasks, such as dropping a virtual ball on a screen. To do this, we trained the system using LASSO, a sparse regression method that selects the most informative descriptors. We also found that the descriptors the users found easiest to modulate were usually those located over the motor cortex.
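A minimal sketch of LASSO itself, solved by cyclic coordinate descent with soft-thresholding on synthetic data (the penalty value and data are invented for illustration): the L1 penalty drives uninformative weights exactly to zero, which is the descriptor-selection effect described above.

```python
import numpy as np

def soft_threshold(x, lam):
    """Shrink x toward zero by lam; the operator behind the L1 penalty."""
    return np.sign(x) * np.maximum(np.abs(x) - lam, 0.0)

def lasso_cd(X, y, lam=0.1, n_iter=200):
    """LASSO by cyclic coordinate descent:
    minimize (1/2n)||y - Xw||^2 + lam * ||w||_1."""
    n, d = X.shape
    w = np.zeros(d)
    col_sq = (X ** 2).sum(axis=0) / n
    for _ in range(n_iter):
        for j in range(d):
            # partial residual excluding feature j's current contribution
            residual = y - X @ w + X[:, j] * w[j]
            rho = X[:, j] @ residual / n
            w[j] = soft_threshold(rho, lam) / col_sq[j]
    return w

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 5))                        # 5 candidate descriptors
y = 3.0 * X[:, 0] + rng.normal(0.0, 0.1, size=100)   # only descriptor 0 matters
w = lasso_cd(X, y, lam=0.2)
print(w)  # descriptor 0 gets a large weight; the rest are exactly zero
```

The nonzero entries of `w` are the selected descriptors; in the EEG setting these tended to correspond to electrodes over the motor cortex.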

Left: power spectrogram in the alpha frequencies showing strong user modulation over the motor cortex. Right: a subject using this alpha-wave modulation to control a dot on the screen.