Research

Computational Neuroscience

When I was pursuing my doctoral degree at NAIST, I worked on a computational approach to investigating thermal neuromodulation for epilepsy. I am generally interested in modelling mental and brain disorders, and in seeing whether we can use the power of computation to understand them and to look for solutions where they exist. Other fields of computational neuroscience are equally interesting, such as the study of consciousness, memory, and the decision-making processes of the rational brain.

Machine Learning Applications

While Google and others have already demonstrated big leaps in machine learning research, I believe that many problems, if not all, still need a tailor-fit solution, as in bioinformatics and other complex fields of research. Nevertheless, I believe that machine learning is the future tool of engineering, if it is not already. At NAIST, every laboratory allots a significant amount of time each year for students to master the foundations of machine learning through reports and programming exercises. Reference textbooks include PRML by Bishop, Machine Learning by Murphy, and ESL by Hastie, Tibshirani, and Friedman.

Dynamical Systems Modelling

What's so nice about differential equations is that they can model almost anything - predator-prey dynamics and the like, Hodgkin-Huxley neurons, and even space exploration. And even when the results are not perfectly accurate, they give you valuable insights into the problem. And oh, bifurcation studies - definitely hard, but fascinatingly insightful.
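As a quick illustration, here is a minimal sketch of the classic Lotka-Volterra predator-prey system integrated with SciPy. The parameter values and initial populations are illustrative choices of mine, not fitted to any data.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Lotka-Volterra predator-prey equations:
#   dx/dt = alpha*x - beta*x*y   (prey grows, gets eaten)
#   dy/dt = delta*x*y - gamma*y  (predator eats, dies off)
def lotka_volterra(t, state, alpha=1.0, beta=0.1, delta=0.075, gamma=1.5):
    x, y = state
    return [alpha * x - beta * x * y, delta * x * y - gamma * y]

# integrate from t = 0 to t = 50 with illustrative initial populations
sol = solve_ivp(lotka_volterra, (0, 50), [10.0, 5.0],
                t_eval=np.linspace(0, 50, 500))
print(sol.y[:, -1])  # prey and predator populations at t = 50
```

Plotting sol.y against sol.t shows the familiar population oscillations, and sweeping the parameters to see how the behaviour changes is exactly where bifurcation analysis begins.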

Complex Systems Simulation

When the problem gets more complex and differential equations are no longer suitable, you can always start from the building blocks - the agents. Agent-based simulation can investigate more complex dynamics using simplified rules for agents moving in and interacting with their local environment. You name it - traffic systems, pedestrian dynamics, social interactions, landslides, protein diffusion, galaxies?
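To make that concrete, here is a minimal agent-based sketch of my own: particles (think protein diffusion) performing random walks on a 2D lattice, each following a single local rule. The agent count and step count are arbitrary choices.

```python
import random

N_AGENTS = 100
STEPS = 1000

# every agent starts at the origin of an unbounded 2D lattice
agents = [[0, 0] for _ in range(N_AGENTS)]

# the single local rule: step to a random neighbouring lattice site
for _ in range(STEPS):
    for agent in agents:
        dx, dy = random.choice([(1, 0), (-1, 0), (0, 1), (0, -1)])
        agent[0] += dx
        agent[1] += dy

# mean squared displacement: for an unbiased lattice walk this grows
# linearly with time, so it should come out near STEPS (= 1000)
msd = sum(x * x + y * y for x, y in agents) / N_AGENTS
print(f"mean squared displacement after {STEPS} steps: {msd:.1f}")
```

Even with a rule this simple, a macroscopic law emerges from the agents: the mean squared displacement grows linearly with time, which is the signature of diffusion.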

Evolutionary Computing

I have always been fascinated by Genetic Algorithms and other evolutionary computational methods. While they lack a solid proof that they work all the time, in practice they seem to, though often only after a lot of fine-tuning. They are even used to optimize the hyperparameters of machine learning algorithms. Not all optimization problems are convex, or anywhere near convex, so when exact methods fall short, we turn to these heuristics instead. I wonder, is this itself part of the evolutionary process?
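As an illustration, here is a minimal genetic-algorithm sketch minimising the non-convex Rastrigin benchmark. All of the hyperparameters (population size, mutation scale, and so on) are my own illustrative picks and, true to form, would need fine-tuning on a real problem.

```python
import math
import random

# Rastrigin: a standard non-convex benchmark riddled with local minima;
# its global minimum is 0 at the origin.
def rastrigin(x):
    return 10 * len(x) + sum(xi * xi - 10 * math.cos(2 * math.pi * xi)
                             for xi in x)

POP, DIM, GENS = 60, 2, 200

# random initial population within the usual Rastrigin search box
pop = [[random.uniform(-5.12, 5.12) for _ in range(DIM)] for _ in range(POP)]

for _ in range(GENS):
    pop.sort(key=rastrigin)          # rank by fitness (lower is better)
    parents = pop[:POP // 2]         # truncation selection
    children = []
    while len(children) < POP - len(parents):
        a, b = random.sample(parents, 2)
        child = [random.choice(pair) for pair in zip(a, b)]  # uniform crossover
        child = [xi + random.gauss(0, 0.1) for xi in child]  # Gaussian mutation
        children.append(child)
    pop = parents + children

best = min(pop, key=rastrigin)
print(f"best solution: {best}, fitness: {rastrigin(best):.4f}")
```

No convexity is assumed anywhere here - selection, crossover, and mutation only ever compare fitness values, which is exactly why these methods remain usable when exact optimization is out of reach.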