Research Spotlight: William Harrison

Dr. William Harrison is a Research Engineer at NIST working in robotics, simulation, virtual reality, and artificial intelligence. Prior to working at NIST, he received his Ph.D. in Mechanical Engineering from the University of Michigan and worked for two years at Fanuc Robotics America. In this research spotlight, Dr. Harrison gives an overview of his current work followed by a brief question and answer session with NIST staff. For further questions or discussion, Dr. Harrison is available by email at william.harrison@nist.gov or through the AI4ManufacturingRobotics Slack channel.

Overview


As industrial technologies progress, methodologies for training and measuring the systems that make up a manufacturing process need to become more sophisticated, intelligent, and agile. Methodologies for measuring meta-characteristics such as agility and intelligence will become more useful as exhaustive system testing becomes less practical. Intelligence is needed not only in robot control but also in the robot testing methodology. Intelligent testing could enable higher confidence in robot behavior by actively searching for edge-case scenarios.


Our work centers on exploring how to use intelligent models to both train and assess industrial robot intelligence and agility in simulation. One possible approach is the generator-discriminator relationship used in Generative Adversarial Networks (GANs). An intelligent environment could potentially compete against an intelligent robot control system in much the same way as a GAN. In this case, the intelligent environment learns every time the robot successfully completes its task, and the robot control system learns every time it faces a new environment. We hypothesize that this may be possible by mixing a reinforcement learning approach to robot training with a GAN-style approach to generating new environments.
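
To make the co-adaptive loop concrete, here is a minimal, hypothetical sketch (an editorial illustration, not NIST code): the environment generator and the robot policy are each reduced to a single scalar parameter, so that the "each side learns when the other wins" dynamic is easy to see. The task, parameter names, and update rules are illustrative assumptions only.

```python
# A toy version of the adversarial training idea: an "environment generator"
# (difficulty) and a "robot policy" (skill) update only when the other side wins.
# All names and numbers are illustrative assumptions, not NIST code or results.
import numpy as np

rng = np.random.default_rng(0)
difficulty = 0.1   # environment generator's parameter
skill = 0.1        # robot control system's parameter
LR = 0.05          # learning rate for both sides

for episode in range(500):
    # The environment proposes a task near its current difficulty level.
    task = difficulty + 0.1 * rng.normal()
    # The robot succeeds when its skill (plus execution noise) exceeds the task.
    success = skill + 0.1 * rng.normal() > task

    if success:
        # Robot completed the task: the environment learns, making tasks harder.
        difficulty += LR
    else:
        # Robot failed in the new environment: the robot learns from the failure.
        skill += LR

print(f"final difficulty={difficulty:.2f}, skill={skill:.2f}")
```

Replacing the scalar updates with a GAN-style generator on the environment side and a reinforcement-learning policy on the robot side gives something closer to the setup described above.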


There are plenty of open questions to be answered here, as well as new expertise needed to answer them. The first concerns understanding the nature of robot training in 3D using Reinforcement Learning (RL). The second is how an intelligent environment (or any deliberately controlled environment) might affect RL. The third is how this intelligent environment might be used to measure the robot control system. Currently, the members of the subgroup are working simultaneously on understanding GANs, RL as it pertains to robotics, and procedural generation as a bridge from deliberately controlled environments to intelligent ones; a small sketch of that bridge follows below.
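
As an illustration of what "procedural generation as a bridge" might look like, the following hedged sketch generates toy workcell layouts from a small parameter vector. The scene format, parameter names, and difficulty knob are assumptions made for illustration, not the subgroup's actual tooling.

```python
# Procedural generation as a bridge: today a human chooses the generation
# parameters, but the same parameters could later be produced by a learned
# (GAN-style) generator. Scene format and parameter names are illustrative.
import numpy as np

def generate_scene(params, rng):
    """Generate a toy workcell layout from a small parameter vector."""
    n_obstacles = int(params["n_obstacles"])
    positions = rng.uniform(0.0, 1.0, size=(n_obstacles, 2))  # unit workcell
    sizes = rng.uniform(params["min_size"], params["max_size"], size=n_obstacles)
    goal = rng.uniform(0.0, 1.0, size=2)
    return {"obstacles": positions, "sizes": sizes, "goal": goal}

rng = np.random.default_rng(42)

# Deliberately controlled: a human sets the difficulty knobs by hand...
params = {"n_obstacles": 5, "min_size": 0.05, "max_size": 0.15}
scene = generate_scene(params, rng)

# ...but an intelligent environment could set them instead, for example by
# increasing clutter whenever the robot succeeds too often.
params["n_obstacles"] += 1
harder_scene = generate_scene(params, rng)
```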


Dr. William Harrison

Q and A Session


To give readers a bit of background about yourself, do you mind sharing how you got your start in AI and robotics research?

My graduate school work was in manufacturing with simulation and real processes mixed together (we called this virtual fusion). We had some robots as part of our setup then. When I graduated, I went to work for Fanuc Robotics, and after working there I came to the robot agility group at NIST, where I got into industrial robotics.

What about on the AI side?

Our group has always been focused on intelligent robotics, and in the last 3-4 years the explosion of deep learning has put a big spotlight on it. Our team leader at the time, Craig Schlenoff, gave me the opportunity to incorporate more artificial intelligence into my simulation work because it really fits my research interests going forward.

You mentioned you're currently working on intelligent and adaptive simulated environments. What drew you to this research topic in particular?

I’ve always been interested in robotic artificial intelligence, but something else I’m also interested in is video games and virtual environments that allow you to exist within them. What I’m really interested in is an environment that’s constantly trying to understand, challenge, and entertain you. A natural application of that to a robotic system (in particular an industrial robotic system) is a simulation environment that is constantly trying to challenge the robot; if the robot is capable of learning, then you have this competitive setup that feeds on itself and hopefully results in something completely new. But the main inspiration comes from thinking about how we can apply this to gaming and entertainment as well.

Were there any challenges or quirks that you found when it comes to working on industry-specific challenges in simulation?

Interestingly enough, when I first started doing research in manufacturing engineering, I remember a fairly high-ranking person saying that 3D simulation was dead and that there was no reason to do it. That was probably in the early 2010s, when not even all CAD software was fully 3D. Yet within about 12 to 18 months, there was a big switch, and all the simulation software was doing 3D simulation. I think that’s always been an issue with industry: there’s a part of industry that feels like there’s nothing left that needs to change, and then there’s a part of industry that understands the problem and understands that new technology is a way to solve it. One of the quirks of dealing with simulation and technology in manufacturing is the difference in timelines in the way that people think. There are robots that have been on the line for 10-15 years, but at the same time you want to be on the cutting edge of what’s happening.

Nowadays there's an enormous amount of AI and deep learning papers being published regularly. What's your approach to navigating this literature and dealing with all the potential noise when working on the AI part of your research?

Luckily we’re engineers first, not scientists, so I can focus on the problem that we’re trying to solve and start from there as far as what literature is applicable. But to be honest, this is something that keeps me up at night: the idea that I missed not just one paper but an entire subfield that’s completely applicable to my area, and I just don’t know the right query words to find it. And I’m afraid I might turn in a paper and find not only that someone else already did this, but that my approach was dumb. That’s a fear I definitely had with the current paper I’m working on, because not only is material being published quickly, but the breadth of it is very wide. Even though the internet and Google Scholar are very useful, without the right query terms you won’t touch on the area that you need, especially from our perspective since we’re fairly new to the field. The best you can do is a decent literature review before you get deep into anything. Hopefully, by the time you write your article and do your literature review again, nothing too crazy has happened to make your work obsolete.

The paper you're currently working on deals with analyzing the latent spaces of autoencoders; are there any insights or surprises that you ran into while working on this paper that might be interesting to newer researchers?

I feel like I don’t have enough experience in the area to give advice to someone just starting out. But the crux of what we looked at is using the latent space of an autoencoder. An autoencoder looks at a piece of data and reproduces it by first encoding it into some n-dimensional vector; the space of those vectors is the latent space. For our research, we looked at those vectors and how they’re situated near each other in order to understand what high-level concepts the model “understands”. I think this might be a viable way to try to get inside the “brain” of a model. Just looking at the weights and biases oftentimes is not a tractable way of understanding what a model understands (there are other areas of AI trying to address the predictability and understandability of models too, of course).
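
For readers curious what this kind of latent-space inspection can look like in practice, here is a small illustrative sketch rather than the setup from Dr. Harrison’s paper: a tiny autoencoder is trained on toy data, the inputs are encoded into latent vectors, and those vectors are compared by distance to see which inputs the model places near each other. The architecture, data, and distance measure are all assumptions for illustration.

```python
# Illustrative latent-space inspection: train a tiny autoencoder, encode the
# data into n-dimensional vectors, and check which inputs land near each other.
# Architecture, data, and distance measure are assumptions, not the paper's setup.
import torch
import torch.nn as nn

torch.manual_seed(0)

# Toy data: two clusters standing in for two "high-level concepts".
data = torch.cat([torch.randn(50, 8) + 3.0, torch.randn(50, 8) - 3.0])

encoder = nn.Sequential(nn.Linear(8, 16), nn.ReLU(), nn.Linear(16, 2))
decoder = nn.Sequential(nn.Linear(2, 16), nn.ReLU(), nn.Linear(16, 8))
optim = torch.optim.Adam(
    list(encoder.parameters()) + list(decoder.parameters()), lr=1e-2
)

for _ in range(300):
    latent = encoder(data)            # n-dimensional latent vectors
    recon = decoder(latent)           # reconstruction of the input
    loss = ((recon - data) ** 2).mean()
    optim.zero_grad()
    loss.backward()
    optim.step()

# Inspect the latent space: which samples sit near sample 0?
with torch.no_grad():
    latent = encoder(data)
    dists = torch.cdist(latent[:1], latent).squeeze(0)
    print("nearest neighbours of sample 0:", torch.topk(-dists, 5).indices.tolist())
```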

To close things off, I want to ask you about your outlook on the use of simulations for robotics. There's always been talk about the reality gap when it comes to simulating robotic systems and quite a lot of work to try to address it, especially recently. How optimistic are you about the role of simulations for robotics research in the near future?

I’m fairly optimistic about it. In the case of a reinforcement learning model, it seems unlikely that you could always train a model completely in the real world. Bridging that gap will be necessary for those types of applications. I think it’s obviously quite a challenge, but I think it’s a separate challenge from figuring out what you can do with a simulation environment as far as advancing the capabilities of a model.