Robotics
Robots and Cognitive Systems
Work Robotics
Well, I'm pleased to write that since my last update, and since about 2012, I've been involved in robotics here at GE Global Research (a different gig than when I posted my last update). So kids, learning on your own can pay off with new opportunities ;-). Anyway, starting around 2012 I helped with a proposal for the VA to automate perioperative care, and winning that proposal allowed our group to go out and start hiring roboticists. For me, it's a way to ramp up interest in embodied cognition - the idea that our cognitive processes are constrained by the body and world we are in - which I hope is one way of sidestepping some of the NP-hard conundrums of AI. Current work since about 2016 has been on designing a new cognitive architecture for our internal experiments - one much more focused on the kind of three-layer architecture and surprise-based learning I talk a bit about below. Of course we're also adding some new stuff, and my real focus here is on an explicit attention model - that is, identification by the robot of things in the current context that may be important to it (e.g., that may influence the success or failure of its current plans, maintenance goals, etc.) - and linking that to how it makes decisions about what to do, in both a physical and a mental sense (e.g., using an actuator to change the world, or replanning). It's interesting work (for me anyway) and I'm hoping much of it can be published at some point.
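To make the attention idea a little more concrete, here's a minimal sketch of what "attend, then decide between physical and mental action" might look like. This is purely illustrative - all the names (`relevance`, `attend`, `decide`, the goal/percept fields) are my own invention, not the actual GE architecture:

```python
# Hypothetical sketch of an explicit attention model: score percepts by
# relevance to the robot's current plans and maintenance goals, then decide
# whether to act in the world or replan. All names are illustrative.

def relevance(percept, goals):
    """Crude relevance score: how many active goals depend on this percept."""
    return sum(1 for g in goals if percept["label"] in g["depends_on"])

def attend(percepts, goals, k=2):
    """Return the k percepts most likely to affect current plans/goals."""
    return sorted(percepts, key=lambda p: relevance(p, goals), reverse=True)[:k]

def decide(attended, goals):
    """If an attended percept threatens a goal, replan (mental action);
    otherwise keep acting (physical action)."""
    for p in attended:
        for g in goals:
            if p["label"] in g["depends_on"] and p["state"] == "blocked":
                return ("replan", g["name"])
    return ("act", None)

goals = [{"name": "deliver-tray", "depends_on": {"door", "tray"}}]
percepts = [{"label": "door", "state": "blocked"},
            {"label": "wall", "state": "ok"},
            {"label": "tray", "state": "ok"}]

focus = attend(percepts, goals)
print(decide(focus, goals))  # -> ('replan', 'deliver-tray')
```

The point of the sketch is only the shape of the loop: attention narrows the percept stream to what bears on current goals, and the decision step can resolve to either a physical action or a purely mental one like replanning.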
My current work on the home robotics front has pretty much been around learning ROS, particularly exploiting roslisp, and trying to find versions of linux to run on the linux box I built (from relatively low-end intel core2 components) back around 2006. So I really need to convince the wife to let me upgrade, perhaps to a nifty new (at the time of this writing: 6/18) Nvidia Xavier, which looks like the robotics development box to beat at this point. I'm less concerned with building up architecture the way I was back in 2008 (my last update here) and more looking specifically at autonomous decision making around perception - particularly the extent to which we have to represent percepts semantically. This is tied into my day job now, as you can see from the text above; the main difference is that the GE work deals with a semantic representation, while the free-time work questions the need to always do that, while still (hopefully) having something that can be reasoned over semantically (because that's the representation of our plans - and of our "inner voice", for that matter).
[from 2008:] Recently I've been getting more interested and involved in robotics technologies. I think part of this is just to try to get a break from the semantic paradigm I've spent most of my professional career working within, and part is that, at least to some extent, I've become disillusioned with it: I don't think we're going to understand intelligence by building semantics in, but rather by watching semantics develop.
I've been researching various robotic architectures, though not a lot has changed since I helped develop a robotics exploration class at U Rochester in the early 90s. The traditional sense-plan-act loop beloved of AI in the 70s and 80s is pretty much dead now, replaced by 1) Brooksian reactive/"behavioral" architectures (basically sense-act - no knowledge representation, so no planning), and 2) the "three-layer" architecture, which augments a reactive layer (to allow real-time response) with a mission/planning layer on top and a binding layer in between that essentially enables and disables the reactive behaviors.
3) On a more "cognitive" branch, there have been some attempts at using traditional production-system cognitive architectures such as Soar and ACT-R for robotics.
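The three-layer idea above can be sketched in a few lines of code. This is a toy, with invented names, not any particular system: the reactive layer is a set of always-available sense-act behaviors, the top layer holds an abstract mission plan, and the binding layer maps the current plan step to the subset of behaviors that should be enabled:

```python
# Minimal sketch of a three-layer architecture (illustrative, not real code):
# a reactive layer of sense-act behaviors, a middle "binding" layer that
# enables/disables them, and a top layer holding the current mission plan.

class Behavior:
    def __init__(self, name, trigger, action):
        self.name, self.trigger, self.action = name, trigger, action
        self.enabled = False
    def step(self, percepts):
        # Reactive: no planning, just a triggered action when enabled.
        if self.enabled and self.trigger(percepts):
            return self.action
        return None

# Reactive layer: sense-act rules, no knowledge representation.
behaviors = [
    Behavior("avoid", lambda p: p["obstacle"], "turn-away"),
    Behavior("seek",  lambda p: not p["obstacle"], "drive-to-goal"),
]

# Top layer: a (pre-computed) mission plan of abstract steps.
plan = ["navigate-to-room", "dock"]

# Binding layer: map each plan step to the behaviors it enables.
bindings = {"navigate-to-room": {"avoid", "seek"}, "dock": {"seek"}}

def control_step(plan_step, percepts):
    enabled = bindings[plan_step]
    for b in behaviors:
        b.enabled = b.name in enabled
    # First enabled, triggered behavior wins (a simple arbitration scheme).
    for b in behaviors:
        action = b.step(percepts)
        if action:
            return action
    return "idle"

print(control_step("navigate-to-room", {"obstacle": True}))   # -> turn-away
print(control_step("navigate-to-room", {"obstacle": False}))  # -> drive-to-goal
```

Note that the binding layer never generates actions itself - it only gates which reactive behaviors are allowed to fire, which is what lets the reactive layer keep its real-time response while the top layer thinks at plan granularity.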
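For readers who haven't met production systems: the core loop in the Soar/ACT-R family is a recognize-act cycle in which rules match against a working memory and fire, changing that memory. A toy version (purely illustrative, nothing like the real systems' matching machinery):

```python
# Toy production-system loop in the spirit of Soar/ACT-R (illustrative only):
# rules match against working memory and fire, adding new elements,
# until no rule applies.

rules = [
    # (name, condition on working memory, elements added when fired)
    ("see-cup",   lambda wm: "cup-visible" in wm and "cup-located" not in wm,
                  {"cup-located"}),
    ("grasp-cup", lambda wm: "cup-located" in wm and "cup-held" not in wm,
                  {"cup-held"}),
]

def run(wm):
    trace, fired = [], True
    while fired:
        fired = False
        for name, cond, adds in rules:
            if cond(wm):
                wm |= adds
                trace.append(name)
                fired = True
                break  # recognize-act cycle: one rule firing per cycle
    return trace

print(run({"cup-visible"}))  # -> ['see-cup', 'grasp-cup']
```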
Most intriguing to my mind is work by Wei-Min Shen at USC ISI on robots that learn from their interaction with the environment, so that locomotion skills can be learned on arbitrary hardware configurations. This is getting close to the kind of self-development of semantics I'm interested in as well, and I hope they are successful in growing the applications of this technology.
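The learning-from-interaction loop at the heart of this (and of the surprise-based learning mentioned in the update above) can be sketched very simply. This is my own hypothetical toy, not Shen's actual algorithm: the robot predicts the effect of each action, and when the observed outcome contradicts the prediction - a surprise - it revises its model:

```python
# Hypothetical sketch of surprise-based learning: predict the effect of each
# action; when observation contradicts prediction ("surprise"), revise the
# model. Names and the revision rule are illustrative.

def learn(model, experiences):
    """model maps action -> predicted outcome; revise on surprise."""
    surprises = 0
    for action, observed in experiences:
        predicted = model.get(action)
        if predicted != observed:          # surprise: prediction failed
            model[action] = observed       # naive revision: adopt what happened
            surprises += 1
    return surprises

# A leg actuator whose real effect the robot doesn't know yet.
model = {"extend-leg": "move-forward"}
experiences = [("extend-leg", "move-backward"),   # surprise -> revise
               ("extend-leg", "move-backward")]   # now predicted correctly

print(learn(model, experiences), model)
# -> 1 {'extend-leg': 'move-backward'}
```

The appeal is exactly what makes it hardware-independent: nothing in the loop assumes what "extend-leg" does, so the same learner works whatever the body happens to be.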
In order to do some of this exploration on my own time (rather than burdening my employer with the kind of training pretty much any robotics engineer should be graduating with), I'm in the process of putting together some basic robotics development tools at home (see the Home Robotics link for more).
Last Update: 6/7/18