I am a research scientist at Interactions, a startup backed by VCs including SoftBank. Interactions acquired AT&T's speech and natural language technologies, and AT&T's Watson group (where I worked) transferred from AT&T Labs Research to Interactions in December 2014 (see the Interactions press release for details).

My past and current work spans the following areas:

  • Dialog-based systems
  • Natural language processing
  • Context-aware autonomous agents
  • APIs between dialog/NLP systems and application/web-services

While I continue to work in these areas at the new startup, my research builds on my work at the following three unique places with great people:

My work at AT&T Labs Research is introduced at this link (the link should be available until the end of 2015 and maybe longer...we still have strategic ties with AT&T). Note that the content below mainly covers my work prior to joining AT&T. Someday, I hope your virtual agent will collect all of my information from multiple sources in every place, cyber and physical, where I left my digital footprints and present you an integrated view of who I am. That is also my goal. :)

At the Institute for Human and Machine Cognition (IHMC), a research institute of the Florida University System, I was a research scientist. I had the special opportunity of working with Dr. James Allen and Dr. Jeffrey Bradshaw on many interesting projects, including CALO, best known as the effort from which Apple's Siri was spun off. As part of CALO, we developed a system called PLOW that lets users teach complex tasks in a natural way by show & tell (often with a single example). You may check demo videos here. We received an Outstanding Paper Award at AAAI 2007, and our work on PLOW was very well received within the CALO project.

Before joining IHMC, I received my Ph.D. in Computer Science from the University of Southern California under the supervision of Prof. Milind Tambe. My advisor, Dr. Tambe, is the Helen N. and Emmett H. Jones Professor in Engineering at USC and the director of the Teamcore research group, which has done excellent work and created a wonderful research network among its current members and alumni.

Below is a list of my research topics at IHMC and USC with selected publications. For further information, you can contact me by email (hjung@research.att.com).


Task Learning and Execution

To assist people by performing various tasks on their behalf in dynamic, uncertain environments, a system should be able to learn how to perform many of those tasks and reason about the learned task models to make them more robust and flexible. A novel technique developed by an IHMC team (in which I was one of the core researchers) enables a system called PLOW to learn executable task models from a single collaborative learning session that consists of demonstration, dialogue-based play-by-play explanation, and mixed-initiative human-system interaction.

Unlike other learning-by-demonstration or statistical learning approaches that require numerous training examples, the PLOW task learning system requires only a single example with a modest amount of user interaction, and its formal evaluation in the DARPA CALO project showed strong performance and strong user preference over other approaches. In this research project, I led the development of four major components: (i) knowledge representation for task models; (ii) task reasoning/construction modules; (iii) a GUI-based intelligent user interface for task learning; and (iv) a goal-driven task execution agent.
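To make the idea concrete, here is a minimal, hypothetical Python sketch of what a learned task model could look like: a demonstrated action trace plus the user's play-by-play narration becomes an executable, parameterized procedure. All names (TaskModel, Step, the example task) are invented for illustration; the actual PLOW representation is far richer.

    from dataclasses import dataclass, field
    from typing import Any, Callable, Optional

    @dataclass
    class Step:
        description: str                    # from the user's play-by-play utterance
        action: Callable[[dict], Any]       # grounded, executable action
        output_var: Optional[str] = None    # variable bound to this step's result

    @dataclass
    class TaskModel:
        goal: str                           # stated goal of the taught task
        parameters: list                    # inputs identified during teaching
        steps: list = field(default_factory=list)

        def execute(self, bindings: dict) -> dict:
            # Run each learned step in order, threading variable bindings through.
            for step in self.steps:
                result = step.action(bindings)
                if step.output_var:
                    bindings[step.output_var] = result
            return bindings

    # One teaching session yields one model: the demonstration supplies the
    # actions, the narration supplies descriptions, parameters, and boundaries.
    task = TaskModel(goal="look up a stock price", parameters=["ticker"])
    task.steps.append(Step("go to the finance page",
                           lambda b: "opened page for " + b["ticker"]))
    task.steps.append(Step("read the current price",
                           lambda b: 42.0, output_var="price"))
    print(task.execute({"ticker": "T"}))    # {'ticker': 'T', 'price': 42.0}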

  • Hyuckchul Jung, James Allen, William de Beaumont, Nate Blaylock, George Ferguson, Lucian Galescu, Mary Swift, Going Beyond PBD: A Play-by-Play and Mixed-initiative Approach. In No Code Required: Giving Users Tools to Transform the Web, edited by A. Cypher, M. Dontcheva, T. Lau, and J. Nichols. Morgan Kaufmann Publishers, 2010
  • Nate Blaylock, James Allen, William de Beaumont, George Ferguson, Lucian Galescu, Hyuckchul Jung, Mary Swift, Learning Collaborative Tasks on Textual Interfaces. Proceedings of the International FLAIRS Conference: Special Track on AI, Cognitive Semantics, and Computational Linguistics, 2010
  • Hyuckchul Jung, James Allen, Nathanael Chambers, Lucian Galescu, Mary Swift, William Taysom, Utilizing Natural Language for One-Shot Task Learning. Journal of Logic and Computation, 18(3): 475-493, Oxford University Press, 2008
  • James Allen, Nathanael Chambers, George Ferguson, Lucian Galescu, Hyuckchul Jung, Mary Swift, William Taysom, PLOW: A Collaborative Task Learning Agent. Proceedings of the AAAI Conference on Artificial Intelligence, 2007 (AAAI-07 Outstanding Paper Award)


NLP-based Clinical Document Analysis

Comparative Effectiveness Research (CER) is critical to improving the quality and cost-efficiency of health care. However, much of the data that could be used for CER is unavailable without time-consuming and costly data abstraction. While various NLP techniques are being used to enhance data collection, most current techniques for information extraction from clinical texts rely on shallow understanding. In contrast, our deep natural language understanding system extracts rich semantic information from narrative text records and builds logical forms that contain ontology types as well as linguistic features. Ontology- and pattern-based extraction rules are then applied to the logical forms.
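As a toy illustration of pattern-based extraction over logical forms (the ONT:: type names and the term structure below are invented for this sketch; the real system derives logical forms from deep parsing and a large medical ontology):

    # A logical form as a set of terms: (term_id, ontology_type, word, roles).
    LOGICAL_FORM = [
        ("t1", "ONT::PERSON",     "patient",   {}),
        ("t2", "ONT::MEDICATION", "metformin", {}),
        ("t3", "ONT::TAKE",       "takes",     {"agent": "t1", "theme": "t2"}),
    ]

    def extract_medication_events(lf):
        # Pattern: a TAKE event whose theme is typed as a medication.
        terms = {tid: (typ, word, roles) for tid, typ, word, roles in lf}
        events = []
        for typ, word, roles in terms.values():
            if typ == "ONT::TAKE" and "theme" in roles:
                theme_type, theme_word, _ = terms[roles["theme"]]
                if theme_type == "ONT::MEDICATION":
                    events.append({"event": word, "medication": theme_word})
        return events

    print(extract_medication_events(LOGICAL_FORM))
    # -> [{'event': 'takes', 'medication': 'metformin'}]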

In this NIH-funded project, I focused on techniques to automatically construct the medical history of patients from narrative clinical texts, and I was also actively involved in building an OWL-based framework for information extraction.


Temporal Reasoning based on Semantics and Statistics

Temporal information extraction and reasoning can enable AI reasoning systems to infer relations between events and the scopes/boundaries of events. As part of a joint research effort with the University of Rochester to learn rich task activities in our daily lives (e.g., creating my own tea blend), we built a reasoning system that computes temporal relations between events that users describe in natural language. Our technique is based on Markov Logic Networks (MLNs), which unify logical and statistical AI. We annotated user demonstrations of cooking activities with the users' speech, and we captured their actions with RFID tags and a Kinect sensor.

The overall goal of the project was to build an intelligent assistant that learns, assists with, and teaches everyday tasks (e.g., reminding an Alzheimer's patient of what to do next after observing him/her on several occasions). Temporal relation information, along with additional semantic interpretation of user utterances, plays a critical role in activity recognition (e.g., providing the agent with hierarchical action structures, temporal ordering of actions, action boundaries, etc.). In this project, I developed a system for (temporal) information extraction and temporal reasoning about event relations, using an open-source MLN software package called Alchemy. The paper below covers the annotation of user demonstrations with multimodal inputs.
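To give a flavor of the MLN approach, here is a toy calculation where each weighted rule is an implication from evidence to a candidate temporal relation, and the probability of a relation is proportional to the exponential of the total weight of satisfied rules. The rules and weights are invented for illustration; the actual system used the Alchemy package on much richer formulas.

    import math

    RELATIONS = ["Before", "After", "Overlaps"]

    # (weight, formula): each formula is an implication "evidence => relation",
    # satisfied when the evidence is absent or the relation matches.
    RULES = [
        (1.5, lambda r, ev: r == "Before" if ev["narrated_first"] else True),
        (2.0, lambda r, ev: r == "Overlaps" if ev["sensors_overlap"] else True),
    ]

    def relation_distribution(evidence):
        scores = {r: math.exp(sum(w for w, f in RULES if f(r, evidence)))
                  for r in RELATIONS}
        z = sum(scores.values())
        return {r: s / z for r, s in scores.items()}

    # "pour water" was narrated before "steep tea"; sensor intervals don't overlap.
    print(relation_distribution({"narrated_first": True, "sensors_overlap": False}))
    # "Before" gets the highest probability.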

  • Mary Swift, George Ferguson, Lucian Galescu, Yi Chu, Craig Harman, Hyuckchul Jung, Ian Perea, Young Chol Song, James Allen, Henry Kautz, A Multimodal Corpus for Integrated Language and Action. Proceedings of the International Workshop on Multimodal Corpora for Machine Learning, 2012


Dialogue-based Geo-location

While GPS technology and its devices have become part of our daily lives, the technology is heavily dependent on signals from satellites, whose reliability can be compromised. To supplement GPS-based geo-location, we developed techniques to perform geo-location from human-provided descriptions (in text or speech) of the environment. By using natural language descriptions of the surroundings as well as geo-databases, our technique can pinpoint locations and serve as a backup or an extra information source for GPS. In this project, I developed core geo-location techniques based on particle filtering, an efficient method widely used in mobile robot localization. Briefly, our particle filtering approach models a vehicle's position as a state (i.e., a particle), and its action model computes next states taking into account the vehicle's movement. The sensor inputs are the user's natural language descriptions containing geo-referents (e.g., "I'm passing Romana Street", "I see a Bank of America loan office on my left", etc.). This research was done as part of a DARPA seedling project.
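A minimal sketch of the idea, on a toy one-dimensional road with invented landmark positions (the actual system worked over real geo-databases and two-dimensional road networks):

    import math
    import random

    LANDMARKS = {"Romana Street": 40.0, "Bank of America": 75.0}  # toy positions

    def move(particles, speed, dt, noise=1.0):
        # Action model: advance each particle by the vehicle's estimated motion.
        return [p + speed * dt + random.gauss(0, noise) for p in particles]

    def observe(particles, landmark, sigma=5.0):
        # Sensor model: "I'm passing X" means the vehicle is near X's position.
        # Weight particles by proximity to the mentioned landmark, then resample.
        pos = LANDMARKS[landmark]
        weights = [math.exp(-(p - pos) ** 2 / (2 * sigma ** 2)) for p in particles]
        return random.choices(particles, weights=weights, k=len(particles))

    particles = [random.uniform(0, 100) for _ in range(1000)]   # unknown start
    particles = move(particles, speed=10, dt=2)                 # vehicle moves
    particles = observe(particles, "Romana Street")             # user utterance
    print(sum(particles) / len(particles))                      # position estimate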


Decision Theoretic Autonomy Control

With ever-increasing computational power, more and more autonomous systems play important roles in critical domains (e.g., health care, disaster rescue, finance, etc.), often interacting with humans. To maximize the utility of such autonomous systems and guarantee their safety in dynamic and uncertain environments, it is essential to control and adjust their autonomy depending on the situation. Here, policies are an effective means to dynamically regulate the behavior of a system without changing code or requiring the cooperation of the components being governed.

I developed decision-theoretic methods to control the range of regulated actions by adjusting policies. In a given situation (e.g., action failure or deteriorating performance), the autonomy control system dynamically builds decision networks (belief networks extended with special nodes for actions/utilities) based on the adjustment options, the capabilities/conditions required for those options, and their costs. Next, the utility of each choice (e.g., increases or decreases in permissions and obligations, acquisition of capabilities, etc.) is computed to select the best option. This autonomy control system was integrated with the industrial-strength, service-oriented KAoS agent framework and demonstrated in realistic, large-scale multi-human/robot projects funded by the US Navy.
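A toy expected-utility computation over autonomy-adjustment options, in the spirit of the approach above (the options, probabilities, utilities, and costs are invented; the real system derived them from dynamically built decision networks):

    OPTIONS = {
        # option: (P(success if chosen), utility of success,
        #          utility of failure, cost of the adjustment itself)
        "keep current policy":        (0.40, 100.0, -50.0,  0.0),
        "expand robot's permissions": (0.85, 100.0, -50.0, 10.0),
        "transfer control to human":  (0.95, 100.0, -50.0, 30.0),
    }

    def expected_utility(p, u_success, u_failure, cost):
        return p * u_success + (1 - p) * u_failure - cost

    for name, params in OPTIONS.items():
        print(name, "-> EU =", round(expected_utility(*params), 1))
    best = max(OPTIONS, key=lambda o: expected_utility(*OPTIONS[o]))
    print("selected adjustment:", best)   # "expand robot's permissions" wins here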

  • Matthew J. Johnson, Koji Intlekofer, Jr., Hyuckchul Jung, Jeffrey M. Bradshaw, James Allen, Niranjan Suri, and Marco Carvalho, Coordinated Operations in Mixed Teams of Humans and Robots. Proceedings of the International Conference on Distributed Human-Machine Systems, 2008
  • Jeffrey M. Bradshaw, Hyuckchul Jung, Shri Kulkarni, Matthew Johnson, Paul Feltovich, James Allen, Larry Bunch, Nathanael Chambers, Lucian Galescu, Renia Jeffers, Niranjan Suri, William Taysom, and Andrzej Uszok, Kaa: Policy-based Explorations of a Richer Model for Adjustable Autonomy. Proceedings of the International Joint Conference on Autonomous Agents and Multiagent Systems, 2005


Multiagent Coordination, Conflict Resolution, Resource Allocation, & Negotiation (Ph.D. Thesis Work at USC)

I developed techniques for fast conflict resolution based on two well-established formal frameworks: Distributed Constraint Satisfaction Problems (DCSP) and Distributed Partially Observable Markov Decision Processes (distributed POMDPs). I formalized conflict resolution as a DCSP and developed new, efficient conflict resolution strategies in which agents share and exploit local information communicated between them. Rigorous experiments showed that such cooperative strategies significantly improve convergence in conflict resolution.
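A toy round of such local conflict resolution on a small graph-coloring DCSP (the strategies in the thesis share richer local information, but the skeleton is similar): each agent repeatedly picks the value that minimizes conflicts with its neighbors' communicated values.

    NEIGHBORS = {"a1": ["a2", "a3"], "a2": ["a1", "a3"], "a3": ["a1", "a2"]}
    DOMAIN = ["red", "green", "blue"]

    def choose_value(agent, assignment):
        # Pick the value with the fewest conflicts against neighbor values.
        def conflicts(value):
            return sum(assignment[n] == value for n in NEIGHBORS[agent])
        return min(DOMAIN, key=conflicts)

    assignment = {"a1": "red", "a2": "red", "a3": "red"}   # every pair conflicts
    for _ in range(3):                                     # a few rounds
        for agent in assignment:
            assignment[agent] = choose_value(agent, assignment)
    print(assignment)   # a conflict-free coloring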

A distributed POMDP is an appropriate formal framework for modeling strategy performance in a DCSP, since it has distributed agents and the local information in a DCSP can be modeled as observations in a distributed POMDP model. In my thesis, the performance of a DCSP strategy was estimated by evaluating the distributed POMDP to which the given DCSP strategy was mapped. Evaluating the mapped policies makes it feasible to predict the right DCSP strategy in a given problem setting. While my thesis presented preliminary results, it was the first attempt to apply such distributed POMDP-based models to performance analysis of conflict resolution in large-scale multi-agent systems.

Apart from the conflict resolution work, I also worked on an abstract formalization of distributed resource allocation that is expressive enough to represent many aspects of multi-agent systems, including dynamism. The formalization can be applied to many real-world applications such as disaster rescue, hospital scheduling, and distributed sensor networks. I also developed an argumentation-based conflict resolution system in which agents negotiate by providing one another with arguments (justifications or elaborations) in support of their proposals. The argumentation system was integrated with helicopter pilot agents in the ModSAF high-fidelity battlefield simulator.