Edward Tse - Areas of Invention

Selected Research Projects and Publications

Tse, E. (2007) Multimodal Co-located Interaction. PhD Dissertation, Department of Computer Science, University of Calgary, Calgary, Alberta, Canada, December. [pdf]

Tse, E., Greenberg, S., Shen, C., Forlines, C. and Kodama, R. (2008) Exploring True Multi-User Multimodal Interaction over a Digital Table. Proceedings of DIS '08 (Feb 25-27, Cape Town, South Africa), in press.


Tse, E., Hancock, M. and Greenberg, S. (2007) Speech-Filtered Bubble Ray: Improving Target Acquisition on Display Walls. Proceedings of ICMI '07 (Nov 12-15, Nagoya, Japan), ACM Press, 307-314. [pdf].[acm].[video]


Tse, E., Greenberg, S., Shen, C., Barnwell, J., Shipman, S. and Leigh, D. (2007) Multimodal Split View Tabletop Interaction Over Existing Applications. Proceedings of Tabletop'07 - 2nd IEEE Tabletop Workshop (Oct 10-12, Rhode Island, USA), 129-136. [pdf].[ieee]


Tse, E., Shen, C., Greenberg, S. and Forlines, C. (2007) How Pairs Interact Over a Multimodal Digital Table. Proceedings of ACM CHI Conference on Human Factors in Computing Systems, (April 27-May 3, San Jose, USA), ACM Press, 215-218. [pdf].[acm]

Tse, E., Greenberg, S., Shen, C. (2006) GSI Demo: Multiuser Gesture / Speech Interaction over Digital Tables by Wrapping Single User Applications. Proceedings of the International Conference on Multimodal Interfaces (November 2, 2006, Banff, Canada), 76-83. [pdf].[acm].[video]

Tse, E., Greenberg, S., Shen, C. and Forlines, C. (2006) Multimodal Multiplayer Tabletop Gaming. Proceedings of the Third International Workshop on Pervasive Gaming Applications (PerGames'06), in conjunction with the 4th Intl. Conference on Pervasive Computing, 139-148. [pdf].[pergames] Acceptance Rate < 30%

Received the Best Paper Award at PerGames 2006!

Tse, E., Greenberg, S., Shen, C. and Forlines, C. (2007) Multimodal Multiplayer Tabletop Gaming. ACM Computers in Entertainment (CIE), June, ACM Press, 139-148. [pdf].[acm] (a reprint awarded to the best papers of PerGames'06)

Tse, E., Greenberg, S., Shen, C. (2006) Exploring Interaction with Multi User Speech and Whole Handed Gestures on a Digital Table. Demonstration in the Extended Abstracts of ACM UIST 2006, (Oct 15, Montreux, Switzerland), 75-76. [pdf]

Tse, E. (2006) Multimodal Co-located Interaction. Doctoral Consortium in the Extended Abstracts of ACM UIST 2006, (Oct 15, Montreux, Switzerland), 39-42. [pdf]

Tse, E., Shen, C., Greenberg, S. and Forlines, C. (2006) Enabling Interaction with Single User Applications through Speech and Gestures on a Multi-User Tabletop. Proceedings of Advanced Visual Interfaces (AVI'06), (May 23-26, Venezia, Italy), ACM Press, 336-343. [pdf].[acm] Acceptance Rate < 25%

Tse, E. Multimodal Co-located Interaction. Candidacy Examination Package, University of Calgary, Alberta, Canada. [pdf]

Tse, E., Greenberg, S., Shen, C. (2006) Motivating Multimodal Interaction Around a Digital Table. Video Proceedings of ACM CSCW 2006, (Nov 4, Banff, Canada), ACM Press, 150-151. [pdf].[video]

Tse, E., Greenberg, S., Shen, C. (2006) Multi User Multimodal Interaction over Existing Single User Applications. Demonstration in the Extended Abstracts of ACM CSCW 2006, Nov 4, Banff, Canada, ACM Press, 111-112. [pdf]


Tse, E. (2004) The Single Display Groupware Toolkit. MSc Thesis, Department of Computer Science, University of Calgary, Calgary, Alberta, Canada, November. [pdf]


Diaz-Marino, R., Tse, E. and Greenberg, S. (2004) The Grouplab DiamondTouch™ Toolkit. Video Proceedings of the ACM CSCW Conference on Computer Supported Cooperative Work (November 6-10, Chicago, Illinois). ACM Press. [video]

Diaz-Marino, R.A., Tse, E. and Greenberg, S. (2003) Programming for Multiple Touches and Multiple Users: A Toolkit for the DiamondTouch™ Hardware. Companion Proceedings of ACM UIST'03 Conference on User Interface Software and Technology. [pdf].[video]


Greenberg, S. and Tse, E. (2006) The SDG Toolkit in Action. Video Proceedings of ACM CSCW 2006, Banff, Canada, ACM Press. [pdf].[video]

Tse, E. and Greenberg, S. (2004) Rapidly Prototyping Single Display Groupware through the SDGToolkit. Proceedings of the Fifth Australasian User Interface Conference, Volume 28 of the CRPIT Conferences in Research and Practice in Information Technology Series (Dunedin, NZ, January), Australian Computer Society Inc., 101-110. [pdf].[acm].[video]

Tse, E. and Greenberg, S. (2004) SDG Toolkit. Video Proceedings of the ACM CSCW Conference on Computer Supported Cooperative Work (November 6-10, Chicago, Illinois). ACM Press. Video and abstract, duration 3:55. [video]

Tse, E., Histon, J., Scott, S. and Greenberg, S. (2004) Avoiding Interference: How People Use Spatial Separation and Partitioning in SDG Workspaces. Proceedings of the ACM CSCW'04 Conference on Computer Supported Cooperative Work (Nov 6-10, Chicago, Illinois), ACM Press, 252-261. [pdf].[acm] Acceptance Rate < 30%

Tse, E. and Greenberg, S. (2002) SDGToolkit: A Toolkit for Rapidly Prototyping Single Display Groupware. Poster in ACM CSCW 2002 Conference on Computer Supported Cooperative Work, ACM Press, 173-174. [pdf].[poster]

Multimodal Co-located Interaction

This PhD dissertation summarizes the multimodal co-located research below and surveys related work in the field of multimodal and co-located interaction. The thesis contains expanded material that could not fit within the page limits of the individual papers.


The Designers' Environment

True multi-user, multimodal interaction over a digital table lets co-located people simultaneously gesture and speak commands to control an application. We explore this design space through a case study, where we implemented an application that supports the KJ creativity method as used by industrial designers. Four key design issues emerged that have a significant impact on how people would use such a multi-user multimodal system: parallel work, mode switches, personal and group territories, and joint multimodal commands. We also describe our model-view-controller (MVC) architecture for true multi-user multimodal interaction, sketched below.
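As a rough illustration only (the class and method names are made up for this sketch, not taken from our implementation), the heart of such an architecture is one controller per seated user, each fusing that user's own speech and gesture into commands against a shared model:

```python
from dataclasses import dataclass, field

@dataclass
class Note:
    text: str
    x: float
    y: float

@dataclass
class Model:
    """Shared application state: the single source of truth."""
    notes: list = field(default_factory=list)
    observers: list = field(default_factory=list)

    def add_note(self, note):
        self.notes.append(note)
        for view in self.observers:
            view.refresh(self)

class TableView:
    """One projected rendering of the shared model."""
    def refresh(self, model):
        print(f"redraw: {len(model.notes)} notes on the table")

class UserController:
    """Fuses one user's speech and gesture into a single command."""
    def __init__(self, user_id, model):
        self.user_id = user_id
        self.model = model
        self.pending_touch = None  # last touch location, awaiting speech

    def on_gesture(self, x, y):
        self.pending_touch = (x, y)

    def on_speech(self, utterance):
        # Speech is resolved against this user's own gesture, never another
        # user's: that separation is what makes the interaction truly
        # multi-user rather than turn-taking.
        if utterance.startswith("add note") and self.pending_touch:
            x, y = self.pending_touch
            self.model.add_note(Note(utterance[len("add note"):].strip(), x, y))
            self.pending_touch = None

model = Model()
model.observers.append(TableView())
alice = UserController("alice", model)
bob = UserController("bob", model)
alice.on_gesture(10, 20)                 # both users touch at the same time
bob.on_gesture(300, 40)
alice.on_speech("add note brainstorm")   # each command resolves per user
bob.on_speech("add note budget")
```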

Speech-Filtered Bubble Ray

We present the speech-filtered bubble ray, which uses speech to transform a dense target space into a sparse one. Our strategy builds on what people already do: people pointing to distant objects in a physical workspace typically disambiguate their choice through speech. For example, a person could point to a stack of books and say “the green one”. Gesture indicates the approximate location for the search, and speech ‘filters’ unrelated books from the search. In a controlled evaluation, people were faster with, and preferred, the speech-filtered bubble ray over both the standard bubble ray and ray casting.
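The core selection step can be sketched in a few lines of Python. Everything here (the toy targets, the attribute matching) is illustrative rather than the evaluated implementation: speech prunes the dense target set, and the ray then snaps to the nearest survivor.

```python
import math

# Toy targets: (name, colour, x, y). In the real system these would be
# objects on a large display wall.
targets = [
    ("report", "green", 120, 80),
    ("ledger", "red",   125, 82),
    ("novel",  "blue",  130, 79),
    ("atlas",  "green", 400, 300),
]

def speech_filtered_pick(ray_x, ray_y, spoken_attribute):
    """Speech prunes the dense target set; the ray only has to
    disambiguate among the sparse survivors, so effective target
    sizes grow and acquisition gets faster."""
    candidates = [t for t in targets if t[1] == spoken_attribute]
    if not candidates:
        return None
    # Snap to the candidate closest to where the ray lands.
    return min(candidates,
               key=lambda t: math.hypot(t[2] - ray_x, t[3] - ray_y))

# Pointing near the cluttered cluster while saying "green" skips the
# red and blue distractors entirely.
print(speech_filtered_pick(122, 81, "green"))  # ('report', 'green', 120, 80)
```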

Multimodal Split View Tabletop Interaction


While digital tables can be used with existing applications, they are typically limited by the one-user-per-computer assumption of current operating systems. In this paper, we explore multimodal split view interaction – a tabletop whose surface is split into two adjacent projected views – and examine how people can interact with three types of existing applications in this setting. Independent applications let people see and work on separate systems. Shared screens let people see a twinned view of a single user application. True groupware lets people work in parallel over large digital workspaces. Atop these, we add multimodal speech and gesture interaction to enhance interpersonal awareness during loosely coupled work.

How Pairs Use a Multimodal Digital Table

This paper provides the first observations of how pairs of people communicated and interacted in a multimodal digital table environment built atop existing single user applications. We saw that speech and gesture commands served double duty: as commands to the computer, and as implicit communication to others. Also, in spite of limitations imposed by the underlying single-user application, people were able to work together simultaneously, and they performed interleaving acts: the graceful mixing of inter-person speech and gesture actions as commands to the system. This work contributes to a deeper understanding of multi-user multimodal digital table interaction.

Video

Press Articles


GSI Demo: Gesture and Speech Interaction by Demonstration

Wrappers around existing single user applications are easily created using a multimodal training tool called GSI Demo, a Gesture and Speech Interaction Demonstration system. Continuous gestures can be trained by saying “computer, when I do [one finger gesture], you do [mouse drag]”. Similarly, discrete speech commands can be trained by saying “computer, when I say [layer bars], you do [keyboard and mouse macro]”. This allows wrappers around existing single user commercial applications to be rapidly prototyped without the hassle of manual end user programming.
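In spirit, the trained wrapper reduces to two lookup tables populated by demonstration. The Python sketch below is only a plausible illustration: the names are made up, and a real wrapper would inject OS-level mouse and keyboard events rather than print.

```python
gesture_bindings = {}   # gesture name -> continuous mouse action
speech_bindings = {}    # spoken phrase -> recorded keyboard/mouse macro

def train_gesture(gesture, mouse_action):
    """'computer, when I do [gesture], you do [mouse action]'"""
    gesture_bindings[gesture] = mouse_action

def train_speech(phrase, macro):
    """'computer, when I say [phrase], you do [macro]'"""
    speech_bindings[phrase] = macro

def replay(events):
    """Map recognized gestures/speech back onto the single user app."""
    for kind, payload in events:
        if kind == "gesture" and payload in gesture_bindings:
            print("inject:", gesture_bindings[payload])
        elif kind == "speech" and payload in speech_bindings:
            print("inject:", " + ".join(speech_bindings[payload]))

train_gesture("one finger", "mouse drag")
train_speech("layer bars", ["ctrl+l", "click layer panel"])
replay([("gesture", "one finger"), ("speech", "layer bars")])
```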

Video

Press Articles

Download GSI Demo

Multimodal Multiplayer Tabletop Gaming

A digital table is a conducive form factor for general co-located home gaming as it affords: (a) seating in collaboratively relevant positions that gives everyone an equal opportunity to reach onto the surface and share a common view, (b) rich whole-handed gesture input normally seen only when handling physical objects, (c) the ability to monitor how others use space and access objects on the surface, and (d) the ability to communicate with each other and interact atop the surface via gestures and verbal utterances.

Video

Press Articles


Enabling Interaction with Single User Applications through Speech and Gestures on a Multi-User Tabletop

Through case studies of two quite different geospatial systems – Google Earth and Warcraft III – we show the new functionalities, feasibility and limitations of leveraging single-user applications within a multi-user, multimodal tabletop environment. We also contribute (1) a set of technical and behavioural affordances of multimodal interaction on a tabletop, and (2) lessons learnt from the limitations of single user applications.
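One recurring limitation is that a single-user application accepts exactly one mouse stream, so any multi-user wrapper has to arbitrate who drives it. The Python sketch below shows one plausible floor-control policy (first come, first served, with a short hold); the policy and all names are illustrative, not the mechanism described in the paper.

```python
import time

class CursorArbiter:
    """Grants one user at a time the right to drive the single cursor."""
    HOLD_SECONDS = 0.5   # how long a user keeps the floor after an event

    def __init__(self):
        self.owner = None
        self.last_event = 0.0

    def try_inject(self, user, x, y):
        now = time.monotonic()
        if self.owner not in (None, user) and now - self.last_event < self.HOLD_SECONDS:
            return False  # someone else currently holds the cursor
        self.owner, self.last_event = user, now
        print(f"{user} moves the application cursor to ({x}, {y})")
        return True

arbiter = CursorArbiter()
arbiter.try_inject("alice", 100, 100)   # granted
arbiter.try_inject("bob", 200, 50)      # refused while alice holds the floor
```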

Video

Press Articles


The Shape of Conversation

The Shape of Conversation is an art project done in collaboration with Holly Simon, a graduate of the Alberta College of Art and Design. The project features simultaneous voice feature recognition from two people, converting their voices into a canvas of colour. You'll see in the videos that people do pretty crazy things to make different colours appear on the screen.
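As a toy illustration of the idea only (the piece's actual mapping is not shown here), two simple voice features are enough to drive a colour: pitch can pick the hue and loudness the saturation.

```python
import colorsys

def voice_to_colour(pitch_hz, loudness):
    """Illustrative mapping: pitch -> hue, loudness -> saturation."""
    # Normalize pitch over a rough speaking range of ~80-400 Hz.
    hue = max(0.0, min(1.0, (pitch_hz - 80) / (400 - 80)))
    r, g, b = colorsys.hsv_to_rgb(hue, max(0.0, min(1.0, loudness)), 1.0)
    return tuple(round(c * 255) for c in (r, g, b))

print(voice_to_colour(120, 0.4))  # low, quiet voice -> muted warm tone
print(voice_to_colour(350, 0.9))  # high, loud voice -> saturated cool tone
```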

Video


The SMART Technologies Digital Vision Touch (DViT) Toolkit

Using the principles of the SDG Toolkit, I created a tool to rapidly prototype multi-touch applications on SMART Technologies' Digital Vision Touch (DViT) SMART Boards. The toolkit supports multiple simultaneous touches and recognizes the size of each touch point.
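A hypothetical sketch of the kind of callback such a toolkit exposes: each contact arrives with a position and a size, so a handler can treat a fingertip and a flat hand differently. The event shape and threshold below are illustrative, not the toolkit's actual API.

```python
from dataclasses import dataclass

@dataclass
class Touch:
    touch_id: int
    x: float
    y: float
    width: float    # extent of the contact area
    height: float

def on_touch(touch: Touch):
    if touch.width * touch.height > 2000:   # large contact: a flat hand
        print(f"hand at ({touch.x}, {touch.y}) -> wipe tool")
    else:                                   # small contact: a fingertip
        print(f"finger {touch.touch_id} at ({touch.x}, {touch.y}) -> draw")

on_touch(Touch(0, 150, 200, 12, 14))   # fingertip
on_touch(Touch(1, 400, 180, 60, 90))   # whole hand
```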

Download DViT Toolkit

See DViT Toolkit Examples


The Diamond Touch™ Toolkit

Rob Diaz-Marino used the principles of the SDG Toolkit to develop a tool for rapidly prototyping applications on the DiamondTouch™ tabletop hardware.

Download DT Toolkit

See Diamond Touch Examples

Video


The Single Display Groupware Toolkit

Today’s personal computers are designed with the assumption that one person interacts with the display at a time. Thus developers face considerable hurdles if they wish to develop applications for multiple people. Our solution is the Single Display Groupware (SDG) Toolkit, a toolkit for rapidly prototyping applications that are aware of multiple people and their common interactions.
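The key departure from a standard GUI event loop is that every input event carries the identity of the person who produced it, so a handler can give each user their own cursor, colour, or undo stack. The Python sketch below is only illustrative of that idea; the actual toolkit is a Windows/.NET library with its own API.

```python
from dataclasses import dataclass

@dataclass
class SdgMouseEvent:
    pointer_id: int   # which physical mouse generated the event
    user_id: int      # which person owns that mouse
    x: int
    y: int

ink_colour = {0: "red", 1: "blue"}   # per-user state, keyed on user_id

def on_mouse_move(e: SdgMouseEvent):
    # Because the event is attributed, two people drawing at once
    # never clobber each other's state.
    print(f"user {e.user_id} draws in {ink_colour[e.user_id]} at ({e.x}, {e.y})")

on_mouse_move(SdgMouseEvent(pointer_id=0, user_id=0, x=10, y=20))
on_mouse_move(SdgMouseEvent(pointer_id=1, user_id=1, x=300, y=45))
```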

Download SDG Toolkit

See SDG Toolkit Examples
