Telerobots extend remote communication beyond video conferencing by allowing a remote user to move a robot through the local space. However, much of human communication lies beyond voice and facial expressions. We set out to enhance telerobotic video conferencing by building a robotic arm-and-hand manipulator that can be mounted on existing telerobot platforms. The manipulator, which replicates a remote user's arm motion, supports tangible interactions, expressive gestures, and physical referencing, three of the primary social behaviors missing from the current telepresence experience. Put simply, we wanted to let a remote worker use more of their natural body language for communication and social connectedness.
The novelty of this work lies in the robust design of the manipulator, which allows more natural tangible interactions, combined with a motion-tracking control system that enables more subconscious gesture replication. We implemented several iterations of the manipulator on the Anybots QB 2.0 platform with a custom control system and worked toward shared control to further enhance realistic human-robot interaction.
Maya Dunlap
As technology design and educational experiences rapidly evolve, they often neglect one key element: touch. Current trends forecast increasing integration of haptics (touch) into consumer technologies, healthcare, and telerobotics, yet most technologies designed to date focus heavily on visual and auditory capabilities. Similarly, many educational experiences have transitioned to digital/online platforms that are likewise shaped primarily by sight and sound, without prioritizing touch.
To this end, we in the CHROME Lab have created wearable haptic devices that are incorporated into a variety of settings, drawing on the insights and intuitions of individuals with sensory disabilities or impairments. We leverage the knowledge of Blind and DeafBlind communities to infuse greater realism into the devices we design for recreating core building blocks of communication in remote settings. Additionally, we've begun to expand the application of haptics into short learning modules deployed in engineering courses to: 1) introduce burgeoning engineers to the concepts of haptics and inclusive design approaches, and 2) bring tangibility into the classroom itself.
Bryan MacGavin
Traditional surgical treatment of intracranial lesions often involves a craniotomy for resection, which requires a large surgical incision. Recent technological advances now enable minimally invasive approaches, one of which is Laser Interstitial Thermal Therapy (LITT). In LITT, a laser probe is inserted into a tumor and heats it from the inside out, destroying the tumor tissue. Currently, LITT surgical platforms can deploy the laser probe only along a straight path. This limits treatment capabilities, often leaving the edges of large or geometrically complex tumors untreated. In this work, we have designed the Port Delivery Cannula System (PDCS), which provides steerable delivery for tools that are not inherently steerable, for LITT and other neurosurgical procedures. The PDCS is MRI-compatible, biocompatible, and thermometry-compatible, and it integrates seamlessly with an FDA-approved LITT surgical platform.
----------------------------------------
This work was supported by Medtronic External Research Program (ERP) under Grant No. 310856.
N. Agwu
Vibrations in touch screens are an accepted part of everyday life for most of us. But what if those vibrations could do more? While many advanced haptic devices are under development, touchscreens remain one of the most readily available platforms. However, the haptics available in mobile touchscreens are often underwhelming and carry frustrating limitations. We decided to leverage a leap forward in vibration-based haptics, Apple's new CoreHaptics API, and investigate its potential for a multi-finger vibration experience. We conducted a perceptual user study investigating multi-finger identification and exploration strategies, along with a laser Doppler vibrometry study, uncovering the challenges of developing consistent, high-quality vibration feedback across hardware platforms that vary in actuation principle, screen size, and external attachments. This led us to important recommendations on new ways to utilize these capabilities.
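For readers unfamiliar with the framework, the sketch below illustrates the basic CoreHaptics flow our stimuli build on: define a haptic event, wrap it in a pattern, and hand it to a player. The playBuzz helper and its intensity/sharpness values are illustrative placeholders, not the actual stimuli from our study.

    import CoreHaptics

    // A minimal sketch: play one short, continuous vibration through CoreHaptics.
    // The intensity and sharpness values here are illustrative placeholders.
    func playBuzz() throws {
        // Not all devices have a haptic engine; bail out quietly if unsupported.
        guard CHHapticEngine.capabilitiesForHardware().supportsHaptics else { return }

        let engine = try CHHapticEngine()
        try engine.start()

        // One continuous event, 0.5 s long, with fixed intensity and sharpness.
        let event = CHHapticEvent(
            eventType: .hapticContinuous,
            parameters: [
                CHHapticEventParameter(parameterID: .hapticIntensity, value: 0.8),
                CHHapticEventParameter(parameterID: .hapticSharpness, value: 0.5)
            ],
            relativeTime: 0,
            duration: 0.5
        )

        // Wrap the event in a pattern and hand it to a player for playback.
        let pattern = try CHHapticPattern(events: [event], parameters: [])
        let player = try engine.makePlayer(with: pattern)
        try player.start(atTime: CHHapticTimeImmediate)
    }

Note that on most phones a single actuator shakes the whole device, so every finger resting on the glass feels the same waveform; that is part of what makes multi-finger identification an open perceptual question.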
J. Pasquesi, J. L. Gorlewicz
The field of haptics opens many doors for individuals with disabilities, adds context to visual cues, helps many people learn more quickly, and much more. Its wide use also reaches areas we often would not think about. One of those areas is computer networking.
In the CHROME Lab, in collaboration with Dr. Flavio Esposito's lab, we are working to apply the unique properties of vibration to networking concepts. The experiment started with a very simple testbed so that it can be easily replicated. We are also exploring the use of vibration to aid in teaching networking topics to individuals with visual impairments. Through this project, we hope to benefit both the field of networking and the individuals who work within it and could gain from greater accessibility.
F. Esposito, PhD; J. Pasquesi
A. Sangiorgi; G. Schembra; F. Esposito, PhD; J.L. Gorlewicz, PhD
Most touchscreens in commercial use today are limited in the kind of tactile feedback they provide. For example, smartphones, tablets, and other commonly used touchscreens are, in effect, single touch: many screens offer the “pinch and zoom” feature, which lets the user expand or contract the screen with two fingers, but it can be argued that this is more an extension of single touch than true multitouch. Furthermore, the haptic feedback these screens provide is strictly limited to vibration, and because that vibration typically covers the entire screen, the haptic experience itself is confined to a single global effect. Our project aims to create a screen that provides true multitouch feedback: localized sensations that are mutually distinguishable from one another. To do this, we use a method called modal superposition, in which we leverage combinations of the screen's vibratory mode shapes to create localized haptic effects.
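In rough terms (a first-order sketch, not a specification of our controller), the plate's out-of-plane displacement can be written as a weighted sum of its mode shapes \phi_n, and choosing the weights in proportion to each mode's value at a target point (x_0, y_0) makes the contributions add in phase at that point while tending to cancel elsewhere (here shown at a single drive frequency \omega):

    w(x, y, t) = \sum_n a_n \, \phi_n(x, y) \, \sin(\omega t), \qquad a_n \propto \phi_n(x_0, y_0)

In practice each mode has its own resonant frequency and damping, so the real drive signals are more involved, but this is the essence of focusing vibration under one fingertip while leaving another fingertip on a quiet region of the screen.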
K. Katumu; H. Lee
“Surface Haptics” refers to the study of touch interactions on flat, featureless surfaces such as touchscreens. Modern touchscreen devices can display a wealth of visual information, but they give us very little “touch” information. This has significant implications for blind and visually impaired individuals, and as touchscreen devices become more and more ubiquitous, accessibility becomes a key concern. Recent innovations in the field of surface haptics have given us tools to control how a surface feels beneath one’s fingertip in real time, but ongoing research is needed to determine which of these tools, or which combination of tools, is best suited for conveying information via touch.
Here in the CHROME Lab at Saint Louis University, we are developing a new kind of surface haptic display by merging two surface-rendering technologies: electroadhesion and ultrasonic friction modulation. Our device can increase the friction of its interaction surface (making it feel sticky) using electrostatic forces. It can also decrease this friction below the nominal level (making it feel slippery) using ultrasonic vibrations. Furthermore, switching between the two extremes in real time allows us to render virtual shapes that feel vastly different from their surroundings. We hope that this hybrid approach will offer a vivid touch experience for both sighted and visually impaired users, allowing us to put the “touch” back into “touchscreen”.
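As a simplified first-order model (our illustration, not a specification of the device), the tangential friction force f_t on a sliding fingertip scales with the total normal load: electroadhesion adds an electrostatic attraction term f_e to the finger's applied force f_n, while ultrasonic vibration lowers the effective friction coefficient \mu_{\mathrm{eff}} through the squeeze-film effect:

    f_t = \mu_{\mathrm{eff}} \, (f_n + f_e), \qquad f_e \approx \frac{\varepsilon_0 \varepsilon_r A V^2}{2 d^2}

Here V is the voltage applied across a dielectric layer of thickness d and contact area A. Driving V high with the ultrasonic vibration off yields the sticky extreme, while setting V = 0 and vibrating the surface yields the slippery one; switching between the two spans the full sticky-to-slippery range.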
K. Deprow; S. Lambert