Human-Computer Interaction and Interactive Technologies

Leader of the group: Prof. Dr. Jürgen Steimle


Vision and Research Strategy

The research group investigates fundamental research questions at the boundary between Human-Computer Interaction and Ubiquitous Computing. Central aspects of Mark Weiser’s vision of Ubiquitous Computing have become reality. However, the rigid and rectangular nature of today’s user interfaces is limiting in several ways: it restricts not only the embedding of user interfaces in the physical environment, but also mobile use, interaction, and customization. Our research focuses on future forms of interfaces that are deformable, elastic, and support multi-modal interaction. We strive to advance the state of the art by (1) developing new sensor and display surfaces, (2) developing and empirically assessing novel interaction techniques, and (3) contributing new methods for the easy, fast, and inexpensive fabrication of such interfaces.


Research Areas and Achievements

Flexible Interactive Touch and Display Surfaces

Conventional touch sensors and displays are mass-produced and restricted in shape: they are typically rectangular, planar, and rigid. This limits the objects and locations where sensors and displays can be deployed. In our recent research, we have developed new thin-film touch sensors and displays that are deformable and can be easily customized in shape.

With PrintScreen (ACM UIST 2014), we have introduced a new perspective on displays, based on digital fabrication: instead of buying an off-the-shelf display, the designer of an embedded system can create a custom digital design that meets the specific demands of the application. Using one of the two fabrication processes contributed in the paper, he or she can then print the touch-sensitive display on a flexible substrate, quickly and with inexpensive equipment. The displays are around 100 microns thick; they can be deformed, rolled, or folded, and can be printed on various materials. With PrintSense (ACM CHI 2014), we have further contributed a sensing technique for capturing multimodal input on a flexible substrate. With a single layer of conductive ink, the sensor captures multi-touch input, several levels of touch pressure, hovering, and deformation of the sheet itself. This makes highly customized sensors printable on commodity inkjet printers.
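To make the sensing principle concrete, the sketch below shows one way the normalized reading of a single capacitive electrode could be mapped to the input modalities listed above, and how a whole-sheet deformation could be told apart from a local touch. It is a minimal illustration under assumed thresholds, not the PrintSense implementation; all names and values are hypothetical.

```python
# Illustrative sketch (not the PrintSense implementation): mapping the
# normalized reading of a single-layer capacitive electrode to hover,
# touch, and two pressure levels. All thresholds are assumed values.

HOVER_THRESHOLD = 0.15  # slight capacitance change: finger hovering nearby
TOUCH_THRESHOLD = 0.40  # clear capacitance change: finger on the substrate
PRESS_LIGHT = 0.60      # light pressure deforms the substrate slightly
PRESS_FIRM = 0.85       # firm pressure, strongest capacitance change

def classify_electrode(reading: float) -> str:
    """Map a normalized capacitance reading (0..1) to an input state."""
    if reading >= PRESS_FIRM:
        return "press-firm"
    if reading >= PRESS_LIGHT:
        return "press-light"
    if reading >= TOUCH_THRESHOLD:
        return "touch"
    if reading >= HOVER_THRESHOLD:
        return "hover"
    return "idle"

def sheet_is_deformed(readings: list[float]) -> bool:
    """Heuristic: bending the sheet shifts most electrodes at once,
    whereas a touch affects only a small local cluster."""
    shifted = sum(1 for r in readings if r >= HOVER_THRESHOLD)
    return shifted > 0.7 * len(readings)
```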

While digital fabrication offers a high degree of design flexibility and ensures replicability, it lacks the directness of simply taking a pair of scissors and cutting a sheet of paper into a specific shape. To unleash the power of such direct physical customization, we have contributed a multi-touch sensor that can be cut into a wide variety of shapes and yet remains functional (ACM UIST 2014).
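The robustness to cutting comes from the printed wiring layout, which keeps the remaining electrodes connected after parts of the foil are removed. A complementary step on the firmware side is to detect which electrodes still respond and to restrict touch processing to them. The following sketch illustrates that step only, under assumed grid dimensions and thresholds, with a simulated read_raw() in place of real hardware access.

```python
# Hypothetical firmware-side sketch for a cuttable touch sensor: after the
# foil has been cut, a self-test scan finds the electrodes that still
# respond, and touch scanning is restricted to them. Grid size, thresholds,
# and read_raw() are assumptions made for this example.

import random

ROWS, COLS = 12, 8                                   # assumed electrode grid
CUT_AWAY = {(r, c) for r in range(8, 12) for c in range(COLS)}  # simulated cut

def read_raw(row: int, col: int) -> float:
    """Stand-in for the hardware read: cut-away electrodes return ~0,
    intact electrodes return a baseline signal (more when touched)."""
    if (row, col) in CUT_AWAY:
        return 0.0
    return 0.10 + 0.02 * random.random()

def self_test(noise_floor: float = 0.05) -> set:
    """Return the set of electrodes that survived the cut."""
    return {(r, c) for r in range(ROWS) for c in range(COLS)
            if read_raw(r, c) > noise_floor}

def scan_touches(alive: set, touch_threshold: float = 0.40) -> list:
    """Scan only the surviving electrodes and report touched cells."""
    return [cell for cell in alive if read_raw(*cell) >= touch_threshold]

alive = self_test()  # run once after the sensor has been cut to shape
print(len(alive), "of", ROWS * COLS, "electrodes remain usable")
```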

On-skin Input

Today’s body-worn devices, such as smartwatches or head-mounted displays, offer only a very small input surface, which makes interaction challenging. We are studying how human skin can be used as a complementary input surface for subtle, direct, and versatile control of wearable devices.

In an empirical study, we investigated how people would like to use skin for input (ACM CHI 2014). The study revealed a dual character of skin input: while common multi-touch gestures from touch screens transferred successfully to skin, people also made use of the rich multimodal affordances that skin provides for input (e.g., shearing, pushing, pulling, twisting). This can enable new, expressive forms of on-skin input.

In follow-up work, we have developed iSkin, a soft-matter sensor surface for touch input on human skin (ACM CHI 2015). The sensor is stretchable by more than 30% and can be customized in shape; hence, it can be worn at various body locations. It can also be visually customized to meet the aesthetic demands of the user. Based on this sensor, we presented new types of body-worn devices: skin stickers, wrappable input surfaces, and elastic extensions for existing body-worn devices.
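As an illustration of how such a sensor could control a body-worn device, the sketch below maps a light touch and a firm press on an on-skin electrode to different commands. Electrode names, thresholds, and the send_to_device() placeholder are assumptions for illustration, not the iSkin implementation.

```python
# Illustrative sketch of a two-level on-skin touch sensor driving a
# body-worn device: a light touch and a firm press on the same sticker
# electrode trigger different commands. All names and values are assumed.

TOUCH_LEVEL = 0.30  # assumed normalized signal for skin contact
PRESS_LEVEL = 0.70  # assumed signal when the soft sensor is pressed firmly

COMMANDS = {
    ("playback", "touch"): "toggle play/pause",
    ("playback", "press"): "skip to next track",
    ("volume", "touch"):   "volume up",
    ("volume", "press"):   "volume down",
}

def classify(signal: float):
    """Return 'press', 'touch', or None for no contact."""
    if signal >= PRESS_LEVEL:
        return "press"
    if signal >= TOUCH_LEVEL:
        return "touch"
    return None

def send_to_device(command: str) -> None:
    print(f"-> device: {command}")  # placeholder for e.g. a BLE message

def handle(electrode: str, signal: float) -> None:
    """Dispatch a sensed signal on a named electrode to a device command."""
    level = classify(signal)
    if level is not None:
        send_to_device(COMMANDS[(electrode, level)])

handle("playback", 0.82)  # firm press -> "skip to next track"
```

Assigning two pressure levels to one electrode doubles the command set of a small skin sticker without enlarging its footprint, which matches the goal of subtle, direct control of wearables.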