Eyes-Free Touch Controls

Touch screens are powerful things. The ability to turn a blank canvas of shiny, smooth glass into anything you want, at will, is a designer’s dream. It’s not surprising that touch screens are rapidly taking over many of our everyday interactions with devices. If we are to believe Corning – the manufacturer of the Gorilla Glass used by Apple – touch screens will soon be even more ubiquitous, as their concept video of the future suggests.

For all their flexibility, however, there are some problems that plague touch screens. As Bret Victor noted, these glossy interfaces are literally less expressive than a sandwich. They lack the tangible and tactile properties of real objects and real interfaces. This is not really a big deal when you are browsing eBay on the couch with your iPad, but in Corning’s vision of the future there’s a lot of other stuff going on.

In many ways, touch screens are slowly replacing more traditional controls in all kinds of devices. The stove in the video might be a bit of a long shot (although it would arguably be easier to clean), but looking at the interior of the Tesla Model S (pictured below), a touch screen dashboard is not the future, it’s the present.

The big problem with touch screens, however, is that they inherently demand more attention than the controls they are replacing. The flexibility of a touch-sensitive pane of glass comes from its changeable visual representation; take that representation away, and what remains is just that: a smooth pane of glass. This means that many common touch controls, such as sliders and buttons, are much harder to use without looking at them, because they lack the tactile properties of the physical controls they imitate. As a result, touch screens are of limited use in situations where the user’s visual attention is needed elsewhere, such as driving or cooking.

For my Master’s thesis, I studied various ways to make touch screens easier to use in an eyes-free context. I analysed the different phases of interaction between a user and a touch screen and identified the points where the nature of touch screen controls causes problems. I then looked at existing solutions to these problems, such as using gestures to eliminate the need for the interaction to start at a specific location, and changing the nature of the interaction so that feedback is available earlier, before the user makes mistakes that are hard to recover from.
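To illustrate the first of those ideas, here is a minimal sketch of a location-independent control: a full-screen pan gesture adjusts a value by relative finger movement, so the interaction can begin anywhere on the glass rather than on a small slider thumb. This is not the thesis implementation, just a hedged example built on standard UIKit gesture recognizers; the class and property names are made up for illustration.

```swift
import UIKit

/// Hypothetical full-screen, eyes-free slider.
/// The user can start a vertical pan anywhere on the screen; the value
/// changes by relative movement, so no precise visual targeting is needed.
final class EyesFreeSliderViewController: UIViewController {

    private var value: Float = 0.5            // current control value, 0...1
    private var valueAtGestureStart: Float = 0.5

    override func viewDidLoad() {
        super.viewDidLoad()
        let pan = UIPanGestureRecognizer(target: self, action: #selector(handlePan(_:)))
        view.addGestureRecognizer(pan)
    }

    @objc private func handlePan(_ gesture: UIPanGestureRecognizer) {
        switch gesture.state {
        case .began:
            valueAtGestureStart = value
        case .changed:
            // Upward movement increases the value; the absolute start
            // position is irrelevant, only the relative translation matters.
            let translation = gesture.translation(in: view)
            let delta = Float(-translation.y / view.bounds.height)
            value = min(max(valueAtGestureStart + delta, 0), 1)
            // Non-visual feedback (audio or tactile pulses) could be triggered here.
        default:
            break
        }
    }
}
```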

I then designed an alternative solution using rich, multi-layered semantic tactile feedback. I built a functional prototype by strapping a cheap voice-coil linear actuator onto an iPad and driving it through the iPad’s headphone port (seen in the picture to the right). This way, the iPad could generate dynamic feedback in response to touch movement, allowing the perception of virtual on-screen textures, ridges and other surface features.
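The signal design from the thesis is not public, but the basic idea can be sketched: the headphone output drives the voice-coil actuator with short waveform bursts triggered by finger position, so that crossing a virtual ridge produces a brief tactile pulse. The sketch below, written against AVAudioEngine, is a hedged illustration with made-up parameters (ridge positions, pulse frequency and duration), not the actual prototype code.

```swift
import AVFoundation
import UIKit

/// Hypothetical sketch: drive a voice-coil actuator on the headphone output
/// with a short sine burst whenever the finger crosses a virtual ridge.
final class RidgeFeedbackController {

    private let engine = AVAudioEngine()
    private let player = AVAudioPlayerNode()
    private let format = AVAudioFormat(standardFormatWithSampleRate: 44_100, channels: 1)!
    private let pulse: AVAudioPCMBuffer

    /// Horizontal ridge positions in points (illustrative values).
    private let ridgePositions: [CGFloat] = [80, 160, 240, 320]
    private var lastX: CGFloat = 0

    init() throws {
        // Pre-compute a 10 ms, 250 Hz sine burst to drive the actuator.
        let frames = AVAudioFrameCount(format.sampleRate * 0.010)
        pulse = AVAudioPCMBuffer(pcmFormat: format, frameCapacity: frames)!
        pulse.frameLength = frames
        let samples = pulse.floatChannelData![0]
        for i in 0..<Int(frames) {
            let phase = 2 * Float.pi * 250 * Float(i) / Float(format.sampleRate)
            samples[i] = sin(phase)
        }

        engine.attach(player)
        engine.connect(player, to: engine.mainMixerNode, format: format)
        try engine.start()
        player.play()
    }

    /// Call on every touch-move event with the finger's current x position.
    func touchMoved(to x: CGFloat) {
        // If the finger crossed any ridge since the last event, emit a pulse.
        let crossed = ridgePositions.contains { ridge in
            (lastX < ridge && x >= ridge) || (lastX > ridge && x <= ridge)
        }
        if crossed {
            player.scheduleBuffer(pulse, at: nil, options: [], completionHandler: nil)
        }
        lastX = x
    }
}
```

In a richer version of this idea, different pulse shapes and amplitudes could encode different kinds of surface features, which is roughly what the multi-layered semantic feedback described above aimed at.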

The prototype was evaluated in a laboratory setting. An eye tracker monitored visual distraction while participants performed a simple task with and without the rich feedback. The results showed that participants glanced down at the touch controls significantly less when the tactile feedback was present, and that the feedback was preferred by the majority of participants. Unfortunately, further details of this project are confidential.