The BBC's Ian Hardy explores the future of natural interaction in a series of short reports. The most thought-provoking of the three is Using brain power to control tech, which looks at technologies that use focus of attention to control on-screen elements. One technology tracks eye movements to determine where the gaze is fixated and selects an action to apply to the corresponding on-screen element. For instance, with a photo carousel displayed, lateral eye movements make the carousel turn, whereas fixating on one image brings it up enlarged. Another technology uses brain signals, captured with an EEG headset, to control the position of an on-screen element along the vertical axis: increasing activity moves the element up; decreasing activity moves it down. It is interesting to note that, in both cases, extensive learning is needed before control is fully internalized, that is, one needs to learn to use eye movements or concentration for interaction purposes.
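To make those two control mappings concrete, here is a minimal sketch of how a gaze fixation and an EEG activity level might be translated into the behaviours described above. The thresholds, signal sources and helper names are hypothetical stand-ins for whatever the actual systems use, not anything taken from the reports.

```python
# Hypothetical sketch: mapping gaze and EEG activity to on-screen behaviour.
# Thresholds and helper objects are illustrative assumptions only.

FIXATION_DWELL_S = 0.8           # dwell time before a fixation counts as a selection
LATERAL_SPEED_THRESHOLD = 200.0  # px/s of horizontal gaze movement that spins the carousel

def update_carousel(gaze_x_speed, dwell_time, carousel, focused_image):
    """Lateral gaze movement turns the carousel; a long fixation enlarges the image."""
    if abs(gaze_x_speed) > LATERAL_SPEED_THRESHOLD:
        carousel.rotate(direction=1 if gaze_x_speed > 0 else -1)
    elif dwell_time > FIXATION_DWELL_S and focused_image is not None:
        focused_image.enlarge()

def update_vertical_position(eeg_activity, baseline, element, gain=2.0):
    """Activity above baseline moves the element up; activity below moves it down."""
    element.y += gain * (eeg_activity - baseline)
```

The point of the sketch is simply that both techniques reduce to a continuous signal (gaze velocity, EEG activity) being mapped onto a discrete or continuous on-screen action, which is exactly the mapping users have to internalize through training.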

The next level of touch screens shows how haptic technologies are evolving to provide richer sensory experiences. Richer as in bigger, when touch interaction happens on large and very large displays, both vertical and horizontal. Richer as in embedded in everyday objects: the ZIK headphones replace side buttons with caresses, where up/down caresses turn the volume up or down and right/left caresses skip to the previous or next track. Richer as in more diverse: Artificial Muscle CEO Dirk Schapeler uses a great analogy to describe where vibrating technologies are in their evolution (transcript):

Yesterday’s vibration technology is comparable, for example, to a doorbell… You always have… like… the same ring… You can ring longer… shorter… always the same. Now, our technology allows you (something) comparable to a speaker… to do the whole frequency range, which gives you a more diverse and richer haptic effect.
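The doorbell-versus-speaker analogy boils down to the difference between a single fixed vibration pattern and a fully programmable waveform. The sketch below only illustrates that idea, assuming a generic actuator driven by a list of amplitude samples; the sample rate and waveform shapes are arbitrary choices, not details from the report.

```python
import math

SAMPLE_RATE = 1000  # samples per second driving a hypothetical actuator

def doorbell_buzz(duration_s, freq_hz=150.0):
    """'Doorbell' haptics: one fixed frequency, only the duration can change."""
    n = int(duration_s * SAMPLE_RATE)
    return [math.sin(2 * math.pi * freq_hz * i / SAMPLE_RATE) for i in range(n)]

def swept_effect(duration_s, start_hz=50.0, end_hz=300.0):
    """'Speaker' haptics: frequency and amplitude can vary over the whole range."""
    n = int(duration_s * SAMPLE_RATE)
    samples, phase = [], 0.0
    for i in range(n):
        t = i / SAMPLE_RATE
        freq = start_hz + (end_hz - start_hz) * (t / duration_s)  # linear sweep
        phase += 2 * math.pi * freq / SAMPLE_RATE                 # accumulate phase
        envelope = 1.0 - t / duration_s                           # fade out
        samples.append(envelope * math.sin(phase))
    return samples
```

A fixed buzz can only be made longer or shorter, exactly like the doorbell; the swept waveform can play across the whole frequency range, which is what makes the resulting haptic effect feel more diverse.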

Gesturing towards the future of interaction expands on the “body as the control” experience introduced by Microsoft Kinect. Two observations. Most of the technologies presented involve a single agent, even though the size and configuration of the displays naturally lend themselves to multi-agent interaction; I would have loved to see more of these complex settings. One of the technologies introduces a mediator between the agent and the information space in the form of traditional UI controls, i.e. a cursor and a joystick, which breaks the natural interaction flow quite fundamentally.
