The role of the body in interaction design '12


Ubiquitous Computing, Tangible Computing, Bodily Interaction (Eimear)

Graphical User Interface

The first attempt to consider the human in human-computer interaction came with the graphical user interface (GUI).

  • Ivan Sutherland's Sketchpad, described in his 1963 MIT PhD thesis, demonstrated the manipulation of objects using a light pen, including grabbing objects, moving them and changing their size.
This was the first demonstration of direct manipulation of graphical objects, where the visible objects on the screen could be manipulated directly using a pointing device.
  • Alan Kay proposed the idea of overlapping windows in his 1969 PhD thesis.
  • In 1975 David Canfield Smith coined the term 'icons' in his thesis.
  • 1984 Apple Macintosh – the first commercially successful computer with a complete GUI, followed by Microsoft Windows in 1985.
Apple Macintosh 1984

Physical Interface

Moving on from the GUI towards physically augmenting our real-world movements within the computer, the mouse was the first device to bring the human body into the interaction.

  • In 1965 Doug Engelbart created the mouse at the Stanford Research Laboratory. He saw his invention as a cheap replacement for the light pens that had been in use since 1954.
  • He demonstrated many of the mouse's current uses in 1968, and the mouse was then made famous as a practical input device by Xerox PARC in the 1970s.
  • The mouse was the first input device to augment our real world physical actions within the computer.
1968 Mouse Demonstration
Doug Engelbart on his Invention

Ubiquitous Computing

In a paper entitled 'The World is not a Desktop', Mark Weiser set out his reflections on the need to understand and design new ways of interacting with technology, the vision for which he coined the phrase “Ubiquitous Computing”. According to Weiser, a good tool is an invisible tool. 'By invisible, I mean that the tool does not intrude on your consciousness; you focus on the task, not the tool. Eyeglasses are a good tool - you look at the world, not the eyeglasses. The blind man tapping the cane feels the street, not the cane. Of course, tools are not invisible in themselves, but as part of a context of use. With enough practice we can make many apparently difficult things disappear: my fingers know video editing commands that my conscious mind has long forgotten. But good tools enhance invisibility.' (Weiser, M. 1993)

Weiser’s work has inspired many researchers all over the world to investigate the potential of new technologies and new forms of interaction, both in academia and industry.

  • Ubiquitous computing is all around us, embedded in everyday objects, and most importantly it is invisible.
  • Commercial development of novel technologies and appliances, the creation of new software and hardware, and ongoing technical and technological research have allowed for developments in miniaturisation, sensing technologies, mobile computing, wireless connectivity, novel input and output devices, wearable/ultra-light devices and, of course, the Cloud.
  • Computational power is moving away from the desktop and towards an increasingly seamless integration with the physical world.
Imaginary Interfaces
Evolution of the Computer

Wearable Computing

Head Mounted Display
Multi Touch Music Wall
Wrist Device
Touch Typing
Voice Following Dress

Bodily Interaction / Tangible Computing

Our bodies are the foundation of the way in which we experience and interact with our surroundings. We use the sensory feedback of our five senses (smell, sight, touch, hearing and taste) to determine an adequate response to our surrounding environment. Computers today still support only limited bodily interaction, but this is changing, and there is increasing interest in engaging these senses in interaction design.

In 1986, Bill Buxton described a human interacting with a computer as a being with one well-developed eye, a long right arm, uniform-length fingers and ears, but lacking legs and a sense of smell or touch. He argued for a greater involvement of the body in Human-Computer Interaction: '(…) when compared to other human operated machinery (such as the automobile), today’s computer systems make extremely poor use of the potential of the human’s sensory and motor systems'. (Buxton, B. 1986)

An interactive experience can be much more tangible when a user can explore it by moving around freely in a physical space, rather than probing a system through an index finger. We need to embrace the body's role in the interaction with technology; this ranges from minor bodily involvement, like using a computer mouse, to interfaces that require the use of the whole body. Our experience and perception of the world is always rooted in the body and is always action-oriented.

A tangible user interface allows a user to interact with digital information through the physical environment.

The Reactable
Drum Machine
Tangible Media Group - OnObject
Tangible Pixels


Gestures, Body Language, Gesture-Based Interaction (Patrick)

Gesture-Based Interaction

Interfaces are the medium that shapes and defines our digital interactions. The keyboard and mouse are still the primary means by which we interact with technology. As a result, our experience is mediated by hardware designed to regiment our actions: the ways in which we share, play and socialise online exist within this rigid framework. However, an increasing range of sensors and computing devices means that this current standard faces a significant challenge. The diverse scenarios and requirements of future computer interaction call for more innovative modes of engagement, and gesture-based interaction is a promising candidate among them.

“A gesture is a motion of the body that contains information. Waving goodbye is a gesture. Pressing a key on a keyboard is not a gesture because the motion of a finger on its way to hitting a key is neither observed nor significant. All that matters is which key was pressed.” Kurtenbach and Hulteen (1990)

Mark Weiser suggests that computer systems should adapt to us and not the other way around. One of his principles of ubicomp technology was that ‘the more you can do by intuition the smarter you are; the computer should extend your unconscious.’ By using gestures, we can design interfaces that build on and extend the ways in which we already interact with the world around us. These gestures are intimately related to communication and make possible a much richer language of interactions.

Early Examples

Theremin

The theremin is an early example of an instrument played using physical gestures. The musician moves their hands in front of two antennas, with one hand controlling the pitch and the other controlling the volume. “The Theremin is successful because there is a direct mapping of hand motion to continuous feedback, enabling the user to quickly build a mental model of how to use the device.” Buxton (2011)
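That pitch/volume mapping is simple enough to sketch in code. The Python below is only an illustration of the principle: the distance ranges, the exponential pitch curve and the sensor readings are hypothetical assumptions, chosen to show how continuous hand position can drive continuous feedback.

```python
# A minimal sketch of the theremin's control mapping, not a real
# synthesiser: two hand-to-antenna distances (hypothetical sensor
# readings, in centimetres) are mapped continuously to pitch and volume.

def distance_to_pitch(distance_cm: float) -> float:
    """Map pitch-hand distance to a frequency in Hz. Closer to the
    antenna = higher pitch; an exponential mapping gives musically even
    steps, here spanning two octaves (220-880 Hz) over a 5-50 cm range."""
    d = min(max(distance_cm, 5.0), 50.0)   # clamp to the usable range
    t = (50.0 - d) / 45.0                  # 0.0 (far) .. 1.0 (near)
    return 220.0 * (2.0 ** (2.0 * t))      # 220 Hz .. 880 Hz

def distance_to_volume(distance_cm: float) -> float:
    """Map volume-hand distance to an amplitude in 0..1 (closer is
    quieter, as on a real theremin's volume antenna)."""
    d = min(max(distance_cm, 5.0), 50.0)
    return (d - 5.0) / 45.0

# Continuous feedback: every new reading immediately changes the tone,
# which is what lets players build a mental model of the instrument.
for pitch_hand, volume_hand in [(40.0, 30.0), (25.0, 30.0), (10.0, 45.0)]:
    print(f"{distance_to_pitch(pitch_hand):6.1f} Hz at "
          f"volume {distance_to_volume(volume_hand):.2f}")
```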

Videoplace

Myron Krueger, the creator of Videoplace, was unhappy with several aspects of traditional computers. He resented both having to sit down to use them and having to use a keyboard to interact with them, but in particular he resented the fact that the computer seemed to deny the existence of his body. Videoplace is a system intended to respond to these concerns. It uses a combination of projectors, video cameras, and other hardware to place users within an interactive environment. The movements of users form silhouettes that provide a sense of presence while interacting with onscreen objects and other users. Evident in the installation are early signs of gestures, like pinch zooming, that we have become familiar with today.

Very Nervous System

Very Nervous System is an interactive art installation by David Rokeby that translates physical gestures into real-time interactive sound environments. A computer observes the physical gestures of human bodies through a video camera and translates them, in real time, into improvised music directly related to the qualities of the movements themselves. This creates a direct and visceral relationship between body, sound, space and technology. Rokeby describes how, after spending some time interacting with Very Nervous System, which strongly reinforces a sense of connection with the surrounding environment, he feels linked to the world around him even when simply walking down the street. In his view, the qualities of the interface have a lasting effect on the way we perceive the world around us.
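One common way a computer can 'observe' movement in a video feed is frame differencing: comparing consecutive frames and measuring how much has changed. The Python sketch below illustrates that general idea only (it is not Rokeby's actual implementation), and synthetic frames stand in for a real camera feed.

```python
# A minimal sketch of frame differencing: the amount of change between
# consecutive frames becomes a "motion energy" value that could drive
# sound parameters such as note density or volume.

import numpy as np

def motion_energy(prev_frame: np.ndarray, frame: np.ndarray) -> float:
    """Mean absolute pixel difference between two greyscale frames,
    normalised to 0..1. A still scene gives 0; large movement tends
    towards 1."""
    diff = np.abs(frame.astype(np.int16) - prev_frame.astype(np.int16))
    return float(diff.mean()) / 255.0

rng = np.random.default_rng(0)
prev = rng.integers(0, 256, size=(120, 160), dtype=np.uint8)  # fake frame

for step in range(3):
    # A real system would grab the next camera frame here.
    frame = np.roll(prev, shift=step * 4, axis=1)  # simulate movement
    print(f"frame {step}: motion energy {motion_energy(prev, frame):.3f}")
    prev = frame
```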

Recent Examples

Multi-Touch

Multi-touch devices enable users to interact with systems using more than one finger at a time. These devices are also able to accommodate multiple users at the same time, which is useful for large-scale scenarios like interactive walls or tabletops.
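At the software level, multi-touch comes down to bookkeeping: each finger is assigned a persistent ID, and gestures are read off the relationships between the tracked points. The Python sketch below shows this for a pinch-zoom; the event format is a hypothetical stand-in, not any specific toolkit's API.

```python
# A minimal sketch of multi-touch tracking and pinch-zoom detection.
from __future__ import annotations

import math

touches: dict[int, tuple[float, float]] = {}  # touch id -> (x, y)

def on_touch(event: str, touch_id: int, x: float, y: float) -> None:
    """Update the table of active touches from a (hypothetical) event."""
    if event in ("down", "move"):
        touches[touch_id] = (x, y)
    elif event == "up":
        touches.pop(touch_id, None)

def pinch_distance() -> float | None:
    """Distance between the first two active touches, if there are two."""
    if len(touches) < 2:
        return None
    (x1, y1), (x2, y2) = list(touches.values())[:2]
    return math.hypot(x2 - x1, y2 - y1)

# Two fingers land, then move apart: a growing distance means zoom in.
on_touch("down", 1, 100, 100)
on_touch("down", 2, 200, 100)
start = pinch_distance()
on_touch("move", 2, 260, 100)
print(f"zoom factor {pinch_distance() / start:.2f}")  # > 1.0: zoom in
```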

Spatial Operating Environment

Within a spatial operating environment such as Oblong Industries' g-speak, the user is able to combine gestures and spatial location to interact with on-screen objects. One of the company's founders, John Underkoffler, was an advisor on the film Minority Report.

The Kinect Breakthrough

The Kinect is a motion-sensing input device by Microsoft, originally for the Xbox 360. It enables users to interact with the Xbox through a gesture-based interface without the need for a controller. However, the potential of the Kinect extends well beyond gaming.

The Kinect sensor makes gesture interaction widely available. Its capabilities were first extended by a community of hackers, and Microsoft has since released an official Kinect SDK for Windows, accompanied by an improved sensor with a minimum range of 40 cm.
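The sensor's key output for gesture work is a skeleton: per-frame 3D positions for the user's joints. As a minimal sketch of how a gesture can be read off such data, the Python below detects a 'raise hand' pose; the joint names and coordinates are hard-coded stand-ins, where a real program would read them each frame from the Kinect SDK or a community wrapper.

```python
# A minimal sketch of skeleton-based gesture detection with hypothetical
# joint data (one frame of (x, y, z) positions in metres, y growing up).

def hand_raised(skeleton: dict) -> bool:
    """True if the right hand is above the head in this frame."""
    _, hand_y, _ = skeleton["hand_right"]
    _, head_y, _ = skeleton["head"]
    return hand_y > head_y

frame = {
    "head":       (0.00, 1.60, 2.0),
    "hand_right": (0.25, 1.85, 1.9),
}
if hand_raised(frame):
    print("gesture: raise hand")  # e.g. open a menu or pause the game
```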

Kinect Examples (Hacks)

Kinect Shadow Puppet
Kinect Internet Browsing
Playing Street Fighter with the Kinect
Kinect Leading the Blind
Earthquake Rescue


BCI - Brain Computer Interface (Rody)

Brain Computer Interface

A Brain Computer Interface (BCI) establishes a direct communication channel between the brain and an external computer by measuring changes in electrical activity in the brain. BCI is primarily used within the field of neuroprosthetics, restoring damaged sight, hearing and movement. However, BCI is increasingly being researched and implemented within the fields of computer gaming, contemporary art, and other interactive media, which has resulted in the recent commercialization of BCI products.

Methods

Electroencephalography

EEG is a method of recording electrical activity along the scalp using a number of electrodes. These electrodes are attached to the head using a conductive paste, or alternatively via electrode-embedded caps.
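A common first step in turning an EEG recording into a control signal is band power: how much energy the signal carries in a given frequency band, such as the 8-12 Hz alpha band. The Python sketch below shows the idea on a synthetic signal; the sampling rate and the bands are illustrative assumptions, and a real BCI would stream samples from the electrodes.

```python
# A minimal sketch of EEG band-power extraction using an FFT.
import numpy as np

FS = 256  # assumed sampling rate in Hz

def band_power(signal: np.ndarray, low: float, high: float) -> float:
    """Mean spectral power of the signal between `low` and `high` Hz."""
    spectrum = np.abs(np.fft.rfft(signal)) ** 2
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / FS)
    mask = (freqs >= low) & (freqs <= high)
    return float(spectrum[mask].mean())

# Synthetic one-second "recording": a 10 Hz alpha rhythm plus noise.
t = np.arange(FS) / FS
eeg = np.sin(2 * np.pi * 10 * t) + 0.5 * np.random.default_rng(1).normal(size=FS)

alpha = band_power(eeg, 8, 12)
higher = band_power(eeg, 13, 40)
print(f"alpha power {alpha:.2f} vs 13-40 Hz power {higher:.2f}")
# A simple BCI could treat high relative alpha (relaxed, eyes closed)
# as one "switch" state and low alpha as the other.
```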

Electrocorticography

ECoG is a similar but partially invasive practice where electrodes are placed directly onto the cerebral cortex, which requires a craniotomy.

Single Unit Recording

Single Unit Recording is a highly invasive procedure in which microelectrodes are inserted into the brain to record the electrical responses of a single neuron. While this method is the most precise, it remains largely unused as it carries a higher risk of long-term damage. A further issue is that the signal tends to deteriorate over time as the tissue around the microelectrode becomes inflamed.

The Emerging World of ECoG Neuroprosthetics

Background

  • The first human EEG was recorded in 1924 by the German physiologist and psychiatrist Hans Berger. His experiment was the result of ongoing research and animal experimentation dating back to 1790.
  • Herbert Jasper and Wilder Penfield pioneered ECoG in the early 1950s at the Montreal Neurological Institute. The method was used to discern specific areas in the brain where epileptic seizures occurred.
  • In 1957 John Eccles used intracellular single unit recording to study synaptic mechanisms in motoneurons, for which he won the 1963 Nobel Prize.
  • The first major breakthrough in BCI came in 1969, when researchers at the University of Washington School of Medicine showed that monkeys could learn to control a ‘biofeedback meter arm’ using neuronal activity. Since then much research and experimentation has been done using various EEG, ECoG and single unit methods, and in the 1990s the first human BCI systems began to be developed.
Monkey controls robotic arm with brain computer interface

Current Exploration of BCI Technology

Medicine

Much success in BCI technology can be seen within the field of neuroprosthetics. Cochlear implants are currently available for people with non-congenital hearing disorders, while research also continues into retinal implants. There is likewise continuing research into BCI systems that restore control of bodily movements, and into ‘assistive devices’ for patients with various mobility disorders. In January 2012, scientists at the University of California, Berkeley succeeded in decoding electrical activity in the region of the brain that is stimulated by hearing. A computer program was created that can convert brainwaves into individual words using rhythm and frequency recognition. This breakthrough is claimed to be the first step in creating a prosthetic speaking device that would enable people with no voice to communicate through the thought of words alone.

Cochlear Implant
Paralyzed man moves robotic arm with his thoughts

Video Gaming

BCI is increasingly being used in video game development as a new and exciting way to control interactive environments. Most notably, in 2007 Emotiv Systems, an electronic game company based in San Francisco, released an EEG headset which incorporated three specific applications. The headset could detect winks, smiles and other facial expressions, which would be transferred to an avatar in real time; this was a much easier method of conveying emotion within virtual worlds than complex key commands. The headset could also detect certain emotional states, such as calm or excitement, which could alter aspects of the virtual atmosphere like the soundtrack or the way virtual characters interact with the player. Finally, the headset could detect a number of physical intentions, like push, pull and rotate, allowing the player to manipulate virtual objects.
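Those three applications amount to three event streams that a game loop has to route. The Python sketch below shows one hypothetical way of doing that; the event names are illustrative placeholders, not Emotiv's actual SDK vocabulary.

```python
# A minimal sketch of routing the three kinds of headset detection
# described above; all event names are hypothetical.

EXPRESSIONS = {"wink", "smile", "frown"}   # mirrored on the avatar
AFFECT      = {"calm", "excited"}          # steers atmosphere and music
COMMANDS    = {"push", "pull", "rotate"}   # manipulates virtual objects

def handle(event: str) -> str:
    if event in EXPRESSIONS:
        return f"avatar plays '{event}' animation"
    if event in AFFECT:
        return f"soundtrack shifts to '{event}' mood"
    if event in COMMANDS:
        return f"apply '{event}' force to selected object"
    return "ignored"

for event in ["smile", "excited", "push"]:  # fake detections
    print(handle(event))
```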

Emotiv's Epoc headset

Contemporary Art

In February 2012 the Japanese artist and musician Masaki Batoh released an album entitled ‘Brain Pulse Music’, which consists of sounds made with an EEG device that converts brainwaves into frequency waves, which are then modulated in real time from a controller.

Masaki Batoh BRAIN PULSE MUSIC

Interactive Film

Myndplay is emerging software that enables a viewer to alter aspects of specially made interactive films in real time. The software uses an EEG headset that detects the emotional state of the individual viewer and changes the film accordingly.
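A minimal sketch of how such branching could work, assuming the headset exposes a per-second attention or emotion score (the smoothing window and threshold below are assumptions, not Myndplay's actual algorithm):

```python
# A minimal sketch of attention-driven film branching: a smoothed
# score from a (hypothetical) EEG headset picks the next clip.

def choose_branch(attention_samples: list, threshold: float = 0.6) -> str:
    """Average the headset's readings over the decision window; a
    focused viewer gets one storyline, a distracted viewer another."""
    level = sum(attention_samples) / len(attention_samples)
    return "clip_focused.mp4" if level >= threshold else "clip_distracted.mp4"

window = [0.72, 0.65, 0.58, 0.70]  # fake per-second attention readings
print(choose_branch(window))       # -> clip_focused.mp4
```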

World's first mind controlled video & movie player

