Fortunately, researchers have worked steadily for decades to make computers and communication devices nearly vanish: embedding them in our surroundings and networking them so they can sense the environment and interact with us in ways that fit more naturally into how we live and work.
Early research groups developed scenarios to demonstrate the utility of these systems, which included independence support for the elderly, meeting facilitation, augmented driving and enhanced social interaction.
In 2008, the working group on Ambient Computing from the European technology consortium InterLink published a paper summarizing the state of the art in Ambient Computing, focusing on the social awareness of systems and privacy concerns. Included in this paper are a history of projects and use-case scenarios and a description of the original vision of ubiquitous, pervasive networks based on many "invisible" small computing devices embedded into the environment. These "smart ecosystems" of devices were to provide an intuitive user experience, enabling new types of interaction, communication and collaboration:
"...the degree of diffusion of smart devices...will result in smart ecosystems that might parallel other ecosystems in the not too far future.
... a major challenge will be to orchestrate the large number of individual elements and their relationships, connect and combine them via different types of communication networks to higher level, aggregated entities and investigate their emerging behaviour."
Similarly, MIT's Project Oxygen, active around 2000-2003, focused on two ideas: Intelligent Spaces, which would sense the presence of people, their tasks, and even their attention, and react appropriately; and Mobile Devices, which presaged today's smartphones by connecting users to the physical world through a cluster of technologies such as cameras, sensors, networking, accelerometers, microphones, speakers, telephony, and GPS.
Examples of smart artifacts include the HelloWall, a wall-sized (if somewhat primitive) ambient display that communicates through a flexible dot-based code depending on who is nearby; the ViewPort, a mobile handheld device that can communicate with other items in the room, including the HelloWall; and a variety of telepresence devices such as the MirrorSpace and VideoProbe.
Scenarios from the MIT Oxygen project included: a business conference involving people in different countries coordinating a meeting in London using different languages and automated scheduling and travel planning, as well as navigational and data assistance once they arrive, and a "guardian angel" which allows aging-in-place by providing memory and safety support to elderly people living independently.
Over the last decade, many of these scenarios have been at least partially realized, often through device configurations not precisely foreseen by the early ambient-computing thinkers (smartphones, GPS, RFID tags, Bluetooth), as the market determines the dominant technologies. These configurations now emphasize embedded devices less and portable devices more, but the end result is the same.
Smartphones bristling with data inputs from GPS, accelerometers, networking, video cameras, microphones, keyboards, and multi-touch have accelerated the evolution of pervasive computing by allowing mashups between all of these technologies, resulting in what could be called augmented intelligence for daily living.
Embedded and Pervasive Performance Support Systems
Embedded technologies would be a natural fit in performance support systems, particularly where use of a keyboard or mouse is inconvenient or impossible. Embedded performance support systems have been portrayed for years in movies and science fiction, although not always favorably, usually when they stop being support and start taking over. Intelligent computers portrayed in movies and TV shows like 2001: A Space Odyssey and Star Trek inhabit the entire environment and answer questions using natural language. Somehow they always seem to understand the questioner's intent perfectly (how many search engines can do that?) and gather their own data through sensors as well as direct input. Embedded support is particularly attractive in the classic situations where a standalone job aid falls short:
1. When use of a job aid would damage credibility
2. When speedy performance is a priority
3. When novel and unpredictable situations are involved
4. When smooth and fluid performance is a top priority
5. When the employee lacks sufficient reading, listening, or reference skills
6. When the employee is not motivated
The Minority Report gestural interface
In a pervasive computing scenario, devices sense or communicate with people in order to provide contextually appropriate services. The key here is the interface between the device and the human, since the goal is to make interaction with the devices as natural and intuitive as possible. These days an astonishing amount of creativity and innovation is being directed at interface design, and the results promise to transform both life and work.
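The idea of "contextually appropriate services" can be sketched as a simple rule engine that maps sensed context to actions. All sensor names, thresholds, and service names below are hypothetical, invented for illustration rather than drawn from any real pervasive-computing framework:

```python
# Minimal sketch of context-aware service selection.
# Sensor keys, thresholds, and service names are illustrative assumptions.

def choose_services(context):
    """Map a dict of sensed context values to a list of service names."""
    services = []
    if context.get("person_present"):
        # Dim room: brighten the lights for the occupant.
        if context.get("ambient_light", 1.0) < 0.2:
            services.append("raise_lighting")
        # The occupant is looking at the display: show their information.
        if context.get("attention") == "display":
            services.append("show_personal_dashboard")
    if context.get("meeting_in_progress"):
        services.append("silence_notifications")
    return services

print(choose_services({"person_present": True,
                       "ambient_light": 0.1,
                       "attention": "display"}))
# -> ['raise_lighting', 'show_personal_dashboard']
```

Real systems replace the hand-written rules with learned models, but the interface contract is the same: sensed context in, appropriate services out.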
Some of the more interesting modes of human-computer interaction that have been tried include directed attention, voice, gesture tracking, motion (body motion through space or motion of the device, like the Wiimote or the Siftables shown below), haptic feedback-enhanced touch and even brain-computer interfaces. Some of the most exciting of these innovative interfaces are shown below.
Interactive Floor Projection Screens
These systems project images on the floor, and use a camera to track body motion across the surface. You may be familiar with these displays from seeing them at malls and theaters, but they could be used to select files, control other devices, etc.
This video shows an interactive floor installation at a Japanese art gallery, showing its use as an interface for retrieving information about works of art. For more information about building this type of interface, see the Natural User Interface Group for setup instructions and code.
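At their core, interactive floor systems like the one above pair a projector with a camera that detects where a body moved between frames. A minimal sketch of that camera-side step, using simple frame differencing (frame sizes, the threshold value, and the toy "foot" blob are all illustrative assumptions):

```python
import numpy as np

# Sketch of motion detection for an interactive floor: threshold the
# pixel-wise difference between two grayscale frames and return the
# centre of the changed region. Threshold and frame data are illustrative.

def motion_centroid(prev_frame, frame, threshold=30):
    """Return (row, col) centre of changed pixels, or None if no motion."""
    diff = np.abs(frame.astype(int) - prev_frame.astype(int))
    moved = np.argwhere(diff > threshold)
    if moved.size == 0:
        return None
    return tuple(moved.mean(axis=0).round().astype(int))

prev = np.zeros((240, 320), dtype=np.uint8)   # empty floor
cur = prev.copy()
cur[100:120, 150:170] = 255                   # a bright "foot" appears here
print(motion_centroid(prev, cur))
# -> (110, 160)
```

The returned coordinates would then be mapped into the projector's coordinate space to trigger whatever the installation does at that spot (ripples, menus, information panels).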
Multitouch interfaces (like the one on the iPhone) are used in walls, tables, and smartphones. Microsoft Surface, a multi-touch table that allows multi-user, fine-grained control of objects on screen, also interacts with real objects and other devices using cameras and Wi-Fi. This is a great example of "smart ecosystems" of intelligent devices which sense the status of the environment and lower barriers between people and their information. One demonstration shows an image being transferred from a camera to the Surface table to a smartphone, simply by laying the camera and smartphone on the table.
Touch Wall
An intelligent whiteboard that uses cameras to track hand motion across a vertical display, allowing zoomable, panning navigation through information in a non-linear format.
Interactive Wall from Clarity Nearlife
Interactive Window
Being locked to a keyboard, mouse and monitor restricts movement, and limits interaction with people and the environment. In many job situations people simply do not have any hands free for interaction with the computer or device.
These new interfaces and device clusters have a lot of potential in educational contexts and for performance support.
"It seems like a paradox but it will soon become reality: The rate at which computers disappear will be matched by the rate at which computer/information technology will increasingly permeate our environments and determine our lives" (Streitz & Nixon, 2005) [LINK].
Picture the scenario of a nurse or doctor caring for a patient with a complex set of conditions in a hospital. Performance support might consist of a mashup between the patient's medical record, data from monitors, specific handoff instructions from the last shift, information about whatever is unusual in the care plan, and contextual guidance to help prevent errors. Each of these technologies currently exists, but separately and not well integrated. Perhaps augmented reality could even be used to project displays based on imaging data directly onto the patient.
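The integration step in that hospital scenario can be sketched as a simple data mashup: merge record, monitor, and handoff sources into one view and flag readings outside expected ranges. Every field name, threshold, and data value here is invented for illustration, not taken from any clinical system:

```python
# Hypothetical sketch of the caregiver mashup described above. All field
# names, ranges, and values are illustrative assumptions.

NORMAL_RANGES = {"heart_rate": (60, 100), "spo2": (94, 100)}

def build_care_view(record, monitor, handoff):
    """Combine three separate data sources into one annotated view."""
    alerts = [f"{k} out of range: {v}"
              for k, v in monitor.items()
              if k in NORMAL_RANGES
              and not (NORMAL_RANGES[k][0] <= v <= NORMAL_RANGES[k][1])]
    return {"patient": record["name"],
            "allergies": record.get("allergies", []),
            "vitals": monitor,
            "handoff": handoff,
            "alerts": alerts}

view = build_care_view(
    {"name": "J. Doe", "allergies": ["penicillin"]},   # medical record
    {"heart_rate": 112, "spo2": 97},                   # live monitor data
    ["Check wound dressing at 14:00"])                 # shift handoff notes
print(view["alerts"])
# -> ['heart_rate out of range: 112']
```

The point is not the toy thresholds but the shape of the problem: the pieces already exist as separate systems, and the performance support value comes from fusing them into one contextual view.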
Number 3, "When novel and unpredictable situations are involved," suggests a solution like HAL in 2001: A Space Odyssey. HAL was built to adapt to unforeseen situations, and actually did, a little too well.