The "interface" between humans and technology 2014

From CSISWiki



User Interface

To work with a system, users have to be able to control and assess the state of the system. For example, when driving an automobile, the driver uses the steering wheel to control the direction of the vehicle, and the accelerator pedal, brake pedal and gearstick to control its speed. The driver perceives the position of the vehicle by looking through the windshield and the exact speed of the vehicle by reading the speedometer. The user interface of the automobile is, on the whole, composed of the instruments the driver can use to accomplish the tasks of driving and maintaining the automobile. Interactive products must likewise be designed to support the ways humans interact with the world and with information.

The Changing Interface

The Evolution of the Human Machine Interface



The user interface has become steadily more invisible, automatic and convenient. As more and more machines become intelligent, the environment itself becomes intelligent; this is where ambient intelligence comes from. With the development of technology, there are ever more possibilities for our imagination!



  • In computing, ambient intelligence (AmI) refers to electronic environments that are sensitive and responsive to the presence of people. Ambient intelligence is a vision on the future of consumer electronics, telecommunications and computing that was originally developed in the late 1990s for the time frame 2010–2020. In an ambient intelligence world, devices work in concert to support people in carrying out their everyday life activities, tasks and rituals in an easy, natural way using information and intelligence that is hidden in the network connecting these devices (see Internet of Things). As these devices grow smaller, more connected and more integrated into our environment, the technology disappears into our surroundings until only the user interface remains perceivable by users.
  • Ambient intelligence is influenced by user-centered design where the user is placed in the center of the design activity and asked to give feedback through specific user evaluations and tests to improve the design or even co-create the design together with the designer (participatory design) or with other users (end-user development).

Required Technologies

  • Unobtrusive hardware (Miniaturization, Nanotechnology, smart devices, sensors etc.)
  • Seamless mobile/fixed communication and computing infrastructure (interoperability, wired and wireless networks, service-oriented architecture, semantic web etc.)
  • Dynamic and massively distributed device networks, which are easy to control and program (e.g. service discovery, auto-configuration, end-user programmable devices and systems etc.)
  • Human-centric computer interfaces (intelligent agents, multimodal interaction, context awareness etc.)
  • Dependable and secure systems and devices (self-testing and self repairing software, privacy ensuring technology etc.)


The Ambient Life


  • Internet of Things - Wayne
  • Augmented Reality - Bochao



The Internet of Things (IoT) refers to uniquely identifiable objects and their virtual representations in an Internet-like structure. The term Internet of Things was proposed by Kevin Ashton in 1999, though the concept has been discussed in the literature since at least 1991. The concept first became popular through the Auto-ID Center at MIT and related market analysis publications. In the early days, radio-frequency identification (RFID) was seen as a prerequisite for the Internet of Things: if all objects and people in daily life were equipped with identifiers, they could be managed and inventoried by computers. Besides RFID, the tagging of things may be achieved through technologies such as near field communication, barcodes, QR codes and digital watermarking.

Auto-ID Labs

Original Definition

In a seminal 2009 article for the RFID Journal, "That 'Internet of Things' Thing", Ashton made the following assessment:

Today computers—and, therefore, the Internet—are almost wholly dependent on human beings for information. The problem is, people have limited time, attention and accuracy—all of which means they are not very good at capturing data about things in the real world. And that's a big deal. If we had computers that knew everything there was to know about things—using data they gathered without any help from us—we would be able to track and count everything, and greatly reduce waste, loss and cost. We would know when things needed replacing, repairing or recalling, and whether they were fresh or past their best.
—Kevin Ashton, 'That 'Internet of Things' Thing', RFID Journal, July 22, 2009

As of 2013, research into the Internet of Things was still in its infancy; consequently, there is no standard definition of the Internet of Things.

Key Tech

Unique Addressability of Things

  • Electronic Product Code: RFID-tags, QR Codes
  • IPv6: Because of the extremely large address space of the IPv6 protocol, it would be possible to assign addresses to, and communicate with, devices attached to virtually all human-made objects.
  • GS1/EPCglobal EPC Information Services: This system is being used to identify objects in industries ranging from aerospace to fast moving consumer products and transportation logistics.
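As a concrete (toy) illustration of unique addressability, the sketch below parses a GS1 SGTIN pure-identity EPC URI, the identifier format commonly encoded on RFID tags, into its company prefix, item reference and serial number. The helper name is made up for illustration; real EPCIS software handles many more EPC schemes.

```python
def parse_sgtin(uri):
    """Split a GS1 SGTIN pure-identity EPC URI into its three fields.

    Expected shape: urn:epc:id:sgtin:<companyPrefix>.<itemRef>.<serial>
    """
    prefix = "urn:epc:id:sgtin:"
    if not uri.startswith(prefix):
        raise ValueError("not an SGTIN pure-identity URI: %r" % uri)
    company_prefix, item_ref, serial = uri[len(prefix):].split(".", 2)
    return {"company_prefix": company_prefix,
            "item_ref": item_ref,
            "serial": serial}

# Example tag read (sample SGTIN values, for illustration only):
epc = parse_sgtin("urn:epc:id:sgtin:0614141.112345.400")
print(epc["company_prefix"], epc["item_ref"], epc["serial"])
```

With identifiers decomposed this way, a computer can group inventory by manufacturer (company prefix) or product type (item reference) without human data entry, which is exactly the point Ashton makes below.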

Artificial Intelligence


Embedded intelligence presents an "AI-oriented" perspective on the Internet of Things, which can be defined more clearly as: leveraging the capacity to collect and analyze the digital traces left by people when interacting with widely deployed smart things, in order to discover knowledge about human life, environmental interaction, and social connections and behavior.

Positioning technology

  • GPS
  • TCP/IP

Fancy Applications

Smart House


Bill Gates' house is a large mansion on the side of a hill overlooking Lake Washington in Medina, Washington, United States. He spent more than $100 million on the house, which automatically controls its lighting, digital art and security.

Smart homes rely on networking, programming and automation to connect all the devices and appliances in your home so they can communicate with each other and with you. With a smart home, you can control just about any element of daily living. Systems can turn on your coffee maker in the morning, adjust the temperature of your heated pool or control the time your landscape lighting goes on at night.
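The scheduling part of such a system can be sketched minimally as follows. The device names and times are made up for illustration; a real controller would poll the clock and send each due command over the home network.

```python
from datetime import time

# Hypothetical automation rules: (trigger time, device, command)
SCHEDULE = [
    (time(6, 30), "coffee_maker", "on"),
    (time(7, 0), "pool_heater", "set 28C"),
    (time(20, 0), "landscape_lights", "on"),
]

def due_commands(now, schedule=SCHEDULE):
    """Return the (device, command) pairs whose trigger time matches `now`."""
    return [(dev, cmd) for t, dev, cmd in schedule if t == now]

print(due_commands(time(6, 30)))  # [('coffee_maker', 'on')]
```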

Smart TV


Smart TVs place a stronger focus on online interactive media, Internet TV, over-the-top content, on-demand streaming media and home networking access, with much less focus on the traditional broadcast media that traditional television sets and set-top boxes offer.

Smart Refrigerator


A smart refrigerator is a refrigerator that has been programmed to sense what kinds of products are stored inside it and to keep track of the stock through barcode or RFID scanning. This kind of refrigerator is often equipped to determine by itself when a food item needs to be replenished.
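The stock-tracking idea can be sketched as a toy inventory keyed by scanned barcodes; every scan in or out adjusts a counter, and items at or below a threshold go on the shopping list. All names here are illustrative, not any vendor's actual API.

```python
from collections import Counter

class SmartFridge:
    """Toy stock tracker: each barcode scan adds or removes one item."""
    def __init__(self, reorder_level=1):
        self.stock = Counter()
        self.reorder_level = reorder_level

    def scan_in(self, barcode):
        self.stock[barcode] += 1

    def scan_out(self, barcode):
        if self.stock[barcode] > 0:
            self.stock[barcode] -= 1

    def shopping_list(self):
        """Barcodes at or below the reorder threshold."""
        return sorted(b for b, n in self.stock.items()
                      if n <= self.reorder_level)

fridge = SmartFridge()
fridge.scan_in("milk-1L"); fridge.scan_in("milk-1L"); fridge.scan_in("eggs-12")
fridge.scan_out("milk-1L")
print(fridge.shopping_list())  # ['eggs-12', 'milk-1L']
```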



Nest Labs is a home automation company headquartered in Palo Alto, California, that designs and manufactures sensor-driven, Wi-Fi-enabled, self-learning, programmable thermostats and smoke detectors. Co-founded by former Apple engineers Tony Fadell and Matt Rogers in 2010, the start-up company quickly grew to have more than 130 employees by the end of 2012.

Learning Thermostat

Smoke and Carbon Monoxide Alarm

On January 13, 2014, Google announced plans to acquire Nest Labs for US$3.2 billion, with Nest Labs continuing to operate under its own brand.
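A "self-learning" schedule of the kind the Learning Thermostat popularized can be sketched very roughly: record the user's manual adjustments per hour of the day and propose their running average as the target. This is an illustration of the idea only, not Nest's actual algorithm.

```python
from collections import defaultdict

class LearningThermostat:
    """Toy self-learning thermostat: remembers manual adjustments and
    proposes the running average setpoint for each hour of the day."""
    def __init__(self, default_c=20.0):
        self.default_c = default_c
        self.history = defaultdict(list)  # hour -> list of chosen setpoints

    def adjust(self, hour, setpoint_c):
        """Record a manual adjustment made at the given hour."""
        self.history[hour].append(setpoint_c)

    def target(self, hour):
        """Learned target for this hour, or the default if never adjusted."""
        samples = self.history[hour]
        if not samples:
            return self.default_c
        return sum(samples) / len(samples)

t = LearningThermostat()
t.adjust(7, 21.0); t.adjust(7, 23.0)  # two mornings of manual tweaks
print(t.target(7), t.target(3))       # 22.0 20.0
```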



Nabaztag is a Wi-Fi enabled ambient electronic device in the shape of a rabbit, invented by Rafi Haladjian and Olivier Mével. It can connect to the Internet (to download weather forecasts, read its owner's email, etc.). It is also customizable and programmable to an extent.

Smart University


Group work: Thinking of different equipment that can be changed and added into our lab.

Future and Prospect

In the near future the Internet and wireless technologies will connect different sources of information such as sensors, mobile phones and cars ever more tightly. The number of devices connected to the Internet is increasing, seemingly exponentially. These billions of components produce, consume and process information in different environments such as logistic applications, factories and airports, as well as in the work and everyday lives of people. Society needs new, scalable, compatible and secure solutions both for the management of the ever broader, more complexly networked Internet of Things and for the support of various business models.

Everything will be automatic as you wish.



Augmented reality (AR) is a live, direct or indirect view of a physical, real-world environment whose elements are augmented (or supplemented) by computer-generated sensory input such as sound, video, graphics or GPS data. It is related to a more general concept called mediated reality, in which a view of reality is modified (possibly even diminished rather than augmented) by a computer. As a result, the technology functions by enhancing one's current perception of reality. By contrast, virtual reality replaces the real world with a simulated one. Augmentation is conventionally in real time and in semantic context with environmental elements, such as sports scores on TV during a match. With the help of advanced AR technology (e.g. adding computer vision and object recognition), the information about the user's surrounding real world becomes interactive and digitally manipulable. Artificial information about the environment and its objects can be overlaid on the real world.


At present, there are two common definitions for augmented reality.

One was proposed by Ronald Azuma of the University of North Carolina in 1997. He considered that augmented reality has three characteristics:

  • Combines real and virtual
  • Interactive in realtime
  • Registered in 3D

The other definition is Milgram's Reality-Virtuality Continuum, proposed by Paul Milgram and Fumio Kishino in 1994. They regarded real and virtual environments as the two ends of a continuum, with the span between them called "Mixed Reality". The region close to the real environment is "Augmented Reality", while the region close to the virtual environment is "Augmented Virtuality".


Augmented reality. What is augmented reality?



Hardware components for augmented reality are: processor, display, sensors and input devices. Modern mobile computing devices like smartphones and tablet computers contain these elements which often include a camera and MEMS sensors such as accelerometer, GPS, and solid state compass, making them suitable AR platforms.

Software and Algorithms

A key measure of AR systems is how realistically they integrate augmentations with the real world. The software must derive real-world coordinates, independent of the camera, from camera images. That process is called image registration, and it uses different methods of computer vision, mostly related to video tracking. Many computer vision methods for augmented reality are inherited from visual odometry.

These methods usually consist of two stages. The first stage detects interest points, fiducial markers or optical flow in the camera images, using feature detection methods such as corner detection, blob detection, edge detection or thresholding, and/or other image processing methods. The second stage restores a real-world coordinate system from the data obtained in the first stage. Some methods assume that objects with known geometry (or fiducial markers) are present in the scene; in some of those cases the 3D structure of the scene must be precalculated beforehand. If part of the scene is unknown, simultaneous localization and mapping (SLAM) can map relative positions. If no information about scene geometry is available, structure-from-motion methods such as bundle adjustment are used. Mathematical methods used in the second stage include projective (epipolar) geometry, geometric algebra, rotation representation with the exponential map, Kalman and particle filters, nonlinear optimization and robust statistics.

Augmented Reality Markup Language (ARML) is a data standard developed within the Open Geospatial Consortium (OGC), consisting of an XML grammar to describe the location and appearance of virtual objects in the scene, as well as ECMAScript bindings to allow dynamic access to the properties of virtual objects. To enable rapid development of augmented reality applications, software development kits (SDKs) have emerged; some of the well-known AR SDKs are offered by Metaio, Vuforia, Wikitude and Layar.
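The second stage can be sketched in miniature. Assuming the first stage has already produced matched 2D marker positions between a reference frame and the current camera frame (an assumption for this sketch; real systems estimate full 3D pose), a closed-form least-squares fit recovers the rigid rotation-plus-translation transform between the frames:

```python
from math import atan2, cos, sin

def fit_rigid_2d(src, dst):
    """Least-squares 2D rotation + translation mapping src points onto dst.

    src, dst: equal-length lists of (x, y) correspondences, e.g. tracked
    marker positions in a reference frame and the current camera frame.
    Returns (theta, (tx, ty)) such that dst ~= R(theta) @ src + t.
    """
    n = len(src)
    # Centroids of both point sets.
    cx_s = sum(x for x, _ in src) / n; cy_s = sum(y for _, y in src) / n
    cx_d = sum(x for x, _ in dst) / n; cy_d = sum(y for _, y in dst) / n
    # Accumulate dot and cross terms of the centered correspondences;
    # the optimal rotation angle is atan2(cross, dot).
    dot = cross = 0.0
    for (xs, ys), (xd, yd) in zip(src, dst):
        ax, ay = xs - cx_s, ys - cy_s
        bx, by = xd - cx_d, yd - cy_d
        dot += ax * bx + ay * by
        cross += ax * by - ay * bx
    theta = atan2(cross, dot)
    c, s = cos(theta), sin(theta)
    # Translation maps the rotated source centroid onto the target centroid.
    tx = cx_d - (c * cx_s - s * cy_s)
    ty = cy_d - (s * cx_s + c * cy_s)
    return theta, (tx, ty)

# Markers rotated 90 degrees and shifted by (5, 0):
src = [(0, 0), (1, 0), (0, 1)]
dst = [(5, 0), (5, 1), (4, 0)]
theta, t = fit_rigid_2d(src, dst)  # theta ~= pi/2, t ~= (5, 0)
```

The same centroid-then-rotation structure underlies the 3D case (the Kabsch algorithm), which is what AR registration actually needs.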




Layar: video



AR can aid in visualizing building projects. Computer-generated images of a structure can be superimposed into a real life local view of a property before the physical building is constructed there. AR can also be employed within an architect's work space, rendering into their view animated 3D visualizations of their 2D drawings. Architecture sight-seeing can be enhanced with AR applications allowing users viewing a building's exterior to virtually see through its walls, viewing its interior objects and layout.

Soluis AR. General AR demonstration:


Augmented reality applications can complement a standard curriculum. Text, graphics, video and audio can be superimposed into a student's real-time environment. Textbooks, flashcards and other educational reading material can contain embedded "markers" that, when scanned by an AR device, produce supplementary information rendered to the student in a multimedia format. Students can participate interactively with computer-generated simulations of historical events, exploring and learning details of each significant area of the event site. AR can aid students in understanding chemistry by allowing them to visualize the spatial structure of a molecule and interact with a virtual model of it that appears, in a camera image, positioned at a marker held in their hand. Augmented reality technology also permits learning via remote collaboration, in which students and instructors not at the same physical location can share a common virtual learning environment populated by virtual objects and learning materials, and interact with one another within that setting.

Augmented Reality in Education: Shaw Wood Primary School



Augmented reality allows gamers to experience digital gameplay in a real-world environment. In the last 10 years there have been many improvements in technology, resulting in better movement detection, making possible devices such as the Wii, and also enabling direct detection of the player's movements.

Industrial design

AR can help industrial designers experience a product's design and operation before completion. Volkswagen uses AR for comparing calculated and actual crash test imagery. AR can be used to visualize and modify a car body structure and engine layout. AR can also be used to compare digital mock-ups with physical mock-ups for finding discrepancies between them.


Augmented reality can provide the surgeon with information that is otherwise hidden, such as the heartbeat rate, the blood pressure, the state of the patient's organs, etc. AR can be used to let a doctor look inside a patient by combining one source of images, such as an X-ray, with another, such as video. Examples include a virtual X-ray view based on prior tomography or on real-time images from ultrasound and confocal microscopy probes, or visualizing the position of a tumor in the video of an endoscope. AR can enhance viewing a fetus inside a mother's womb. See also Mixed reality.


The gaming industry has benefited greatly from the development of this technology. A number of games have been developed for prepared indoor and outdoor environments.

Augmented Reality Demo


AR systems can interpret foreign text on signs and menus and, in a user's augmented view, re-display the text in the user's language. Spoken words of a foreign language can be translated and displayed in a user's view as printed subtitles.

Introducing Word Lens
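The sign-translation idea above can be sketched as a toy: optical character recognition and on-screen rendering are assumed to be handled elsewhere, and this fragment only maps recognized sign text to the user's language via a small phrase dictionary (the entries are made up for illustration).

```python
# Hypothetical Spanish-to-English sign vocabulary.
PHRASES = {
    "salida": "exit",
    "empuje": "push",
    "tire": "pull",
}

def translate_sign(recognized_text):
    """Return the overlay text for a recognized sign, word by word.

    Words missing from the dictionary are passed through unchanged,
    which is what an AR overlay would sensibly do.
    """
    words = recognized_text.lower().split()
    return " ".join(PHRASES.get(w, w) for w in words)

print(translate_sign("SALIDA"))  # exit
```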


Augmented reality technology can also be applied in many other areas, such as archaeology, art, commerce, construction, the military, navigation, the office workplace, task support, television, tourism and sightseeing, etc.

Matt Mills: Image recognition that triggers augmented reality

The Future of Augmented Reality

The future of augmented reality looks bright; it has already found its way into our cell phones and video game systems.

The future of augmented reality:

