
CHI Evolution - Tovias Torres 1719099

  • HyperText

    The idea for hypertext (where documents are linked to related documents) is credited to Vannevar Bush's famous MEMEX idea from 1945. Ted Nelson coined the term "hypertext" in 1965. Engelbart's NLS system at the Stanford Research Laboratories in 1965 made extensive use of linking (funding from ARPA, NASA, and Rome ADC).
  • Text Editing

    In 1962 at the Stanford Research Lab, Engelbart proposed, and later implemented, a word processor with automatic word wrap, search and replace, user-definable macros, scrolling text, and commands to move, copy, and delete characters, words, or blocks of text. Stanford's TVEdit (1965) was one of the first CRT-based display editors that was widely used.
  • Direct Manipulation of graphical objects

    The now ubiquitous direct manipulation interface, where visible objects on the screen are directly manipulated with a pointing device, was first demonstrated by Ivan Sutherland in Sketchpad, which was his 1963 MIT PhD thesis.
  • Virtual Reality and "Augmented Reality"

    The original work on VR was performed by Ivan Sutherland at Harvard (1965-1968, funded by the Air Force, CIA, and Bell Labs). Very important early work was done by Tom Furness when he was at Wright-Patterson AFB. Myron Krueger's early work at the University of Connecticut was influential. Fred Brooks's and Henry Fuchs's groups at UNC did a lot of early research, including the study of force feedback. Much of the early research on head-mounted displays and on the DataGlove was supported by NASA.
  • The Mouse

    The mouse was developed at the Stanford Research Laboratory (now SRI) in 1965 as part of the NLS project (funding from ARPA, NASA, and Rome ADC) [9] to be a cheap replacement for light-pens, which had been used at least since 1954.
  • Touch Screen

    The touchscreen enables the user to interact directly with what is displayed, rather than using an intermediate device. E.A. Johnson described his work on capacitive touchscreens in an article published in 1965. Touchscreens were not widely used for video games until 2004. Until recently, most consumer touchscreens could only sense one point of contact at a time. This has changed with the commercialization of multi-touch technology.
  • UIMSs and Toolkits

    UIMSs and toolkits are software libraries and tools that support creating interfaces by writing code. The first User Interface Management System (UIMS) was William Newman's Reaction Handler, created at Imperial College, London (1966-67, with SRC funding). Much of the modern research has been performed at universities, for example the Garnet (1988) and Amulet (1994) projects at CMU (ARPA funded) and subArctic at Georgia Tech (1996, funded by Intel and NSF).
  • Windows

    Multiple tiled windows were demonstrated in Engelbart's NLS in 1968. Early research at Stanford on systems like COPILOT (1974) and at MIT with the EMACS text editor (1974) also demonstrated tiled windows.
  • Smart Home

    According to Li et al. (2016), there are three generations of home automation:
    First generation: wireless technology with proxy server, e.g. Zigbee automation;
    Second generation: artificial intelligence controls electrical devices, e.g. amazon echo;
    Third generation: robot buddy "who" interacts with humans, e.g. Robot Rovio, Roomba.
  • Spreadsheets

    The first spreadsheet was VisiCalc, developed by Frankston and Bricklin (1977-8) for the Apple II while they were students at MIT and the Harvard Business School. The solver was based on a dependency-directed backtracking algorithm by Sussman and Stallman at the MIT AI Lab.
  • Interface Builders

    Interface builders are interactive tools that allow interfaces composed of widgets such as buttons, menus, and scrollbars to be laid out using a mouse. The Steamer project at BBN (1979-85; ONR funding) demonstrated many of the ideas later incorporated into interface builders and was probably the first object-oriented graphics system.
  • Component Architectures

    The idea of creating interfaces by connecting separately written components was first demonstrated in the Andrew project [32] by Carnegie Mellon University's Information Technology Center (1983, funded by IBM). It is now being widely popularized by Microsoft's OLE and Apple's OpenDoc architectures.
  • Graphical User Interfaces Succeed

    Graphical user interfaces were a disruptive revolution in interaction when they finally succeeded commercially, as were earlier shifts to stored programs and to interaction based on commands, full-screen forms, and full-screen menus. Some sectors were affected well before others.
  • CHI in the Internet Era

    In 2001, the Association for Information Systems established the Special Interest Group in Human-Computer Interaction (SIGHCI). The founders defined HCI by citing 12 works by CHI researchers and made it a priority to bridge the CHI and Information Systems communities. SIGHCI's published work focuses on interface design for e-commerce, online shopping, online behavior "especially in the Internet era," and the effects of Web-based interfaces on attitudes and perceptions.
  • Autonomous Car

    An autonomous car is a vehicle that is capable of sensing its environment and navigating without human input.
    Autonomous cars can detect surroundings using a variety of techniques such as radar, lidar, GPS, odometry, and computer vision. Advanced control systems interpret sensory information to identify appropriate navigation paths, as well as obstacles and relevant signage.
  • Kinect

    Kinect (codenamed Project Natal during development) is a line of motion sensing input devices by Microsoft for Xbox 360 and Xbox One video game consoles and Windows PCs. Based around a webcam-style add-on peripheral, it enables users to control and interact with their console/computer without the need for a game controller, through a natural user interface using gestures and spoken commands.
  • Intelligent personal assistant

    An intelligent personal assistant (or simply IPA) is a software agent that can perform tasks or services for an individual. These tasks or services are based on user input, location awareness, and the ability to access information from a variety of online sources (such as weather or traffic conditions, news, stock prices, user schedules, retail prices, etc.). Examples of such agents include Apple's Siri, Google Home, and Google Now (later Google Assistant).