250,000 BCE
(Actually 3,900,000 BCE) Early Humans Use and Spread "Knapping" Through Emotional Expression
Early humans learned to shape and use stone tools for hunting and butchering meat. Because there was no common spoken language at the time, spreading this technology most likely relied on emotional expression as feedback for learners. Emotions such as excitement or frustration directed from the teacher toward the student could guide the learner into using the correct technique. (p. 10-13) -
Period: 250,000 BCE to 200,000 BCE
Large Languages Begin
Scientists estimate that the first large languages began to form during this period. With language, humans gained a way to express themselves beyond emotion and nonverbal communication alone. While not a direct affective technology, verbal communication did not eliminate emotional and nonverbal expression; it complements them, and it made the technology of today possible. (p. 14) -
Period: to
Philosophers Begin to Formalize Logic
Throughout this time span, philosophers began to think about logic and how it could be universalized, putting it into the formal shape in which it is used in mathematics and computing today. Without this early formal logic, the computational logic we use in programming languages and circuit design would not exist; it was the foundation for the creation of computers. (p. 36) -
Alan Turing Asked If Computers Could Think
Alan Turing, a foundational figure in computer science, was an early researcher of AI and of whether computers could imitate human thinking. Through his development of the Turing test, he created guidelines for judging whether a computer was as "intelligent" as a human. Turing's ideas led people to question whether computers could read or exhibit emotion, a foundational idea for affective computing. (p. 37) -
Paul Ekman Started Research Surrounding Facial Positioning of Different Emotions
Paul Ekman, a psychologist, conducted research identifying facial muscle positions and their connection to the emotions a person was feeling. This research produced and popularized FACS, the Facial Action Coding System, which catalogs facial muscle movements (called action units) whose combinations correspond nearly universally to different emotions. This work was foundational for the modern use of facial expression recognition in affective computing. (p. 54-56) -
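
As a rough illustration of the idea (a toy lookup, not Ekman's actual coding tables), a FACS-style mapping might pair sets of action units with basic emotion labels:

# Illustrative only: a toy FACS-style mapping from action-unit (AU) combinations
# to basic emotion labels. The AU numbers follow commonly cited pairings
# (e.g., AU6 + AU12 for happiness); real FACS scoring is far more detailed.
AU_TO_EMOTION = {
    frozenset({6, 12}): "happiness",       # cheek raiser + lip corner puller
    frozenset({1, 4, 15}): "sadness",      # inner brow raiser + brow lowerer + lip corner depressor
    frozenset({1, 2, 5, 26}): "surprise",  # brow raisers + upper lid raiser + jaw drop
    frozenset({4, 5, 7, 23}): "anger",     # brow lowerer + lid tighteners + lip tightener
}

def label_expression(active_aus):
    """Return the emotion whose AU set best overlaps the observed active AUs."""
    observed = frozenset(active_aus)
    best = max(AU_TO_EMOTION, key=lambda aus: len(aus & observed))
    return AU_TO_EMOTION[best] if best & observed else "neutral"

print(label_expression([6, 12]))  # -> happiness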
Moore's Law is Coined
Moore's Law states that the number of transistors on a chip doubles roughly every one to two years. It has been foundational in modern computing progress and in the push for more computational power, and it is largely responsible for the exponential growth in processing power that has given us the computing capacity for both AI and affective computing. (p. 38-39) -
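
As a back-of-the-envelope sketch of that exponential growth (assuming a two-year doubling period and an illustrative starting transistor count, not measured chip data):

# Rough sketch of Moore's Law as exponential growth. The two-year doubling
# period and the ~2,300-transistor starting point are illustrative assumptions.
def projected_transistors(year, base_year=1965, base_count=2300, doubling_years=2):
    return base_count * 2 ** ((year - base_year) / doubling_years)

for year in (1965, 1985, 2005, 2025):
    print(year, f"{projected_transistors(year):,.0f}")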
GUIs Came Into Prominence
Technology had progressed to the point where using it was no longer self-explanatory, so programmers needed to create visual cues and guides that let users operate software in a way that felt natural. GUIs account for how the user is likely to think and feel while using the software, which matters for affective computing because GUI design can make users either very frustrated or very happy. (p. 27-28) -
Clynes' Sentograph Machine is One of the First to Measure Emotion Through Physiological Traits
Clynes built a machine called the sentograph (the linked article is a continuation of his research), which measured a person's emotions from the pressure they exerted on the device. This was one of the first efforts to link emotion to measurable physiological traits, in this case the force the person applied. Using physiological data is key to modern affective computing. (p. 44) -
Picard Published Her Book About Affective Computing
Rosalind Picard, a pioneer in the field, published her first book, "Affective Computing," in which her ideas about computers recognizing and responding to emotion were first brought to the public. This was one of the first times someone in the field made an explicit connection between computers and emotions, and her work was foundational in establishing affective computing as a research area. (p. 47) -
Microsoft's Office Assistant Started Shipping with Office
In response to user frustration with its office suite, Microsoft began shipping Office with a program called the Office Assistant. This software introduced "Clippit," or "Clippy," an animated paperclip that tried to sense when a user was having trouble and offer tips. It was one of the first pieces of software that attempted to apply affective computing. (p. 51) -
The Galvactivator Was Integrated With Quake
The Galvactivator was a wearable device that measured skin conductance from a user's palm (the electrical changes caused by sweat) to estimate their emotional arousal. It was integrated with the video game Quake so that when the player showed signs of fear, the game reflected it by moving the character in a distinctive way. This was an early application of affective computing based on physiological response. (p. 58) -
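
A minimal sketch of the underlying idea, flagging arousal spikes in a skin-conductance stream (the sensor values, window size, and threshold here are hypothetical, not the Galvactivator's actual signal processing):

# Hypothetical sketch: flag arousal spikes in a skin-conductance stream by
# comparing each reading to a rolling baseline. Values and threshold are
# illustrative, not the Galvactivator's actual processing.
from collections import deque

def detect_arousal(readings, window=20, threshold=1.5):
    """Yield (index, value) for readings well above the recent rolling average."""
    baseline = deque(maxlen=window)
    for i, value in enumerate(readings):
        if len(baseline) == window and value > threshold * (sum(baseline) / window):
            yield i, value
        baseline.append(value)

# Example: a calm signal around 2.0 microsiemens with a sudden spike.
signal = [2.0] * 30 + [4.5, 5.0, 4.8] + [2.1] * 10
print(list(detect_arousal(signal)))  # spikes detected near indices 30-32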
Sony Releases Aibo Robot Pet
Sony released a robotic dog that appeared to have emotions and could be influenced by the person caring for it. Aibo could be petted, play with a ball, and do much of what a real dog could do. Although AI was not especially powerful at the time, the dog seemed to exhibit emotion, and even though these were not human emotions, it represented real innovation in affective computing. Source -
Studies Conducted on Conveying Tone in Online Written Communication
During this time, studies examined whether people could correctly perceive the intended tone of an email. The studies found there was roughly a 50/50 chance that a reader could correctly identify sarcasm in an email. This shows that the need for affective technology goes back much further than the recent jump in AI; since the inception of computing and online communication, we have needed ways to communicate our feelings through technology. (p. 26) -
Affectiva Was Launched, Later Turning Into Commercial Emotion Assessment for Advertising
Affectiva was launched by Picard and colleagues, originally to use affective technology to help people who have difficulty noticing facial expressions and other outward signs of emotion. It eventually turned into a commercial application of attention and emotion monitoring, letting companies run advertising research on the emotions viewers felt while watching an ad. (p. 66) -
First Commercial Social Robot
In Japan, people could buy Pepper, the first commercial/retail social robot designed to be a conversational companion for consumers. The robots sold out only one minute after becoming available (Satoshi). SoftBank, the creator, also provides an API that lets people customize their Pepper robots with more advanced code (Satoshi). -
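
A minimal sketch of what such customization can look like, assuming the NAOqi Python SDK that SoftBank's robots expose (the robot's address and the spoken line are illustrative assumptions):

# Minimal sketch using the NAOqi Python SDK exposed by SoftBank robots.
# The IP address and the spoken greeting are illustrative assumptions.
from naoqi import ALProxy

PEPPER_IP = "192.168.1.42"  # hypothetical address of a Pepper on the local network
PORT = 9559                 # NAOqi's default port

tts = ALProxy("ALTextToSpeech", PEPPER_IP, PORT)
motion = ALProxy("ALMotion", PEPPER_IP, PORT)

motion.wakeUp()  # stiffen joints and stand ready
tts.say("Hello! How are you feeling today?")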
Hume AI Makes an Emotionally Intelligent AI Chatbot
Hume AI created a system called EVI, or Empathic Voice Interface, which analyzes how the user speaks to infer their emotions and responds accordingly. The system provides an SDK that other developers can tap into and allows for real-time responses from the AI. This is one of the most prominent modern applications of affective computing and could provide a glimpse into what the future of empathetic AI looks like. Source -
First Fully Functional Human-Like Personal Assistant
I believe that at this point in the future, people will have fully functioning human-like personal assistants. The fact that the assistant is an AI will not be evident unless you knew beforehand: both its movement and its vocal interaction will be just like a human's, and it will understand how to address human emotions. I believe these assistants will be commonplace in business settings and could replace human assistants. -
AI Humanoid Robots Become Independent
I believe that at this point in the future, humanoid robots will be an integrated part of our society, existing much as humans do. Once AI becomes advanced enough to give the convincing illusion of having and understanding emotions, I believe there will be a push for robot independence and for robots to exist as more than tools for humans. By this point, these robots will shop, go to work, and go to school just as humans do.