Monthly Archives: April 2017

What Intelligent Machines Need to Learn From the Neocortex

Computers have transformed work and play, transportation and medicine, entertainment and sports. Yet for all their power, these machines still cannot perform simple tasks that a child can do, such as navigating an unknown room or using a pencil.

The solution is finally coming within reach. It will emerge from the intersection of two major pursuits: the reverse engineering of the brain and the burgeoning field of artificial intelligence. Over the next 20 years, these two pursuits will combine to usher in a new epoch of intelligent machines.

Why do we need to know how the brain works to build intelligent machines? Although machine-learning techniques such as deep neural networks have recently made impressive gains, they are still a world away from being intelligent, from being able to understand and act in the world the way that we do. The only example of intelligence, of the ability to learn from the world, to plan and to execute, is the brain. Therefore, we must understand the principles underlying human intelligence and use them to guide us in the development of truly intelligent machines.

At my company, Numenta, in Redwood City, Calif., we study the neocortex, the brain’s largest component and the one most responsible for intelligence. Our goal is to understand how it works and to identify the underlying principles of human cognition. In recent years, we have made significant strides in our work, and we have identified several features of biological intelligence that we believe will need to be incorporated into future thinking machines.

To understand these principles, we must start with some basic biology. The human brain is similar to a reptile’s brain. Each has a spinal cord, which controls reflex behaviors; a brain stem, which controls autonomic behaviors such as breathing and heart rate; and a midbrain, which controls emotions and basic behaviors. But humans, indeed all mammals, have something reptiles don’t: a neocortex.

The neocortex is a deeply folded sheet some 2 millimeters thick that, if laid out flat, would be about as big as a large dinner napkin. In humans, it takes up about 75 percent of the brain’s volume. This is the part that makes us smart.

At birth, the neocortex knows almost nothing; it learns through experience. Everything we learn about the world—driving a car, operating a coffee machine, and the thousands of other things we interact with every day—is stored in the neocortex. It learns what these objects are, where they are in the world, and how they behave. The neocortex also generates motor commands, so when you make a meal or write software it is the neocortex controlling these behaviors. Language, too, is created and understood by the neocortex.

The neocortex, like all of the brain and nervous system, is made up of cells called neurons. Thus, to understand how the brain works, you need to start with the neuron. Your neocortex has about 30 billion of them. A typical neuron has a single tail-like axon and several treelike extensions called dendrites. If you think of the neuron as a kind of signaling system, the axon is the transmitter and the dendrites are the receivers. Along the branches of the dendrites lie some 5,000 to 10,000 synapses, each of which connects to counterparts on thousands of other neurons. There are thus more than 100 trillion synaptic connections.
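
As a rough mental model, this anatomy maps onto a simple data structure. Here is a minimal sketch (illustrative Python; the names and fields are my own, not from any neuroscience library):

```python
from dataclasses import dataclass, field

# Illustrative sketch only: the neuron as a signaling unit, as described above.

@dataclass
class Synapse:
    source: int          # index of the presynaptic (sending) neuron
    permanence: float    # connection strength, adjustable by learning

@dataclass
class Neuron:
    axon_targets: list[int] = field(default_factory=list)        # transmitter side
    dendrites: list[list[Synapse]] = field(default_factory=list) # receiver branches

    def synapse_count(self) -> int:
        # A typical cortical neuron carries some 5,000 to 10,000 synapses.
        return sum(len(branch) for branch in self.dendrites)
```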

Your experience of the world around you—recognizing a friend’s face, enjoying a piece of music, holding a bar of soap in your hand—is the result of input from your eyes, ears, and other sensory organs traveling to your neocortex and causing groups of neurons to fire. When a neuron fires, an electrochemical spike travels down the neuron’s axon and crosses synapses to other neurons. If a receiving neuron gets enough input, it might then fire in response and activate other neurons. Of the 30 billion neurons in the neocortex, 1 or 2 percent are firing at any given instant, which means that many millions of neurons will be active at any point in time. The set of active neurons changes as you move and interact with the world. Your perception of the world, what you might consider your conscious experience, is determined by the constantly changing pattern of active neurons.

The neocortex stores these patterns primarily by forming new synapses. This storage enables you to recognize faces and places when you see them again, and also recall them from your memory. For example, when you think of your friend’s face, a pattern of neural firing occurs in the neocortex that is similar to the one that occurs when you are actually seeing your friend’s face.

Remarkably, the neocortex is both complex and simple at the same time. It is complex because it is divided into dozens of regions, each responsible for different cognitive functions. Within each region there are multiple layers of neurons, as well as dozens of neuron types, and the neurons are connected in intricate patterns.

The neocortex is also simple because the details in every region are nearly identical. Through evolution, a single algorithm developed that can be applied to all the things a neocortex does. The existence of such a universal algorithm is exciting because if we can figure out what that algorithm is, we can get at the heart of what it means to be intelligent, and incorporate that knowledge into future machines.

But isn’t that what AI is already doing? Isn’t most of AI built on “neural networks” similar to those in the brain? Not really. While it is true that today’s AI techniques reference neuroscience, they use an overly simplified neuron model, one that omits essential features of real neurons, and they are connected in ways that do not reflect the reality of our brain’s complex architecture. These differences are many, and they matter. They are why AI today may be good at labeling images or recognizing spoken words but is not able to reason, plan, and act in creative ways.

Our recent advances in understanding how the neocortex works give us insights into how future thinking machines will work. I am going to describe three aspects of biological intelligence that are essential, but largely missing from today’s AI. They are learning by rewiring, sparse representations, and embodiment, which refers to the use of movement to learn about the world.

Learning by rewiring: Brains exhibit some remarkable learning properties. First, we learn quickly. A few glances or a few touches with the fingers are often sufficient to learn something new. Second, learning is incremental. We can learn something new without retraining the entire brain or forgetting what we learned before. Third, brains learn continuously. As we move around the world, planning and acting, we never stop learning. Fast, incremental, and continuous learning are essential ingredients that enable intelligent systems to adapt to a changing world. The neuron is responsible for learning, and the complexities of real neurons are what make it a powerful learning machine.

In recent years, neuroscientists have learned some remarkable things about the dendrite. One is that each of its branches acts as a set of pattern detectors. It turns out that just 15 to 20 active synapses on a branch are sufficient to recognize a pattern of activity in a large population of neurons. Therefore, a single neuron can recognize hundreds of distinct patterns. Some of these recognized patterns cause the neuron to become active, but others change the internal state of the cell and act as a prediction of future activity.
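
The arithmetic of that recognition is simple to sketch. The toy Python below treats each dendritic branch as a set of synapses onto other neurons and applies the 15-synapse threshold from the text; everything else about it is an illustrative assumption:

```python
# A dendritic branch as a threshold pattern detector: it "recognizes" a
# pattern when enough of its synapses land on currently active neurons.

ACTIVATION_THRESHOLD = 15  # ~15 to 20 active synapses suffice, per the article

def branch_recognizes(branch_synapses: set[int], active_neurons: set[int]) -> bool:
    """True if enough of this branch's synapses connect to active cells."""
    return len(branch_synapses & active_neurons) >= ACTIVATION_THRESHOLD

def recognized_patterns(dendrites: list[set[int]], active_neurons: set[int]) -> list[int]:
    """Each branch is an independent detector, so one neuron can
    recognize hundreds of distinct patterns."""
    return [i for i, branch in enumerate(dendrites)
            if branch_recognizes(branch, active_neurons)]
```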

Neuroscientists used to believe that learning occurred solely by modifying the effectiveness of existing synapses so that when an input arrived at a synapse it would either be more likely or less likely to make the cell fire. However, we now know that most learning results from growing new synapses between cells—by “rewiring” the brain. Up to 40 percent of the synapses on a neuron are replaced with new ones every day. New synapses result in new patterns of connections among neurons, and therefore new memories. Because the branches of a dendrite are mostly independent, when a neuron learns to recognize a new pattern on one of its dendrites, it doesn’t interfere with what the neuron has already learned on other dendrites.
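
Continuing the same toy model, learning by rewiring can be sketched as growing a fresh branch of synapses onto the currently active cells, which leaves every existing branch, and hence every old memory, untouched. The branch size here is an assumption:

```python
import random

# "Learning by rewiring" in the toy model: store a new pattern by growing
# a new dendritic branch; no existing branch (old memory) is modified.

SYNAPSES_PER_BRANCH = 20  # illustrative assumption

def learn_by_rewiring(dendrites: list[set[int]], active_neurons: set[int]) -> None:
    """Grow a new branch wired to a sample of the currently active cells."""
    new_branch = set(random.sample(sorted(active_neurons), SYNAPSES_PER_BRANCH))
    dendrites.append(new_branch)   # existing branches stay untouched
```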

This is why we can learn new things without interfering with old memories and why we don’t have to retrain the brain every time we learn something new. Today’s neural networks don’t have these properties.


Intelligent machines don’t have to model all the complexity of biological neurons, but the capabilities enabled by dendrites and learning by rewiring are essential. These capabilities will need to be in future AI systems.

Sparse representations: Brains and computers represent information quite differently. In a computer’s memory, all combinations of 1s and 0s are potentially valid, so if you change one bit it will typically result in an entirely different meaning, in much the same way that changing the letter i to a in the word fire results in an unrelated word, fare. Such a representation is therefore brittle.

Brains, on the other hand, use what’s called sparse distributed representations, or SDRs. They’re called sparse because relatively few neurons are fully active at any given time. Which neurons are active changes moment to moment as you move and think, but the percentage is always small. If we think of each neuron as a bit, then to represent a piece of information the brain uses thousands of bits (many more than the 8 to 64 used in computers), but only a small percentage of the bits are 1 at any time; the rest are 0.

Let’s say you want to represent the concept of “cat” using an SDR. You might use 10,000 neurons of which 100 are active. Each of the active neurons represents some aspect of a cat, such as “pet,” or “furry,” or “clawed.” If a few neurons die, or a few extra neurons become active, the new SDR will still be a good representation of “cat” because most of the active neurons are still the same. SDRs are thus not brittle but inherently robust to errors and noise. When we build silicon versions of the brain, they will be intrinsically fault tolerant.
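
That robustness claim is easy to check numerically. A minimal sketch, assuming the 10,000-neuron, 100-active-bit figures above:

```python
import random

# Perturb an SDR with a few dead and a few spurious neurons; the result
# still overlaps almost completely with the original representation.

N, ACTIVE = 10_000, 100

def random_sdr() -> set[int]:
    return set(random.sample(range(N), ACTIVE))

def perturb(sdr: set[int], flips: int) -> set[int]:
    """Silence a few active neurons and activate a few extras."""
    kept = set(random.sample(sorted(sdr), ACTIVE - flips))
    extras = set(random.sample(range(N), flips))
    return kept | extras

cat = random_sdr()
noisy_cat = perturb(cat, flips=5)
print(len(cat & noisy_cat))   # ~95 of 100 bits still match: still clearly "cat"
```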

There are two properties of SDRs I want to mention. One, the overlap property, makes it easy to see how two things are similar or different in meaning. Imagine you have one SDR representing “cat” and another representing “bird.” Both the “cat” and “bird” SDR would have the same active neurons representing “pet” and “clawed,” but they wouldn’t share the neuron for “furry.” This example is simplified, but the overlap property is important because it makes it immediately clear to the brain how the two objects are similar or different. This property confers the power to generalize, a capability lacking in computers.

The second, the union property, allows the brain to represent multiple ideas simultaneously. Imagine I see an animal moving in the bushes, but I get only a glimpse, so I can’t be sure of what I saw. It might be a cat, a dog, or a monkey. Because SDRs are sparse, a population of neurons can activate all three SDRs at the same time and not get confused, because the SDRs will not interfere with one another. The ability of neurons to constantly form unions of SDRs makes them very good at handling uncertainty.
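
Both properties reduce to set operations, which makes them easy to illustrate. In the toy sketch below, named attributes stand in for individual neurons; note that in a real SDR it is the sparsity that lets unions coexist without confusion, which tiny sets like these cannot show:

```python
# Toy illustration of the overlap and union properties of SDRs.

cat  = {"pet", "furry", "clawed", "four-legged"}
bird = {"pet", "clawed", "feathered", "winged"}
dog  = {"pet", "furry", "four-legged", "barks"}

# Overlap property: shared active bits expose similarity in meaning.
print(cat & bird)          # {'pet', 'clawed'}

# Union property: several candidate SDRs can be active at once.
glimpse = cat | dog | bird   # it might be any of the three
print(cat <= glimpse)        # True: "cat" remains a consistent interpretation
```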

Such properties of SDRs are fundamental to understanding, thinking, and planning in the brain. We can’t build intelligent machines without embracing SDRs.

Embodiment: The neocortex receives input from the sensory organs. Every time we move our eyes, limbs, or body, the sensory inputs change. This constantly changing input is the primary mechanism the brain uses to learn about the world. Imagine I present you with an object you have never seen before. For the sake of discussion, let’s say it’s a stapler. How would you learn about the new object? You might walk around the stapler, looking at it from different angles. You might pick it up, run your fingers over it, and rotate it in your hands. You then might push and pull on it to see how it behaves. Through this interactive process, you learn the shape of the stapler, what it feels like, what it looks like, and how it behaves. You make a movement, see how the inputs change, make another movement, see how the inputs change again, and so on. Learning through movement is the brain’s primary means for learning. It will be a central component of all truly intelligent systems.

This is not to say that an intelligent machine needs a physical body, only that it can change what it senses by moving. For example, a virtual AI machine could “move” through the Web by following links and opening files. It could learn the structure of a virtual world through virtual movements, analogous to what we do when walking through a building.

This brings us to an important discovery we made at Numenta last year. In the neocortex, sensory input is processed in a hierarchy of regions. As sensory input passes from one level of the hierarchy to another, more complex features are extracted, until at some point an object can be recognized. Deep-learning networks also use hierarchies, but they often require 100 levels of processing to recognize an image, whereas the neocortex achieves the same result with just four levels. Deep-learning networks also require millions of training patterns, while the neocortex can learn new objects with just a few movements and sensations. The brain is doing something fundamentally different than a typical artificial neural network, but what?

Hermann von Helmholtz, the 19th-century German scientist, was one of the first people to suggest an answer. He observed that, although our eyes move three to four times a second, our visual perception is stable. He deduced that the brain must take account of how the eyes are moving; otherwise it would appear as if the world were wildly jumping about. Similarly, as you touch something, it would be confusing if the brain processed only the tactile input and didn’t know how your fingers were moving at the same time. This principle of combining movement with changing sensations is called sensorimotor integration. How and where sensorimotor integration occurs in the brain is mostly a mystery.

Our discovery is that sensorimotor integration occurs in every region of the neocortex. It is not a separate step but an integral part of all sensory processing. Sensorimotor integration is a key part of the “intelligence algorithm” of the neocortex. We at Numenta have a theory and a model of exactly how neurons do this, one that maps well onto the complex anatomy seen in every neocortical region.

What are the implications of this discovery for machine intelligence? Consider two types of files you might find on a computer. One is an image file produced by a camera, and the other is a computer-aided design file produced by a program such as Autodesk. An image file represents a two-dimensional array of visual features. A CAD file also represents a set of features, but each feature is assigned a location in three-dimensional space. A CAD file models complete objects, not how the object appears from one perspective. With a CAD file, you can predict what an object will look like from any direction and determine how an object will interact with other 3D objects. You can’t do these things with an image file. Our discovery is that every region of the neocortex learns 3D models of objects much like a CAD program. Every time your body moves, the neocortex takes the current motor command, converts it into a location in the object’s reference frame, and then combines the location with the sensory input to learn 3D models of the world.
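
The CAD analogy suggests a loose sketch: an object is learned as a set of (location, feature) pairs in the object’s own reference frame, built up one movement at a time. The representation and update rule below are illustrative assumptions, not Numenta’s actual model:

```python
# Learn an object as (location -> feature) in its own reference frame.

stapler_model: dict[tuple[int, int, int], str] = {}

def sense_then_move(location: tuple[int, int, int], feature: str,
                    motor_command: tuple[int, int, int]) -> tuple[int, int, int]:
    """Record what was sensed here, then apply the movement to get the
    next location in the object's reference frame."""
    stapler_model[location] = feature
    x, y, z = location
    dx, dy, dz = motor_command
    return (x + dx, y + dy, z + dz)

loc = (0, 0, 0)
loc = sense_then_move(loc, "flat metal top", (1, 0, 0))
loc = sense_then_move(loc, "hinged end", (0, 0, -1))
# With enough movements, the model predicts the sensation at any location,
# from any direction -- like a CAD file, not a single snapshot.
```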

In hindsight, this observation makes sense. Intelligent systems need to learn multidimensional models of the world. Sensorimotor integration doesn’t occur in a few places in the brain; it is a core principle of brain function, part of the intelligence algorithm. Intelligent machines also must work this way.

These three fundamental attributes of the neocortex—learning by rewiring, sparse distributed representations, and sensorimotor integration—will be cornerstones of machine intelligence. Future thinking machines can ignore many aspects of biology, but not these three. Undoubtedly, there will be other discoveries about neurobiology that reveal other aspects of cognition that will need to be incorporated into such machines in the future, but we can get started with what we know today.

From the earliest days of AI, critics dismissed the idea of trying to emulate human brains, often with the refrain that “airplanes don’t flap their wings.” In reality, Wilbur and Orville Wright studied birds in detail. To create lift, they studied bird-wing shapes and tested them in a wind tunnel. For propulsion, they went with a nonavian solution: propeller and motor. To control flight, they observed that birds twist their wings to bank and use their tails to maintain altitude during the turn. So that’s what they did, too. Airplanes still use this method today, although we twist only the tail edge of the wings. In short, the Wright brothers studied birds and then chose which elements of bird flight were essential for human flight and which could be ignored. That’s what we’ll do to build thinking machines.

As I consider the future, I worry that we are not aiming high enough. While it is exciting for today’s computers to classify images and recognize spoken queries, we are not close to building truly intelligent machines. I believe it is vitally important that we do so. The future success and even survival of humanity may depend on it. For example, if we are ever to inhabit other planets, we will need machines to act on our behalf, travel through space, build structures, mine resources, and independently solve complex problems in environments where humans cannot survive. Here on Earth, we face challenges related to disease, climate, and energy. Intelligent machines can help. For example, it should be possible to design intelligent machines that sense and act at the molecular scale. These machines would think about protein folding and gene expression in the same way you and I think about computers and staplers. They could think and act a million times as fast as a human. Such machines could cure diseases and keep our world habitable.

In the 1940s, the pioneers of the computing age sensed that computing was going to be big and beneficial, and that it would likely transform human society. But they could not predict exactly how computers would change our lives. Similarly, we can be confident that truly intelligent machines will transform our world for the better, even if today we can’t predict exactly how. In 20 years, we will look back and see this as the time when advances in brain theory and machine learning started the era of true machine intelligence.

Why Rat-Brained Robots Are So Good at Exploring

If you take a common brown rat and drop it into a lab maze or a subway tunnel, it will immediately begin to explore its surroundings, sniffing around the edges, brushing its whiskers against surfaces, peering around corners and obstacles. After a while, it will return to where it started, and from then on, it will treat the explored terrain as familiar.

Roboticists have long dreamed of giving their creations similar navigation skills. To be useful in our environments, robots must be able to find their way around on their own. Some are already learning to do that in homes, offices, warehouses, hospitals, hotels, and, in the case of self-driving cars, entire cities. Despite the progress, though, these robotic platforms still struggle to operate reliably under even mildly challenging conditions. Self-driving vehicles, for example, may come equipped with sophisticated sensors and detailed maps of the road ahead, and yet human drivers still have to take control in heavy rain or snow, or at night.

The lowly brown rat, by contrast, is a nimble navigator that has no problem finding its way around, under, over, and through the toughest spaces. When a rat explores an unfamiliar territory, specialized neurons in its 2-gram brain fire, or spike, in response to landmarks or boundaries. Other neurons spike at regular distances—once every 20 centimeters, every meter, and so on—creating a kind of mental representation of space. Yet other neurons act like an internal compass, recording the direction in which the animal’s head is turned. Taken together, this neural activity allows the rat to remember where it’s been and how it got there. Whenever it follows the same path, the spikes strengthen, making the rat’s navigation more robust.

So why can’t a robot be more like a rat?

The answer is, it can. At the Queensland University of Technology (QUT), in Brisbane, Australia, Michael Milford and his collaborators have spent the last 14 years honing a robot navigation system modeled on the brains of rats. This biologically inspired approach, they hope, could help robots navigate dynamic environments without requiring advanced, costly sensors and computationally intensive algorithms.

An earlier version of their system allowed an indoor package-delivery bot to operate autonomously for two weeks in a lab. During that period, it made more than 1,100 mock deliveries, traveled a total of 40 kilometers, and recharged itself 23 times. Another version successfully mapped an entire suburb of Brisbane, using only the imagery captured by the camera on a MacBook. Now Milford’s group is translating its rat-brain algorithms into a rugged navigation system for the heavy-equipment maker Caterpillar, which plans to deploy it on a fleet of underground mining vehicles.

Milford, who’s 35 and looks about 10 years younger, began investigating brain-based navigation in 2003, when he was a Ph.D. student at the University of Queensland working with roboticist Gordon Wyeth, who’s now dean of science and engineering at QUT.

At the time, one of the big pushes in robotics was the “kidnapped robot” problem: If you take a robot and move it somewhere else, can it figure out where it is? One way to solve the problem is SLAM, which stands for simultaneous localization and mapping. While running a SLAM algorithm, a robot can explore strange terrain, building a map of its surroundings while at the same time positioning, or localizing, itself within that map.

Wyeth had long been interested in brain-inspired computing, starting with work on neural networks in the late 1980s. And so he and Milford decided to work on a version of SLAM that took its cues from the rat’s neural circuitry. They called it RatSLAM.

There already were numerous flavors of SLAM, and today they number in the dozens, each with its own advantages and drawbacks. What they all have in common is that they rely on two separate streams of data. One relates to what the environment looks like, and robots gather this kind of data using sensors as varied as sonars, cameras, and laser scanners. The second stream concerns the robot itself, or more specifically, its speed and orientation; robots derive that data from sensors like rotary encoders on their wheels or an inertial measurement unit (IMU) on their bodies. A SLAM algorithm looks at the environmental data and tries to identify notable landmarks, adding these to its map. As the robot moves, it monitors its speed and direction and looks for those landmarks; if the robot recognizes a landmark, it uses the landmark’s position to refine its own location on the map.
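
The skeleton of that loop is easy to sketch. The toy Python below does dead reckoning from the self-motion stream and re-localizes whenever a mapped landmark is re-observed; it is deliberately minimal, where real SLAM systems are probabilistic:

```python
import math

# Two-stream SLAM in miniature: dead reckoning plus landmark correction.

landmark_map: dict[str, tuple[float, float]] = {}   # landmark id -> position

def slam_step(pose, speed, heading, observations):
    x, y = pose
    # Stream 1: self-motion (wheel encoders, IMU) -> dead-reckoned pose.
    x += speed * math.cos(heading)
    y += speed * math.sin(heading)
    # Stream 2: environment sensing -> landmarks seen at offsets (dx, dy).
    for name, (dx, dy) in observations.items():
        if name not in landmark_map:
            landmark_map[name] = (x + dx, y + dy)    # extend the map
        else:
            lx, ly = landmark_map[name]
            x, y = lx - dx, ly - dy                  # re-localize from the map
    return (x, y)

pose = (0.0, 0.0)
pose = slam_step(pose, 1.0, 0.0, {"door": (2.0, 0.0)})
pose = slam_step(pose, 1.0, 0.0, {"door": (1.0, 0.0)})
print(pose)   # re-seeing "door" corrects accumulated odometry drift
```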

But whereas most implementations of SLAM aim for highly detailed, static maps, Milford and Wyeth were more interested in how to navigate through an environment that’s in constant flux. Their aim wasn’t to create maps built with costly lidars and high-powered computers—they wanted their system to make sense of space the way animals do.

“Rats don’t build maps,” Wyeth says. “They have other ways of remembering where they are.” Those ways include neurons called place cells and head-direction cells, which respectively let the rat identify landmarks and gauge its direction. Like other neurons, these cells are densely interconnected and work by adjusting their spiking patterns in response to different stimuli. To mimic this structure and behavior in software, Milford adopted a type of artificial neural network called an attractor network. These neural nets consist of hundreds to thousands of interconnected nodes that, like groups of neurons, respond to an input by producing a specific spiking pattern, known as an attractor state. Computational neuroscientists use attractor networks to study neurons associated with memory and motor behavior. Milford and Wyeth wanted to use them to power RatSLAM.
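
A ring attractor, one of the simplest such networks, is compact enough to sketch. In the toy model below, nearby nodes excite each other and distant nodes inhibit each other, so a brief noisy cue settles into a single stable bump of activity that persists with no further input, much like a remembered head direction. All parameter values are illustrative:

```python
import numpy as np

# Minimal ring attractor: local excitation, global inhibition.

N = 100
idx = np.arange(N)
gap = np.abs(idx[:, None] - idx[None, :])
ring_dist = np.minimum(gap, N - gap)
W = np.where(ring_dist <= 7, 1.0, -1.0)   # neighbors excite, the rest inhibit

rng = np.random.default_rng(0)
activity = rng.random(N) * 0.1            # background noise
activity[40:48] += 0.5                    # a brief directional cue

for _ in range(30):                       # recurrent dynamics only, no more input
    activity = np.clip(W @ activity, 0.0, 1.0)

print(np.argmax(activity))                # a stable bump remains near the cued nodes
```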

They spent months working on the software, and then they loaded it into a Pioneer robot, a mobile platform popular among roboticists. Their rat-brained bot was alive.

But it was a failure. When they let it run in a 2-by-2-meter arena, Milford says, “it got lost even in that simple environment.”

Milford and Wyeth realized that RatSLAM didn’t have enough information with which to reduce errors as it made its decisions. Like other SLAM algorithms, it doesn’t try to make exact, definite calculations about where things are on the map it’s generating; instead, it relies on approximations and probabilities as a way of incorporating uncertainties—conflicting sensor readings, for example—that inevitably crop up. If you don’t take that into account, your robot ends up lost.

That seemed to be the problem with RatSLAM. In some cases, the robot would recognize a landmark and be able to refine its position, but other times the data was too ambiguous. Before long, the accrued error grew bigger than 2 meters: the robot thought it was outside the arena!

In other words, their rat-brain model was too crude. It needed better neural circuitry to be able to abstract more information about the world.

“So we engineered a new type of neuron, which we called a ‘pose’ cell,” Milford says. The pose cell didn’t just tell the robot its location or its orientation; it encoded both at the same time. Now, when the robot identified a landmark it had seen before, it could more precisely encode its place on the map and keep errors in check.
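
The idea can be sketched as a single population indexed jointly by place and heading. The arena size and bin widths below are illustrative assumptions, not the actual RatSLAM parameters:

```python
# Pose cells: one population jointly encodes (x, y) place AND heading theta.

X_BINS, Y_BINS, TH_BINS = 20, 20, 36       # 2 m arena at 10 cm; 10-degree headings

def pose_cell(x_m: float, y_m: float, theta_deg: float) -> tuple[int, int, int]:
    """Map a continuous pose to the single (x, y, theta) cell that fires."""
    return (int(x_m * 10) % X_BINS,
            int(y_m * 10) % Y_BINS,
            int(theta_deg / 10) % TH_BINS)

print(pose_cell(1.25, 0.40, 95.0))         # (12, 4, 9): place and heading together
```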

Again, Milford placed the robot inside the 2-by-2-meter arena. “Suddenly, our robot could navigate quite well,” he recalls.

Interestingly, not long after the researchers devised these artificial cells, neuroscientists in Norway announced the discovery of grid cells, which are neurons whose spiking activity forms regular geometric patterns and tells the animal its relative position within a certain area. [For more on the neuroscience of rats, see “AI Designers Find Inspiration in Rat Brains.”]

“Our pose cells weren’t exactly grid cells, but they had similar features,” Milford says. “That was rather gratifying.”

The robot tests moved to bigger arenas with greater complexity. “We did a whole floor, then multiple floors in the building,” Wyeth recalls. “Then I told Michael, ‘Let’s do a whole suburb.’ I thought he would kill me.”

Milford loaded the RatSLAM software into a MacBook and taped it to the roof of his red 1994 Mazda Astina. To get a stream of data about the environment, he used the laptop’s camera, setting it to snap a photo of the street ahead of the car several times per second. To get a stream of data about the robot itself—in this case, his car—he found a creative solution. Instead of attaching encoders to the wheels or using an IMU or GPS, he used simple image-processing techniques. By tracking and comparing pixels on sequences of photos from the MacBook, his SLAM algorithm could calculate the vehicle’s speed as well as direction changes.
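
That pixel-comparison trick can be sketched as profile matching. The toy code below collapses each frame to a column-intensity profile and compares successive profiles: the best-aligning horizontal shift approximates rotation, and the residual difference serves as a crude speed signal. It mirrors the flavor of the approach, not the actual implementation:

```python
import numpy as np

# Camera-only odometry sketch: estimate rotation and a speed proxy by
# aligning column-intensity profiles of successive grayscale frames.

def column_profile(frame: np.ndarray) -> np.ndarray:
    """One mean intensity per image column (frame is rows x cols)."""
    return frame.mean(axis=0)

def estimate_motion(prev: np.ndarray, curr: np.ndarray, max_shift: int = 20):
    # Assumes frame width > 2 * max_shift.
    p, c = column_profile(prev), column_profile(curr)
    shifts = range(-max_shift, max_shift + 1)
    errors = [np.abs(p[max_shift:-max_shift] -
                     np.roll(c, s)[max_shift:-max_shift]).mean() for s in shifts]
    rotation = int(np.argmin(errors)) - max_shift   # best-aligning pixel shift
    speed_proxy = float(min(errors))                # residual change ~ speed
    return rotation, speed_proxy
```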

Milford drove for about 2 hours through the streets of the Brisbane suburb of St. Lucia, covering 66 kilometers. The result wasn’t a precise, to-scale map, but it accurately represented the topology of the roads and could pinpoint exactly where the car was at any given moment. RatSLAM worked.

“It immediately drew attention and was widely discussed because it was very different from what other roboticists were doing,” says David Wettergreen, a roboticist at Carnegie Mellon University, in Pittsburgh, who specializes in autonomous robots for planetary exploration. Indeed, it’s still considered one of the most notable examples of brain-inspired robotics.

But though RatSLAM created a stir, it didn’t set off a wave of research based on those same principles. And when Milford and Wyeth approached companies about commercializing their system, they found many keen to hear their pitch but ultimately no takers. “A colleague told me we should have called it ‘NeuroSLAM,’” Wyeth says. “People have bad associations with rats.”

That’s why Milford is excited about the two-year project with Caterpillar, which began in March. “I’ve always wanted to create systems that had real-world uses,” he says. “It took a lot longer than I expected for that to happen.”

Apple and Tesla Show Big Silicon Valley Headcount Gains Since 2016; Intel and eBay Shed Staff

The Silicon Valley Business Journal annually publishes a list of the tech companies that are the biggest local employers. In recent years, Apple and Google (now Alphabet) have vied for the number one spot, with Cisco holding a lock on number three. That hasn’t changed. But there have been wild swings in headcount among those on the list.

Looking at the 20 largest tech employers in Silicon Valley, the overall workforce as reported by the Business Journal ranges from 25,000 at Apple to 2,789 at Symantec. But what a difference a year makes. Since the Business Journal’s 2016 report, Apple hired 5,000, Tesla Motors hired 3,471 (for a total local workforce of 10,000), Facebook hired 2,586 (for a total of 9,385), and Gilead Sciences hired 1,719 (for a total of 6,949).

Among those companies shedding staff, eBay led the list, losing 3,222 employees (for a total of 2,978), followed by Intel (minus 3,000 for a total of 7,801), and Yahoo (minus 193 for a total of 3,800).

Overall, headcount gains among the top 20 far exceeded losses, with a total of 16,604 new employees at those tech firms with a growing Silicon Valley presence, compared with 3,299 fewer employees among those companies making cutbacks.

(Note: Western Digital was on the list at number 18 this year, and not in previous years, having consolidated operations to its San Jose office. The Business Journal reported the company had 3,000 local employees in 2017; it did not make the 2016 list, and the consolidation makes comparing annual totals complicated, so I didn’t include it in this discussion.)

Wouldn’t You Like Alexa Better if It Knew When It’s Annoying You?

What could your computer, phone, or other gadget do differently if it knew how you were feeling?

Rana el Kaliouby, founder and CEO of Affectiva, is considering the possibilities of such a world. Speaking at the Computer History Museum last week, el Kaliouby said that she has been working to teach computers to read human faces since 2000, when she was a Ph.D. student at Cambridge University.

“I remember being stressed,” she says. “I had a paper deadline, and ‘Clippy’ [that’s Microsoft’s ill-fated computer assistant] would pop up and do a little twirl and say ‘It looks like you are writing a letter.’ I would think, ‘No I’m not!’”

(“You may,” Computer History Museum CEO John Hollar interjected, “be one of the few advanced scientists inspired by Clippy.”)

That was a piece of what led her to think about making computers more intelligent. Well, that, plus the fact that she was homesick. And the realization that, because she was spending more time with her computer than any human being, she really wanted her computer to understand her better.

Since then, she’s been using machine learning, and more recently deep learning, to teach computers to read faces, spinning Affectiva out of the MIT Media Lab in 2009 to commercialize her work. The company’s early customers are not exactly changing the world—they are mostly advertisers looking to better craft their messages. But that, she says, is just the beginning. Soon, she says, “all of our devices will have emotional intelligence”—not just our phones, but “our refrigerators, our cars.”

Early on, el Kaliouby focused on building smart tools for individuals with autism. She still thinks emotional intelligence technology—or EI—will be a huge boon to this community, potentially providing a sort of emotional hearing aid.

It’ll also be a mental healthcare aid, el Kaliouby predicts. She sees smart phones with EI as potentially able to regularly check a person’s mental state, providing early warning of depression, anxiety, or other problems. “People check their phones 15 times an hour. That’s a chance to understand that you are deviating from your baseline.”

Cars, she said, will need to have emotional intelligence as they transition to being fully automated; in the interim period, they will sometimes need to hand control back to a human driver, and need to know if the driver is ready to take control.

Smart assistants like Siri and Alexa, she says, “need to know when [they] gave you the wrong answer and you are annoyed, and say ‘I’m sorry.’”

Online education desperately needs emotional intelligence, she indicated, to give it a sense of when students are confused or engaged or frustrated or bored.

And the killer app? It just might be dating. “We have worked with teenagers who just want to have a girlfriend, but couldn’t tell if girls were interested in them,” el Kaliouby says. A little computer help reading their expressions could help with that. (Pornography and sex robots will likely be a big market as well, el Kaliouby says, but her company doesn’t plan on developing tools for this application. Nor for security, because that violates Affectiva’s policy of not tracking emotions without consent.)

While Affectiva is focusing on the face for its clues about emotions, el Kaliouby admits that the face is just part of the puzzle—gestures, tone of voice, and other factors need to be considered before computers can be completely accurate in decoding emotions.

And today’s emotional intelligence systems are still pretty dumb. “I liken the state of the technology to a toddler,” el Kaliouby says. “It can do basic emotions. But what do people look like when inspired, or jealous, or proud? I think this technology can answer these basic science questions—we’re not done.”