
In the Future, Machines Will Borrow the Best Tricks from Our Brain

Steve sits up and takes in the crisp new daylight pouring through the bedroom window. He looks down at his companion, still pretending to sleep. “Okay, Kiri, I’m up.”

She stirs out of bed and begins dressing. “You received 164 messages overnight. I answered all but one.”

In the bathroom, Steve stares at his disheveled self. “Fine, give it to me.”

“Your mother wants to know why you won’t get a real girlfriend.”

He bursts out laughing. “Anything else?”

“Your cholesterol is creeping up again. And there have been 15,712 attempts to hack my mind in the last hour.”

“Good grief! Can you identify the source?”

“It’s distributed. Mostly inducements to purchase a new RF oven. I’m shifting ciphers and restricting network traffic.”

“Okay. Let me know if you start hearing voices.” Steve pauses. “Any good deals?”

“One with remote control is in our price range. It has mostly good reviews.”

“You can buy it.”

Kiri smiles. “I’ll stay in bed and cook dinner with a thought.”

Steve goes to the car and takes his seat.

Car, a creature of habit, pulls out and heads to work without any prodding.

Leaning his head back, Steve watches the world go by. Screw the news. He’ll read it later.

Car deposits Steve in front of his office building and then searches for a parking spot.

Steve walks to the lounge, grabs a roll and some coffee. His coworkers drift in and chat for hours. They try to find some inspiration for a new movie script. AI-generated art is flawless in execution, even in depth of story, but somehow it doesn’t resonate well with humans, much as one generation’s music does not always appeal to the next. AIs simply don’t share the human condition.

But maybe they could if they experienced the world through a body. That’s the whole point of the experiment with Kiri.…

It’s sci-fi now, but by midcentury we could be living in Steve and Kiri’s world. Computing, after about 70 years, is at a momentous juncture. The old approaches, based on CMOS technology and the von Neumann architecture, are reaching their fundamental limits. Meanwhile, massive efforts around the world to understand the workings of the human brain are yielding new insights into one of the greatest scientific mysteries: the biological basis of human cognition.

The dream of a thinking machine—one like Kiri that reacts, plans, and reasons like a human—is as old as the computer age. In 1950, Alan Turing proposed a test of whether machines can think: compare their conversation with that of humans. He predicted computers would pass his test by the year 2000. Computing pioneers such as John von Neumann also set out to imitate the brain. They had only the simplest notion of neurons, based on the work of neuroscientist Santiago Ramón y Cajal and others in the late 1800s. And the dream proved elusive, full of false starts and blind alleys. Even now, we have little idea how the tangible brain gives rise to the intangible experience of conscious thought.

Today, building a better model of the brain is the goal of major government efforts such as the BRAIN Initiative in the United States and the Human Brain Project in Europe, joined by private efforts such as those of the Allen Institute for Brain Science, in Seattle. Collectively, these initiatives involve hundreds of researchers and billions of dollars.

With systematic data collection and rigorous insights into the brain, a new generation of computer pioneers hopes to create truly thinking machines.

If they succeed, they will transform the human condition, just as the Industrial Revolution did 200 years ago. For nearly all of human history, we had to grow our own food and make things by hand. The Industrial Revolution unleashed vast stores of energy, allowing us to build, farm, travel, and communicate on a whole new scale. The AI revolution will take us one enormous leap further, freeing us from the need to control every detail of operating the machines that underlie modern civilization. And as a consequence of copying the brain, we will come to understand ourselves in a deeper, truer light. Perhaps the first benefits will be in mental health, organizational behavior, or even international relations.

Such machines will also improve our health in general. Imagine a device, whether a robot or your cellphone, that keeps your medical records. Combining this personalized data with a sophisticated model of all the pathways that regulate the human body, it could simulate scenarios and recommend healthy behaviors or medical actions tailored to you. A human doctor can correlate only a few variables at once, but such an app could consider thousands. It would be more effective and more personal than any physician.

Re-creating the processes of the brain will let us automate anything humans now do. Think about fast food. Just combine a neural controller chip that imitates the reasoning, intuitive, and mechanical-control powers of the brain with a few thousand dollars’ worth of parts, and you have a short-order bot. You’d order a burger with your phone, and then drive up to retrieve your food from a building with no humans in it. Many other commercial facilities would be similarly human free.

That may sound horrifying, given how rigid computers are today. Ever call a customer service or technical support line, only to be forced through a frustrating series of automated menus by a pleasant canned voice asking you repeatedly to “press or say 3,” at the end of which you’ve gotten nowhere? The charade creates human expectations, yet the machines frequently fail to deliver and can’t even get angry when you scream at them. Thinking machines will sense your emotions, understand your goals, and actively help you achieve them. Rather than mechanically running through a fixed set of instructions, they will adjust as circumstances change.

That’s because they’ll be modeled on our brains, which are exquisitely adapted to navigating complex environments and working with other humans. With little conscious effort, we understand language and grasp shades of meaning and mood from the subtle cues of body language, facial expression, and tone of voice. And the brain does all that while consuming astonishingly little energy.

That 1.3-kilogram lump of neural tissue you carry around in your head accounts for about 20 percent of your body’s metabolism. Thus, with an average basal metabolism of 100 watts, each of us is equipped with the biological equivalent of a 20-W supercomputer. Even today’s most powerful computers, running at 20 million watts, can’t come close to matching the brain.

How does the brain do it? It’s not that neurons are so much more efficient than transistors. In fact, when it comes to moving signals around, neurons have one-tenth the efficiency. It must be the organization of those neurons and their patterns of interaction, or “algorithms.” The brain has relatively shallow but massively parallel networks. At every level, from deep inside cells to large brain regions, there are feedback loops that keep the system in balance and change it in response to activity from neighboring units. The ultimate feedback loop is through the muscles to the outside world and back through the senses.

Traditionally, neurons were viewed as units that collect thousands of inputs, transform them computationally, and then send signals downstream to other neurons via connections called synapses. But it turns out that this model is too simplistic; surprising computational power exists in every part of the system. Even a single synapse contains hundreds of different protein types having complex interactions. It’s a molecular computer in its own right.

And there are hundreds of different types of neurons, each performing a special role in the neural circuitry. Most neurons communicate through physical contact, so they grow long skinny branches to find the right partner. Signals move along these branches via a chain of amplifiers. Ion pumps keep the neuron’s cell membrane charged, like a battery. Signals travel as short sharp changes of voltage, called spikes, which ripple down the membrane.

The power of the brain goes beyond its internal connections; it includes the ability to communicate with other brains. Some animals form swarms or social groups, but only humans form deep hierarchies. This penchant, more than any unique cognitive ability, enables us to dominate the planet and construct objects of exquisite complexity. No one of us could do so alone; collectively, we humans are capable of achieving truly great things.

Now we are combining machine intelligence with our own. As our systems—industrial, technological, medical—grow in sophistication and complexity, so too must the intelligence that operates them. Eventually, our tools will think for themselves, perhaps even become conscious. Some people find this a scary prospect. If our tools think for themselves, they could turn against us. What if, instead, we create machines that love us?

Steve arrives home full of dread. Inside, the place is pristinely clean. A delicious aroma wafts from the new oven. Kiri is on the back porch, working at an easel. He walks up behind her. “How was your day?”

“I made a new painting.” She steps away to show him. The canvas contains a photo-perfect rendition of the yard, in oils.

“Um, it’s nice.”

“You’re lying. I can tell from your biosensors.”

“Listen, Kiri, I have to take you back to the lab. They say you’ve progressed as far as you can with me.”

“I like it here. Please let me stay. I’ll be anything you want.”

“That’s the problem. You try so hard to please me that you haven’t found yourself.”

Water trickles down her cheek. She wipes it and studies her moist hand. “You think all this is fake.”

Steve takes Kiri in a tight embrace and holds her for a long time. He whispers, “I don’t know.”

This article appears in the June 2017 print issue as “The Dawn of the Real Thinking Machine.”

Hedge Funds Look to Machine Learning, Crowdsourcing for Competitive Advantage

Every day, financial markets and global economies produce a flood of data. As a result, stock traders now have more information about more industries and sectors than ever before. That deluge, combined with the rise of cloud technology, has inspired hedge funds to develop new quantitative strategies that they hope can generate greater returns than the experience and judgment of their own staff.

At the Future of Fintech conference hosted by research company CB Insights in New York City, three hedge fund insiders discussed the latest developments in quantitative trading. A session on Tuesday featured Christina Qi, the co-founder of a high-frequency trading firm called Domeyard LP; Jonathan Larkin, an executive from Quantopian, a hedge fund taking a data-driven systematic approach; and Andy Weissman of Union Square Ventures, a venture capital firm that has invested in an autonomous hedge fund.

Many of the world’s largest hedge funds already rely on powerful computing infrastructure and quantitative methods—whether that’s high-frequency trading, incorporating machine learning, or applying data science—to make trades. After all, human traders are full of biases, emotions, memories, and errors of judgment. Machines and data, on the other hand, can coolly examine the facts and decide the best course of action.

Deciding which technologies and quantitative methods to trust, though, is still a job for humans. There are many ways that hedge funds can use technology to create an advantage for investors. Just a few years ago, high-frequency trading was all the rage: Some firms built secret networks of microwave towers and reserved space on trans-Atlantic fiber-optic cables to edge out competitors by a few milliseconds.

Now, speed alone isn’t enough. Qi co-founded Domeyard in 2013 to execute high-frequency trades through a suite of proprietary technologies. The firm built its own feed handlers, the systems that retrieve and organize market data from exchanges such as Nasdaq. It also developed its own order management system, the software that carries out the trades its proprietary algorithms decide to make.

Qi says Domeyard’s system might gather 343 million data points in the opening hour of the New York Stock Exchange on any given day. The company can execute trades in just a few microseconds, and process data in mere nanoseconds.

But thanks to advances in the trading software and systems available for purchase, most any firm can now carry out the high-speed trades that once set Domeyard apart. “It’s not about the speed anymore,” Qi said. Hedge funds must find new ways to compete.

Over the past few years, hedge funds have started to get even more creative. Some have begun to incorporate machine learning into their systems, hand over key management decisions to troves of data scientists, and even crowdsource investment strategies. If they work, these experiments could give rise to a new breed of hedge funds that rely more on code and less on humans to make decisions than ever before.

One hedge fund called Numerai pays data scientists in cryptocurrency to tweak its machine learning algorithms and improve its strategy. “The theory there is: Can you achieve consistent returns over time by removing human bias and making it a math problem?” said Andy Weissman of Union Square Ventures, which has invested US $3 million in Numerai.

Not all funds will find it easy to compete on these new terms. Domeyard can’t incorporate machine learning, Qi says, because machine learning programs are generally optimized for throughput, rather than latency. “I can’t use standard machine learning techniques to trade because they’re too slow,” she said.

The third fund represented on the panel, Quantopian, provides free resources to anyone who wants to write investment algorithms based on a personal hypothesis about markets. Quantopian takes the most promising algorithms, puts money behind them, and adds them to one big fund.

“We’re tapping into this global mindshare to make something valuable for our investors,” said Larkin, chief investment officer at Quantopian.

To help the process along, the firm provides educational materials, over 50 datasets on U.S. equities and futures, a library of ready-made modules that authors can borrow to code in the Python programming language, a virtual sandbox to test their hypotheses, and support from a team of 40 in-house developers. If authors wish to incorporate machine learning into their algorithms, they can do that with Python modules such as scikit-learn.
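For a sense of what that might look like, here is a minimal, hypothetical sketch in plain Python with scikit-learn (not Quantopian’s actual API) of an author fitting a classifier to predict whether a stock rises the next day. Random numbers stand in for real engineered features and labels.

import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)

# Stand-ins for real market data: rows are trading days, columns are
# engineered features (momentum, volume change, and so on).
X = rng.normal(size=(500, 4))
y = (rng.normal(size=500) > 0).astype(int)  # 1 = price rose the next day

# Train on the first 400 days, then score the most recent 100.
model = RandomForestClassifier(n_estimators=100, random_state=0)
model.fit(X[:400], y[:400])
signal = model.predict_proba(X[400:])[:, 1]  # estimated P(rise) per day
print(signal[:5])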

One project, or strategy, consists of multiple algorithms written across several layers. An author’s first step is to generate a hypothesis. Then, they choose which data, instruments, and modules they will apply to test that hypothesis.

Next, the author must build what Larkin calls an “alpha,” or an expression based on the author’s hypothesis that has been tested and proven to have some degree of predictive value about market performance. “The best quantitative strategies will have a number of these,” Larkin said.

Each alpha should generate a vector, or a set of numbers, which can then be used to make trades that will align with that hypothesis. The next step, then, is to combine the alphas and add a risk management layer with safeguards to prevent the algorithms from getting carried away.

Finally, the author fills in the remaining details of the system, including the timing of trades. Quantopian’s approach is admittedly much slower than Domeyard’s—the fund has a minimum trading interval of one minute.
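Putting those layers together, the toy sketch below illustrates the flow Larkin describes: two invented alphas each emit a score per asset, a combination layer averages them, and a crude risk layer caps position sizes. None of this is Quantopian’s actual code or API; the tickers, alphas, and 20 percent cap are made up for illustration.

import numpy as np

# Hypothetical five-day returns for three made-up tickers.
returns_5d = {"AAA": 0.04, "BBB": -0.02, "CCC": 0.01}

def alpha_momentum(asset):
    # Hypothesis: recent winners keep winning.
    return returns_5d[asset]

def alpha_mean_reversion(asset):
    # Competing hypothesis: recent winners pull back.
    return -0.5 * returns_5d[asset]

alphas = [alpha_momentum, alpha_mean_reversion]

# Combination layer: average the alphas into one score per asset.
combined = {a: np.mean([f(a) for f in alphas]) for a in returns_5d}

# Risk-management layer: normalize, then cap any position at 20 percent.
gross = sum(abs(v) for v in combined.values()) or 1.0
weights = {a: max(-0.2, min(0.2, v / gross)) for a, v in combined.items()}
print(weights)  # signed target portfolio weights, one per asset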

To date, 140,000 people from 180 countries have written investment algorithms for Quantopian, and the company has put money into 25 of those projects. Its largest allocation to a single project was $10 million.

Once they’ve built a strategy, the original author retains the intellectual property for the underlying algorithms. If their approach is funded, the author receives a cut (generally 10 percent) of any profits that their strategy generates. Larkin estimates it takes at least 50 hours of work to develop a successful strategy.

Larkin wouldn’t share any information about the fund’s performance so far. But he said the idea is to blend the best data-based hypotheses from many people. “We at Quantopian believe the strongest investment vehicle is a combination of strategies, not any one individual strategy,” he said.

Larkin refers to Quantopian’s methods as data-driven systematic investing, a separate category from high-frequency trading or discretionary investing based on data science. Still, he classifies all three of these quantitative methods as distinct from the longtime approach of simply relying on a fund manager’s judgment, without any formal way to organize and filter data.

Depending on how Numerai, Quantopian, and similar experiments fare, investors could be entering a new era of finance in which they entrust their money to machines, not managers.

In General AI Challenge, Teams Compete for $5 Million

This is a guest post. The views expressed here are solely those of the author and do not represent positions of IEEE Spectrum or the IEEE.

We owe the success of numerous state-of-the-art artificial intelligence applications to artificial neural networks. First designed decades ago, they rocketed the AI field to success quite recently, when researchers were able to run them on much more powerful hardware and feed them huge amounts of data. Since then, the field of deep learning has been flourishing.

The effect seemed miraculous and promising. While it was hard to interpret what exactly was happening inside the networks, they started reaching human performance on a number of tasks, such as image recognition, natural language processing, and data classification in general. The promise was that we would elegantly cross the border between data processing and intelligence by the pure brute force of deep artificial neural networks: Just give them all the data in the world!

However, this is easier said than done. There are limits to state-of-the-art AI that separate it from human-like intelligence:

● We humans can learn a new skill without forgetting what we have already learned.

● We can build upon what we know already. For example, if we learn language skills in one context we can reuse them to communicate any of our experiences, dreams, or completely new ideas.

● We can improve ourselves and gradually become better learners. For instance, after you learn one foreign language, learning another is usually easier, because you already possess a number of heuristics and tricks for language-learning. You can keep discovering and improving these heuristics and use them to solve new tasks. This is how we’re able to work through completely new problems.

Some of these things may sound trivial, but today’s AI algorithms are very limited in how much previous knowledge they are able to keep through each new training phase, how much they can reuse, and whether they are able to devise any universal learning strategies at all.

In practice, this means that you need to build and fine-tune a new algorithm for each new specific task—a form of very sophisticated data processing, rather than real intelligence.

Building a true general intelligence has been Marek Rosa’s lifelong dream, from his days as a teenage programmer to his current life as a successful entrepreneur. Rosa has therefore invested the wealth he made in the video game business into his own general AI R&D company in Prague: GoodAI.

Rosa recently took steps to scale up the research on general AI by founding the AI Roadmap Institute and launching the General AI Challenge. The AI Roadmap Institute is an independent entity that promotes big-picture thinking by studying and comparing R&D roadmaps towards general intelligence. It also focuses on AI safety and considers roadmaps that represent possible futures that we either want to create or want to prevent from happening.

The General AI Challenge is a citizen-science project with a US $5 million prize fund provided by Rosa. His motivation is to incentivize talent to tackle crucial research problems in human-level AI development and to speed up the search for safe and beneficial general artificial intelligence.

The $5 million will be given out as prizes in various rounds of the multi-year competition. Each round will tackle an important milestone on the way to general AI. In some rounds, participants will be tasked with designing algorithms and programming AI agents. In other rounds, they will work on theoretical problems such as AI safety or societal impacts of AI. The Challenge will address general AI as a complex phenomenon.

The Challenge kicked off on 15 February with a six-month “warm-up” round dedicated to building gradually learning AI agents. Rosa and the GoodAI team believe that the ability to learn gradually lies at the core of our intelligence. It’s what enables us to efficiently learn new skills on top of existing knowledge without forgetting what we already know and to reapply our knowledge in various situations across multiple domains. Essentially, we learn how to learn better, enabling us to readily react to new problems.

At GoodAI’s R&D lab, AI agents will learn via a carefully designed curriculum in a gradual manner. We call it “school for AI,” since the progression is similar to human schooling, from nursery till graduation. We believe this approach will provide more control over what kind of behaviors and skills the AI acquires, which is of great importance for AI safety. Essentially, the goal is to bias the AI towards behaviors and abilities that we humans find useful and that are aligned with our understanding of the world and morality.

Nailing gradual learning is not an easy task, and so the Challenge breaks the problem into phases. The first round strips the problem down to a set of simplistic tasks in a textual environment. The tasks were specifically designed to test gradual learning potential, so they can serve as guidance for the developers.

The Challenge competitors are designing AI agents that can engage in a dialog within a textual environment. The environment teaches the agents to react to text patterns in certain ways. The roughly 40 tasks grow harder as an agent progresses, and the final ones are impossible to solve in a reasonable amount of time unless the agent has figured out the environment’s logic and can reuse skills it acquired on previous tasks.
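To make that setup concrete, here is a minimal sketch of an agent-environment loop of this general shape. Everything in it is invented for illustration; the real Challenge defines its own tasks, reward scheme, and interface.

# A toy textual task: the environment rewards the agent for replying
# with the same token it was prompted with. A gradual learner should
# retain such acquired mappings as it moves on to later tasks.
class EchoTask:
    def step(self, prompt, reply):
        return 1 if reply == prompt else 0  # reward signal

class MemorizingAgent:
    """Guesses until rewarded, then reuses the successful reply."""
    def __init__(self, vocab):
        self.vocab = vocab
        self.best = {}   # prompt -> reply that earned a reward
        self.tried = {}  # prompt -> number of failed guesses so far
    def reply(self, prompt):
        if prompt in self.best:
            return self.best[prompt]
        return self.vocab[self.tried.get(prompt, 0) % len(self.vocab)]
    def learn(self, prompt, reply, reward):
        if reward > 0:
            self.best[prompt] = reply
        else:
            self.tried[prompt] = self.tried.get(prompt, 0) + 1

task, agent = EchoTask(), MemorizingAgent(vocab=["a", "b", "c"])
for prompt in ["a", "b", "a", "c", "b", "a"]:
    reply = agent.reply(prompt)
    agent.learn(prompt, reply, task.step(prompt, reply))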

More than 390 individuals and teams from around the world have already signed up to solve gradual learning in the first round of the General AI Challenge. (And enrollment is still open!) All participants must submit their solutions for evaluation by August 15 of this year. Then the submitted AI agents will be tested on a set of tasks which are similar, but not identical, to those provided as part of the first-round training tasks. That’s where the AI agents’ ability to solve previously unseen problems will really be tested.

We don’t yet know whether a successful solution to the Challenge’s first phase will be able to scale up to much more complex tasks and environments, where rich visual input and extra dimensions will be added. But the GoodAI team hopes that this first step will ignite new R&D efforts, spread new ideas in the community, and advance the search for more human-like algorithms.

Olga Afanasjeva is the director of the General AI Challenge and COO of GoodAI.

Shrink Apps and Wearables Can Usher In the Age of Digital Psychiatry

Zach has been having trouble at work, and when he comes home he’s exhausted, yet he struggles to sleep. Everything seems difficult, even walking—he feels like he’s made of lead. He knows something is wrong and probably should call the doctor, but that just seems like too much trouble. Maybe next week.

Meanwhile, software on his phone has detected changes in Zach, including subtle differences in the language he uses, decreased activity levels, worsening sleep, and cutbacks in social activities. Unlike Zach, the software acts quickly, pushing him to answer a customized set of questions. Since he doesn’t have to get out of bed to do so, he doesn’t mind.

Zach’s symptoms and responses suggest that he may be clinically depressed. The app offers to set up a video call with a psychiatrist, who confirms the diagnosis. Based on her expertise, Zach’s answers to the survey questions, and sensor data that suggests an unusual type of depression, the psychiatrist devises a treatment plan that includes medication, video therapy sessions, exercise, and regular check-ins with her. The app continues to monitor Zach’s behavior and helps keep his treatment on track by guiding him through exercise routines, noting whether or not he’s taking his medication, and reminding him about upcoming appointments.

While Zach isn’t a real person, everything mentioned in this scenario is feasible today and will likely become increasingly routine around the world in only a few years’ time. My prediction may come as a surprise to many in the health-care profession, for over the years there have been claims that mental health patients wouldn’t want to use technology to treat their conditions, unlike, say, those with asthma or heart disease. Some have also insisted that to be effective, all assessment and treatment must be done face to face, and that technology might frighten patients or worsen their paranoia.

However, recent research results from a number of prestigious institutions, including Harvard, the National Alliance on Mental Illness, King’s College London, and the Black Dog Institute, in Australia, refute these claims. Studies show that psychiatric patients, even those with severe illnesses like schizophrenia, can successfully manage their conditions with smartphones, computers, and wearable sensors. And these tools are just the beginning. Within a few years, a new generation of technologies promises to revolutionize the practice of psychiatry.

To understand the potential of digital psychiatry, consider how someone with depression is usually treated today.

Depression can begin so subtly that up to two-thirds of those who have it don’t even realize they’re depressed. And even if they realize something’s wrong, those who are physically disabled, elderly, living in rural areas, or suffering from additional mental illnesses like anxiety disorders may find it difficult to get to a doctor’s office.

Once a patient does see a psychiatrist or therapist, much of the initial visit will be spent reviewing the patient’s symptoms, such as sleep patterns, energy levels, appetite, and ability to focus. That too can be difficult; depression, like many other psychiatric illnesses, affects a person’s ability to think and remember.

The patient will likely leave with some handouts about exercise and a prescription for medication. There’s a fair chance that the medication won’t take effect for many weeks, that the exercise plan will be ignored, and that the patient’s progress will be bumpy. Unfortunately, the psychiatrist won’t know until a follow-up appointment sometime later.

Technology can improve this outcome, by bringing objective information into the psychiatrist’s office and allowing real-time monitoring and intervention outside the office. Instead of relying on just the patient’s recollection of his symptoms, the doctor can look at behavioral data from the person’s smartphone and wearable sensors. The psychiatrist may even recommend that the patient start using such tools before the first visit.

It’s astonishing how much useful information a doctor can glean from data that may seem to have little to do with a person’s mental condition. GPS data from a smartphone, for example, can reveal the person’s movements, which in turn reflects the person’s mental health. By correlating patients’ smartphone-derived GPS measurements with their symptoms of depression, a 2016 study by the Center for Behavioral Intervention Technologies at Northwestern University, in Chicago, found that when people are depressed they tend to stay at home more than when they’re feeling well. Similarly, someone entering a manic episode of bipolar disorder may be more active and on the move. The Monitoring, Treatment, and Prediction of Bipolar Disorder Episodes (Monarca) consortium, a partnership of European universities, has conducted numerous studies demonstrating that this kind of data can be used to predict the course of bipolar disorder.
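One simple feature behind such findings is the fraction of GPS fixes recorded near home. The sketch below computes it from raw coordinates; the 100-meter radius and the way “home” is chosen here are illustrative assumptions, not the Northwestern team’s published method.

import math

def haversine_m(lat1, lon1, lat2, lon2):
    # Great-circle distance between two points, in meters.
    r = 6371000.0
    dp = math.radians(lat2 - lat1)
    dl = math.radians(lon2 - lon1)
    a = (math.sin(dp / 2) ** 2
         + math.cos(math.radians(lat1)) * math.cos(math.radians(lat2))
         * math.sin(dl / 2) ** 2)
    return 2 * r * math.asin(math.sqrt(a))

def home_stay_fraction(samples, home, radius_m=100.0):
    # samples: list of (lat, lon) fixes; home: a (lat, lon) reference.
    near = sum(1 for lat, lon in samples
               if haversine_m(lat, lon, *home) <= radius_m)
    return near / len(samples) if samples else 0.0

home = (41.8781, -87.6298)  # hypothetical home location
day = [home] * 20 + [(41.89, -87.62), (41.88, -87.64)]
print(home_stay_fraction(day, home))  # ~0.91: a mostly home-bound day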

Where GPS is unavailable, Bluetooth and Wi-Fi can fill in. Research by Dror Ben-Zeev, of the University of Washington, in Seattle, demonstrated that Bluetooth radios on smartphones can be used to monitor the locations of people with schizophrenia within a hospital. Data collected through Wi-Fi networks could likewise reveal whether a patient who’s addicted to alcohol is avoiding bars and attending support-group meetings.

Accelerometer data from a smartphone or fitness tracker can provide more fine-grained details about a person’s movements, detect tremors that may be drug side effects, and capture exercise patterns. A test of an app called CrossCheck recently demonstrated how this kind of data, in combination with other information collected by a phone, can contribute to symptom prediction in schizophrenia by providing clues on sleep and activity patterns. A report in the American Journal of Psychiatry by Ipsit Vahia and Daniel Sewell describes how they were able to treat a patient with an especially challenging case of depression using accelerometer data. The patient had reported that he was physically active and spending little time in bed, but data from his fitness tracker showed that his recollection was faulty; the doctors thus correctly diagnosed his condition as depression rather than, say, a sleep disorder.

Tracking the frequency of phone calls and text messages can suggest how social a person is and indicate any mental change. When one of Monarca’s research groups looked at logs of incoming and outgoing text messages and phone calls, they concluded that changes in these logs could be useful for tracking depression as well as mania in bipolar disorder.