
Why Rat-Brained Robots Are So Good at Exploring

If you take a common brown rat and drop it into a lab maze or a subway tunnel, it will immediately begin to explore its surroundings, sniffing around the edges, brushing its whiskers against surfaces, peering around corners and obstacles. After a while, it will return to where it started, and from then on, it will treat the explored terrain as familiar.

Roboticists have long dreamed of giving their creations similar navigation skills. To be useful in our environments, robots must be able to find their way around on their own. Some are already learning to do that in homes, offices, warehouses, hospitals, hotels, and, in the case of self-driving cars, entire cities. Despite the progress, though, these robotic platforms still struggle to operate reliably under even mildly challenging conditions. Self-driving vehicles, for example, may come equipped with sophisticated sensors and detailed maps of the road ahead, and yet human drivers still have to take control in heavy rain or snow, or at night.

The lowly brown rat, by contrast, is a nimble navigator that has no problem finding its way around, under, over, and through the toughest spaces. When a rat explores an unfamiliar territory, specialized neurons in its 2-gram brain fire, or spike, in response to landmarks or boundaries. Other neurons spike at regular distances—once every 20 centimeters, every meter, and so on—creating a kind of mental representation of space [PDF]. Yet other neurons act like an internal compass, recording the direction in which the animal’s head is turned [PDF]. Taken together, this neural activity allows the rat to remember where it’s been and how it got there. Whenever it follows the same path, the spikes strengthen, making the rat’s navigation more robust.

So why can’t a robot be more like a rat?

The answer is, it can. At the Queensland University of Technology (QUT), in Brisbane, Australia, Michael Milford and his collaborators have spent the last 14 years honing a robot navigation system modeled on the brains of rats. This biologically inspired approach, they hope, could help robots navigate dynamic environments without requiring advanced, costly sensors and computationally intensive algorithms.

An earlier version of their system allowed an indoor package-delivery bot to operate autonomously for two weeks in a lab. During that period, it made more than 1,100 mock deliveries, traveled a total of 40 kilometers, and recharged itself 23 times. Another version successfully mapped an entire suburb of Brisbane, using only the imagery captured by the camera on a MacBook. Now Milford’s group is translating its rat-brain algorithms into a rugged navigation system for the heavy-equipment maker Caterpillar, which plans to deploy it on a fleet of underground mining vehicles.

Milford, who’s 35 and looks about 10 years younger, began investigating brain-based navigation in 2003, when he was a Ph.D. student at the University of Queensland working with roboticist Gordon Wyeth, who’s now dean of science and engineering at QUT.

At the time, one of the big pushes in robotics was the “kidnapped robot” problem: If you take a robot and move it somewhere else, can it figure out where it is? One way to solve the problem is SLAM, which stands for simultaneous localization and mapping. While running a SLAM algorithm, a robot can explore strange terrain, building a map of its surroundings while at the same time positioning, or localizing, itself within that map.

Wyeth had long been interested in brain-inspired computing, starting with work on neural networks in the late 1980s. And so he and Milford decided to work on a version of SLAM that took its cues from the rat’s neural circuitry. They called it RatSLAM.

There already were numerous flavors of SLAM, and today they number in the dozens, each with its own advantages and drawbacks. What they all have in common is that they rely on two separate streams of data. One relates to what the environment looks like, and robots gather this kind of data using sensors as varied as sonars, cameras, and laser scanners. The second stream concerns the robot itself, or more specifically, its speed and orientation; robots derive that data from sensors like rotary encoders on their wheels or an inertial measurement unit (IMU) on their bodies. A SLAM algorithm looks at the environmental data and tries to identify notable landmarks, adding these to its map. As the robot moves, it monitors its speed and direction and looks for those landmarks; if the robot recognizes a landmark, it uses the landmark’s position to refine its own location on the map.
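The two data streams described above can be sketched in miniature: dead reckoning advances the pose estimate from speed and heading, and a recognized landmark occasionally pulls that estimate back toward the map. The Python sketch below is illustrative only; the function names and the blending `gain` are assumptions, not any particular SLAM implementation.

```python
import math

def dead_reckon(pose, speed, turn_rate, dt):
    """Advance (x, y, heading) using odometry alone; error accumulates."""
    x, y, th = pose
    th += turn_rate * dt
    x += speed * dt * math.cos(th)
    y += speed * dt * math.sin(th)
    return (x, y, th)

def correct_with_landmark(pose, observed, mapped, gain=0.5):
    """Nudge the pose estimate toward what a recognized landmark implies.

    observed: landmark position relative to the robot, as (range, bearing)
    mapped:   the landmark's (x, y) position already on the map
    """
    x, y, th = pose
    r, bearing = observed
    # Where the robot would have to be if map and observation were both exact
    implied_x = mapped[0] - r * math.cos(th + bearing)
    implied_y = mapped[1] - r * math.sin(th + bearing)
    # Blend the two estimates instead of trusting either one fully
    return (x + gain * (implied_x - x), y + gain * (implied_y - y), th)

pose = (0.0, 0.0, 0.0)
for _ in range(10):                       # drive straight for 10 small steps
    pose = dead_reckon(pose, speed=1.0, turn_rate=0.0, dt=0.1)
# The robot sees a landmark it knows is at (2.0, 0.0), dead ahead 1.0 m away
pose = correct_with_landmark(pose, observed=(1.0, 0.0), mapped=(2.0, 0.0))
print(pose)
```

In real systems the blending step is done probabilistically, for example with a Kalman or particle filter, weighting each information source by its estimated uncertainty.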

But whereas most implementations of SLAM aim for highly detailed, static maps, Milford and Wyeth were more interested in how to navigate through an environment that’s in constant flux. Their aim wasn’t to create maps built with costly lidars and high-powered computers—they wanted their system to make sense of space the way animals do.

“Rats don’t build maps,” Wyeth says. “They have other ways of remembering where they are.” Those ways include neurons called place cells and head-direction cells, which respectively let the rat identify landmarks and gauge its direction. Like other neurons, these cells are densely interconnected and work by adjusting their spiking patterns in response to different stimuli. To mimic this structure and behavior in software, Milford adopted a type of artificial neural network called an attractor network. These neural nets consist of hundreds to thousands of interconnected nodes that, like groups of neurons, respond to an input by producing a specific spiking pattern, known as an attractor state. Computational neuroscientists use attractor networks to study neurons associated with memory and motor behavior. Milford and Wyeth wanted to use them to power RatSLAM.
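A minimal flavor of how an attractor network settles into a stable state can be shown with a ring of nodes wired for local excitation and broad inhibition: activity converges to a stable “bump” whose position could stand in for, say, a head direction. This is a deliberately simplified sketch under assumed parameters, not RatSLAM’s actual network.

```python
import math

N = 60  # nodes arranged on a ring, one per 6 degrees of heading

def weight(i, j):
    """Local excitation, broad inhibition (a "Mexican hat" profile)."""
    d = min(abs(i - j), N - abs(i - j))   # circular distance between nodes
    return math.exp(-(d / 4.0) ** 2) - 0.05

def step(activity):
    """One relaxation step: each node sums weighted input, rectified."""
    new = []
    for i in range(N):
        total = sum(weight(i, j) * a for j, a in enumerate(activity))
        new.append(max(0.0, total))
    s = sum(new)
    return [a / s for a in new]           # normalize total activity

# Start with a single active node hinting that the heading is at node 15
activity = [1.0 if i == 15 else 0.0 for i in range(N)]
for _ in range(20):
    activity = step(activity)

peak = max(range(N), key=lambda i: activity[i])
print(peak)  # the bump stays centered at node 15
```

In a working model, sensory and self-motion inputs would push the bump around the ring, and its position would persist even when input goes quiet, which is the memory-like property that makes attractor networks attractive for navigation.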

They spent months working on the software, and then they loaded it into a Pioneer robot, a mobile platform popular among roboticists. Their rat-brained bot was alive.

But it was a failure. When they let it run in a 2-by-2-meter arena, Milford says, “it got lost even in that simple environment.”

Milford and Wyeth realized that RatSLAM didn’t have enough information with which to reduce errors as it made its decisions. Like other SLAM algorithms, it doesn’t try to make exact, definite calculations about where things are on the map it’s generating; instead, it relies on approximations and probabilities as a way of incorporating uncertainties—conflicting sensor readings, for example—that inevitably crop up. If you don’t take that into account, your robot ends up lost.

That seemed to be the problem with RatSLAM. In some cases, the robot would recognize a landmark and be able to refine its position, but other times the data was too ambiguous. Before long, the accrued error was bigger than 2 meters—the robot thought it was outside the arena!

In other words, their rat-brain model was too crude. It needed better neural circuitry to be able to abstract more information about the world.

“So we engineered a new type of neuron, which we called a ‘pose’ cell,” Milford says. The pose cell didn’t just tell the robot its location or its orientation, it did both at the same time. Now, when the robot identified a landmark it had seen before, it could more precisely encode its place on the map and keep errors in check.

Again, Milford placed the robot inside the 2-by-2-meter arena. “Suddenly, our robot could navigate quite well,” he recalls.

Interestingly, not long after the researchers devised these artificial cells, neuroscientists in Norway announced the discovery of grid cells, which are neurons whose spiking activity forms regular geometric patterns and tells the animal its relative position within a certain area. [For more on the neuroscience of rats, see “AI Designers Find Inspiration in Rat Brains.”]

“Our pose cells weren’t exactly grid cells, but they had similar features,” Milford says. “That was rather gratifying.”

The robot tests moved to bigger arenas with greater complexity. “We did a whole floor, then multiple floors in the building,” Wyeth recalls. “Then I told Michael, ‘Let’s do a whole suburb.’ I thought he would kill me.”

Milford loaded the RatSLAM software into a MacBook and taped it to the roof of his red 1994 Mazda Astina. To get a stream of data about the environment, he used the laptop’s camera, setting it to snap a photo of the street ahead of the car several times per second. To get a stream of data about the robot itself—in this case, his car—he found a creative solution. Instead of attaching encoders to the wheels or using an IMU or GPS, he used simple image-processing techniques. By tracking and comparing pixels on sequences of photos from the MacBook, his SLAM algorithm could calculate the vehicle’s speed as well as direction changes.
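The pixel-tracking idea can be illustrated with a toy visual-odometry routine: collapse each frame to a 1D intensity profile and find the horizontal offset that best aligns consecutive frames; that offset approximates the turn between snapshots. The function below is an illustrative sketch, not Milford’s actual code.

```python
def best_shift(profile_a, profile_b, max_shift=6):
    """Pixel offset that best aligns two image intensity profiles.

    The horizontal shift between consecutive frames approximates the yaw
    change; the best shift minimizes the sum of absolute differences (SAD).
    """
    def sad(shift):
        pairs = [(a, profile_b[i + shift])
                 for i, a in enumerate(profile_a)
                 if 0 <= i + shift < len(profile_b)]
        if len(pairs) < len(profile_a) // 2:  # ignore shifts with little overlap
            return float("inf")
        return sum(abs(a - b) for a, b in pairs) / len(pairs)
    return min(range(-max_shift, max_shift + 1), key=sad)

# Toy frames: the second profile is the first shifted right by 3 pixels,
# as if the camera panned between snapshots.
frame1 = [0, 0, 5, 9, 5, 0, 0, 0, 0, 0, 0, 0]
frame2 = [0, 0, 0, 0, 0, 5, 9, 5, 0, 0, 0, 0]
print(best_shift(frame1, frame2))  # 3
```

Scaled by the camera’s field of view and frame rate, the recovered shift would give a rotation-rate estimate; forward speed can be estimated similarly from how the scene expands between frames.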

Milford drove for about 2 hours through the streets of the Brisbane suburb of St. Lucia [PDF], covering 66 kilometers. The result wasn’t a precise, to-scale map, but it accurately represented the topology of the roads and could pinpoint exactly where the car was at any given moment. RatSLAM worked.

“It immediately drew attention and was widely discussed because it was very different from what other roboticists were doing,” says David Wettergreen, a roboticist at Carnegie Mellon University, in Pittsburgh, who specializes in autonomous robots for planetary exploration. Indeed, it’s still considered one of the most notable examples of brain-inspired robotics.

But though RatSLAM created a stir, it didn’t set off a wave of research based on those same principles. And when Milford and Wyeth approached companies about commercializing their system, they found many keen to hear their pitch but ultimately no takers. “A colleague told me we should have called it ‘NeuroSLAM,’ ” Wyeth says. “People have bad associations with rats.”

That’s why Milford is excited about the two-year project with Caterpillar, which began in March. “I’ve always wanted to create systems that had real-world uses,” he says. “It took a lot longer than I expected for that to happen.”

Apple and Tesla Show Big Silicon Valley Headcount Gains Since 2016; Intel and eBay Shed Staff

The Silicon Valley Business Journal annually publishes a list of the tech companies that are the biggest local employers. In recent years, Apple and Google (now Alphabet) have vied for the number one spot, with Cisco holding a lock on number three. That hasn’t changed. But there have been wild swings in headcount among those on the list.

Looking at the 20 largest tech employers in Silicon Valley, the overall workforce as reported by the Business Journal ranges from 25,000 at Apple to 2789 at Symantec. But what a difference a year makes. Since the Business Journal’s 2016 report, Apple hired 5000, Tesla Motors hired 3471 (for a total local workforce of 10,000), Facebook hired 2586 (for a total of 9385), and Gilead Sciences hired 1719 (for a total of 6949).

Among those companies shedding staff, eBay led the list, losing 3222 employees (for a total of 2978), followed by Intel (minus 3000 for a total of 7801), and Yahoo (minus 193 for a total of 3800).

Overall, headcount gains among the top 20 far exceeded losses, with a total of 16,604 new employees at those tech firms with a growing Silicon Valley presence, compared with 3299 fewer employees among those companies making cutbacks.

(Note: Western Digital was on the list at number 18 this year, and not in previous years, having consolidated operations to its San Jose office. The Business Journal reported the company had 3000 local employees in 2017; it did not make the 2016 list, and the consolidation makes comparing annual totals complicated, so I didn’t include it in this discussion.)

Wouldn’t You Like Alexa Better if It Knew When It Was Annoying You?

What could your computer, phone, or other gadget do differently if it knew how you were feeling?

Rana el Kaliouby, founder and CEO of Affectiva, is considering the possibilities of such a world. Speaking at the Computer History Museum last week, el Kaliouby said that she has been working to teach computers to read human faces since 2000 as a PhD student at Cambridge University.

“I remember being stressed,” she says. “I had a paper deadline, and ‘Clippy’ [that’s Microsoft’s ill-fated computer assistant] would pop up and do a little twirl and say ‘It looks like you are writing a letter.’ I would think, ‘No I’m not!’”

(“You may,” Computer History Museum CEO John Hollar interjected, “be one of the few advanced scientists inspired by Clippy.”)

That was a piece of what led her to think about making computers more intelligent. Well, that, plus the fact that she was homesick. And the realization that, because she was spending more time with her computer than any human being, she really wanted her computer to understand her better.

Since then, she’s been using machine learning, and more recently deep learning, to teach computers to read faces, spinning Affectiva out of the MIT Media Lab in 2009 to commercialize her work. The company’s early customers are not exactly changing the world—they are mostly advertisers looking to better craft their messages. But that, she says, is just the beginning. Soon, she says, “all of our devices will have emotional intelligence”—not just our phones, but “our refrigerators, our cars.”

Early on, el Kaliouby focused on building smart tools for individuals with autism. She still thinks emotional intelligence technology—or EI—will be a huge boon to this community, potentially providing a sort of emotional hearing aid.

It’ll also be a mental healthcare aid, el Kaliouby predicts. She sees smart phones with EI as potentially able to regularly check a person’s mental state, providing early warning of depression, anxiety, or other problems. “People check their phones 15 times an hour. That’s a chance to understand that you are deviating from your baseline.”

Cars, she said, will need to have emotional intelligence as they transition to being fully automated; in the interim period, they will sometimes need to hand control back to a human driver, and need to know if the driver is ready to take control.

Smart assistants like Siri and Alexa, she says, “need to know when [they] gave you the wrong answer and you are annoyed, and say ‘I’m sorry.’”

Online education desperately needs emotional intelligence, she indicated, to give it a sense of when students are confused or engaged or frustrated or bored.

And the killer app? It just might be dating. “We have worked with teenagers who just want to have a girlfriend, but couldn’t tell if girls were interested in them,” el Kaliouby says. A little computer help reading their expressions could help with that. (Pornography and sex robots will likely be a big market as well, el Kaliouby says, but her company doesn’t plan on developing tools for this application. Nor for security, because that violates Affectiva’s policy of not tracking emotions without consent.)

While Affectiva is focusing on the face for its clues about emotions, el Kaliouby admits that the face is just part of the puzzle—gestures, tone of voice, and other factors need to be considered before computers can be completely accurate in decoding emotions.

And today’s emotional intelligence systems are still pretty dumb. “I liken the state of the technology to a toddler,” el Kaliouby says. “It can do basic emotions. But what do people look like when inspired, or jealous, or proud? I think this technology can answer these basic science questions—we’re not done.”

In the Future, Machines Will Borrow the Best Tricks from Our Brain

Steve sits up and takes in the crisp new daylight pouring through the bedroom window. He looks down at his companion, still pretending to sleep. “Okay, Kiri, I’m up.”

She stirs out of bed and begins dressing. “You received 164 messages overnight. I answered all but one.”

In the bathroom, Steve stares at his disheveled self. “Fine, give it to me.”

“Your mother wants to know why you won’t get a real girlfriend.”

He bursts out laughing. “Anything else?”

“Your cholesterol is creeping up again. And there have been 15,712 attempts to hack my mind in the last hour.”

“Good grief! Can you identify the source?”

“It’s distributed. Mostly inducements to purchase a new RF oven. I’m shifting ciphers and restricting network traffic.”

“Okay. Let me know if you start hearing voices.” Steve pauses. “Any good deals?”

“One with remote control is in our price range. It has mostly good reviews.”

“You can buy it.”

Kiri smiles. “I’ll stay in bed and cook dinner with a thought.”

Steve goes to the car and takes his seat.

Car, a creature of habit, pulls out and heads to work without any prodding.

Leaning his head back, Steve watches the world go by. Screw the news. He’ll read it later.

Car deposits Steve in front of his office building and then searches for a parking spot.

Steve walks to the lounge, grabs a roll and some coffee. His coworkers drift in and chat for hours. They try to find some inspiration for a new movie script. AI-generated art is flawless in execution, even in depth of story, but somehow it doesn’t resonate well with humans, much as one generation’s music does not always appeal to the next. AIs simply don’t share the human condition.

But maybe they could if they experienced the world through a body. That’s the whole point of the experiment with Kiri.…

It’s sci-fi now, but by midcentury we could be living in Steve and Kiri’s world. Computing, after about 70 years, is at a momentous juncture. The old approaches, based on CMOS technology and the von Neumann architecture, are reaching their fundamental limits. Meanwhile, massive efforts around the world to understand the workings of the human brain are yielding new insights into one of the greatest scientific mysteries: the biological basis of human cognition.

The dream of a thinking machine—one like Kiri that reacts, plans, and reasons like a human—is as old as the computer age. In 1950, Alan Turing proposed to test whether machines can think, by comparing their conversation with that of humans. He predicted computers would pass his test by the year 2000. Computing pioneers such as John von Neumann also set out to imitate the brain. They had only the simplest notion of neurons, based on the work of neuroscientist Santiago Ramón y Cajal and others in the late 1800s. And the dream proved elusive, full of false starts and blind alleys. Even now, we have little idea how the tangible brain gives rise to the intangible experience of conscious thought.

Today, building a better model of the brain is the goal of major government efforts such as the BRAIN Initiative in the United States and the Human Brain Project in Europe, joined by private efforts such as those of the Allen Institute for Brain Science, in Seattle. Collectively, these initiatives involve hundreds of researchers and billions of dollars.

With systematic data collection and rigorous insights into the brain, a new generation of computer pioneers hopes to create truly thinking machines.

If they succeed, they will transform the human condition, just as the Industrial Revolution did 200 years ago. For nearly all of human history, we had to grow our own food and make things by hand. The Industrial Revolution unleashed vast stores of energy, allowing us to build, farm, travel, and communicate on a whole new scale. The AI revolution will take us one enormous leap further, freeing us from the need to control every detail of operating the machines that underlie modern civilization. And as a consequence of copying the brain, we will come to understand ourselves in a deeper, truer light. Perhaps the first benefits will be in mental health, organizational behavior, or even international relations.

Such machines will also improve our health in general. Imagine a device, whether a robot or your cellphone, that keeps your medical records. Combining this personalized data with a sophisticated model of all the pathways that regulate the human body, it could simulate scenarios and recommend healthy behaviors or medical actions tailored to you. A human doctor can correlate only a few variables at once, but such an app could consider thousands. It would be more effective and more personal than any physician.

Re-creating the processes of the brain will let us automate anything humans now do. Think about fast food. Just combine a neural controller chip that imitates the reasoning, intuitive, and mechanical-control powers of the brain with a few thousand dollars’ worth of parts, and you have a short-order bot. You’d order a burger with your phone, and then drive up to retrieve your food from a building with no humans in it. Many other commercial facilities would be similarly human free.

That may sound horrifying, given how rigid computers are today. Ever call a customer service or technical support line, only to be forced through a frustrating series of automated menus by a pleasant canned voice asking you repeatedly to “press or say 3,” at the end of which you’ve gotten nowhere? The charade creates human expectations, yet the machines frequently fail to deliver and can’t even get angry when you scream at them. Thinking machines will sense your emotions, understand your goals, and actively help you achieve them. Rather than mechanically running through a fixed set of instructions, they will adjust as circumstances change.

That’s because they’ll be modeled on our brains, which are exquisitely adapted to navigating complex environments and working with other humans. With little conscious effort, we understand language and grasp shades of meaning and mood from the subtle cues of body language, facial expression, and tone of voice. And the brain does all that while consuming astonishingly little energy.

That 1.3-kilogram lump of neural tissue you carry around in your head accounts for about 20 percent of your body’s metabolism. Thus, with an average basal metabolism of 100 watts, each of us is equipped with the biological equivalent of a 20-W supercomputer. Even today’s most powerful computers, running at 20 million W, can’t come close to matching the brain.

How does the brain do it? It’s not that neurons are so much more efficient than transistors. In fact, when it comes to moving signals around, neurons have one-tenth the efficiency. It must be the organization of those neurons and their patterns of interaction, or “algorithms.” The brain has relatively shallow but massively parallel networks. At every level, from deep inside cells to large brain regions, there are feedback loops that keep the system in balance and change it in response to activity from neighboring units. The ultimate feedback loop is through the muscles to the outside world and back through the senses.

Traditionally, neurons were viewed as units that collect thousands of inputs, transform them computationally, and then send signals downstream to other neurons via connections called synapses. But it turns out that this model is too simplistic; surprising computational power exists in every part of the system. Even a single synapse contains hundreds of different protein types having complex interactions. It’s a molecular computer in its own right.

And there are hundreds of different types of neurons, each performing a special role in the neural circuitry. Most neurons communicate through physical contact, so they grow long skinny branches to find the right partner. Signals move along these branches via a chain of amplifiers. Ion pumps keep the neuron’s cell membrane charged, like a battery. Signals travel as short sharp changes of voltage, called spikes, which ripple down the membrane.

The power of the brain goes beyond its internal connections, and includes its ability to communicate with other brains. Some animals form swarms or social groups, but only humans form deep hierarchies. This penchant, more than any unique cognitive ability, enables us to dominate the planet and construct objects of exquisite complexity. Collectively, we humans are capable of achieving truly great things.

Now we are combining machine intelligence with our own. As our systems—industrial, technological, medical—grow in sophistication and complexity, so too must the intelligence that operates them. Eventually, our tools will think for themselves, perhaps even become conscious. Some people find this a scary prospect. If our tools think for themselves, they could turn against us. What if, instead, we create machines that love us?

Steve arrives home full of dread. Inside, the place is pristinely clean. A delicious aroma wafts from the new oven. Kiri is on the back porch, working at an easel. He walks up behind her. “How was your day?”

“I made a new painting.” She steps away to show him. The canvas contains a photo-perfect rendition of the yard, in oils.

“Um, it’s nice.”

“You’re lying. I can tell from your biosensors.”

“Listen, Kiri, I have to take you back to the lab. They say you’ve progressed as far as you can with me.”

“I like it here. Please let me stay. I’ll be anything you want.”

“That’s the problem. You try so hard to please me that you haven’t found yourself.”

Water trickles down her cheek. She wipes it and studies her moist hand. “You think all this is fake.”

Steve takes Kiri in a tight embrace and holds her for a long time. He whispers, “I don’t know.”

This article appears in the June 2017 print issue as “The Dawn of the Real Thinking Machine.”

Hedge Funds Look to Machine Learning, Crowdsourcing for Competitive Advantage

Every day, financial markets and global economies produce a flood of data. As a result, stock traders now have more information about more industries and sectors than ever before. That deluge, combined with the rise of cloud technology, has inspired hedge funds to develop new quantitative strategies that they hope can generate greater returns than the experience and judgement of their own staff.

At the Future of Fintech conference hosted by research company CB Insights in New York City, three hedge fund insiders discussed the latest developments in quantitative trading. A session on Tuesday featured Christina Qi, the co-founder of a high-frequency trading firm called Domeyard LP; Jonathan Larkin, an executive from Quantopian, a hedge fund taking a data-driven systematic approach; and Andy Weissman of Union Square Ventures, a venture capital firm that has invested in an autonomous hedge fund.

Many of the world’s largest hedge funds already rely on powerful computing infrastructure and quantitative methods—whether that’s high-frequency trading, incorporating machine learning, or applying data science—to make trades. After all, human traders are full of biases, emotions, memories, and errors of judgment. Machines and data, on the other hand, can coolly examine the facts and decide the best course of action.

Deciding which technologies and quantitative methods to trust, though, is still a job for humans. There are many ways that hedge funds can use technology to create an advantage for investors. Just a few years ago, high-frequency trading was all the rage: Some firms built secret networks of microwave towers and reserved space on trans-Atlantic fiber-optic cables to edge out competitors by a few milliseconds.

Now, speed alone isn’t enough. Qi co-founded Domeyard in 2013 to execute high-frequency trades through a suite of proprietary technologies. The firm built its own feed handlers, which are systems that retrieve and organize market data from exchanges such as Nasdaq. It also developed its own order management system, the software that determines how its proprietary algorithms execute trades.

Qi says Domeyard’s system might gather 343 million data points in the opening hour of the New York Stock Exchange on any given day. The company can execute trades in just a few microseconds, and process data in mere nanoseconds.

But thanks to advances in the trading software and systems available for purchase, almost any firm can now carry out the high-speed trades that once set Domeyard apart. “It’s not about the speed anymore,” Qi said. Hedge funds must find new ways to compete.

Over the past few years, hedge funds have started to get even more creative. Some have begun to incorporate machine learning into their systems, hand over key management decisions to troves of data scientists, and even crowdsource investment strategies. If they work, these experiments could give rise to a new breed of hedge funds that rely more on code and less on humans to make decisions than ever before.

One hedge fund called Numerai pays data scientists in cryptocurrency to tweak its machine learning algorithms and improve its strategy. “The theory there is can you achieve consistent returns over time by removing human bias, and making it a math problem,” said Andy Weissman of Union Square Ventures, which has invested US $3 million in Numerai.

Not all funds will find it easy to compete on these new terms. Domeyard can’t incorporate machine learning, Qi says, because machine learning programs are generally optimized for throughput, rather than latency. “I can’t use standard machine learning techniques to trade because they’re too slow,” she said.

The third fund represented on the panel, Quantopian, provides free resources to anyone who wants to write investment algorithms based on a personal hypothesis about markets. Quantopian takes the most promising algorithms, puts money behind them, and adds them to one big fund.

“We’re tapping into this global mindshare to make something valuable for our investors,” said Larkin, chief investment officer at Quantopian.

To help the process along, the firm provides educational materials, over 50 datasets on U.S. equities and futures, a library of ready-made modules that authors can borrow to code in the Python programming language, a virtual sandbox to test their hypotheses, and support from a team of 40 in-house developers. If authors wish to incorporate machine learning into their algorithms, they can do that with Python modules such as scikit-learn.
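As an illustration of the kind of module an author might reach for (the data and features here are hypothetical, and this is not Quantopian’s actual API), scikit-learn can fit a simple linear model that predicts the next day’s return from the two previous days’ returns:

```python
import numpy as np
from sklearn.linear_model import LinearRegression

# Toy series of daily returns; features are the two previous days' returns
returns = np.array([0.01, -0.02, 0.015, 0.003, -0.01, 0.02, -0.005, 0.01])
X = np.column_stack([returns[:-2], returns[1:-1]])  # lag-2 and lag-1 returns
y = returns[2:]                                     # next-day return to predict

model = LinearRegression().fit(X, y)
signal = model.predict([[returns[-2], returns[-1]]])[0]
print("predicted next-day return:", signal)
```

A real strategy would of course validate such a model out of sample before letting it near any capital; with eight data points, this one is purely a shape-of-the-workflow sketch.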

One project, or strategy, consists of multiple algorithms written across several layers. An author’s first step is to generate a hypothesis. Then, they choose which data, instruments, and modules they will apply to test that hypothesis.

Next, the author must build what Larkin calls an “alpha,” or an expression based on the author’s hypothesis that has been tested and proven to have some degree of predictive value about market performance. “The best quantitative strategies will have a number of these,” Larkin said.

Each alpha should generate a vector, or a set of numbers, which can then be used to make trades that will align with that hypothesis. The next step, then, is to combine the alphas and add a risk management layer with safeguards to prevent the algorithms from getting carried away.
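A toy version of those two steps, combining alpha vectors and adding a simple risk layer, might look like the following. The function names, the 25 percent position cap, and the gross-exposure normalization are illustrative assumptions, not Quantopian’s actual pipeline.

```python
def combine_alphas(alpha_vectors, weights=None):
    """Blend several alpha vectors (one predictive score per asset)."""
    n = len(alpha_vectors[0])
    weights = weights or [1.0 / len(alpha_vectors)] * len(alpha_vectors)
    return [sum(w * v[i] for w, v in zip(weights, alpha_vectors))
            for i in range(n)]

def to_positions(alpha, max_weight=0.25):
    """Risk layer: cap any single score, then scale to unit gross exposure."""
    capped = [max(-max_weight, min(max_weight, a)) for a in alpha]
    gross = sum(abs(a) for a in capped) or 1.0
    return [a / gross for a in capped]

# Two toy alphas scoring four assets (positive = buy, negative = sell)
momentum    = [ 0.8, -0.2, 0.1, -0.7]
mean_revert = [-0.2,  0.4, 0.3, -0.5]
combined = combine_alphas([momentum, mean_revert])
print(to_positions(combined))
```

Capping before scaling keeps any one conviction from dominating the book, which is the spirit of the safeguards Larkin describes, even if the real risk layer is far more elaborate.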

Finally, the author fills in the final details of the system, including the timing of trades. Quantopian’s approach is admittedly much slower than Domeyard’s—the fund has a minimum trading interval of one minute.

To date, 140,000 people from 180 countries have written investment algorithms for Quantopian, and the company has put money into 25 of those projects. Its largest allocation to a single project was $10 million.

Once they’ve built a strategy, the original author retains the intellectual property for the underlying algorithms. If their approach is funded, the author receives a cut (generally 10 percent) of any profits that their strategy generates. Larkin estimates it takes at least 50 hours of work to develop a successful strategy.

Larkin wouldn’t share any information about the fund’s performance so far. But he said the idea is to blend the best data-based hypotheses from many people. “We at Quantopian believe the strongest investment vehicle is a combination of strategies, not any one individual strategy,” he said.

Larkin refers to Quantopian’s methods as data-driven systematic investing, a separate category from high-frequency trading or discretionary investing based on data science. Still, he classifies all three of these quantitative methods as distinct from the longtime approach of simply relying on a fund manager’s judgement, without any formal way to organize and filter data.

Depending on how Numerai, Quantopian, and similar experiments fare, investors could be entering a new era of finance in which they entrust their money to machines, not managers.

In the General AI Challenge, Teams Compete for $5 Million

This is a guest post. The views expressed here are solely those of the author and do not represent positions of IEEE Spectrum or the IEEE.

We owe the success of numerous state-of-the-art artificial intelligence applications to artificial neural networks. First designed decades ago, they rocketed the AI field to success quite recently, when researchers were able to run them on much more powerful hardware and feed them with huge amounts of data. Since then, the field of deep learning has been flourishing.

The effect seemed miraculous and promising. While it was hard to interpret what exactly was happening inside the networks, they started reaching human performance on a number of tasks, such as image recognition, natural language processing, and data classification in general. The promise was that we would elegantly cross the border between data processing and intelligence by pure brute force of deep artificial neural networks: Just give them all the data in the world!

However, this is easier said than done. There are limits to state-of-the-art AI that separate it from human-like intelligence:

● We humans can learn a new skill without forgetting what we have already learned.

● We can build upon what we know already. For example, if we learn language skills in one context we can reuse them to communicate any of our experiences, dreams, or completely new ideas.

● We can improve ourselves and gradually become better learners. For instance, after you learn one foreign language, learning another is usually easier, because you already possess a number of heuristics and tricks for language-learning. You can keep discovering and improving these heuristics and use them to solve new tasks. This is how we’re able to work through completely new problems.

Some of these things may sound trivial, but today’s AI algorithms are very limited in how much previous knowledge they are able to keep through each new training phase, how much they can reuse, and whether they are able to devise any universal learning strategies at all.

In practice, this means that you need to build and fine-tune a new algorithm for each new specific task—which is a form of very sophisticated data processing, rather than real intelligence.
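The forgetting problem described above can be demonstrated in a few lines. This is a deliberately minimal sketch (a logistic-regression classifier trained with plain gradient updates, on synthetic data): after the model is trained on one task and then naively trained on a second, conflicting task, its accuracy on the first collapses.

```python
import numpy as np

rng = np.random.default_rng(0)

def make_task(flip):
    """Synthetic task: label points by the sign of feature 0 (flipped for task B)."""
    X = rng.normal(size=(200, 2))
    y = (X[:, 0] > 0).astype(int)
    if flip:
        y = 1 - y  # task B uses the opposite labeling rule
    return X, y

def train(w, X, y, epochs=50, lr=0.1):
    """Plain full-batch gradient updates for logistic regression."""
    for _ in range(epochs):
        p = 1 / (1 + np.exp(-X @ w))
        w += lr * X.T @ (y - p) / len(y)
    return w

def accuracy(w, X, y):
    return float(np.mean(((X @ w) > 0).astype(int) == y))

Xa, ya = make_task(flip=False)  # task A
Xb, yb = make_task(flip=True)   # task B: conflicting rule

w = np.zeros(2)
w = train(w, Xa, ya)
acc_before = accuracy(w, Xa, ya)  # high after training on A

w = train(w, Xb, yb)              # naive sequential training on B...
acc_after = accuracy(w, Xa, ya)   # ...overwrites what was learned on A

print(f"accuracy on task A: {acc_before:.2f} before, {acc_after:.2f} after training on B")
```

A gradually learning agent, by contrast, would need some mechanism to acquire the second rule without destroying the first.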

Building true general intelligence has been a lifelong dream of Marek Rosa’s, from his days as a teenage programmer to his current career as a successful entrepreneur. Rosa, therefore, invested the wealth he made in the video game business into his own general AI R&D company in Prague: GoodAI.

Rosa recently took steps to scale up the research on general AI by founding the AI Roadmap Institute and launching the General AI Challenge. The AI Roadmap Institute is an independent entity that promotes big-picture thinking by studying and comparing R&D roadmaps towards general intelligence. It also focuses on AI safety and considers roadmaps that represent possible futures that we either want to create or want to prevent from happening.

The General AI Challenge is a citizen-science project with a US $5 million prize fund provided by Rosa. His motivation is to incentivize talent to tackle crucial research problems in human-level AI development and to speed up the search for safe and beneficial general artificial intelligence.

The $5 million will be given out as prizes in various rounds of the multi-year competition. Each round will tackle an important milestone on the way to general AI. In some rounds, participants will be tasked with designing algorithms and programming AI agents. In other rounds, they will work on theoretical problems such as AI safety or societal impacts of AI. The Challenge will address general AI as a complex phenomenon.

The Challenge kicked off on 15 February with a six-month “warm-up” round dedicated to building gradually learning AI agents. Rosa and the GoodAI team believe that the ability to learn gradually lies at the core of our intelligence. It’s what enables us to efficiently learn new skills on top of existing knowledge without forgetting what we already know and to reapply our knowledge in various situations across multiple domains. Essentially, we learn how to learn better, enabling us to readily react to new problems.

At GoodAI’s R&D lab, AI agents will learn via a carefully designed curriculum in a gradual manner. We call it “school for AI,” since the progression is similar to human schooling, from nursery till graduation. We believe this approach will provide more control over what kind of behaviors and skills the AI acquires, which is of great importance for AI safety. Essentially, the goal is to bias the AI towards behaviors and abilities that we humans find useful and that are aligned with our understanding of the world and morality.

Nailing gradual learning is not an easy task, and so the Challenge breaks the problem into phases. The first round strips the problem down to a set of simplistic tasks in a textual environment. The tasks were specifically designed to test gradual learning potential, so they can serve as guidance for the developers.

The Challenge competitors are designing AI agents that can engage in a dialog within a textual environment. The environment teaches the agents to react to text patterns in certain ways. As an agent progresses through the set of roughly 40 tasks, the tasks become harder. The final tasks are impossible to solve in a reasonable amount of time unless the agent has figured out the environment’s logic and can reuse some of the skills it acquired on previous tasks.
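The interaction loop can be sketched as follows. This is a toy illustration, not the actual Challenge API: the environment, the "reverse the input" task, and the agent’s small pool of candidate text transformations are all invented here to show the shape of reward-driven pattern learning.

```python
class ReverseEchoEnv:
    """Toy textual environment: the hidden rule is 'reply with the input reversed'."""
    def __init__(self, words):
        self.words = list(words)
        self.i = 0
    def next_input(self):
        w = self.words[self.i % len(self.words)]
        self.i += 1
        return w
    def reward(self, inp, reply):
        return 1 if reply == inp[::-1] else 0

class PatternAgent:
    """Tries simple transformations; remembers the one the environment rewards."""
    CANDIDATES = [lambda s: s, lambda s: s[::-1], lambda s: s.upper()]
    def __init__(self):
        self.best = None   # index of the transformation known to work
        self.trying = 0    # next candidate to try while still searching
    def act(self, inp):
        idx = self.best if self.best is not None else self.trying
        return self.CANDIDATES[idx](inp)
    def learn(self, reward):
        if self.best is None:
            if reward:
                self.best = self.trying  # lock in the discovered rule
            else:
                self.trying += 1         # move on to the next candidate

env = ReverseEchoEnv(["abc", "hello", "quant"])
agent = PatternAgent()
total = 0
for _ in range(6):
    inp = env.next_input()
    reply = agent.act(inp)
    r = env.reward(inp, reply)
    agent.learn(r)
    total += r
print("total reward after 6 steps:", total)  # the agent discovers the rule on step 2
```

In the real Challenge, of course, the rules are far richer and the agent must carry skills forward across dozens of increasingly hard tasks rather than rediscover them each time.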

More than 390 individuals and teams from around the world have already signed up to solve gradual learning in the first round of the General AI Challenge. (And enrollment is still open!) All participants must submit their solutions for evaluation by August 15 of this year. Then the submitted AI agents will be tested on a set of tasks which are similar, but not identical, to those provided as part of the first-round training tasks. That’s where the AI agents’ ability to solve previously unseen problems will really be tested.

We don’t yet know whether a successful solution to the Challenge’s first phase will be able to scale up to much more complex tasks and environments, where rich visual input and extra dimensions will be added. But the GoodAI team hopes that this first step will ignite new R&D efforts, spread new ideas in the community, and advance the search for more human-like algorithms.

Olga Afanasjeva is the director of the General AI Challenge and COO of GoodAI.

Shrink Apps and Wearables Can Usher In the Age of Digital Psychiatry

Zach has been having trouble at work, and when he comes home he’s exhausted, yet he struggles to sleep. Everything seems difficult, even walking—he feels like he’s made of lead. He knows something is wrong and probably should call the doctor, but that just seems like too much trouble. Maybe next week.

Meanwhile, software on his phone has detected changes in Zach, including subtle differences in the language he uses, decreased activity levels, worsening sleep, and cutbacks in social activities. Unlike Zach, the software acts quickly, pushing him to answer a customized set of questions. Since he doesn’t have to get out of bed to do so, he doesn’t mind.

Zach’s symptoms and responses suggest that he may be clinically depressed. The app offers to set up a video call with a psychiatrist, who confirms the diagnosis. Based on her expertise, Zach’s answers to the survey questions, and sensor data that suggests an unusual type of depression, the psychiatrist devises a treatment plan that includes medication, video therapy sessions, exercise, and regular check-ins with her. The app continues to monitor Zach’s behavior and helps keep his treatment on track by guiding him through exercise routines, noting whether or not he’s taking his medication, and reminding him about upcoming appointments.

While Zach isn’t a real person, everything mentioned in this scenario is feasible today and will likely become increasingly routine around the world in only a few years’ time. My prediction may come as a surprise to many in the health-care profession, for over the years there have been claims that mental health patients wouldn’t want to use technology to treat their conditions, unlike, say, those with asthma or heart disease. Some have also insisted that to be effective, all assessment and treatment must be done face to face, and that technology might frighten patients or worsen their paranoia.

However, recent research results from a number of prestigious institutions, including Harvard, the National Alliance on Mental Illness, King’s College London, and the Black Dog Institute, in Australia, refute these claims. Studies show that psychiatric patients, even those with severe illnesses like schizophrenia, can successfully manage their conditions with smartphones, computers, and wearable sensors. And these tools are just the beginning. Within a few years, a new generation of technologies promises to revolutionize the practice of psychiatry.

To understand the potential of digital psychiatry, consider how someone with depression is usually treated today.

Depression can begin so subtly that up to two-thirds of those who have it don’t even realize they’re depressed. And even if they realize something’s wrong, those who are physically disabled, elderly, living in rural areas, or suffering from additional mental illnesses like anxiety disorders may find it difficult to get to a doctor’s office.

Once a patient does see a psychiatrist or therapist, much of the initial visit will be spent reviewing the patient’s symptoms, such as sleep patterns, energy levels, appetite, and ability to focus. That too can be difficult; depression, like many other psychiatric illnesses, affects a person’s ability to think and remember.

The patient will likely leave with some handouts about exercise and a prescription for medication. There’s a fair chance that the medication won’t be effective for many weeks, if at all; that the exercise plan will be ignored; and that the patient’s progress will be bumpy. Unfortunately, the psychiatrist won’t know until a follow-up appointment sometime later.

Technology can improve this outcome, by bringing objective information into the psychiatrist’s office and allowing real-time monitoring and intervention outside the office. Instead of relying on just the patient’s recollection of his symptoms, the doctor can look at behavioral data from the person’s smartphone and wearable sensors. The psychiatrist may even recommend that the patient start using such tools before the first visit.

It’s astonishing how much useful information a doctor can glean from data that may seem to have little to do with a person’s mental condition. GPS data from a smartphone, for example, can reveal the person’s movements, which in turn reflects the person’s mental health. By correlating patients’ smartphone-derived GPS measurements with their symptoms of depression, a 2016 study by the Center for Behavioral Intervention Technologies at Northwestern University, in Chicago, found that when people are depressed they tend to stay at home more than when they’re feeling well. Similarly, someone entering a manic episode of bipolar disorder may be more active and on the move. The Monitoring, Treatment, and Prediction of Bipolar Disorder Episodes (Monarca) consortium, a partnership of European universities, has conducted numerous studies demonstrating that this kind of data can be used to predict the course of bipolar disorder.
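One simple feature a clinician’s software might derive from such GPS data is the fraction of time a person spends near home. The sketch below is a hypothetical illustration (the coordinates, the 100-meter radius, and the sampling scheme are all made up), not the Northwestern study’s actual method.

```python
import math

def haversine_m(p, q):
    """Great-circle distance in meters between two (lat, lon) points."""
    R = 6371000.0
    lat1, lon1, lat2, lon2 = map(math.radians, (*p, *q))
    dlat, dlon = lat2 - lat1, lon2 - lon1
    a = math.sin(dlat / 2) ** 2 + math.cos(lat1) * math.cos(lat2) * math.sin(dlon / 2) ** 2
    return 2 * R * math.asin(math.sqrt(a))

def home_stay_fraction(samples, home, radius_m=100.0):
    """Fraction of GPS samples falling within radius_m of the home location."""
    near = sum(1 for p in samples if haversine_m(p, home) <= radius_m)
    return near / len(samples)

home = (41.8781, -87.6298)                     # made-up home coordinates (Chicago)
day = [home] * 18 + [(41.8890, -87.6250)] * 6  # 18 samples at home, 6 away

print(home_stay_fraction(day, home))  # 0.75: three-quarters of the day at home
```

Tracked over weeks, a rising home-stay fraction is the kind of behavioral shift the study correlated with depressive symptoms.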

Where GPS is unavailable, Bluetooth and Wi-Fi can fill in. Research by Dror Ben-Zeev, of the University of Washington, in Seattle, demonstrated that Bluetooth radios on smartphones can be used to monitor the locations of people with schizophrenia within a hospital. Data collected through Wi-Fi networks could likewise reveal whether a patient who’s addicted to alcohol is avoiding bars and attending support-group meetings.

Accelerometer data from a smartphone or fitness tracker can provide more fine-grained details about a person’s movements, detect tremors that may be drug side effects, and capture exercise patterns. A test of an app called CrossCheck recently demonstrated how this kind of data, in combination with other information collected by a phone, can contribute to symptom prediction in schizophrenia by providing clues on sleep and activity patterns. A report in the American Journal of Psychiatry by Ipsit Vahia and Daniel Sewell describes how they were able to treat a patient with an especially challenging case of depression using accelerometer data. The patient had reported that he was physically active and spending little time in bed, but data from his fitness tracker showed that his recollection was faulty; the doctors thus correctly diagnosed his condition as depression rather than, say, a sleep disorder.
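A coarse activity measure of the sort described above can be computed directly from raw accelerometer samples, for instance by averaging how far the acceleration magnitude deviates from gravity. This is an illustrative sketch with made-up sample values, not the method used in CrossCheck or the case report.

```python
import math

G = 9.81  # gravitational acceleration, m/s^2

def activity_score(samples):
    """samples: list of (ax, ay, az) in m/s^2; higher score = more movement."""
    deviations = [abs(math.sqrt(ax**2 + ay**2 + az**2) - G) for ax, ay, az in samples]
    return sum(deviations) / len(deviations)

# A phone lying still reads pure gravity; a phone carried while walking
# shows the magnitude oscillating around it (values invented for illustration).
resting = [(0.0, 0.0, 9.81)] * 100
walking = [(1.5, 0.5, 9.0), (-1.0, 0.3, 10.5)] * 50

print(activity_score(resting), activity_score(walking))
```

Aggregated by hour or by day, a score like this is enough to contradict a patient’s recollection of being "physically active," as in the case Vahia and Sewell describe.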

Tracking the frequency of phone calls and text messages can suggest how social a person is and indicate any mental change. When one of Monarca’s research groups [PDF] looked at logs of incoming and outgoing text messages and phone calls, they concluded that changes in these logs could be useful for tracking depression as well as mania in bipolar disorder.
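A minimal version of such log tracking just counts daily communication events and compares recent activity against a personal baseline. The sketch below is hypothetical (the event data, the baseline window, and the 50 percent drop threshold are all invented), not the Monarca groups’ actual analysis.

```python
from collections import Counter
from datetime import date

# Made-up week of outgoing calls and texts, keyed by day of July 2017
events = [
    (date(2017, 7, d), kind)
    for d, kinds in {
        1: ["call", "text", "text"], 2: ["call", "text"], 3: ["text", "text", "call"],
        4: ["call"], 5: [], 6: ["text"], 7: [],
    }.items()
    for kind in kinds
]

daily = Counter(day for day, _ in events)
counts = [daily.get(date(2017, 7, d), 0) for d in range(1, 8)]

baseline = sum(counts[:3]) / 3   # personal baseline from the first days
recent = sum(counts[3:]) / 4     # average over the most recent days
flag = recent < 0.5 * baseline   # crude "social withdrawal" flag

print(counts, flag)
```

In this toy week, daily counts fall from around three to under one, so the withdrawal flag fires; a real system would, of course, use longer baselines and more careful statistics.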

Rigetti Launches Full-Stack Quantum Computing Service and Quantum IC Fab

Much of the ongoing quantum computing battle among tech giants such as Google and IBM has focused on developing hardware that can solve problems beyond the reach of classical computers. A Berkeley-based startup looks to beat those larger rivals with a one-two combo: a fab designed for speedy creation of better quantum circuits, and a quantum computing cloud service that provides early hands-on experience with writing and testing software.

Rigetti Computing recently unveiled its Fab-1 facility, which will enable its engineers to rapidly build new generations of quantum computing hardware based on quantum bits, or qubits. The facility can spit out entirely new designs for 3D-integrated quantum circuits within about two weeks—much faster than the months usually required for academic research teams to design and build new quantum computing chips. It’s not so much a quantum computing chip factory as it is a rapid prototyping facility for experimental designs.

“We’re fairly confident it’s the only dedicated quantum computing fab in the world,” says Andrew Bestwick, director of engineering at Rigetti Computing. “By the standards of industry, it’s still quite small and the volume is low, but it’s designed for extremely high-quality manufacturing of these quantum circuits that emphasizes speed and flexibility.”

But Rigetti is not betting on faster hardware innovation alone. It has also announced its Forest 1.0 service, which enables developers to begin writing quantum software applications and simulating them on a 30-qubit quantum virtual machine. Forest 1.0 is based on Quil—a custom instruction language for hybrid quantum/classical computing—and open-source Python tools for building and running Quil programs.
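To give a sense of what a quantum virtual machine simulates, here is a bare-bones sketch in plain NumPy (this is not Rigetti’s Forest or Quil API): it prepares a two-qubit Bell state with a Hadamard and a CNOT gate, then computes the measurement probabilities.

```python
import numpy as np

# Standard gate matrices
H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)  # Hadamard
I = np.eye(2)
CNOT = np.array([[1, 0, 0, 0],
                 [0, 1, 0, 0],
                 [0, 0, 0, 1],
                 [0, 0, 1, 0]])

state = np.zeros(4)
state[0] = 1.0                       # start in |00>
state = np.kron(H, I) @ state        # Hadamard on qubit 0: (|00> + |10>)/sqrt(2)
state = CNOT @ state                 # entangle: (|00> + |11>)/sqrt(2)

probs = np.abs(state) ** 2           # Born-rule outcome probabilities
print(dict(zip(["00", "01", "10", "11"], probs.round(3))))  # 00 and 11 each at 0.5
```

A quantum virtual machine does essentially this, at larger scale and with noise models, which is why simulating 30 qubits classically (a state vector of 2^30 amplitudes) is feasible while simulating hundreds is not.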

By signing up for the service, both quantum computing researchers and scientists in other fields will get the chance to begin practicing how to write and test applications that will run on future quantum computers. And it’s likely that Rigetti hopes such researchers from various academic labs or companies could end up becoming official customers.

“We’re a full stack quantum computing company,” says Madhav Thattai, Rigetti’s chief strategy officer. “That means we do everything from design and fabrication of quantum chips to packaging the architecture needed to control the chips, and then building the software so that people can write algorithms and program the system.”

Much still has to be done before quantum computing becomes a practical tool for researchers and companies. Rigetti’s approach to universal quantum computing uses silicon-based superconducting qubits that can take advantage of semiconductor manufacturing techniques common in today’s computer industry. That means engineers can more easily produce the larger arrays of qubits necessary to prove that quantum computing can outperform classical computing—a benchmark that has yet to be reached.

Google researchers hope to demonstrate such “quantum supremacy” over classical computing with a 49-qubit chip by the end of 2017. If they succeed, it would be an “incredibly exciting scientific achievement,” Bestwick says. Rigetti Computing is currently working on scaling up from 8-qubit chips.

But even that huge step forward in demonstrating the advantages of quantum computing would not result in a quantum computer that is a practical problem-solving tool. Many researchers believe that practical quantum computing requires systems to correct the quantum errors that can arise in fragile qubits. Error correction will almost certainly be necessary to achieve the future promise of 100-million-qubit systems that could perform tasks that are currently impractical, such as cracking modern cryptography keys.

Though practical quantum computing may seem far off, Rigetti Computing is complementing its long-term strategy with a near-term one that can serve clients long before more capable quantum computers arrive. The quantum computing cloud service is one example of that. The startup also believes a hybrid system that combines classical computing architecture with quantum computing chips can solve many practical problems in the short term, especially in the fields of machine learning and chemistry. What’s more, says Rigetti, such hybrid classical/quantum computers can perform well even without error correction.

“We’ve uncovered a whole new class of problems that can be solved by the hybrid model,” Bestwick says. “There is still a large role for classical computing to own the shell of the problem, but we can offload parts of the problem that the quantum computing resource can handle.”

There is another tall hurdle that must be overcome before we’ll be able to build the quantum computing future: There are not many people in the world qualified to build a full-stack quantum computer. But Rigetti Computing is focused on being a full-stack quantum computing company that’s attractive to talented researchers and engineers who want to work at a company that is trying to take this field beyond the academic lab to solve real-world problems.

Much of Rigetti’s strategy here revolves around its Junior Quantum Engineer Program, which helps recruit and train the next generation of quantum computing engineers. The program, says Thattai, selects some of the “best undergraduates in applied physics, engineering, and computer science” to learn how to build full-stack quantum computing in the most hands-on experience possible. It’s a way to ensure that the company continues to feed the talent pipeline for the future industry.

On the client side, Rigetti is not yet ready to name its main customers. But it did confirm that it has partnered with NASA to develop potential quantum computing applications. Venture capital firms seem impressed by the startup’s near-term and long-term strategies as well, given news earlier this year that Rigetti had raised $64 million in series A and B funding led by Andreessen Horowitz and Vy Capital.

Whether it’s clients or investors, Rigetti has sought out like-minded people who believe in the startup’s model of preparing for the quantum computing future beyond waiting on the hardware.

“Those people know that when the technology crosses the precipice of being beyond what classical computing can do, it will flip very, very quickly in one generation,” Thattai says. “The winners and losers in various industries will be decided by who took advantage of quantum computing systems early.”

How Bots Win Friends and Influence People

Every now and then sociologist Phil Howard writes messages to social media accounts accusing them of being bots. It’s like a Turing test of the state of online political propaganda. “Once in a while a human will come out and say, ‘I’m not a bot,’ and then we have a conversation,” he said at the European Conference for Science Journalists in Copenhagen on June 29.

In his academic writing, Howard calls bots “highly automated accounts.” By default, the accounts publish messages on Twitter, Facebook, or other social media sites at rates even a teenager couldn’t match. Human puppet-masters manage them, just like the Wizard of Oz, but with a wide variety of commercial aims and political repercussions. Howard and colleagues at the Oxford Internet Institute in England published a working paper [PDF] last month examining the influence of these social media bots on politics in nine countries.
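The superhuman posting rates that give many of these accounts away lend themselves to a crude detection heuristic. The sketch below is purely illustrative: the 50-posts-per-day cutoff and the account names are invented, and real detection combines many more signals than volume alone.

```python
from collections import Counter

POSTS_PER_DAY_THRESHOLD = 50  # illustrative cutoff, not a published standard

def flag_high_automation(post_log, threshold=POSTS_PER_DAY_THRESHOLD):
    """post_log: iterable of (account, day) pairs, one per observed post.
    Returns accounts that exceeded the daily posting threshold."""
    per_account_day = Counter(post_log)
    return sorted({acct for (acct, _), n in per_account_day.items() if n > threshold})

# A heavy human user posts a dozen times; an automated account posts 180 times
posts = [("@newsfan", "2017-06-01")] * 12 + [("@spambot", "2017-06-01")] * 180

print(flag_high_automation(posts))  # ['@spambot']
```

As the article notes, the more sophisticated bots deliberately stay under such rate thresholds and mimic real human networks, which is exactly what makes them hard to catch.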

“Our goal is to produce large amounts of evidence, gathered systematically, so that we can make some safe, if not conservative, generalizations about where public life is going,” Howard says. The working paper, available ahead of peer review in draft form, reports on countries with a mixture of different types of governments: Brazil, Canada, China, Germany, Poland, Russia, Taiwan, Ukraine, and the United States.

“My biggest surprise (maybe disappointment) is how it’s seemingly taken the 2016 U.S. election outcome to elevate the conversation and concerns related to this issue… because it’s not new,” says John F. Gray, co-founder of Mentionmapp, a social media analytics company in Vancouver, Canada. For years, bot companies have flooded protest movements’ hashtags with pro-government spam from Mexico [PDF] to Russia [PDF]. More sophisticated bots replicate real-life human networks and post or promote “fake news” and conspiracy theories seeking to sway voters. Indiana University researchers are building a taxonomy of social-network bots to simplify research (see “Taxonomy Goes Digital: Getting a Handle on Social Bots,” IEEE Spectrum, 9 June 2017).

Howard and colleagues have taken a social science approach: They found informants willing to provide access to programmers behind the botnets and have spent time with those programmers, getting to know their business models and motivations. One of their discoveries, Howard says, is that bot networks are “not really bought and sold: they’re rented.” That’s because the older a profile is and the more varied its activity, the easier it is to evade detection by social networks’ security teams.

Private companies, not just governments and political parties, are major botnet users, Howard adds. The big business of renting botnets to influence public conversations may encourage firms to create ever-more realistic bots. The computation for spreading propaganda via bots, Howard says, isn’t that complicated. Instead, Gray says the sophistication of botnet design, their coordination, and how they manipulate social media has been “discouragingly impressive.”

Both Howard and Gray say they are pessimistic about the ability of regulations to keep up with the fast-changing social bot-verse. Howard and his team are instead trying to examine each country’s situation and in the working paper they call for social media firms to revise their designs to promote democracy.

Gray calls it a literacy problem. Humans must get better at evaluating the source of a message to help them decide how much to believe the message itself, he says.

Even Classical Computer Users Can Now Access Secure Quantum Computing

You may not need a quantum computer of your own to securely use quantum computing in the future. For the first time, researchers have shown how even ordinary classical computer users could remotely access quantum computing resources online while keeping their quantum computations securely hidden from the quantum computer itself.

Tech giants such as Google and IBM are racing to build universal quantum computers that could someday analyze millions of possible solutions much faster than today’s most powerful classical supercomputers. Such companies have also begun offering online access to their early quantum processors as a glimpse of how anyone could tap the power of cloud-based quantum computing. Until recently, most researchers believed that there was no way for remote users to securely hide their quantum computations from prying eyes unless they too possessed quantum computers. That assumption is now being challenged by researchers in Singapore and Australia through a new paper published in the 11 July issue of the journal Physical Review X.

“Frankly, I think we are all quite surprised that this is possible,” says Joseph Fitzsimons, a theoretical physicist at the Centre for Quantum Technologies at the National University of Singapore and principal investigator on the study. “There had been a number of results showing that it was unlikely for a classical user to be able to hide [delegated quantum computations] perfectly, and I think many of us in the field had interpreted this as