Category Archives: computer software

Apple and Tesla Show Big Silicon Valley Headcount Gains Since 2016; Intel and eBay Shed Staff

The Silicon Valley Business Journal annually publishes a list of the tech companies that are the biggest local employers. In recent years, Apple and Google (now Alphabet) have vied for the number one spot, with Cisco holding a lock on number three. That hasn’t changed. But there have been wild swings in headcount among those on the list.

Looking at the 20 largest tech employers in Silicon Valley, the overall workforce as reported by the Business Journal ranges from 25,000 at Apple to 2,789 at Symantec. But what a difference a year makes. Since the Business Journal’s 2016 report, Apple hired 5,000, Tesla Motors hired 3,471 (for a total local workforce of 10,000), Facebook hired 2,586 (for a total of 9,385), and Gilead Sciences hired 1,719 (for a total of 6,949).

Among those companies shedding staff, eBay led the list, losing 3,222 employees (for a total of 2,978), followed by Intel (minus 3,000 for a total of 7,801), and Yahoo (minus 193 for a total of 3,800).

Overall, headcount gains among the top 20 far exceeded losses, with a total of 16,604 new employees at those tech firms with a growing Silicon Valley presence, compared with 3,299 fewer employees among those companies making cutbacks.

(Note: Western Digital was on the list at number 18 this year, but not in previous years, having consolidated operations to its San Jose office. The Business Journal reported the company had 3,000 local employees in 2017; it did not make the 2016 list, and the consolidation makes comparing annual totals complicated, so I didn’t include it in this discussion.)

Wouldn’t You Like Alexa Better if It Knew When It’s Annoying You?

What could your computer, phone, or other gadget do differently if it knew how you were feeling?

Rana el Kaliouby, founder and CEO of Affectiva, is considering the possibilities of such a world. Speaking at the Computer History Museum last week, el Kaliouby said that she has been working to teach computers to read human faces since 2000 as a PhD student at Cambridge University.

“I remember being stressed,” she says. “I had a paper deadline, and ‘Clippy’ [that’s Microsoft’s ill-fated computer assistant] would pop up and do a little twirl and say ‘It looks like you are writing a letter.’ I would think, ‘No I’m not!’”

(“You may,” Computer History Museum CEO John Hollar interjected, “be one of the few advanced scientists inspired by Clippy.”)

That was a piece of what led her to think about making computers more intelligent. Well, that, plus the fact that she was homesick. And the realization that, because she was spending more time with her computer than any human being, she really wanted her computer to understand her better.

Since then, she’s been using machine learning, and more recently deep learning, to teach computers to read faces, spinning Affectiva out of the MIT Media Lab in 2009 to commercialize her work. The company’s early customers are not exactly changing the world—they are mostly advertisers looking to better craft their messages. But that, she says, is just the beginning. Soon, she says, “all of our devices will have emotional intelligence”—not just our phones, but “our refrigerators, our cars.”

Early on, el Kaliouby focused on building smart tools for individuals with autism. She still thinks emotional intelligence technology—or EI—will be a huge boon to this community, potentially providing a sort of emotional hearing aid.

It’ll also be a mental healthcare aid, el Kaliouby predicts. She sees smart phones with EI as potentially able to regularly check a person’s mental state, providing early warning of depression, anxiety, or other problems. “People check their phones 15 times an hour. That’s a chance to understand that you are deviating from your baseline.”
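The baseline-deviation idea el Kaliouby describes can be sketched very simply (the signal, thresholds, and function names here are illustrative inventions, not Affectiva’s method): keep a history of some daily behavioral reading and flag any day that drifts far outside the historical norm.

```python
import statistics

def deviates_from_baseline(history, today, z_threshold=2.0):
    """Flag a reading that falls far outside the rolling baseline.

    history: past daily readings (e.g., an activity score);
    today: the latest reading. Returns True when today is more than
    z_threshold standard deviations from the historical mean.
    """
    mean = statistics.fmean(history)
    stdev = statistics.stdev(history)
    if stdev == 0:
        return today != mean
    return abs(today - mean) / stdev > z_threshold

# Three weeks of a made-up daily activity score, then a sharp drop.
baseline = [5.1, 4.8, 5.3, 5.0, 4.9, 5.2, 5.1, 4.7, 5.0, 5.3,
            4.9, 5.1, 5.2, 4.8, 5.0, 5.1, 4.9, 5.2, 5.0, 4.8, 5.1]
print(deviates_from_baseline(baseline, 5.0))  # typical day -> False
print(deviates_from_baseline(baseline, 1.5))  # sharp drop -> True
```

A real system would of course fuse many signals (sleep, language, movement) and use a clinically validated model rather than a single z-score, but the "deviating from your baseline" check is essentially this shape.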

Cars, she said, will need to have emotional intelligence as they transition to being fully automated; in the interim period, they will sometimes need to hand control back to a human driver, and need to know if the driver is ready to take control.

Smart assistants like Siri and Alexa, she says, “need to know when [they] gave you the wrong answer and you are annoyed, and say ‘I’m sorry.’”

Online education desperately needs emotional intelligence, she indicated, to give it a sense of when students are confused or engaged or frustrated or bored.

And the killer app? It just might be dating. “We have worked with teenagers who just want to have a girlfriend, but couldn’t tell if girls were interested in them,” el Kaliouby says. A little computer help reading their expressions could help with that. (Pornography and sex robots will likely be a big market as well, el Kaliouby says, but her company doesn’t plan on developing tools for this application. Nor for security, because that violates Affectiva’s policy of not tracking emotions without consent.)

While Affectiva is focusing on the face for its clues about emotions, el Kaliouby admits that the face is just part of the puzzle—gestures, tone of voice, and other factors need to be considered before computers can be completely accurate in decoding emotions.

And today’s emotional intelligence systems are still pretty dumb. “I liken the state of the technology to a toddler,” el Kaliouby says. “It can do basic emotions. But what do people look like when inspired, or jealous, or proud? I think this technology can answer these basic science questions—we’re not done.”

In the Future, Machines Will Borrow the Best Tricks from Our Brain

Steve sits up and takes in the crisp new daylight pouring through the bedroom window. He looks down at his companion, still pretending to sleep. “Okay, Kiri, I’m up.”

She stirs out of bed and begins dressing. “You received 164 messages overnight. I answered all but one.”

In the bathroom, Steve stares at his disheveled self. “Fine, give it to me.”

“Your mother wants to know why you won’t get a real girlfriend.”

He bursts out laughing. “Anything else?”

“Your cholesterol is creeping up again. And there have been 15,712 attempts to hack my mind in the last hour.”

“Good grief! Can you identify the source?”

“It’s distributed. Mostly inducements to purchase a new RF oven. I’m shifting ciphers and restricting network traffic.”

“Okay. Let me know if you start hearing voices.” Steve pauses. “Any good deals?”

“One with remote control is in our price range. It has mostly good reviews.”

“You can buy it.”

Kiri smiles. “I’ll stay in bed and cook dinner with a thought.”

Steve goes to the car and takes his seat.

Car, a creature of habit, pulls out and heads to work without any prodding.

Leaning his head back, Steve watches the world go by. Screw the news. He’ll read it later.

Car deposits Steve in front of his office building and then searches for a parking spot.

Steve walks to the lounge, grabs a roll and some coffee. His coworkers drift in and chat for hours. They try to find some inspiration for a new movie script. AI-generated art is flawless in execution, even in depth of story, but somehow it doesn’t resonate well with humans, much as one generation’s music does not always appeal to the next. AIs simply don’t share the human condition.

But maybe they could if they experienced the world through a body. That’s the whole point of the experiment with Kiri.…

It’s sci-fi now, but by midcentury we could be living in Steve and Kiri’s world. Computing, after about 70 years, is at a momentous juncture. The old approaches, based on CMOS technology and the von Neumann architecture, are reaching their fundamental limits. Meanwhile, massive efforts around the world to understand the workings of the human brain are yielding new insights into one of the greatest scientific mysteries: the biological basis of human cognition.

The dream of a thinking machine—one like Kiri that reacts, plans, and reasons like a human—is as old as the computer age. In 1950, Alan Turing proposed to test whether machines can think, by comparing their conversation with that of humans. He predicted computers would pass his test by the year 2000. Computing pioneers such as John von Neumann also set out to imitate the brain. They had only the simplest notion of neurons, based on the work of neuroscientist Santiago Ramón y Cajal and others in the late 1800s. And the dream proved elusive, full of false starts and blind alleys. Even now, we have little idea how the tangible brain gives rise to the intangible experience of conscious thought.

Today, building a better model of the brain is the goal of major government efforts such as the BRAIN Initiative in the United States and the Human Brain Project in Europe, joined by private efforts such as those of the Allen Institute for Brain Science, in Seattle. Collectively, these initiatives involve hundreds of researchers and billions of dollars.

With systematic data collection and rigorous insights into the brain, a new generation of computer pioneers hopes to create truly thinking machines.

If they succeed, they will transform the human condition, just as the Industrial Revolution did 200 years ago. For nearly all of human history, we had to grow our own food and make things by hand. The Industrial Revolution unleashed vast stores of energy, allowing us to build, farm, travel, and communicate on a whole new scale. The AI revolution will take us one enormous leap further, freeing us from the need to control every detail of operating the machines that underlie modern civilization. And as a consequence of copying the brain, we will come to understand ourselves in a deeper, truer light. Perhaps the first benefits will be in mental health, organizational behavior, or even international relations.

Such machines will also improve our health in general. Imagine a device, whether a robot or your cellphone, that keeps your medical records. Combining this personalized data with a sophisticated model of all the pathways that regulate the human body, it could simulate scenarios and recommend healthy behaviors or medical actions tailored to you. A human doctor can correlate only a few variables at once, but such an app could consider thousands. It would be more effective and more personal than any physician.

Re-creating the processes of the brain will let us automate anything humans now do. Think about fast food. Just combine a neural controller chip that imitates the reasoning, intuitive, and mechanical-control powers of the brain with a few thousand dollars’ worth of parts, and you have a short-order bot. You’d order a burger with your phone, and then drive up to retrieve your food from a building with no humans in it. Many other commercial facilities would be similarly human free.

That may sound horrifying, given how rigid computers are today. Ever call a customer service or technical support line, only to be forced through a frustrating series of automated menus by a pleasant canned voice asking you repeatedly to “press or say 3,” at the end of which you’ve gotten nowhere? The charade creates human expectations, yet the machines frequently fail to deliver and can’t even get angry when you scream at them. Thinking machines will sense your emotions, understand your goals, and actively help you achieve them. Rather than mechanically running through a fixed set of instructions, they will adjust as circumstances change.

That’s because they’ll be modeled on our brains, which are exquisitely adapted to navigating complex environments and working with other humans. With little conscious effort, we understand language and grasp shades of meaning and mood from the subtle cues of body language, facial expression, and tone of voice. And the brain does all that while consuming astonishingly little energy.

That 1.3-kilogram lump of neural tissue you carry around in your head accounts for about 20 percent of your body’s metabolism. Thus, with an average basal metabolism of 100 watts, each of us is equipped with the biological equivalent of a 20-W supercomputer. Even today’s most powerful computers, running at 20 million W, can’t come close to matching the brain.

How does the brain do it? It’s not that neurons are so much more efficient than transistors. In fact, when it comes to moving signals around, neurons have one-tenth the efficiency. It must be the organization of those neurons and their patterns of interaction, or “algorithms.” The brain has relatively shallow but massively parallel networks. At every level, from deep inside cells to large brain regions, there are feedback loops that keep the system in balance and change it in response to activity from neighboring units. The ultimate feedback loop is through the muscles to the outside world and back through the senses.

Traditionally, neurons were viewed as units that collect thousands of inputs, transform them computationally, and then send signals downstream to other neurons via connections called synapses. But it turns out that this model is too simplistic; surprising computational power exists in every part of the system. Even a single synapse contains hundreds of different protein types with complex interactions. It’s a molecular computer in its own right.

And there are hundreds of different types of neurons, each performing a special role in the neural circuitry. Most neurons communicate through physical contact, so they grow long skinny branches to find the right partner. Signals move along these branches via a chain of amplifiers. Ion pumps keep the neuron’s cell membrane charged, like a battery. Signals travel as short sharp changes of voltage, called spikes, which ripple down the membrane.

The power of the brain goes beyond its internal connections, and includes its ability to communicate with other brains. Some animals form swarms or social groups, but only humans form deep hierarchies. This penchant, more than any unique cognitive ability, enables us to dominate the planet and construct objects of exquisite complexity. Collectively, we humans are capable of achieving truly great things.

Now we are combining machine intelligence along with our own. As our systems—industrial, technological, ­medical—grow in sophistication and complexity, so too must the intelligence that operates them. Eventually, our tools will think for themselves, perhaps even become conscious. Some people find this a scary prospect. If our tools think for themselves, they could turn against us. What if, instead, we create machines that love us?

Steve arrives home full of dread. Inside, the place is pristinely clean. A delicious aroma wafts from the new oven. Kiri is on the back porch, working at an easel. He walks up behind her. “How was your day?”

“I made a new painting.” She steps away to show him. The canvas contains a photo-perfect rendition of the yard, in oils.

“Um, it’s nice.”

“You’re lying. I can tell from your biosensors.”

“Listen, Kiri, I have to take you back to the lab. They say you’ve progressed as far as you can with me.”

“I like it here. Please let me stay. I’ll be anything you want.”

“That’s the problem. You try so hard to please me that you haven’t found yourself.”

Water trickles down her cheek. She wipes it and studies her moist hand. “You think all this is fake.”

Steve takes Kiri in a tight embrace and holds her for a long time. He whispers, “I don’t know.”

This article appears in the June 2017 print issue as “The Dawn of the Real Thinking Machine.”

Hedge Funds Look to Machine Learning, Crowdsourcing for Competitive Advantage

Every day, financial markets and global economies produce a flood of data. As a result, stock traders now have more information about more industries and sectors than ever before. That deluge, combined with the rise of cloud technology, has inspired hedge funds to develop new quantitative strategies that they hope can generate greater returns than the experience and judgement of their own staff.

At the Future of Fintech conference hosted by research company CB Insights in New York City, three hedge fund insiders discussed the latest developments in quantitative trading. A session on Tuesday featured Christina Qi, the co-founder of a high-frequency trading firm called Domeyard LP; Jonathan Larkin, an executive from Quantopian, a hedge fund taking a data-driven systematic approach; and Andy Weissman of Union Square Ventures, a venture capital firm that has invested in an autonomous hedge fund.

Many of the world’s largest hedge funds already rely on powerful computing infrastructure and quantitative methods—whether that’s high-frequency trading, incorporating machine learning, or applying data science—to make trades. After all, human traders are full of biases, emotions, memories, and errors of judgment. Machines and data, on the other hand, can coolly examine the facts and decide the best course of action.

Deciding which technologies and quantitative methods to trust, though, is still a job for humans. There are many ways that hedge funds can use technology to create an advantage for investors. Just a few years ago, high-frequency trading was all the rage: Some firms built secret networks of microwave towers and reserved space on trans-Atlantic fiber optic cables to beat competitors by a few milliseconds.

Now, speed alone isn’t enough. Qi co-founded Domeyard in 2013 to execute high-frequency trades through a suite of proprietary technologies. The firm built its own feed handlers, which are systems that retrieve and organize market data from exchanges such as Nasdaq. It also developed its own order management system, which issues the instructions that determine how its proprietary algorithms make trades.

Qi says Domeyard’s system might gather 343 million data points in the opening hour of the New York Stock Exchange on any given day. The company can execute trades in just a few microseconds, and process data in mere nanoseconds.

But thanks to advances in the trading software and systems available for purchase, most any firm can now carry out the high-speed trades that once set Domeyard apart. “It’s not about the speed anymore,” Qi said. Hedge funds must find new ways to compete.

Over the past few years, hedge funds have started to get even more creative. Some have begun to incorporate machine learning into their systems, hand over key management decisions to troves of data scientists, and even crowdsource investment strategies. If they work, these experiments could give rise to a new breed of hedge funds that rely more on code and less on humans to make decisions than ever before.

One hedge fund called Numerai pays data scientists in cryptocurrency to tweak its machine learning algorithms and improve its strategy. “The theory there is can you achieve consistent returns over time by removing human bias, and making it a math problem,” said Andy Weissman of Union Square Ventures, which has invested US $3 million in Numerai.

Not all funds will find it easy to compete on these new terms. Domeyard can’t incorporate machine learning, Qi says, because machine learning programs are generally optimized for throughput, rather than latency. “I can’t use standard machine learning techniques to trade because they’re too slow,” she said.
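Qi’s throughput-versus-latency point can be illustrated in general terms (this is a generic sketch, not Domeyard’s stack, and the "model" is just a matrix multiply standing in for inference): the same 1,000 evaluations are far cheaper per inference when batched, but a trading system must answer one tick at a time and so pays the full per-call latency.

```python
import time
import numpy as np

rng = np.random.default_rng(0)
W = rng.standard_normal((64, 64))  # stand-in "model": one matrix multiply
x = rng.standard_normal(64)

# Latency path: score one tick at a time, as a trading system must.
t0 = time.perf_counter()
for _ in range(1000):
    _ = W @ x
per_call = (time.perf_counter() - t0) / 1000

# Throughput path: the same 1,000 inferences as a single batched call.
X = rng.standard_normal((1000, 64))
t0 = time.perf_counter()
_ = X @ W.T
batch_per_inference = (time.perf_counter() - t0) / 1000

print(f"one at a time: {per_call * 1e6:.2f} us per inference")
print(f"batched:       {batch_per_inference * 1e6:.2f} us per inference")
```

Batching amortizes interpreter and dispatch overhead across many inputs, which is exactly the optimization that helps an ad-ranking or vision pipeline and hurts a strategy that must react to each tick in microseconds.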

The third fund represented on the panel, Quantopian, provides free resources to anyone who wants to write investment algorithms based on a personal hypothesis about markets. Quantopian takes the most promising algorithms, puts money behind them, and adds them to one big fund.

“We’re tapping into this global mindshare to make something valuable for our investors,” said Larkin, chief investment officer at Quantopian.

To help the process along, the firm provides educational materials, over 50 datasets on U.S. equities and futures, a library of ready-made modules that authors can borrow to code in the Python programming language, a virtual sandbox to test their hypotheses, and support from a team of 40 in-house developers. If authors wish to incorporate machine learning into their algorithms, they can do that with Python modules such as scikit-learn.

One project, or strategy, consists of multiple algorithms written across several layers. An author’s first step is to generate a hypothesis. Then, they choose which data, instruments, and modules they will apply to test that hypothesis.

Next, the author must build what Larkin calls an “alpha,” or an expression based on the author’s hypothesis that has been tested and proven to have some degree of predictive value about market performance. “The best quantitative strategies will have a number of these,” Larkin said.

Each alpha should generate a vector, or a set of numbers, which can then be used to make trades that will align with that hypothesis. The next step, then, is to combine the alphas and add a risk management layer with safeguards to prevent the algorithms from getting carried away.
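The pipeline Larkin describes can be sketched roughly as follows (a toy illustration, not Quantopian’s actual API: the two alphas, the weights, and the 20 percent risk cap are all invented for the example). Each alpha maps the universe of instruments to a vector of scores; the combined, risk-capped vector then drives position sizes.

```python
import numpy as np

def momentum_alpha(prices):
    """Toy alpha: score each instrument by its return over the window."""
    return prices[-1] / prices[0] - 1.0

def mean_reversion_alpha(prices):
    """Toy alpha: score each instrument by its distance below its mean."""
    return prices.mean(axis=0) - prices[-1]

def combine_alphas(alphas, weights):
    """Weighted sum of alpha vectors, normalized to unit gross exposure."""
    combined = sum(w * a for w, a in zip(weights, alphas))
    gross = np.abs(combined).sum()
    return combined / gross if gross > 0 else combined

def risk_cap(positions, max_weight=0.2):
    """Risk layer: clip any single position to 20% of the book."""
    return np.clip(positions, -max_weight, max_weight)

# Five days of made-up prices for four instruments (rows = days).
prices = np.array([[10.0, 20.0, 30.0, 40.0],
                   [10.5, 19.0, 30.3, 41.0],
                   [11.0, 18.5, 30.1, 42.0],
                   [11.2, 18.0, 30.2, 43.0],
                   [11.5, 17.5, 30.0, 44.0]])

a1 = momentum_alpha(prices)
a2 = mean_reversion_alpha(prices)
positions = risk_cap(combine_alphas([a1, a2], [0.5, 0.5]))
print(positions)
```

A production strategy would add the layers the article goes on to mention, such as trade timing and execution, but the hypothesis-to-alpha-to-portfolio flow is the same shape.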

Finally, the author fills in the final details of the system, including the timing of trades. Quantopian’s approach is admittedly much slower than Domeyard’s—the fund has a minimum trading interval of one minute.

To date, 140,000 people from 180 countries have written investment algorithms for Quantopian, and the company has put money into 25 of those projects. Its largest allocation to a single project was $10 million.

Once they’ve built a strategy, the original author retains the intellectual property for the underlying algorithms. If their approach is funded, the author receives a cut (generally 10 percent) of any profits that their strategy generates. Larkin estimates it takes at least 50 hours of work to develop a successful strategy.

Larkin wouldn’t share any information about the fund’s performance so far. But he said the idea is to blend the best data-based hypotheses from many people. “We at Quantopian believe the strongest investment vehicle is a combination of strategies, not any one individual strategy,” he said.

Larkin refers to Quantopian’s methods as data-driven systematic investing, a separate category from high-frequency trading or discretionary investing based on data science. Still, he classifies all three of these quantitative methods as distinct from the longtime approach of simply relying on a fund manager’s judgement, without any formal way to organize and filter data.

Depending on how Numerai, Quantopian, and similar experiments fare, investors could be entering a new era of finance in which they entrust their money to machines, not managers.

In General AI Challenge, Teams Compete for $5 Million

This is a guest post. The views expressed here are solely those of the author and do not represent positions of IEEE Spectrum or the IEEE.

We owe the success of numerous state-of-the-art artificial intelligence applications to artificial neural networks. First designed decades ago, they rocketed the AI field to success quite recently, when researchers were able to run them on much more powerful hardware and feed them with huge amounts of data. Since then, the field of deep learning has been flourishing.

The effect seemed miraculous and promising. While it was hard to interpret what exactly was happening inside the networks, they started reaching human performance on a number of tasks, such as image recognition, natural language processing, and data classification in general. The promise was that we would elegantly cross the border between data processing and intelligence by pure brute force of deep artificial neural networks: Just give it all the data in the world!

However, this is easier said than done. There are limits to state-of-the-art AI that separate it from human-like intelligence:

● We humans can learn a new skill without forgetting what we have already learned.

● We can build upon what we know already. For example, if we learn language skills in one context we can reuse them to communicate any of our experiences, dreams, or completely new ideas.

● We can improve ourselves and gradually become better learners. For instance, after you learn one foreign language, learning another is usually easier, because you already possess a number of heuristics and tricks for language-learning. You can keep discovering and improving these heuristics and use them to solve new tasks. This is how we’re able to work through completely new problems.

Some of these things may sound trivial, but today’s AI algorithms are very limited in how much previous knowledge they are able to keep through each new training phase, how much they can reuse, and whether they are able to devise any universal learning strategies at all.

In practice, this means that you need to build and fine-tune a new algorithm for each new specific task—which is a form of very sophisticated data processing, rather than real intelligence.

To build a true general intelligence has been a lifelong dream of Marek Rosa, from his days as a teenage programmer until now, when he’s a successful entrepreneur. Rosa, therefore, invested the wealth he made in the video game business into his own general AI R&D company in Prague: GoodAI.

Rosa recently took steps to scale up the research on general AI by founding the AI Roadmap Institute and launching the General AI Challenge. The AI Roadmap Institute is an independent entity that promotes big-picture thinking by studying and comparing R&D roadmaps towards general intelligence. It also focuses on AI safety and considers roadmaps that represent possible futures that we either want to create or want to prevent from happening.

The General AI Challenge is a citizen-science project with a US $5 million prize fund provided by Rosa. His motivation is to incentivize talent to tackle crucial research problems in human-level AI development and to speed up the search for safe and beneficial general artificial intelligence.

The $5 million will be given out as prizes in various rounds of the multi-year competition. Each round will tackle an important milestone on the way to general AI. In some rounds, participants will be tasked with designing algorithms and programming AI agents. In other rounds, they will work on theoretical problems such as AI safety or societal impacts of AI. The Challenge will address general AI as a complex phenomenon.

The Challenge kicked off on 15 February with a six-month “warm-up” round dedicated to building gradually learning AI agents. Rosa and the GoodAI team believe that the ability to learn gradually lies at the core of our intelligence. It’s what enables us to efficiently learn new skills on top of existing knowledge without forgetting what we already know and to reapply our knowledge in various situations across multiple domains. Essentially, we learn how to learn better, enabling us to readily react to new problems.

At GoodAI’s R&D lab, AI agents will learn via a carefully designed curriculum in a gradual manner. We call it “school for AI,” since the progression is similar to human schooling, from nursery till graduation. We believe this approach will provide more control over what kind of behaviors and skills the AI acquires, which is of great importance for AI safety. Essentially, the goal is to bias the AI towards behaviors and abilities that we humans find useful and that are aligned with our understanding of the world and morality.

Nailing gradual learning is not an easy task, and so the Challenge breaks the problem into phases. The first round strips the problem down to a set of simplistic tasks in a textual environment. The tasks were specifically designed to test gradual learning potential, so they can serve as guidance for the developers.

The Challenge competitors are designing AI agents that can engage in a dialog within a textual environment. The environment will be teaching the agents to react to text patterns in a certain way. As an agent progresses through the set of roughly 40 tasks, the tasks become harder. The final tasks are impossible to solve in a reasonable amount of time unless the agent has figured out the environment’s logic and can reuse some of the skills it acquired on previous tasks.
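In spirit, the first-round setup is a read–respond–reward loop between an environment and an agent. A minimal sketch of that loop (the class and method names are invented for illustration, not the Challenge’s actual interface):

```python
class EchoTask:
    """Toy task in the spirit of the Challenge's textual environment:
    the agent earns reward for echoing back the pattern it is shown."""

    def __init__(self, patterns):
        self.patterns = patterns
        self.i = 0

    def output(self):
        return self.patterns[self.i % len(self.patterns)]

    def reward(self, answer):
        correct = answer == self.patterns[self.i % len(self.patterns)]
        self.i += 1
        return 1 if correct else -1

class EchoAgent:
    """Trivial agent that has already learned the rule 'repeat what you see'."""

    def respond(self, text):
        return text

def run(task, agent, steps):
    """Drive the environment-agent loop and accumulate reward."""
    total = 0
    for _ in range(steps):
        seen = task.output()
        total += task.reward(agent.respond(seen))
    return total

task = EchoTask(["abc", "de", "fgh"])
print(run(task, EchoAgent(), 6))  # a perfect agent earns +1 per step: 6
```

The actual Challenge tasks are far harder than echoing, of course: the point of the curriculum is that later tasks are only tractable for agents that carry skills forward from earlier ones, rather than starting from scratch each time.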

More than 390 individuals and teams from around the world have already signed up to solve gradual learning in the first round of the General AI Challenge. (And enrollment is still open!) All participants must submit their solutions for evaluation by August 15 of this year. Then the submitted AI agents will be tested on a set of tasks which are similar, but not identical, to those provided as part of the first-round training tasks. That’s where the AI agents’ ability to solve previously unseen problems will really be tested.

We don’t yet know whether a successful solution to the Challenge’s first phase will be able to scale up to much more complex tasks and environments, where rich visual input and extra dimensions will be added. But the GoodAI team hopes that this first step will ignite new R&D efforts, spread new ideas in the community, and advance the search for more human-like algorithms.

Olga Afanasjeva is the director of the General AI Challenge and COO of GoodAI.

Shrink Apps and Wearables Can Usher In the Age of Digital Psychiatry

Zach has been having trouble at work, and when he comes home he’s exhausted, yet he struggles to sleep. Everything seems difficult, even walking—he feels like he’s made of lead. He knows something is wrong and probably should call the doctor, but that just seems like too much trouble. Maybe next week.

Meanwhile, software on his phone has detected changes in Zach, including subtle differences in the language he uses, decreased activity levels, worsening sleep, and cutbacks in social activities. Unlike Zach, the software acts quickly, pushing him to answer a customized set of questions. Since he doesn’t have to get out of bed to do so, he doesn’t mind.

Zach’s symptoms and responses suggest that he may be clinically depressed. The app offers to set up a video call with a psychiatrist, who confirms the diagnosis. Based on her expertise, Zach’s answers to the survey questions, and sensor data that suggests an unusual type of depression, the psychiatrist devises a treatment plan that includes medication, video therapy sessions, exercise, and regular check-ins with her. The app continues to monitor Zach’s behavior and helps keep his treatment on track by guiding him through exercise routines, noting whether or not he’s taking his medication, and reminding him about upcoming appointments.

While Zach isn’t a real person, everything mentioned in this scenario is feasible today and will likely become increasingly routine around the world in only a few years’ time. My prediction may come as a surprise to many in the health-care profession, for over the years there have been claims that mental health patients wouldn’t want to use technology to treat their conditions, unlike, say, those with asthma or heart disease. Some have also insisted that to be effective, all assessment and treatment must be done face to face, and that technology might frighten patients or worsen their paranoia.

However, recent research results from a number of prestigious institutions, including Harvard, the National Alliance on Mental Illness, King’s College London, and the Black Dog Institute, in Australia, refute these claims. Studies show that psychiatric patients, even those with severe illnesses like schizophrenia, can successfully manage their conditions with smartphones, computers, and wearable sensors. And these tools are just the beginning. Within a few years, a new generation of technologies promises to revolutionize the practice of psychiatry.

To understand the potential of digital psychiatry, consider how someone with depression is usually treated today.

Depression can begin so subtly that up to two-thirds of those who have it don’t even realize they’re depressed. And even if they realize something’s wrong, those who are physically disabled, elderly, living in rural areas, or suffering from additional mental illnesses like anxiety disorders may find it difficult to get to a doctor’s office.

Once a patient does see a psychiatrist or therapist, much of the initial visit will be spent reviewing the patient’s symptoms, such as sleep patterns, energy levels, appetite, and ability to focus. That too can be difficult; depression, like many other psychiatric illnesses, affects a person’s ability to think and remember.

The patient will likely leave with some handouts about exercise and a prescription for medication. There's a fair chance that the medication won't be effective, at least not for many weeks; that the exercise plan will be ignored; and that the patient's progress will be bumpy. Unfortunately, the psychiatrist won't know until a follow-up appointment sometime later.

Technology can improve this outcome, by bringing objective information into the psychiatrist’s office and allowing real-time monitoring and intervention outside the office. Instead of relying on just the patient’s recollection of his symptoms, the doctor can look at behavioral data from the person’s smartphone and wearable sensors. The psychiatrist may even recommend that the patient start using such tools before the first visit.

It’s astonishing how much useful information a doctor can glean from data that may seem to have little to do with a person’s mental condition. GPS data from a smartphone, for example, can reveal the person’s movements, which in turn reflect the person’s mental state. By correlating patients’ smartphone-derived GPS measurements with their symptoms of depression, a 2016 study by the Center for Behavioral Intervention Technologies at Northwestern University, in Chicago, found that when people are depressed they tend to stay at home more than when they’re feeling well. Similarly, someone entering a manic episode of bipolar disorder may be more active and on the move. The Monitoring, Treatment, and Prediction of Bipolar Disorder Episodes (Monarca) consortium, a partnership of European universities, has conducted numerous studies demonstrating that this kind of data can be used to predict the course of bipolar disorder.
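As a concrete, deliberately simplified illustration of the kind of feature such studies compute, the sketch below derives a "homestay" fraction from a day of hypothetical GPS samples. The coordinates, radius, and function names here are assumptions for illustration, not the Northwestern study's actual pipeline.

```python
import math

def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance in meters between two lat/lon points."""
    r = 6_371_000  # Earth radius in meters
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp = math.radians(lat2 - lat1)
    dl = math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

def homestay_fraction(samples, home, radius_m=100):
    """Fraction of GPS samples falling within radius_m of the home coordinate.

    samples: list of (lat, lon) tuples; home: (lat, lon).
    """
    if not samples:
        return 0.0
    at_home = sum(
        1 for lat, lon in samples
        if haversine_m(lat, lon, home[0], home[1]) <= radius_m
    )
    return at_home / len(samples)

# Hypothetical day of samples: mostly at home, one short trip out.
home = (41.8781, -87.6298)
day = [home] * 20 + [(41.8902, -87.6232)] * 4
print(round(homestay_fraction(day, home), 2))  # 0.83
```

A rising homestay fraction over successive days is the sort of behavioral signal the study correlated with self-reported depression symptoms.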

Where GPS is unavailable, Bluetooth and Wi-Fi can fill in. Research by Dror Ben-Zeev, of the University of Washington, in Seattle, demonstrated that Bluetooth radios on smartphones can be used to monitor the locations of people with schizophrenia within a hospital. Data collected through Wi-Fi networks could likewise reveal whether a patient who’s addicted to alcohol is avoiding bars and attending support-group meetings.

Accelerometer data from a smartphone or fitness tracker can provide more fine-grained details about a person’s movements, detect tremors that may be drug side effects, and capture exercise patterns. A test of an app called CrossCheck recently demonstrated how this kind of data, in combination with other information collected by a phone, can contribute to symptom prediction in schizophrenia by providing clues on sleep and activity patterns. A report in the American Journal of Psychiatry by Ipsit Vahia and Daniel Sewell describes how they were able to treat a patient with an especially challenging case of depression using accelerometer data. The patient had reported that he was physically active and spending little time in bed, but data from his fitness tracker showed that his recollection was faulty; the doctors thus correctly diagnosed his condition as depression rather than, say, a sleep disorder.

Tracking the frequency of phone calls and text messages can suggest how social a person is and indicate any mental change. When one of Monarca’s research groups [PDF] looked at logs of incoming and outgoing text messages and phone calls, they concluded that changes in these logs could be useful for tracking depression as well as mania in bipolar disorder.
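A minimal sketch of how such log-based tracking might work, using hypothetical daily counts of outgoing calls and texts. This is an illustrative z-score heuristic, not Monarca's published method:

```python
from statistics import mean, stdev

def flag_changes(daily_counts, window=7, z_thresh=2.0):
    """Flag days whose outgoing call-plus-text count deviates sharply
    from the trailing window's baseline (illustrative heuristic only)."""
    flags = []
    for i in range(window, len(daily_counts)):
        baseline = daily_counts[i - window:i]
        mu, sd = mean(baseline), stdev(baseline)
        if sd == 0:
            sd = 1.0  # avoid division by zero on perfectly flat baselines
        z = (daily_counts[i] - mu) / sd
        if abs(z) >= z_thresh:
            flags.append((i, round(z, 1)))
    return flags

# Hypothetical log: steady social contact, then a sudden drop on day 8.
counts = [12, 14, 11, 13, 12, 15, 13, 12, 3]
print(flag_changes(counts))  # [(8, -7.3)]
```

A sharp negative deviation like this might prompt a check-in; a sustained positive one could suggest emerging mania.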

Rigetti Launches Full-Stack Quantum Computing Service and Quantum IC Fab

Much of the ongoing quantum computing battle among tech giants such as Google and IBM has focused on developing the hardware needed to solve problems that are impossible for classical computers. A Berkeley-based startup aims to beat those larger rivals with a one-two combo: a fab designed for speedy creation of better quantum circuits, and a quantum computing cloud service that provides early hands-on experience with writing and testing software.

Rigetti Computing recently unveiled its Fab-1 facility, which will enable its engineers to rapidly build new generations of quantum computing hardware based on quantum bits, or qubits. The facility can spit out entirely new designs for 3D-integrated quantum circuits within about two weeks—much faster than the months usually required for academic research teams to design and build new quantum computing chips. It’s not so much a quantum computing chip factory as it is a rapid prototyping facility for experimental designs.

“We’re fairly confident it’s the only dedicated quantum computing fab in the world,” says Andrew Bestwick, director of engineering at Rigetti Computing. “By the standards of industry, it’s still quite small and the volume is low, but it’s designed for extremely high-quality manufacturing of these quantum circuits that emphasizes speed and flexibility.”

But Rigetti is not betting on faster hardware innovation alone. It has also announced its Forest 1.0 service that enables developers to begin writing quantum software applications and simulating them on a 30-qubit quantum virtual machine. Forest 1.0 is based on Quil—a custom instruction language for hybrid quantum/classical computing—and open-source python tools intended for building and running Quil programs.
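A quantum virtual machine like the one behind Forest simulates a statevector, applying gates to amplitudes rather than to physical qubits. As a rough from-scratch illustration of what such a simulator does (plain Python here, not Rigetti's actual Quil or pyQuil tooling), this sketch prepares a two-qubit Bell state:

```python
import math

def apply_gate(state, gate, targets, n):
    """Apply a gate (matrix as nested lists) to target qubits of an
    n-qubit statevector, by brute-force basis-state enumeration."""
    new = [0j] * (1 << n)
    k = len(targets)
    for i, amp in enumerate(state):
        if amp == 0:
            continue
        # Extract the bits of the target qubits within basis state i.
        sub = 0
        for pos, q in enumerate(targets):
            sub |= ((i >> q) & 1) << pos
        for out in range(1 << k):
            coeff = gate[out][sub]
            if coeff == 0:
                continue
            # Write the output bits back into the basis index.
            j = i
            for pos, q in enumerate(targets):
                bit = (out >> pos) & 1
                j = (j & ~(1 << q)) | (bit << q)
            new[j] += coeff * amp
    return new

H = [[1 / math.sqrt(2), 1 / math.sqrt(2)],
     [1 / math.sqrt(2), -1 / math.sqrt(2)]]
# CNOT with qubit 0 as control (low bit of the index) and qubit 1 as target.
CNOT = [[1, 0, 0, 0],
        [0, 0, 0, 1],
        [0, 0, 1, 0],
        [0, 1, 0, 0]]

# Bell state: H on qubit 0, then CNOT controlled on qubit 0.
state = [1 + 0j, 0, 0, 0]  # |00>
state = apply_gate(state, H, [0], 2)
state = apply_gate(state, CNOT, [0, 1], 2)
probs = [round(abs(a) ** 2, 3) for a in state]
print(probs)  # [0.5, 0.0, 0.0, 0.5]
```

The 50-50 outcome probabilities on |00⟩ and |11⟩ show the entanglement; a 30-qubit virtual machine does the same bookkeeping over a billion-element statevector, which is why classical simulation runs out of road so quickly.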

By signing up for the service, both quantum computing researchers and scientists in other fields will get the chance to begin practicing how to write and test applications that will run on future quantum computers. And it’s likely that Rigetti hopes such researchers from various academic labs or companies could end up becoming official customers.

“We’re a full stack quantum computing company,” says Madhav Thattai, Rigetti’s chief strategy officer. “That means we do everything from design and fabrication of quantum chips to packaging the architecture needed to control the chips, and then building the software so that people can write algorithms and program the system.”

Much still has to be done before quantum computing becomes a practical tool for researchers and companies. Rigetti’s approach to universal quantum computing uses silicon-based superconducting qubits that can take advantage of semiconductor manufacturing techniques common in today’s computer industry. That means engineers can more easily produce the larger arrays of qubits necessary to prove that quantum computing can outperform classical computing—a benchmark that has yet to be reached.

Google researchers hope to demonstrate such “quantum supremacy” over classical computing with a 49-qubit chip by the end of 2017. If they succeed, it would be an “incredibly exciting scientific achievement,” Bestwick says. Rigetti Computing is currently working on scaling up from 8-qubit chips.

But even that huge step forward in demonstrating the advantages of quantum computing would not result in a quantum computer that is a practical problem-solving tool. Many researchers believe that practical quantum computing requires systems to correct the quantum errors that can arise in fragile qubits. Error correction will almost certainly be necessary to achieve the future promise of 100-million-qubit systems that could perform tasks that are currently impractical, such as cracking modern cryptography keys.

Though it may seem like quantum computing demands far-off focus, Rigetti Computing is complementing its long-term strategy with a near-term strategy that can serve clients long before more capable quantum computers arise. The quantum computing cloud service is one example of that. The startup also believes a hybrid system that combines classical computing architecture with quantum computing chips can solve many practical problems in the short term, especially in the fields of machine learning and chemistry. What’s more, says Rigetti, such hybrid classical/quantum computers can perform well even without error correction.

“We’ve uncovered a whole new class of problems that can be solved by the hybrid model,” Bestwick says. “There is still a large role for classical computing to own the shell of the problem, but we can offload parts of the problem that the quantum computing resource can handle.”

There is another tall hurdle that must be overcome before we’ll be able to build the quantum computing future: There are not many people in the world qualified to build a full-stack quantum computer. But Rigetti Computing is focused on being a full-stack quantum computing company that’s attractive to talented researchers and engineers who want to work at a company that is trying to take this field beyond the academic lab to solve real-world problems.

Much of Rigetti’s strategy here revolves around its Junior Quantum Engineer Program, which helps recruit and train the next generation of quantum computing engineers. The program, says Thattai, selects some of the “best undergraduates in applied physics, engineering, and computer science” to learn how to build full-stack quantum computing in the most hands-on experience possible. It’s a way to ensure that the company continues to feed the talent pipeline for the future industry.

On the client side, Rigetti is not yet ready to name its main customers. But it did confirm that it has partnered with NASA to develop potential quantum computing applications. Venture capital firms seem impressed by the startup’s near-term and long-term strategies as well, given news earlier this year that Rigetti had raised $64 million in series A and B funding led by Andreessen Horowitz and Vy Capital.

Whether it’s clients or investors, Rigetti has sought out like-minded people who believe in the startup’s model of preparing for the quantum computing future rather than simply waiting on the hardware.

“Those people know that when the technology crosses the precipice of being beyond what classical computing can do, it will flip very, very quickly in one generation,” Thattai says. “The winners and losers in various industries will be decided by who took advantage of quantum computing systems early.”

How Bots Win Friends and Influence People

Every now and then sociologist Phil Howard writes messages to social media accounts accusing them of being bots. It’s like a Turing test of the state of online political propaganda. “Once in a while a human will come out and say, ‘I’m not a bot,’ and then we have a conversation,” he said at the European Conference for Science Journalists in Copenhagen on June 29.

In his academic writing, Howard calls bots “highly automated accounts.” By default, the accounts publish messages on Twitter, Facebook, or other social media sites at rates even a teenager couldn’t match. Human puppet-masters manage them, just like the Wizard of Oz, but with a wide variety of commercial aims and political repercussions. Howard and colleagues at the Oxford Internet Institute in England published a working paper [PDF] last month examining the influence of these social media bots on politics in nine countries.
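A crude version of the posting-rate screen behind the "highly automated accounts" label can be sketched as follows. The 50-posts-per-day threshold and the sample data are illustrative assumptions, not a definitive detector, and real studies combine many more signals:

```python
from datetime import datetime, timedelta

def avg_posts_per_day(timestamps):
    """Average daily posting rate over the observed span (minimum one day)."""
    if not timestamps:
        return 0.0
    span_days = max((max(timestamps) - min(timestamps)).total_seconds() / 86400, 1.0)
    return len(timestamps) / span_days

def likely_automated(timestamps, threshold=50.0):
    """Crude flag: sustained posting above `threshold` posts per day."""
    return avg_posts_per_day(timestamps) >= threshold

# Hypothetical account: a post every 24 minutes, around the clock, for
# ten days. That works out to roughly 60 posts per day, so it's flagged.
start = datetime(2017, 6, 1)
posts = [start + timedelta(minutes=24 * i) for i in range(600)]
print(likely_automated(posts))  # True
```

The weakness of any single-threshold test is exactly what the article goes on to describe: more sophisticated bots pace themselves to mimic human posting rhythms.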

“Our goal is to produce large amounts of evidence, gathered systematically, so that we can make some safe, if not conservative, generalizations about where public life is going,” Howard says. The working paper, available ahead of peer-review in draft form, reports on countries with a mixture of different types of governments: Brazil, Canada, China, Germany, Poland, Russia, Taiwan, Ukraine, and the United States.

“My biggest surprise (maybe disappointment) is how it’s seemingly taken the 2016 U.S. election outcome to elevate the conversation and concerns related to this issue… because it’s not new,” says John F. Gray, co-founder of Mentionmapp, a social media analytics company in Vancouver, Canada. For years, bot companies have flooded protest movements’ hashtags with pro-government spam from Mexico [PDF] to Russia [PDF]. More sophisticated bots replicate real-life human networks and post or promote “fake news” and conspiracy theories seeking to sway voters. Indiana University researchers are building a taxonomy of social-network bots to simplify research (see “Taxonomy Goes Digital: Getting a Handle on Social Bots”, IEEE Spectrum, 9 June 2017).

Howard and colleagues have taken a social science approach: They found informants willing to provide access to programmers behind the botnets and have spent time with those programmers, getting to know their business models and motivations. One of their discoveries, Howard says, is that bot networks are “not really bought and sold: they’re rented.” That’s because the older a profile is and the more varied its activity, the easier it is to evade detection by social networks’ security teams.

Private companies, not just governments and political parties, are major botnet users, Howard adds. The big business of renting botnets to influence public conversations may encourage firms to create ever-more realistic bots. The computation for spreading propaganda via bots, Howard says, isn’t that complicated. Instead, Gray says the sophistication of botnet design, their coordination, and how they manipulate social media has been “discouragingly impressive.”

Both Howard and Gray say they are pessimistic about the ability of regulations to keep up with the fast-changing social bot-verse. Howard and his team are instead trying to examine each country’s situation and in the working paper they call for social media firms to revise their designs to promote democracy.

Gray calls it a literacy problem. Humans must get better at evaluating the source of a message to help them decide how much to believe the message itself, he says.

Even Ordinary Computer Users Can Now Access Quantum Computing in Secret

You may not need a quantum computer of your own to securely use quantum computing in the future. For the first time, researchers have shown how even ordinary classical computer users could remotely access quantum computing resources online while keeping their quantum computations securely hidden from the quantum computer itself.

Tech giants such as Google and IBM are racing to build universal quantum computers that could someday analyze millions of possible solutions much faster than today’s most powerful classical supercomputers. Such companies have also begun offering online access to their early quantum processors as a glimpse of how anyone could tap the power of cloud-based quantum computing. Until recently, most researchers believed that there was no way for remote users to securely hide their quantum computations from prying eyes unless they too possessed quantum computers. That assumption is now being challenged by researchers in Singapore and Australia through a new paper published in the 11 July issue of the journal Physical Review X.

“Frankly, I think we are all quite surprised that this is possible,” says Joseph Fitzsimons, a theoretical physicist for the Centre for Quantum Technologies at the National University of Singapore and principal investigator on the study. “There had been a number of results showing that it was unlikely for a classical user to be able to hide [delegated quantum computations] perfectly, and I think many of us in the field had interpreted this as

In FutureLearn MOOCs, Conversation Powers Learning at Massive Scale

“Personalized learning” is one of the hottest trends in education these days. The idea is to create software that tracks the progress of each student and then adapts the content, pace of instruction, and assessment to the individual’s performance. These systems succeed by providing immediate feedback that addresses the student’s misunderstandings and offers additional instruction and materials.
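The core loop such systems implement can be sketched in a few lines: estimate mastery per topic from graded responses, then serve the topic the learner currently looks weakest on. The class below is a made-up toy for illustration, not any product's actual algorithm:

```python
class AdaptiveTutor:
    """Toy mastery tracker: serves items from the weakest topic and
    updates a per-topic mastery estimate from graded responses.
    (Illustrative sketch only; real systems model far more.)"""

    def __init__(self, topics, alpha=0.3):
        self.mastery = {t: 0.0 for t in topics}  # 0 = unknown, 1 = mastered
        self.alpha = alpha  # learning rate for the moving average

    def next_topic(self):
        # Pick the topic the learner currently looks weakest on.
        return min(self.mastery, key=self.mastery.get)

    def record(self, topic, correct):
        # Exponential moving average toward 1 (correct) or 0 (incorrect).
        old = self.mastery[topic]
        self.mastery[topic] = old + self.alpha * ((1.0 if correct else 0.0) - old)

tutor = AdaptiveTutor(["fractions", "decimals"])
tutor.record("fractions", True)
tutor.record("fractions", True)
print(tutor.next_topic())  # decimals: still untouched, so lowest mastery
```

Even this toy shows the structural demand the article describes next: the approach only works if the subject decomposes into discrete, independently assessable topics.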

The Bill & Melinda Gates Foundation has reportedly spent more than US $300 million on personalized learning R&D, while the Chan Zuckerberg Initiative—the investment and philanthropic company created by Facebook CEO Mark Zuckerberg and his wife, Priscilla Chan—has also signalled its commitment to personalized learning (which Zuckerberg announced on Facebook, of course). Just last month, the two groups teamed up for the first time to jointly fund a $12 million program to promote personalized classroom instruction.

But personalized learning is hard to do. It requires breaking down a topic into its component parts in order to create different pathways through the material. It can be done, with difficulty, for well-structured and well-established topics, such as algebra and computer programming. But it really can’t be done for subjects that don’t form neat chunks, such as economics or psychology, nor for still-evolving areas, such as cybersecurity.

What’s more, this latest wave of personalized learning may have the unintended consequence of isolating students because it ignores the biggest advance in education of the past 50 years: learning through cooperation and conversation. It’s ironic that the inventor of the world’s leading social media platform is promoting education that’s the opposite of social.

Interestingly, one early proponent of personalized learning had a far more expansive view. In the 1960s, Gordon Pask, a deeply eccentric British scientist who pioneered the application of cybernetics to entertainment, architecture, and education, co-invented the first commercial adaptive teaching machine, which trained typists in keyboard skills and adjusted the training to their personal characteristics. A decade later, Pask extended personalized learning into a grand unified theory of learning as conversation.

For the layperson and even for a lot of experts, Pask’s Conversation Theory is impenetrable. But for those who manage to grasp it, it’s quite exciting. In essence, it explains how language-using systems, including people and artificial intelligences, can come to know things through well-structured conversation. He proposed that all human learning involves conversation. We converse with ourselves when we relate new experience to what we already know. We converse with teachers when we respond to their questions and they correct our misunderstandings. We converse with other learners to reach agreement.

This is more than an abstract theory of learning. It is a blueprint for designing educational technology. Pask himself developed teaching machines that conversed with students in a formalized language, represented as dynamic maps of interconnected concepts. He also introduced conversational teaching methods, such as Teachback, where the student explains to the teacher what has just been taught.

Pask’s theory still has relevance today. I know, because for the past four years, I’ve helped develop a new MOOC (Massive Open Online Course) platform based on his ideas. The platform is operated by FutureLearn, a company owned by The Open University, the UK’s 48-year-old public distance learning and research university.

As Academic Lead for FutureLearn, I was determined not to copy existing MOOC platforms, which primarily focus on delivering lectures at a distance. Instead, we designed FutureLearn for learning as conversation, and in such a way that learning would improve with scale, so that the more people who signed up, the better the learning experience would be.

Every course involves conversation as a core element. Each teaching step, whether video, text, or interactive exercise, has a flow of comments, questions, and replies from learners running alongside it. The steps make careful use of questions to prompt responses: What was the most important thing you learned from the video? Can you give an example from your own experience?

There are also dedicated discussions, in which learners reflect on the week’s activity, describe how they performed on assessments, or answer an open-ended question about the course. And online study groups allow learners to work together on a task and discuss their learning goals.

Even student assessment has a conversational component. Learners write short structured reviews of other students’ assignments, and in return they receive reviews of their assignments from their peers. Quizzes and tests are marked by computer, but the results come with pre-written responses from the educator.

When we began designing FutureLearn, previous research suggested that students don’t like to collaborate and converse online. Other online learning platforms that provide forums to discuss a course find these features are generally not well used. But that may be because these features are peripheral, whereas we put conversation at the heart of learning.

From the start, the conversations took off. In June 2015, the British Council ran the largest MOOC ever, a course on preparing for the IELTS English language proficiency exam. Some 271,000 people joined the FutureLearn course, including many based in the Middle East and Asia. Just one video on that course attracted over 60,000 comments from learners. By then, we had realized that the scale of conversation needed to be tamed by using the social media techniques of liking and following. We also encouraged course facilitators to reply to the most-liked comments so that learners who were following the facilitators would see them.
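Surfacing the most-liked comments in a 60,000-comment stream is, at its core, a ranking problem. A minimal sketch, with hypothetical field names and not FutureLearn's actual code, might rank by likes with newest-first tiebreaking:

```python
def top_comments(comments, k=3):
    """Return the k most-liked comments, newest first among ties.

    comments: dicts with 'id', 'likes', and a numeric 'ts' timestamp.
    Field names are hypothetical; this is an illustrative sketch only.
    """
    return sorted(comments, key=lambda c: (-c["likes"], -c["ts"]))[:k]

# Hypothetical comment thread on a single teaching step.
thread = [
    {"id": "a", "likes": 4, "ts": 100},
    {"id": "b", "likes": 9, "ts": 105},
    {"id": "c", "likes": 9, "ts": 110},
    {"id": "d", "likes": 1, "ts": 120},
]
print([c["id"] for c in top_comments(thread, k=3)])  # ['c', 'b', 'a']
```

Combining a ranking like this with following means a facilitator's reply to a top-ranked comment reaches the learners most likely to benefit from it.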

We had expected to deal with abusive comments on courses like “Muslims in Britain” and “Climate Change.” That hasn’t happened, and we aren’t entirely sure why. The initial testers of FutureLearn were Open University alumni, so perhaps they modelled good practice. Comments are moderated to remove the occasional abusive remark, but most of the conversation streams are so overwhelmingly positive that dissenters get constructive responses rather than triggering flame wars.

To be clear, students aren’t required to take part in a discussion to complete a FutureLearn course, but the learning is definitely enriched when students read the responses of other learners and join in. On average, a third of learners on a FutureLearn course contribute comments and replies.

FutureLearn is now a worldwide MOOC platform, with more than six million total registrations. We’re continuing to consider new conversational features, such as reflective conversations where learners write and discuss annotations on the teaching material, and experiential learning where learners share their personal insights and experiences.

FutureLearn has taken the path of social learning and proven that it can work at scale. Going forward, the big challenge for FutureLearn and for educational technology in general will be to find ways of combining the individual pathways and adaptive content of personalized learning with the benefits of learning through conversation and collaboration.