Science & Tech

Control Issues
Coursework and research look at how to build better robots—and how to make sure they don’t take over.

By Erik Ness / May/June 2018
Illustrations by John Hersey

“AI is a fundamental existential risk for human civilization,” warned Elon Musk in July 2017, speaking to the National Governors Association, which had gathered in Providence. At Brown, students and professors explore and refine the technologies involved in robotics while asking what the limits are—and what they should be. In 10 years, will robots be folding our laundry, or will they be starting World War III?

Welcome to the Machine

Learn It: CSCI 1420: Machine Learning
Discuss It: The Humanity Centered Robotics Initiative takes a sharp look at the ethics and morality of smart machines.

When Michael Littman showed up to teach Machine Learning for the first time this spring, the 299-seat classroom was overflowing. “We were just mobbed with students,” says the professor of computer science and codirector of the Humanity Centered Robotics Initiative. “During the shopping period people were sitting in the aisles. It was like a rock concert. Great energy, people really excited about the topic.”

Machine learning sits at the interface between artificial intelligence and data science and is also emerging as a core technology for robotics. It has been evolving slowly for decades in labs and server rooms, regarded as a curiosity as long as it wasn’t really affecting people’s lives. But now we’ve got bot networks, predictive analytics, and data breaches, thanks to computers that can “learn.”

“I’m scared of robots,” HCRI associate director Peter Haas told a TEDx audience last year. “Actually, I’m kind of terrified.” HCRI moderates these kinds of discussions, tackling everything from the ethics of sexbots to the morality of robotic warfare. In December an HCRI team was also named a top-ten finalist in the IBM Watson AI XPRIZE competition for its work on identifying human social and moral norms and implementing them in robots.

“We’ve developed things that matter,” says Littman. “It means that we also have to shift our attitudes towards the ideas that we’re generating and how they might be used to help people and how they might be used to hurt people—and how we might mitigate that.”

Artful introduction: People won’t let robots into their homes unless they perceive them as beneficial, said artist Catie Cuan, speaking at Brown in March. “Who is really great at making and conveying meaning through movement and set and visuals? It’s theater and art people.”

Virtual Control

Learn It: How to use virtual reality to control robots
Apply It: Move a robot from across town, or perform surgery from across the ocean.

Robots of the future may master complex tasks, like fixing a nuclear reactor, defusing a bomb, or repairing a satellite on the International Space Station. But for now, humans must operate the equipment remotely, a practice called teleoperation. Enter virtual reality: intuitive, efficient, and now available at a gaming store near you.

If you want a surgeon in Boston to operate on a patient in Beijing, you don’t want to have to teach the surgeon robotics first, says Eric Rosen ’18. “We don’t want to make experts re-learn a system,” he explains.

Meanwhile, computer science PhD candidate David Whitney points out, “For the first time ever we have VR hardware that is cheap and widely accessible.” So he and Rosen, working in the Humans to Robots Laboratory of computer science professor Stefanie Tellex—and both experienced video game players—set out to connect a VR controller with the lab’s robots.

After they got the two systems to talk to each other, the next challenge was making that communication fast enough to travel over the Internet. Done: their open-source software is now available online for anyone and takes up no more bandwidth than a Skype call. It can potentially be used to operate robots in places where it’s dangerous, impractical, or expensive to send a human. In their practice runs they’ve controlled robots at labs in Boston.
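Part of why the bandwidth stays so low is that the operator’s headset and hand controllers only need to send a stream of small pose updates across the network. The sketch below is a minimal illustration of that idea in Python, not the lab’s actual open-source software; the message format, the port, and the placeholder address are all invented for the example.

```python
# Minimal sketch: stream a VR controller's pose to a robot as small,
# frequent datagrams. Hypothetical message format and address; this is
# an illustration of the idea, not the lab's released software.

import json
import socket
import time

ROBOT_ADDR = ("127.0.0.1", 9000)  # placeholder; a real robot's address would go here

def send_pose(sock, position, orientation_quat):
    """Pack one controller pose into a small JSON datagram and send it."""
    msg = json.dumps({
        "t": time.time(),
        "pos": position,           # meters, in the robot's frame
        "quat": orientation_quat,  # unit quaternion (x, y, z, w)
    }).encode("utf-8")
    sock.sendto(msg, ROBOT_ADDR)
    return len(msg)

if __name__ == "__main__":
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    # Streaming ~60 poses per second at ~120 bytes each is only a few
    # kilobytes per second of outgoing traffic.
    n = send_pose(sock, [0.42, -0.10, 0.95], [0.0, 0.0, 0.0, 1.0])
    print(f"sent {n} bytes for one pose update")
```

A real system would also need a channel back to the operator, such as camera images or a rendered model of the scene, so the person in the headset can see what the robot is doing.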

Future challenges include coordinating robots to work together and reimagining the mechanisms of control. Current VR applications tend to let the human operate a robot arm modeled on the human appendage. But there is nothing that says a robot can’t be shaped like a horse, or like a snake, so it can slither through pipes. Must humans then get down on their bellies, or on all fours, to operate it via VR? “You want to build the robot to best fit the task,” says Rosen, whom we profile at greater length on page 26, as one of our exceptional seniors. “And then you want to build the interface to best enable the human to control the robot.”

Talking Points

Learn It: CSCI 2951-K: Topics in Collaborative Robotics, Professor Tellex
Consider It: “We will study the problem of endowing robots with the ability to interact with humans.”

In Stefanie Tellex’s vision, within twenty years every home will have a personal robot helper to set the table, do laundry, and prepare dinner. Her Humans to Robots Laboratory is devoted to making it happen. And you won’t have to program your robot—you’ll just talk to it. But if you have a working relationship with Siri or Alexa, you probably know there are a few bugs to work out.

The lab’s research on these challenges grew from a Tellex class project by Dilip Arumugam ’17, ’19 ScM and Siddharth Karamcheti ’18. “Language is the most effective way for people to communicate with these really complex machines,” explains Karamcheti. You could micromanage this problem by trying to anticipate every logical hiccup in everybody’s language. Instead, the pair decided to use machine learning. “How do we build techniques for getting robots to understand natural language?” they asked.

The students created videos of robots performing simple tasks and showed them to hundreds of different English speakers. They had participants describe what they would say to get the robots to behave that way. They solicited both general and highly specific directions, because a robot would need to be able to handle both a general “fold that basket of laundry” command and a more specific one like “put the red underwear on top.”

To process instructions like this without human intervention, the robot would have to be able to plan its actions and understand that the general action of folding the laundry included both making a well-ordered pile and the detail that the red underwear would be dealt with last.

Drawing on both general and specific instructions, their machine learning system used a hierarchical planning algorithm to map out the chore. When the robot could figure out both the task and how specific the instructions were, it could execute the commands in just one second, 90 percent of the time. But when it couldn’t figure out the specifics, the planning time went up: the robot version of a teen’s blank stare.
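A rough way to picture what “hierarchical” means here is the toy Python sketch below: a general command expands into an ordered list of subtasks, while a specific command maps almost directly to a single action. Everything in the sketch, from the command phrases to the lookup table, is invented for illustration; the students’ actual system learned this mapping from the crowdsourced examples rather than from hand-written rules.

```python
# Toy illustration only: invented commands, subtasks, and rules. The real
# system learned the mapping from language data instead of using a table.

HIGH_LEVEL_TASKS = {
    # A general command expands into an ordered list of concrete steps.
    "fold that basket of laundry": [
        "sort the items",
        "fold each item",
        "stack the folded pile",
    ],
}

def specificity(command):
    """Guess whether a command names a whole chore or a single concrete step."""
    return "general" if command in HIGH_LEVEL_TASKS else "specific"

def plan(command):
    """Map a command to a sequence of actions the robot could execute."""
    if specificity(command) == "general":
        # Plan over abstract subtasks, then carry them out in order.
        return list(HIGH_LEVEL_TASKS[command])
    # A specific command is already close to a single primitive action.
    return [command]

if __name__ == "__main__":
    for cmd in ["fold that basket of laundry", "put the red underwear on top"]:
        print(cmd, "->", plan(cmd))
```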

“This is particularly challenging because language has all of this complexity,” says Arumugam. “You need to be able to consolidate that reliably into robot behavior.”

“We’re only one major innovation away from the landscape changing completely. The fascinating thing about this moment in history is that it is coming tomorrow, and we don’t know what it will be. I don’t think technology has ever moved this quickly before. I have no idea what I’m going to grow old hating, from my rocking chair.”—Dr. Ben D. Sawyer from MIT’s AgeLab, speaking at Brown HCRI, December 6, 2017
