The robot revolution is here, and one of the biggest hurdles to its success is that it raises a lot more questions.
The problem is, we still don’t have enough answers.
When it came to the first big AI-powered vehicles, there were only two questions that needed to be answered: How would they work?
And how would they interact?
“I was really worried about the technology itself, not the questions,” said one former employee at Ford, who was speaking anonymously to avoid being fired.
In 2015, Ford bought a startup called Parcelforce that made a robot that could help people navigate their homes.
It’s not perfect, but it’s better than the alternatives, and it’s cheaper than most home automation solutions.
(Ford says it has since cut that price to $5 per month.)
But what about the question that everyone else asked?
How would it work?
At its core, the Ford problem is that the robots aren’t going to be able to make all of the decisions for us.
“The problem with automation is it’s really about machines making decisions for humans,” said Paul Leggett, a professor of cognitive neuroscience at Oxford University.
“So it’s not the machine that’s making the decisions, it’s humans making the decision.”
The answer to this problem, he said, is to ask more questions about the humans and fewer about the system.
“If you can’t do that, you’re not really solving the problem.”
To do that requires asking, “How does this work?” and “How will this work?” said Leggett, who’s also a professor at Carnegie Mellon University.
But this is where the robot has a problem.
For most robots, this is not an insurmountable problem.
Most of these systems, like Apple’s Siri, work by asking you questions.
But for the kind of robot that Ford is building, the problem is different.
Because the robot doesn’t have to answer any of these questions, it never really has to think about how it’s supposed to be helping us.
To solve the problem, the robot should ask a bunch of questions.
So what would these questions be?
“If I’m a driver and I want to get home, how do I get there?
How do I know where I’m going?
How long does it take me to get there?” are the typical questions.
And these questions can be tricky to answer.
“You don’t want to make a mistake in the robot’s questions,” Leggett said.
“Because that will lead to it not really understanding how the human can be the driver.”
“I’m a person and I’m not going to take care of you or make decisions for you,” is another typical response, and Leggett said it can be especially difficult for a robot to handle.
And then there are the more subtle ones.
“What if I need to know what time it is?”
“What’s the weather like in Chicago?”
“How do I tell if someone is lying about their age?”
And of course, there are other questions the robot shouldn’t ask.
“I have no idea what a driver’s license is, how old is my child, how much time I have left in my life,” Leggett said.
For all of these sorts of questions, a robot has to answer within the limits of what it can actually understand.
“Robots aren’t smart enough to ask the right questions, but they can be trained,” Leggett said.
But what if a robot is smarter than a human driver?
The question is, can a robot be trained to know more about humans than a driver can?
“A lot of people say, ‘Well, if you just ask the questions, the machine will understand it,’” Leggett said.
This is not to say that a human wouldn’t be able to understand a robot, but a robot’s ability to understand human responses is limited.
And that means that a robot will have to ask for permission before it can even ask a question.
In fact, there’s a lot that can go wrong if a car decides to ask you a question about your age, say, or whether you’re still married.
“There’s no doubt about that,” said Andrew Hirsch, a robotics professor at the University of Southern California and the director of the Robot Society of America.
“They don’t understand us,” he said.
It is important, Hirsch said, that we don’t allow machines to “get ahead of themselves” and ask too many questions.
“It’s really important that the robot isn’t getting ahead of itself,” he added.
And there’s no way that a driver could ever be “ahead of himself.”
That means that the questions that the car asks need to be specific.
“These are the kinds of questions that a real driver would have to think, ‘Okay, well, I need this specific thing,'” Hirsch explained.
And because a robot doesn’t, a human