"A computer program masquerading as a 13-year-old Ukrainian boy has reached a technological and philosophical threshold by passing the so-called Turing Test: it fooled a third of its human interlocutors into believing they were conversing with a real person instead of a machine," NPR's Scott Neuman reported on June 9th.
The headline on a Washington Post story by Terrence McCoy read, "A computer just passed the Turing Test in landmark trial."
Dante D'Orazio at The Verge gave us this: "Computer passes Turing Test for first time by convincing judges it is a 13-year-old boy" named Eugene Goostman.
At least, that's what you would have read yesterday. If you look now, you will find an editor's note at the bottom of the story that says, "Update 6/9 11:48am ET: The headline and post have been updated with regard to controversy over whether or not "Eugene" actually passed the Turing Test."
And how did The Verge update its story? Here's the headline now: "Computer allegedly passes Turing Test for first time by convincing judges it is a 13-year-old boy." The post itself was updated exactly the same way.
Sorry, Verge; inserting "allegedly" is not an update. We'll give you credit for at least telling us there is a controversy, even if you didn't tell us what it was or link to relevant articles.
And a controversy it is.
The Turing Test was proposed by the computing pioneer Alan Turing in 1950. To pass it, a machine must convince human judges, through text-only conversation, that it is human; a machine whose responses are indistinguishable from a person's passes the test.
And the claim was that a Russian program had just done that. But the bar was low: All the program had to do, in a competition in England, was persuade 30 percent of a group of judges that it was human. In other words, it could pass the test even if more than two-thirds of the judges thought it was a computer.
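The arithmetic of that threshold is worth spelling out. Here is a minimal sketch of the pass criterion; the judge counts are hypothetical, chosen only to illustrate the math, not the competition's actual tallies:

```python
# Pass criterion reported for the competition: fool at least 30 percent
# of the judges. The judge counts below are hypothetical illustrations.

def passes_turing_test(judges_fooled: int, total_judges: int,
                       threshold: float = 0.30) -> bool:
    """Return True if the share of fooled judges meets the threshold."""
    return judges_fooled / total_judges >= threshold

# Fooling 10 of 30 judges (one-third) clears the bar...
print(passes_turing_test(10, 30))  # True
# ...even though the other 20 judges (two-thirds) said "computer."
print(passes_turing_test(8, 30))   # False: under 30 percent
```

The point the skeptics made is visible in the numbers: a "pass" is consistent with a large majority of judges correctly identifying the machine.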
Nevertheless, Kevin Warwick, a professor of cybernetics at the University of Reading who administered the test, called the Russian victory "a milestone" that would go down in history as one of the most exciting moments in the field of artificial intelligence.
Some reporters took a much dimmer view of that "milestone."
"What Goostman's victory really reveals [is] the ease with which we can fool others," wrote the NYU cognitive scientist Gary Marcus in The New Yorker.
Marcus doesn't have much use for the Turing Test. Quoting from a post he wrote last year, he says:
If a person asks a machine “How tall are you?” and the machine wants to win the Turing test, it has no choice but to confabulate. It has turned out, in fact, that the winners tend to use bluster and misdirection far more than anything approximating true intelligence.
Here are other headlines:
"No, A 'Supercomputer' Did NOT Pass The Turing Test For The First Time And Everyone Should Know Better," from Mike Masnick at techdirt.com.
"Why You Should Be Skeptical of the Turing Test Story," by Rachel Z. Arndt at Popular Mechanics.
"Don't Believe the So-Called Turing Test Breakthrough," by Bianca Bosker at The Huffington Post.
And there are more along those lines.
But too many stories reported the news without skepticism, comment, analysis, or background.
And I think I know why. While we might say that a truly intelligent machine poses a threat to what it means to be human, many reporters and others would secretly like to see a machine pass the test. It's more exciting, and it's a much better story.
And why let the facts get in the way of a good story?