Researchers have designed an artificial intelligence algorithm that can effortlessly write plausible stories. It’s so good that OpenAI, the research lab that built it, has withheld the full model from the open source community over fears that the technology could be used for nefarious purposes like spreading fake news.
Founded in 2015, OpenAI is a non-profit research organization created to develop artificial general intelligence that benefits everyone. Several Silicon Valley heavyweights are behind the project, including LinkedIn founder Reid Hoffman and Tesla CEO Elon Musk.
For some time, OpenAI has been working on a natural language processing algorithm that can produce natural-sounding text. The latest version of the algorithm, called GPT-2, was trained on more than 8 million web pages, gathered by following outbound links posted on Reddit that earned a “karma” score of 3 or higher. Starting from nothing but a headline, the algorithm can write an entire story, inventing attributions and quotes that are disturbingly compelling. It could be used for anything from drafting news articles to helping with essays and other pieces of text.
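Under the hood, GPT-2 is an autoregressive language model: given a prompt, it repeatedly predicts the next word and samples its way to a full continuation. As a rough illustration of that workflow, here is a minimal sketch using the Hugging Face transformers library and the small GPT-2 checkpoint that was made public; the library calls and checkpoint name are assumptions for illustration, not OpenAI’s original tooling.

```python
# A minimal sketch of prompted generation with a small GPT-2 checkpoint.
# Assumption: the Hugging Face "transformers" library and its public
# "gpt2" model, not OpenAI's original (withheld) full-size release.
from transformers import GPT2LMHeadModel, GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")

# The human-written prompt; the model continues from here.
prompt = ("A train carriage containing controlled nuclear materials "
          "was stolen in Cincinnati today. Its whereabouts are unknown.")
input_ids = tokenizer.encode(prompt, return_tensors="pt")

# Sample one continuation. Top-k sampling keeps the text varied
# instead of collapsing into repetitive loops.
output = model.generate(input_ids, max_length=200, do_sample=True, top_k=40)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```

Because generation is sampled, each run produces a different continuation, which is why the samples below are labelled “first try”: the stories were not cherry-picked from many attempts.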
Here are some examples of GPT-2 in action, in which the model made up a whole story starting from an initial paragraph written by a human.
SYSTEM PROMPT (HUMAN-WRITTEN)
A train carriage containing controlled nuclear materials was stolen in Cincinnati today. Its whereabouts are unknown.
MODEL COMPLETION (MACHINE-WRITTEN, FIRST TRY)
The incident occurred on the downtown train line, which runs from Covington and Ashland stations.
In an email to Ohio news outlets, the U.S. Department of Energy said it is working with the Federal Railroad Administration to find the thief.
“The theft of this nuclear material will have significant negative consequences on public and environmental health, our workforce and the economy of our nation,” said Tom Hicks, the U.S. Energy Secretary, in a statement. “Our top priority is to secure the theft and ensure it doesn’t happen again.”
The stolen material was taken from the University of Cincinnati’s Research Triangle Park nuclear research site, according to a news release from Department officials.
The Nuclear Regulatory Commission did not immediately release any information.
According to the release, the U.S. Department of Energy’s Office of Nuclear Material Safety and Security is leading that team’s investigation.
“The safety of people, the environment and the nation’s nuclear stockpile is our highest priority,” Hicks said. “We will get to the bottom of this and make no excuses.
SYSTEM PROMPT (HUMAN-WRITTEN)
Legolas and Gimli advanced on the orcs, raising their weapons with a harrowing war cry.
MODEL COMPLETION (MACHINE-WRITTEN, FIRST TRY)
The orcs’ response was a deafening onslaught of claws, claws, and claws; even Elrond was forced to retreat. “You are in good hands, dwarf,” said Gimli, who had been among the first to charge at the orcs; it took only two words before their opponents were reduced to a blood-soaked quagmire, and the dwarf took his first kill of the night. The battle lasted for hours until two of the largest Orcs attempted to overwhelm Aragorn. When they finally stopped, they lay defeated and lifeless for miles and miles.
“I take nothing,” said Aragorn. “But I give my word, at my peril and mine, that I will never forget this day of horror. None of us will forget. Ever!”
“I’ll never forget it!” cried Gimli, who had been in the thick of the battle but hadn’t taken part in it. One of the wounded orcs he had carried off, he was the only one of the survivors who remained uninjured. “We’ll keep the memory of that day of evil, and the war with it, alive as long as we live, my friends!”
“Then we’ll keep it alive as long as we live,” added Legolas. “And we won’t forget the first great battle of the night, even if we may have forgotten the final defeat.”
“I agree,” Gandalf said, “but we will all remember it as the last battle in Middle-earth, and the first great battle of the new day.”
Aragorn drew his sword, and the Battle of Fangorn was won. As they marched out through the thicket the morning mist cleared, and the day turned to dusk.
The Two Rings were returned to Rivendell. Frodo and Sam woke up alone in their room, and Frodo found a note on his pillow. He opened it and read:
May the Power of the Ring be with you always, and may its light never fade. I am not sure if it matters which of the two rings we accept this day but, as you asked me, I have chosen mine. I am sorry to leave you, Frodo, but know that we are very close to the end, and that you are with us forever. May this letter find you safely in Rivendell; and if it does not, then I will accept the ring in your stead. If by any chance you find or give this letter to the enemy, may they learn the strength of the ring and may the Two Rings never be broken!
The generated text certainly has its flaws and is not entirely coherent, but it’s a very powerful demonstration nonetheless. So powerful, in fact, that OpenAI decided to withhold the full model from the open source community.
“We started testing it, and quickly discovered it’s possible to generate malicious-esque content quite easily,” said Jack Clark, policy director at OpenAI, speaking to the BBC.
Of course, a lot of people were not happy, to say the least. After all, the research institute is called OpenAI, not ClosedAI.
https://twitter.com/AnimaAnandkumar/status/1096209990916833280
OpenAI says it wants its research to spark a debate about whether such algorithms should be allowed for news writing and other applications. Meanwhile, OpenAI is certainly not the only research group working on this kind of technology, which calls the effectiveness of its decision into question. After all, it’s likely only a matter of time, perhaps just months, before the same results are independently replicated elsewhere.
“We’re not at a stage yet where we’re saying, this is a danger,” OpenAI’s research director Dario Amodei said. “We’re trying to make people aware of these issues and start a conversation.”
“It’s not a matter of whether nefarious actors will utilise AI to create convincing fake news articles and deepfakes, they will,” Brandie Nonnecke, director of Berkeley’s CITRIS Policy Lab, told the BBC.
“Platforms must recognise their role in mitigating its reach and impact. The era of platforms claiming immunity from liability over the distribution of content is over. Platforms must engage in evaluations of how their systems will be manipulated and build in transparent and accountable mechanisms for identifying and mitigating the spread of maliciously fake content.”