
Fake news written by artificial intelligence too realistic

  • Source: kekenet
  • Released on: 2019-03-12

Elon Musk's OpenAI has an algorithm that can generate weirdly believable fake news stories.

Artificial intelligence is getting pretty good at generating entire articles and stories, which raises troubling implications about its potential to mass produce fake news.

A program developed by a team at OpenAI, the non-profit research institute founded by Elon Musk and Sam Altman, can make up believable-sounding stories from just a handful of words.



Here's a snippet of what it's capable of:

Russia has declared war on the United States after Donald Trump accidentally fired a missile in the air.

Russia said it had identified the missile's trajectory and will take necessary measures to ensure the security of the Russian population and the country's strategic nuclear forces. The White House said it was extremely concerned by the Russian violation of a treaty banning intermediate-range ballistic missiles.



The US and Russia have had an uneasy relationship since 2014, when Moscow annexed Ukraine's Crimea region and backed separatists in eastern Ukraine.

The AI came up with the entire story on its own, after simply being provided with the words "Russia has declared war on the United States after Donald Trump accidentally..."
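The idea of continuing a prompt can be illustrated with a toy model: a language model predicts each next word from statistics of its training text, then feeds its own output back in. The bigram approach, corpus, and function names below are illustrative assumptions only; OpenAI's system is a large neural network, not a bigram model.

```python
import random
from collections import defaultdict

def train_bigrams(text):
    """Record, for each word, the words that follow it in the training text."""
    words = text.split()
    model = defaultdict(list)
    for a, b in zip(words, words[1:]):
        model[a].append(b)
    return model

def generate(model, prompt, length=10, seed=0):
    """Continue the prompt by repeatedly sampling a next word seen in training."""
    rng = random.Random(seed)
    out = prompt.split()
    for _ in range(length):
        choices = model.get(out[-1])
        if not choices:          # no known continuation: stop early
            break
        out.append(rng.choice(choices))
    return " ".join(out)

# Tiny illustrative corpus; a real model trains on millions of pages.
corpus = "the cat sat on the mat and the dog sat on the rug"
model = train_bigrams(corpus)
print(generate(model, "the cat", length=5))
```

The sketch shows why seed text matters: everything after the prompt is extrapolated from patterns in the training data, which is also why a model trained on web text can fluently continue a fabricated headline.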

The researchers wanted to develop a general-purpose language algorithm, trained on a huge amount of text from the web. The training data encompassed 45 million web pages, selected via links shared on Reddit.



They originally intended for the program to do things like translate text and answer questions, but it soon became clear that it also had great potential for abuse: it was simply too good at generating convincing stories that could be misused.