Artificial intelligence (AI) and fake news

Jesus Larrubia
3 min read · May 6, 2020

The progress of AI in the last decade…

…would have been inconceivable without the preconditions set up by DARPA and Caltech in 1958, but it’s not over yet. (In fact, in this area, we are only about a decade away from having a small fraction of human brain function replaced by AI.)

What can make all this progress possible is the diversity of the technologies we have available. From now on, each innovation, from leading artificial intelligence techniques like deep learning, to mobile technologies (like Fetch, our very own augmented reality technology), is making it faster and more economical to pursue various forms of human-computer interaction.

We see the combination of deep learning and massively parallel computation at Google leading to the development of a vast online image database. Deep learning offers real potential for a variety of tasks in the realm of biomedicine, scientific visualisation, social media, and the automation of tasks in a variety of industries. Google’s deep learning community is incredibly active and highly productive, providing a platform for contributions in the areas of natural language processing, speech recognition, image captioning, and vision.

“Fake” information

I’ll be honest: I didn’t write the first part of this article. Actually, I started it, but then I was “assisted”. More importantly, did you find it compelling? Or… maybe a little disjointed?

Some of you probably noticed. The start of the article wasn’t written by me or any other human, but by an AI language model given an initial seed (the opening sentence: “The progress of AI in the last decade…”).

It’s that easy. All I had to do was use an existing online tool such as Talk to Transformer (which relies on the GPT-2 model) to create a (small) article that made sense to a large enough portion of its readers. There’s a good chance they don’t have much prior understanding of the subject; maybe they’re just having a quick glance during their daily grind, or perhaps they simply don’t have the time to challenge the information. The reader isn’t to blame: this is simply the direction our societies and behaviour are moving in. And, to be fair, checking the veracity of every single online article would be impossible.
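If you want to reproduce the trick yourself without the online tool, here’s a minimal sketch in Python using the open-source Hugging Face transformers library and the publicly released GPT-2 weights (my choice of library for illustration; Talk to Transformer runs its own hosted copy of the model):

```python
# Minimal sketch (not Talk to Transformer's own code): seed GPT-2 with the
# article's opening sentence and let it continue the text.
from transformers import pipeline, set_seed

generator = pipeline("text-generation", model="gpt2")
set_seed(42)  # make the illustration reproducible

seed_text = "The progress of AI in the last decade…"
outputs = generator(seed_text, max_length=120, num_return_sequences=1)

print(outputs[0]["generated_text"])
```

Every run (or a different random seed) produces a different continuation, which is exactly why the opening of this article reads plausibly sentence by sentence while drifting as a whole.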

Anyway, the purpose of this article is not to delve into philosophical or moral questions, but rather to provide a quick insight into the tremendous progress AI has made in text processing and generation over the last few years, and into its potential (especially when in the right hands).

The “too smart” GPT-2 model

GPT-2 is a natural language model developed and released by OpenAI (a research organization co-founded by Elon Musk) whose initial goal was to predict the next word given a text sample. The trained model, fed with 40GB of Internet text, evolved to be able to write whole sentences given just an initial seed. The results were a bit shocking, even for its creators, who initially decided to release only a smaller version of the model after concerns it could end up being used with malicious intent.
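To make “predicting the next word” a bit more concrete, here is a toy sketch (again using the Hugging Face port of the released weights rather than OpenAI’s original TensorFlow code) that lists the model’s five favourite continuations for a prompt:

```python
# Rough illustration of GPT-2's training objective: given some text,
# score the most likely next words.
import torch
from transformers import GPT2LMHeadModel, GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
model.eval()

text = "The progress of AI in the last"
inputs = tokenizer(text, return_tensors="pt")

with torch.no_grad():
    logits = model(**inputs).logits  # shape: (1, seq_len, vocab_size)

next_token_logits = logits[0, -1]          # scores for the word after "last"
top = torch.topk(next_token_logits, k=5)   # five most likely continuations

for token_id, score in zip(top.indices, top.values):
    print(f"{tokenizer.decode(int(token_id))!r}: {score.item():.2f}")
```

Repeat that step, feeding each chosen word back in, and you have text generation from a seed.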

However, after discussing it with the AI community, OpenAI published the largest version of the (trained) model in November 2019. Luckily, the team decided not to hold back the technology’s progress and, now, their model is available for us to improve upon or use in real applications.

To play with it, head over to their GitHub repository, or find more information on their website.
