
PAINFUL TRUTH: AIs and the age of digital lying

We made computers that are more like people – they make stuff up
(Image: toy robot on a desktop, with a man working at a computer in the background. A symbol for artificial intelligence, chatbots, social bots and algorithms.)

If you’ve been online over the last year, you’ve seen images and chunks of text begin to appear that were created entirely by artificial intelligence.

Well, sort of.

First off, it depends on what you mean by “created.”

Secondly, it depends on what you mean by “intelligence.”

A collection of AI image generators, including Midjourney and DALL-E 2, and most recently a text generator called ChatGPT, have been released to segments of the public. The results have quickly made the rounds.

The image generators appeared first, and astounded people.

You could input a simple word prompt like “Penguins on an ice floe in the style of Vincent van Gogh,” and get, well, what appeared to be a painting of penguins on an ice floe in the style of van Gogh. Many images appeared shockingly well done. It was leaps and bounds beyond what computers had been capable of before.

The same was true of ChatGPT.

While less of an advance – there have been chatbots for decades – it could spout some impressively coherent paragraphs. It could make up poems that rhymed, write song lyrics in the style of various famous bands (often badly), and sort of carry on a conversation.

The problems are in the details.


The art generators have some serious flaws. Not a one of them knows how human fingers work, or how many teeth people have – they tend to overestimate on both, resulting in alarming smiles and hands ending in spidery clusters of six to eight rubbery digits that would have given H.P. Lovecraft shrieking fits.

As for ChatGPT, we quickly learned that its greatest skill is lying.

It doesn’t take much prompting to send it careening down a path of frantic misinformation. It makes stuff up like a con man facing interrogation.

This is a result of the way these AIs were created.

They are very clever models that “understand,” based on a prompt, which pixel should go next to another, or which word should follow another.

That’s all they “know,” and they don’t even really “know” that. There’s nothing there to do the knowing, just code waiting for a prompt.
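To make that concrete, here’s a minimal sketch in Python of what “code waiting for a prompt” looks like. The table of next-word odds is invented by hand for illustration; in a real system like ChatGPT it would be billions of learned parameters, not a dozen hand-typed numbers.

```python
import random

# A toy stand-in for a language model: for each word, the
# odds of what comes next. These numbers are made up by hand
# for illustration; a real model learns billions of them
# from training data.
NEXT_WORD = {
    "penguins": {"on": 0.7, "are": 0.3},
    "on": {"an": 0.6, "the": 0.4},
    "an": {"ice": 1.0},
    "ice": {"floe": 0.8, "sheet": 0.2},
}

def generate(prompt, length=4):
    word = prompt
    out = [word]
    for _ in range(length):
        choices = NEXT_WORD.get(word)
        if not choices:
            break  # the "model" knows nothing past this word
        word = random.choices(
            list(choices), weights=list(choices.values())
        )[0]
        out.append(word)
    return " ".join(out)

print(generate("penguins"))  # e.g. "penguins on an ice floe"
```

Nothing in that loop ever asks whether the output is true. It only asks which word is likely.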

This half-knowing, this tendency towards wild fabrication, arises because the programs are doing a sort of brute-force pastiche of the millions of samples they were trained on – images or text.

They use photos, paintings, illustrations; books, blog posts, news articles.

Almost all have been scraped from the internet without permission, then smushed together again into new forms by some clever code.
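Here’s a rough sketch of that pastiche, again in Python – a toy stand-in for the real training process, which uses huge neural networks rather than a lookup table, but which recombines its samples in much the same spirit:

```python
import random
from collections import defaultdict

# Toy "training": scan the sample texts (stand-ins for
# scraped web text) and record which word follows which.
samples = [
    "the cat sat on the mat",
    "the dog sat on the rug",
    "the cat chased the dog",
]

successors = defaultdict(list)
for text in samples:
    words = text.split()
    for current, following in zip(words, words[1:]):
        successors[current].append(following)

# Generation stitches the observed pairs back together
# into new sequences; no notion of truth is involved.
def generate(start, length=8):
    out = [start]
    for _ in range(length):
        options = successors.get(out[-1])
        if not options:
            break
        out.append(random.choice(options))
    return " ".join(out)

print(generate("the"))  # e.g. "the dog sat on the mat"
```

Everything it produces is a recombination of what it was fed, which is why fluent nonsense comes so cheap.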

For decades, we thought computers would eventually get so smart that they would give us perfect, clear information.

What we got is software that gets called intelligent, but isn’t smart enough to count the legs on a horse, and lies like a rug when given the least reason.

We shouldn’t be surprised. It’s learned everything it “knows” from us.


Have a story tip? Email: matthew.claxton@langleyadvancetimes.com



About the Author: Matthew Claxton

Raised in Langley, I work today as a journalist focusing on local politics, crime and homelessness.