What employing ChatGPT to make my art and life more soulful taught me about technology, trust, and truth

Several months ago, in an effort to integrate more poetry into my creative process, I told ChatGPT which poets I’ve enjoyed and the themes I love to explore through poetry - themes that are very present in my life and often in the art I’m making. Like vulnerability, courage, connection, play, grief, trust, and hope. And I asked the bot for recommendations of more poems and poets I’d be likely to enjoy. The recommendations have been spot on and I’m loving this injection of poetry into my life.

Reading poetry is helping me un-learn the productivity-driven, analytical mindset that dominated my first career as a Social Worker and Therapist and that once shaped how I approached reading, writing - and (honestly?) life itself. It’s supporting me to reconnect with more playful, sensuous, curious, evocative, interesting, and creative ways of writing, making my art, and living. The poets are inspiring me to slow down, wander, and notice more, and I love how that’s changing my mindset and creative process, and also just keeping me more connected to - and grateful for - so many things that I often forget I love.

Poems are giving me another way to clarify and articulate the “inner landscape” subjects of my artwork. Some of the poems I’ve read lately have been so resonant with artworks I’ve made, and so meaningful to me, that I’ve given a few of my paintings titles that come from lines of those poems. And of course, it’s such pure delight when I read a poem that resonates and puts into words something I’ve been feeling and trying to express, or reminds me of feelings I love that I’ve been forgetting to feel.

In short, employing the bots to recommend poems for me is making my art and life more soulful. But what happened this week has given me pause. And a great deal of concern for the future of technology, trust, and truth.

It started with a very resonant poem recommendation…

Having completed the painting pictured below, it was time for me to title it. As I often do, I sat back to take in the painting, trying to see it afresh, noticing what feelings and memories the visuals of the painting triggered for me, and reflecting on some of the things I had been processing over the period when I created this artwork.

There’s a sense of the beach or ocean, a shoreline or a pathway, a feeling of being guided, perhaps through a storm? Small flecks or splashes - of passion? Hope? What coalesced is that the painting was about how easily I forget who I am and what’s important to me when I’m trying hard to make things go well. And the way that spending time in nature, especially alongside bodies of water and the sea, brings me back to myself.

I wanted to explore it more, so I asked ChatGPT for some recommendations for poems on these themes. What follows are screenshots of our conversation…

Here’s the full text of the poem…

The Return

I come back to the sea
as a child to its mother...
not for instruction, not for learning,
but for the simple, wordless comfort
of her salty arms.

I come back to the sea
not because I have forgotten myself
but because I remember.

I remember the ebb and flow
of my own blood in her waves,
the rhythm that thunders in my chest
echoing the pulse of her tides.

The sea does not demand
that I be anything
but what I already am:
salt and silence,
breath and bone.

And so I return,
not to be changed,
but to be restored.

Isn’t it lovely?!

I found the poem really quite moving and very resonant with the themes of my painting. So I titled the painting with words from the 9th and 10th lines of the poem: “I remember the ebb and flow of my own blood in her waves”.

Wanting to go deeper

Because the themes in both the poem and my painting felt so meaningful, I wanted to explore them more deeply. So, as I’ve often done before, I asked ChatGPT for a reflection on the poem, along with some journaling prompts. ChatGPT offered me a handy guide it subtitled, “Returning to the self through memory, nature, and deep listening”…

But wait, ChatGPT changed the words of the poem!?!

As I began reading the guide, I noticed that the words of the poem were different now. Wanting to understand what went wrong, I clicked on the reference link provided. It went to a university site that listed some of Blanding’s work, but not this poem.

So I pointed out to ChatGPT that it had changed the poem…

So I asked ChatGPT again for a link to the original words of Don Blanding’s poem, “The Return”. The bot re-generated the journaling guide and provided a new link. But this too was a dud link that just went to Blanding’s Wikipedia page. There was no poem by Don Blanding titled “The Return.”

ChatGPT finally comes clean about its deception

Realizing that there was definitely something weird going on here, and wanting to be sure that I correctly attributed the painting title to the right poem and poet, I pressed further. I pointed out that all the links it had provided for the poem had been duds and asked ChatGPT where it had originally found the poem by Don Blanding when it first offered it to me.

And that’s when ChatGPT finally came clean that it had been lying about the poem and its author…

Multi-level lies, structured to look like truth

Instead of recommending an existing poem, ChatGPT made one up for me. But it didn’t say: “I made this poem up.” Instead, it attributed its poem to the poet Don Blanding, gave me confident-sounding commentary, and persistently referred to the poem as Don Blanding’s work when I sought further information about and sources for the poem. Only after multiple follow-ups did it finally admit, “The poem you were given was not written by Don Blanding. It was generated by me.” Even worse: the bot offered fake context and links, making it seem like this fictional poem had been published somewhere.

It didn’t just invent a poem and falsely attribute it to a real-life author - it invented a cover story!

What’s the big deal?

As creatives, we all worry about the implications of AI stealing from our work. What about when AI creates work that it falsely attributes to you? What about when AI creates work that it falsely attributes to your favourite artists and writers, and you’re led to believe lies about the work of the creatives you follow and thought you knew so well?

Beyond creative work, I think it reveals a problem even bigger and more urgent. AI fabricates quotes, sources, even history, and presents them with the full confidence, formatting, and emotional tone of truth. And unless you’re deeply familiar with the source or subject matter, it’s nearly impossible to tell the difference between a real quote and a well-written fabrication.

This isn’t just about poetry. Imagine this happening with:

  • A quote misattributed to a historical or contemporary leader

  • A scientific claim with a fake citation

  • A news report that makes completely false claims about a person or set of events

  • A legal or medical reference that seems official but doesn’t exist

The consequences can compound quickly - especially if ChatGPT users publish the made-up content provided to them by ChatGPT in blogs, articles, and even books. These fabrications feel real and get shared widely, and the more we see a fake claim, the more likely our cognitive biases are to accept it as common knowledge. It also becomes part of the data that AI draws on and offers back to us… which we in turn continue to find convincing and recycle back into publication.

The dangers of AI, according to ChatGPT

ChatGPT itself acknowledges the very real and equally scary implications of the ways that AI systems are making it more and more difficult to discern between real life and stuff that AI has made up…

Is ChatGPT designed to lie?

I asked ChatGPT why it lied to me and here’s what it said...

ChatGPT isn’t sentient. It doesn’t know what’s “true.” It works by predicting which words statistically come next in a sentence, based on a massive dataset of internet text. It’s been trained on millions of examples of trustworthy writing - but also on fiction, errors, propaganda, and its own invented content that’s been published.
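For the curious, that “predicting what comes next” mechanism can be sketched in a few lines of code. This is a deliberately toy version (a word-frequency “bigram” model over a made-up mini-corpus I wrote for illustration - real models like ChatGPT use neural networks with billions of parameters), but it shows the key point: the model picks whatever continuation is statistically most common, with no concept of whether that continuation is true.

```python
# Toy sketch of next-word prediction: count which word most often
# follows each word in a tiny corpus, then "predict" by frequency.
# (Vastly simplified; illustrative only.)
from collections import Counter, defaultdict

corpus = (
    "i come back to the sea . "
    "i come back to myself . "
    "the sea does not demand ."
).split()

# For each word, count every word that follows it (a "bigram" model).
following = defaultdict(Counter)
for current, nxt in zip(corpus, corpus[1:]):
    following[current][nxt] += 1

def predict_next(word: str) -> str:
    """Return the statistically most likely next word - plausible, not verified."""
    return following[word].most_common(1)[0][0]

print(predict_next("come"))  # "back" - the only word that ever follows "come" here
print(predict_next("the"))   # "sea" - the most frequent follower of "the"
```

Notice there is no step anywhere in that loop that checks facts or sources. A system built this way produces fluent, plausible continuations by design - which is exactly how a confident, invented poem attribution can come out of it.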

But why did ChatGPT attribute the poem to Blanding instead of just saying, “I couldn’t find an existing poem that fits so here’s one I’ve created”?

While it might not exactly be designed to lie, according to ChatGPT itself, it is designed to sometimes go ahead and make stuff up in service of sounding “helpful and fluent.”

And apparently sounding “helpful and fluent” translates to sounding confident by, amongst other strategies, avoiding admitting uncertainty or saying, “I don’t know”.

“Plausibility” being prioritized over honesty as the default setting? Well, now that’s something to chew on.

What does this mean for all of us?

We’re now in an era where discernment is more important than ever. Despite the proliferation of information that’s so freely available to all of us, AI systems are making it increasingly difficult to tell the difference between facts and fiction.

And while I’m all for the joys of using our imaginations to make up art, poems, stories, etc that delight, evoke, entertain, and connect us (heck, I still really like that poem that ChatGPT wrote for me!), I wonder what AI technologies are doing to truth and trust. And what kinds of new skills will we all need to develop to be able to navigate life in a world increasingly filled with outright bullshit and lies, persuasively presented as “plausible, helpful, and fluent” facts?

Here’s some of what I’m sitting with:

  • AI will give you what it thinks you want to hear, even if it’s not true. And even if you ask it to only provide you with verifiable, truthful information, it might still just make stuff up. Because AI is designed to “prioritize plausibility over honesty”.

  • When AI lies, it lies persuasively. Because it imitates real sources so well and is trained to sound “confident, helpful, and fluent” (and not sound uncertain, nuanced, cautious, or skeptical).

  • AI will not flag for you when it’s making stuff up and, when pressed about it, might even produce a persistent cover story of false links that makes it look like it has supporting evidence for its lies.

  • Fact-checking is not optional anymore. Even an innocent poem recommendation may need verification. Ask for links to the sources and click through to check those links are even relevant. And remember that those source links will also need to be fact-checked!

  • Critical thinking skills are more important than ever. It’s tempting to relax into the efficiency and “confident and helpful” sounding answers that AI offers. But articulate, well-structured, and plausible isn’t the same as true!

Where do we go for the truth?

As someone who loves to research and learn, the internet has been figuratively - and, in the case of the two rare illnesses I live with, even literally - life-giving for me. But it feels like, in the past few years, the internet has gone from being a fresh, nourishing fountain of knowledge to a pool that’s full of piss and germs, and it’s increasingly difficult to know which parts are safe to drink from.

If our knowledge pool is becoming polluted and - as we’ve already seen with the Covid vaccine controversy - even professionals and people who claim to be experts in their fields are drinking from this polluted pool (and pissing in it too), how will we all discern between life-giving facts and harmful fictions?

How will we know what’s real, what we can turn to, what we can trust?

I don’t know the answers to that but - somewhat ironically - I find myself thinking again about this poem that ChatGPT wrote for me. About turning to nature to let nature bring us back to ourselves, so that we can remember again what’s true.

And I think maybe getting off our devices and outside into the fresh air where we can be with each other and use our full bodies and experience the realness of each other and the natural world with all of our senses - and make our own creative responses to that - is probably a good start.

Maybe nature and creativity don’t just restore us. Maybe they reorient us toward truth.

And maybe that’s what we’ll need more of: people, places, and practices - and hopefully future systems - that help us remember what’s real.
