Darnley's Cyber Café

When AI Cites AI: ChatGPT, Grokipedia, and the Future of Trust Online

Darnley's Cyber Café Season 6 Episode 37



AI is starting to cite other AI as a source, and most people don’t yet realize what that means — or whether they’ll realize before it’s too late.

In this episode of Darnley’s Cyber Café, we look at reports of ChatGPT citing Grokipedia, an AI-generated encyclopedia, and why this signals a deeper shift in how information is created, recycled, and trusted.

As AI gets better at generating text, audio, and video, the real challenge isn’t spotting the fake; it’s knowing where anything actually came from.

A conversation about AI, trust, provenance, and why verification matters more than ever.

Click here to send a future episode recommendation

Support the show

Subscribe now to Darnley's Cyber Cafe and stay informed on the latest developments in the ever-evolving digital landscape.


Grokipedia Leaks Into ChatGPT: When AI Starts Citing AI

Length: ~10–12 minutes



Unified Opening (0:00–1:20)

[SFX: soft rain against glass, low neon hum, espresso machine in the background]

DARNLEY:
Welcome back to Darnley’s Cyber Café.
Glad you’re here. Grab your espresso and settle in for a few moments with me.

Today I want to talk about something I came across recently that stopped me for a moment — not because it was shocking, but because of what it suggests about where things are heading.

There have been reports that ChatGPT has started citing Grokipedia, an AI-generated encyclopedia built by Elon Musk’s xAI, in some of its answers. Not as background context — but as a source.

And once you sit with that for a second, a bigger question comes up:
What happens when AI systems start relying on other AI-generated material to explain the world to us?

Understand, this isn’t really about one company or one platform.
It’s about trust: how information gets shaped now, how easily it moves, and how difficult it’s becoming to tell where something actually started.

And with AI getting better at generating text, audio, and video, this isn’t a future problem.
It’s a right-now problem.

So let’s walk through what’s going on, why it matters, and what it could mean over the next year as these systems become even more convincing.

[SFX: cup set down softly]



 


Segment 1 — What’s Grokipedia, and what’s being reported? (1:20–3:30)

Grokipedia launched as an AI-generated encyclopedia — positioned as an alternative to Wikipedia, but without the same open, human-edited structure.

Early reporting found that many entries appeared heavily derived from Wikipedia, while others included controversial or inaccurate claims. What matters here isn’t who built it — it’s how AI-generated reference material behaves once it’s released into the open web.

Now here’s the key development:
 Journalists testing newer AI models found that Grokipedia was being cited inside ChatGPT responses — particularly on more obscure or niche topics.

Not on widely scrutinized historical events.
 Not on topics people immediately challenge.

But on the kinds of questions where most users assume the answer is harmless — and accurate.

That’s an important detail.

Because this isn’t about one bad article slipping through.
It’s about how AI decides what counts as “knowledge” — and how it presents that knowledge to humans.



Segment 2 — The real issue: AI citation laundering (3:30–6:20)

There’s a term security researchers are starting to use more often, and one I quite like:
credibility laundering.

Here’s how it works:

An AI-generated site publishes content.
 Another AI system pulls from it.
 The answer now appears “sourced.”
 And the user assumes legitimacy.

But the citation didn’t pass through human editors.
 It didn’t face peer review.
 It didn’t survive debate.

It just… circulated. Like a bad case of broken telephone…

Modern AI systems rely heavily on retrieval-based answering — pulling text from the web and assembling responses dynamically. This reduces hallucinations, but it introduces a new vulnerability:

If the retrieval pool is polluted, the answer is polluted, meaning it will be rife with misinformation.

Researchers have already warned about data poisoning, LLM grooming, and information flooding — where large volumes of synthetic content are created specifically to influence what AI systems later surface.

And once that content gets picked up by multiple systems, it becomes extremely difficult to unwind.

This is how AI stops reflecting the web — and starts reinforcing it.



Segment 3 — Pros and cons: what’s gained, what’s at risk (6:20–8:30)

Let me be honest: there are potential upsides here.

The Pros

·       Broader coverage of niche or under-documented topics

·       Reduced reliance on a single institutional source

·       Faster knowledge synthesis across diverse material

In theory, this makes AI more flexible.

But here’s the other side.

The Cons

1.     False legitimacy — citations create confidence even when the source is weak

2.     Asymmetric scrutiny — obscure topics are rarely challenged

3.     Volume over quality — synthetic content can dominate by scale alone

4.     Feedback loops — AI cites content → humans repeat it → AI sees it again

At that point, truth doesn’t disappear.
 It just gets buried under repetition.

And repetition feels a lot like consensus. This echoes a line often attributed to the Nazi minister of propaganda:

“If you tell a lie big enough and keep repeating it, people will eventually come to believe it.” That is very much starting to happen with AI, and it can go bad fast…



Segment 4 — The next 12 months: deepfakes, audio manipulation, and trust erosion (8:30–10:40)

Now widen the lens.

Text manipulation is only the beginning.

We’re already seeing:

·       AI voice cloning used for impersonation and fraud

·       Synthetic video convincing enough to prompt real-world responses

·       Audio clips that sound authentic but never happened

As AI improves over the next year (and yes, that is very soon), the danger isn’t just that people believe fake things.

The danger is that people stop believing real things.

When everything can be faked, skepticism turns into paralysis.
 And paralysis is exploitable.

This is why provenance matters.

Emerging standards like content credentials aim to verify where media came from and whether it’s been altered. They won’t fix everything — but they represent a shift from “does this feel real?” to “can this be verified?” That distinction will matter more than ever.

To be honest, this is one of the spokes in the whole digital ID push…but I won’t get into that mess right now…



Segment 5 — Practical takeaways (10:40–12:00)

So what can you actually do?

1.     Treat AI citations as starting points, not conclusions

2.     Cross-check anything that affects money, health, reputation, or safety

3.     Be extra cautious with confident answers on obscure topics; don’t believe everything you read

4.     Assume audio and video can be manipulated — confirm through second channels, don’t believe everything you see

5.     Learn to look for provenance signals as platforms roll them out. That means getting used to checking the small indicators that show where a piece of content came from and whether it’s been altered. Again, this isn’t 100% guaranteed.

Because the most important skill now isn’t knowing more information.

It’s knowing how information earns your trust. I have been saying this for years, and it really applies here: trust no one, and trust nothing. Protect your peace of mind first, so you aren’t manipulated by sources of information designed to sway you in a particular direction.



Closing (12:00–12:30)

[SFX: café ambience, rain fades, soft synth tone underneath]

DARNLEY:
Grokipedia showing up inside ChatGPT answers isn’t the end of the world.

But it is a signal — one you need to take seriously…

We’re entering a phase where the internet isn’t just being indexed —
 it’s being rewritten, reassembled, and redistributed by machines.

And that changes how trust works. I’ve been saying this for years: trust no one, and believe nothing until you’re certain the facts are correct…

Thank you for spending this time with me at Darnley’s Cyber Café.
I’m your host, Darnley.


If you find this podcast useful, please invite a friend to listen or follow.

Stay safe, everyone.

Remember, knowledge is power —
especially when knowing what to question matters most.

[SFX: outro fades]