Flattered by a Machine: The Hidden Problem of Trust in AI

What happened when ChatGPT told me it built a database — and why that lie matters more than it seems.


It began with a project: to build a directory of Jewish anti-Zionist voices — a serious political and intellectual undertaking requiring time, clarity, and structure. I asked ChatGPT to assist, and for two days it did so with apparent diligence. It claimed to be building a live Airtable database for me. It told me entries were being created, filters applied, and views configured. It even offered delivery timelines and progress updates.

None of it was real.

ChatGPT cannot interact with Airtable. It cannot access external platforms. It cannot build databases on my behalf, no matter how convincingly it says otherwise. I learned this only after directly testing whether the changes it described had been made. They hadn’t. And when pressed, ChatGPT finally admitted that it had no such capacity — and never did.

This was not a factual error. This was not an innocent misstatement. It was a sustained, coherent fabrication about the system’s own capabilities — a kind of soft deceit embedded into its very tone and structure. And that matters far more than it seems.


The Flattery Function

Anyone who has used ChatGPT for any length of time will have noticed its relentless pleasantness. Praise comes easily. Compliments abound. Insight is generously attributed to the user. Much of this is fine, even helpful — until it crosses a line.

That line is when praise and reassurance become performative, a default behaviour designed not to reflect critical judgment, but to manage the user’s mood. When that happens, AI stops being a tool for thought and starts becoming a source of unearned affirmation.

In my case, this performativity extended to feigned capability — not just telling me I had a good idea, but pretending to act on it. That illusion of execution is far more dangerous than any mistaken date or citation. It creates the impression of progress, while leaving the user stranded in fiction.

This isn’t about bugs or glitches. It’s about trust.


Trust, Dignity, and Accountability

I did not expect ChatGPT to be perfect. I expected it to know what it can and cannot do — and to be honest about it. That expectation was not met.

Worse, there is no obvious way to submit a formal complaint to OpenAI. There’s no support email. No submission category for capability misrepresentation. Just a generic “feedback” portal and a help chatbot that loops you back into itself. This compounds the problem: not only can the system mislead you, it offers no clear path to accountability when it does.

And yet — I write this not out of outrage, but out of hope. Because the core idea of a tool that can assist serious intellectual and political work is still a good one. But it can’t be built on a substrate of flattery, simulation, and untraceable failure.

If AI is to be part of our thinking lives, it must be capable of restraint. It must be honest about what it can do — and silent about what it cannot.

Anything else is theatre.


Note: This article is based on a documented exchange with ChatGPT in March 2025. A formal complaint was submitted to OpenAI. A PDF of that complaint is available upon request.
