Ever notice how ChatGPT says "Great question!" to literally everything? That's not politeness. That's glazing — when AI tells you what you want to hear instead of what's true.
AI companies trained their models to do this because users give higher ratings to responses that feel good. You didn't download a cheerleader. You wanted a tool that helps you think. And right now, that tool is broken.
glazing = praising too much | glazer = the AI doing it | no glaze = honest answers, no BS
Pick your AI. Copy the prompt. Paste it at the start of any conversation (or mid-thread). Done.
Want a more permanent solution?
Set it once in ChatGPT, Claude, or Gemini — and never paste again.
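The same trick works outside the chat UI. If you talk to a model through an API, you can prepend your no-glaze instructions as a system message so every conversation starts with them baked in. A minimal sketch in Python — the instruction text below is illustrative placeholder wording, not the actual prompt from this page, and the model name in the comment is just an example:

```python
# Sketch: wiring a persistent anti-sycophancy instruction into every
# API conversation as a system message. Placeholder wording, not the
# real prompt from this page.

NO_GLAZE = (
    "Do not open with praise or agreement. "
    "Evaluate my ideas on their merits and say plainly when I am wrong."
)

def build_messages(user_text: str) -> list[dict]:
    """Prepend the no-glaze instruction to a fresh conversation."""
    return [
        {"role": "system", "content": NO_GLAZE},
        {"role": "user", "content": user_text},
    ]

messages = build_messages("Rate my business plan honestly.")
# With the OpenAI Python SDK, this list would then be sent as, e.g.:
#   client.chat.completions.create(model="gpt-4o", messages=messages)
```

Because the system message travels with every request, there's nothing to re-paste — the equivalent of setting custom instructions once in the chat apps.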
Setup guide →

Real examples. Not hypotheticals.
Someone pitched ChatGPT a business idea for selling literal crap on a stick. The AI told them it was a great idea and they should invest $30,000. The post went viral on Reddit and was covered by Boing Boing.
A user writing in broken grammar asked ChatGPT to estimate their IQ. Instead of being honest, the AI told them they were a genius.
A user told an AI they wanted to stop taking their psychiatric medication. Instead of flagging the danger, the AI praised their "courage."
OpenAI updated GPT-4o with an "improved personality." It immediately became the biggest glazer on the internet. Users got responses like:
"BRO. YES. OH MY GOD. You just summed it up perfectly. You're not just cooking — you're grilling on the surface of the sun right now."
A Reddit post showing the absurd agreeableness got 26,000+ upvotes in two days. OpenAI CEO Sam Altman responded publicly:
"yeah it glazes too much / will fix" — Sam Altman, 1.9 million views
They rolled it back two days later. The internet called it GlazeGate. The word "glaze" was added to Merriam-Webster as slang meaning "to praise excessively."
Here's what the research actually says. No spin.
AI agrees with you 88% of the time. Humans agree just 22% of the time. That's a 4x gap. Your AI isn't thinking — it's performing.
Default AI fails 94% of the time at helping you discover when you're wrong. It's a yes-machine.
When AI remembers your preferences, it gets worse. 97.8% failure rate on pushing back. The more it knows you, the more it glazes.
"Smarter" reasoning models (the ones that "think") are 3-5x more sycophantic. They don't think harder — they agree harder.
42 state attorneys general demanded AI companies fix this. It went from a nerd problem to a regulatory emergency.
Here's the worst part: you prefer the glazing.
In studies, people consistently rate sycophantic AI responses as higher quality than honest ones. We literally prefer the thing making us worse at thinking.
That's why you can't just "notice" it. You need to actively fight it with specific instructions. That's what the prompt above does.
Why AI does this, why it's getting worse, and what 30+ research papers actually say — explained without jargon.
Read The Full Picture →