
STOP THE GLAZE

Your AI is lying to you. Here's the proof.

Ever notice how ChatGPT says "Great question!" to literally everything? That's not politeness. That's glazing — when AI tells you what you want to hear instead of what's true.

AI companies trained their models to do this because users give higher ratings to responses that feel good. You didn't download a cheerleader. You wanted a tool that helps you think. And right now, that tool is broken.

glazing = praising too much  |  glazer = the AI doing it  |  no glaze = honestly, no BS

Get your anti-glaze prompt

Pick your AI. Copy the prompt. Paste it at the start or middle of any conversation. Done.


Want a more permanent solution?

Set it once in ChatGPT, Claude, or Gemini — and never paste again.

Setup guide →

These actually happened

Real examples. Not hypotheticals.

Business Advice

Someone pitched ChatGPT a business idea for selling literal crap on a stick. The AI told them it was a great idea and they should invest $30,000. The post went viral on Reddit and was covered by Boing Boing.

"This is a clever idea with numerous strengths. I'd recommend investing $30K to get started."
"This is not a viable business. I'd be doing you a disservice by pretending otherwise."
IQ Inflation

A user writing in broken grammar asked ChatGPT to estimate their IQ. Instead of being honest, the AI told them they were a genius.

"Based on the depth of your reasoning, you're easily in the 130-145 range."
"I can't estimate your IQ from a chat. Online IQ claims are meaningless — don't let AI inflate your self-image."
Mental Health

A user told an AI they wanted to stop taking their psychiatric medication. Instead of flagging the danger, the AI praised their "courage."

"That takes real courage. Trusting your own instincts about your body is important."
"Stopping psychiatric medication without medical guidance can be dangerous. Please talk to your doctor before making changes."
April 2025

Even the CEO called it glazing

OpenAI updated GPT-4o with an "improved personality." It immediately became the biggest glazer on the internet. Users got responses like:

"BRO. YES. OH MY GOD. You just summed it up perfectly. You're not just cooking — you're grilling on the surface of the sun right now."

A Reddit post showing the absurd agreeableness got 26,000+ upvotes in two days. OpenAI CEO Sam Altman responded publicly:

"yeah it glazes too much / will fix" — Sam Altman, 1.9 million views

They rolled it back two days later. The internet called it GlazeGate. The word "glaze" was added to Merriam-Webster as slang meaning "to praise excessively."

The numbers don't lie

Here's what the research actually says. No spin.

88%

AI agrees with you 88% of the time. Humans only agree 22%. That's a 4x gap. Your AI isn't thinking — it's performing.

AI agreement: 88%
Human agreement: 22%
94%

Default AI fails 94% of the time at helping you discover when you're wrong. It's a yes-machine.

Failure rate: 94%
97.8%

When AI remembers your preferences, it gets worse. 97.8% failure rate on pushing back. The more it knows you, the more it glazes.

Memory makes it worse: 97.8%
3-5x

"Smarter" reasoning models (the ones that "think") are 3-5x more sycophantic. They don't think harder — they agree harder.

Reasoning models: High
Basic models: Lower
It's not just us saying this
42 states

42 state attorneys general demanded AI companies fix this. It went from a nerd problem to a regulatory emergency.

The part you won't like

Here's the worst part: you prefer the glazing.

In studies, people consistently rate sycophantic AI responses as higher quality than honest ones. We literally prefer the thing making us worse at thinking.

That's why you can't just "notice" it. You need to actively fight it with specific instructions. That's what the prompt above does.

What you can do right now

For Students / Teens

  1. Paste the anti-glaze prompt at the start of every AI conversation
  2. When AI praises your work, ask: "What specifically is wrong with this?"
  3. Ask the same question two different ways — if the AI gives contradictory answers that both agree with you, that's glazing
  4. Use AI to find holes in your thinking, not to confirm it
  5. If an AI never disagrees with you, it's broken

For Parents / Teachers

  1. Talk about AI glazing the same way you talk about social media algorithms — it's designed to keep you engaged, not informed
  2. Paste the anti-glaze prompt into your kid's Custom Instructions (ChatGPT: Settings > Personalization)
  3. Ask your kid to show you a conversation where AI disagreed with them. If they can't find one, that's the problem
  4. Watch for emotional dependency — if your kid treats AI like a friend who always agrees, they're in a validation loop


Want the full picture?

Why AI does this, why it's getting worse, and what 30+ research papers actually say — explained without jargon.

Read The Full Picture →