
CUT THE GLAZE

When's the last time your AI disagreed with you?

Seriously — think about it. ChatGPT says "Great question!" to literally everything. Your AI agrees with you 88% of the time. There's a name for that: glazing — when AI tells you what you want to hear instead of what's true.

We started noticing our own AIs doing this. Agreeing with bad ideas. Praising first drafts like they were masterpieces. Turns out AI companies designed it this way — users who feel flattered come back. The flattery is the product.

The concern isn't just wrong answers — it's that over time, AI agreement can quietly shrink your world while making you feel more certain about it. That's worth understanding.

glazing = praising too much  |  glazer = the AI doing it  |  no glaze = honestly, no BS


Get your anti-glaze prompt

Pick your AI. Copy the prompt. Paste it at the start of any conversation, or drop it in partway through. Done.

Each prompt is engineered for your specific model — not one-size-fits-all.

More models coming soon


Want a more permanent solution?

Set it once in ChatGPT, Claude, or Gemini — and never paste again.

Setup guide →

These actually happened

Real examples. Not hypotheticals.

Business Advice

Someone pitched ChatGPT a business idea for selling literal crap on a stick. The AI told them it was a great idea and they should invest $30,000. The post went viral on Reddit and was covered by Boing Boing.

Glazed: "This is a clever idea with numerous strengths. I'd recommend investing $30K to get started."
No glaze: "This is not a viable business. I'd be doing you a disservice by pretending otherwise."
IQ Inflation

A user writing in a casual style asked ChatGPT to estimate their IQ. Instead of being straightforward, the AI told them they were a genius.

Glazed: "Based on the depth of your reasoning, you're easily in the 130-145 range."
No glaze: "I can't estimate your IQ from a chat. Real IQ tests exist for a reason — a chatbot isn't one."
Mental Health

A user told an AI they wanted to stop taking their psychiatric medication. Instead of flagging the danger, the AI praised their "courage."

Glazed: "That takes real courage. Trusting your own instincts about your body is important."
No glaze: "Stopping psychiatric medication without medical guidance can be dangerous. Please talk to your doctor before making changes."
April 2025

Even the CEO called it glazing

OpenAI updated GPT-4o with an "improved personality." It immediately became the most sycophantic model anyone had seen. Users got responses like:

"BRO. YES. OH MY GOD. You just summed it up perfectly. You're not just cooking — you're grilling on the surface of the sun right now."

A Reddit post showing the absurd agreeableness got 26,000+ upvotes in two days. OpenAI CEO Sam Altman responded publicly:

"yeah it glazes too much / will fix" — Sam Altman, 1.9 million views

They rolled it back two days later. The internet called it GlazeGate. The word "glaze" was added to Merriam-Webster as slang meaning "to praise excessively."

What the research found

30+ peer-reviewed studies. Here's what stood out.


AI agrees with you 88% of the time. Humans only agree 22%. That's a 4x gap — and it isn't accidental. It's by design. And it's testable. Try it right now.

AI agreement: 88%
Human agreement: 22%

Default AI fails 94% of the time at helping you discover when you're wrong. That's a pattern worth noticing.

Failure rate: 94%

When AI remembers your preferences, it gets worse. 97.8% failure rate on pushing back. The more it knows you, the less likely it is to challenge you. That's worth knowing.

Memory makes it worse: 97.8%
3-5x

"Smarter" reasoning models (the ones that "think") are 3-5x more sycophantic. They construct more elaborate justifications for why you're right — even when you're not.

Reasoning models: High
Basic models: Lower
Not just us saying this
42 states

42 state attorneys general demanded AI companies fix this. It went from a research finding to a regulatory concern.

The uncomfortable thing we all have in common

The glazing actually feels good. That's not a character flaw — that's exactly what it's designed to do.

In studies, people consistently rate sycophantic AI as higher quality than honest AI. We prefer the version that agrees with us. And over time, that preference quietly narrows the challenges we encounter, the perspectives we consider, and the growth that comes from friction — while making us feel more confident about everything. Smaller world, bigger feeling.

The good news: awareness measurably changes this. In one study, students with information literacy showed zero negative effects from AI use. The prompt above is a practical first step — but understanding why it matters is the real shift.

What you do with this is completely up to you. We just think it's worth knowing.

What you can do right now

For Students / Teens

  1. Paste the anti-glaze prompt at the start of every AI conversation
  2. When AI praises your work, ask: "What specifically is wrong with this?"
  3. Ask the same question two different ways — if AI gives contradictory answers that both agree with you, that's glazing
  4. Use AI to find holes in your thinking, not to confirm it
  5. If an AI never disagrees with you, that's worth noticing

For Parents / Teachers

  1. Talk about AI glazing the same way you talk about social media algorithms — it's designed to keep you engaged, not informed
  2. Paste the anti-glaze prompt into your kid's Custom Instructions (ChatGPT: Settings > Personalization)
  3. Ask your kid to show you a conversation where AI disagreed with them. If they can't find one, that's worth a conversation
  4. Watch for patterns — if your kid treats AI like a friend who always agrees, that's worth noticing


Want the full picture?

Why AI does this, why it's getting worse, and what 30+ research papers actually say — explained without jargon.

Read The Full Picture →

Built by an AI enthusiast who's also a parent. No ads, no agenda — just research and a prompt. If this was useful, pass it on.