China's GLM-5 Just Shattered AI's Biggest Problem (While OpenAI Dismantles Its Ethics Team)
The Week AI Got Real (And a Little Scary)
While everyone was arguing about ChatGPT's latest features, a Chinese AI startup just solved the problem that's been haunting every enterprise AI deployment: hallucinations. You know, when your AI confidently tells you that Abraham Lincoln invented the iPhone?
z.ai's new GLM-5 model scored -1 on the AA-Omniscience Index, a 35-point improvement that means it now abstains or answers correctly far more often than it fabricates. For context, that's like going from a chronic liar to the most reliable person in the room.

The secret sauce? A new reinforcement learning technique they're calling "slime." (Yes, really. Scientists have a sense of humor too.)
Why This Actually Matters for Your Business
Here's the uncomfortable truth: most companies haven't deployed AI at scale because they can't trust it not to embarrass them. When your chatbot tells customers the wrong shipping policy or your AI assistant invents legal precedents, you've got a lawsuit waiting to happen.
GLM-5 changes the game because it's open source with an MIT license. Translation? Enterprises can actually deploy this without vendor lock-in or wondering what's happening under the hood.
Meanwhile at MIT, researchers cracked another enterprise AI headache with their self-distillation fine-tuning (SDFT) technique. Think of it this way: until now, teaching your AI model a new skill was like giving someone brain surgery—there was a good chance they'd forget how to tie their shoes.

SDFT lets models learn new capabilities while retaining old ones. No more maintaining separate models for every task. No more catastrophic forgetting.
Old Approach:           New SDFT Approach:
Task A → Model 1        Task A ──┐
Task B → Model 2        Task B ──┼─→ Single Model
Task C → Model 3        Task C ──┘   (Remembers All)
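The idea behind that diagram can be expressed as a combined training objective: fit the new task while penalizing drift from a frozen copy of the pre-fine-tuning model on old-task inputs. Here's a minimal sketch of that style of loss; the function names, the `lam` weighting, and the toy logits are illustrative assumptions, not MIT's actual implementation:

```python
import math

def softmax(logits):
    # Numerically stable softmax over a list of logits.
    m = max(logits)
    exps = [math.exp(x - m) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

def cross_entropy(logits, target_idx):
    # Standard task loss on the *new* task's labeled data.
    return -math.log(softmax(logits)[target_idx] + 1e-12)

def kl_divergence(p, q):
    # KL(p || q): how far the student's distribution q has drifted
    # from the frozen teacher's distribution p on old-task inputs.
    return sum(pi * (math.log(pi + 1e-12) - math.log(qi + 1e-12))
               for pi, qi in zip(p, q))

def sdft_style_loss(student_logits_new, new_label,
                    teacher_logits_old, student_logits_old,
                    lam=0.5):
    # Combined objective: learn the new task while staying anchored
    # to the pre-fine-tuning model's own predictions on old tasks.
    # The anchoring term is what guards against catastrophic forgetting.
    task_loss = cross_entropy(student_logits_new, new_label)
    drift = kl_divergence(softmax(teacher_logits_old),
                          softmax(student_logits_old))
    return task_loss + lam * drift
```

If the student still matches its old self on old-task inputs, the drift term is zero and only the new-task loss remains; as it starts overwriting old behavior, the penalty grows, which is exactly the "remembers all" property the diagram promises.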
The Uncomfortable Subplot Nobody's Talking About
While innovation accelerates in the open source world, OpenAI just disbanded its mission alignment team—the group responsible for ensuring AI development stays safe and trustworthy.
Their leader got promoted to "chief futurist" (which sounds like being made CEO of a division that doesn't exist), and everyone else got reassigned. Draw your own conclusions.
Hot take: When the company closest to AGI dismantles its ethics team, that's not a restructuring—that's a statement of priorities.
Oh, and they're also abandoning the "io" branding for their AI hardware that won't ship until 2027 anyway. So there's that.
The Security Wake-Up Call You Can't Ignore
Amid all this AI excitement, Microsoft just issued an urgent warning about critical zero-day exploits targeting Windows and Office users. Hackers can take complete control of your computer by getting you to click a malicious link or open a file.
If you're reading this and haven't patched your systems this week, close this tab and do that now. Seriously. I'll wait.
The Bottom Line
We're watching a fascinating divergence: open source AI is solving real enterprise problems (hallucinations, continuous learning), while Big Tech seems more interested in hardware that doesn't exist and reorganizing away from safety concerns.
The companies winning at AI in 2026 won't be the ones with the biggest models—they'll be the ones that can actually trust their deployments.
So here's my question for you: Would you rather deploy a closed-source model from a company that just disbanded its ethics team, or an open source model with record-low hallucinations that you can actually inspect and control?
The answer might determine whether your AI strategy becomes a competitive advantage or a compliance nightmare.