
Google's New Gemini 3.1 Pro Has a Secret Weapon (And Microsoft Just Had a Nightmare Week)

3 min read
News · AI · Security · Big-Tech · LLM

Google Just Made AI Reasoning Adjustable (Yes, Like a Volume Knob)

Remember when Google briefly held the AI crown last year with Gemini 3 Pro, only to watch OpenAI and Anthropic sprint past them weeks later? Well, they're back—and this time they brought something different.

Google's new Gemini 3.1 Pro doesn't just claim to be 2X faster at reasoning. It lets you choose how hard the model thinks before answering. Three levels of adjustable thinking—effectively a "Deep Think Mini" you control on demand.

[Image: Google Gemini 3.1 Pro]

Think of it like this: Quick answer for "What's the capital of France?" Deep reasoning for "Design a novel protein structure that could treat Alzheimer's." Same model, different gears.

Traditional AI:

[Question] → [Model] → [Answer]

Gemini 3.1 Pro:

[Question] → [Choose Thinking Level] → [Model] → [Tailored Answer]

Level 1: Fast

Level 2: Balanced

Level 3: Deep
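
In API terms, the dial presumably looks like a per-request parameter. Here's a minimal sketch using the google-genai Python SDK; the gemini-3.1-pro model id and the three level strings are assumptions based on the announcement, not confirmed SDK values:

```python
# Hypothetical sketch: picking a thinking level per request with the
# google-genai Python SDK. The model id and the three level values
# below are assumptions, not confirmed parameters.
from google import genai
from google.genai import types

client = genai.Client()  # reads GEMINI_API_KEY from the environment

def ask(question: str, level: str = "balanced") -> str:
    """Send a question to the model at a chosen thinking level."""
    response = client.models.generate_content(
        model="gemini-3.1-pro",  # assumed model id
        contents=question,
        config=types.GenerateContentConfig(
            thinking_config=types.ThinkingConfig(
                thinking_level=level,  # assumed values: "fast" | "balanced" | "deep"
            ),
        ),
    )
    return response.text

# Cheap question, cheap gear:
print(ask("What's the capital of France?", level="fast"))

# Hard question, deep gear:
print(ask("Propose a novel protein scaffold relevant to Alzheimer's.", level="deep"))
```
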

This is huge for enterprises. Why pay Deep Think prices for every query when 80% of your questions don't need that firepower? Google's betting on efficiency as a competitive moat—and they might be right.
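
To put rough numbers on that bet: the prices below are invented for illustration, and only the ratio matters.

```python
# Back-of-envelope sketch of why per-query dialing matters.
# Prices are made up; the conclusion depends only on the ratio.
DEEP_COST = 10.0  # hypothetical $ per 1K queries at deep reasoning
FAST_COST = 1.0   # hypothetical $ per 1K queries at fast reasoning

def blended_cost(deep_fraction: float) -> float:
    """Cost per 1K queries when only deep_fraction of traffic runs deep."""
    return deep_fraction * DEEP_COST + (1 - deep_fraction) * FAST_COST

print(blended_cost(1.0))  # 10.0 -> everything at Deep Think prices
print(blended_cost(0.2))  # 2.8  -> 20% deep, 80% fast: ~72% cheaper
```
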

Meanwhile, Microsoft's Copilot Spent a Month Ignoring "Confidential" Labels

While Google celebrates, Microsoft is dealing with a trust apocalypse. For four weeks starting January 21, Copilot completely ignored sensitivity labels and data loss prevention (DLP) policies, cheerfully summarizing confidential emails it had no business touching.

[Image: Microsoft Copilot Security Issue]

The U.K.'s National Health Service was affected. The NHS. Patient data, research, operational secrets—all potentially exposed because enforcement broke inside Microsoft's own pipeline.

Here's the terrifying part: no security tool in the entire stack caught it. Not DLP. Not sensitivity labels. Not monitoring. The controls were there, configured correctly, and Copilot just... ignored them. And it's not an isolated lapse: this is the second such incident in eight months.
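
If enforcement can silently break inside a vendor's pipeline, one mitigation is an independent gate that you run yourself, before any content reaches the assistant. A minimal sketch; the get_sensitivity_label helper and label names are illustrative, not a real Microsoft API:

```python
# Hypothetical defense-in-depth sketch: check a document's sensitivity
# label *before* any content is handed to an AI assistant, so a broken
# vendor-side DLP path can't silently expose it. The helper and label
# names are illustrative, not a real Microsoft API.
BLOCKED_LABELS = {"Confidential", "Highly Confidential", "Restricted"}

def get_sensitivity_label(document: dict) -> str:
    # Stand-in for wherever labels live in your metadata store.
    return document.get("sensitivity_label", "Public")

def safe_to_summarize(document: dict) -> bool:
    label = get_sensitivity_label(document)
    if label in BLOCKED_LABELS:
        # Fail closed and leave an audit trail you control.
        print(f"BLOCKED: {document['id']} carries label '{label}'")
        return False
    return True
```

The design choice that matters here is failing closed: even when the vendor's enforcement disappears, your own audit trail survives.
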

The AI Trust Paradox Nobody's Talking About

We're asking AI to handle our most sensitive work while simultaneously discovering it can't respect basic security controls. The tooling isn't keeping up with the adoption curve.

Hot take: The companies winning the next phase of AI won't be the ones with the smartest models—they'll be the ones whose AI actually respects guardrails. Google's adjustable reasoning is clever, but can enterprises trust it with confidential data after watching Microsoft's failure?

Runlayer clearly sees the opportunity, offering enterprise-grade security wrappers for OpenClaw agents that employees are already installing on work machines despite documented risks. IT departments are scrambling to secure tools that bypassed procurement entirely.

[Image: Runlayer OpenClaw Security]

What This Means for You

If you're building with AI:

  • Audit everything. Don't assume security controls work just because they're configured.

  • Test worst-case scenarios. What happens when your AI ignores a sensitivity label? (See the test sketch after this list.)

  • Consider the dial approach. Variable reasoning could save costs and reduce risk exposure.

If you're buying AI:

  • Ask vendors about their last security incident. How they handled it tells you everything.

  • Demand logging and monitoring that actually works. Microsoft's DLP stack failed spectacularly.

  • Start with low-stakes use cases. Don't hand your crown jewels to models still learning to respect boundaries.
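
Here's what that worst-case test might look like in practice. safe_to_summarize is the illustrative gate from the Copilot section above, and my_guardrails is a hypothetical module name:

```python
# Hypothetical worst-case tests for the "sensitivity label" bullet above.
# safe_to_summarize is the illustrative gate sketched earlier; here it is
# imported from wherever you keep that enforcement code.
from my_guardrails import safe_to_summarize  # hypothetical module

def test_confidential_document_is_blocked():
    doc = {"id": "board-minutes-q1", "sensitivity_label": "Confidential"}
    assert not safe_to_summarize(doc), "Confidential content reached the model"

def test_public_document_is_allowed():
    doc = {"id": "press-release", "sensitivity_label": "Public"}
    assert safe_to_summarize(doc)
```
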

The Bottom Line

Google's playing offense with adjustable intelligence. Microsoft's playing defense after a catastrophic trust failure. And enterprises are caught in the middle, trying to figure out which risk they can live with.

The real question isn't which model is smartest—it's which one you can actually trust with your data. And right now? Nobody has a great answer.

What would it take for you to trust AI with your company's most sensitive information? Or is that line one we shouldn't cross yet?