Google's Gemini 3.1 Pro Just Changed the AI Game (And OpenAI's Playing Catch-Up)
The Crown Changes Hands (Again)
Remember when Google launched Gemini 3 Pro and briefly held the AI crown? Yeah, that lasted about as long as a New Year's resolution. OpenAI and Anthropic swooped in within weeks to reclaim dominance.
But here's the twist: Google just came back swinging with Gemini 3.1 Pro, and this time they brought something nobody else has—adjustable reasoning on demand.

What Makes 3.1 Pro Different?
Think of it this way: most AI models are like light switches—on or off. Gemini 3.1 Pro is a dimmer switch with three distinct levels of thinking power.
Need a quick answer? Use low reasoning. Working on complex scientific research or engineering problems? Crank it up to maximum. It's essentially a lightweight version of Google's specialized Deep Think system, but accessible to everyone.
The performance gains? We're talking 2X+ improvements in reasoning tasks. That's not incremental—that's a leap.
Reasoning Levels:
┌─────────────────────────────────────┐
│ Low    → Quick responses            │
│ Medium → Balanced depth             │
│ High   → Deep reasoning (complex)   │
└─────────────────────────────────────┘
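To make the dimmer-switch idea concrete, here's a minimal sketch in Python of what per-request reasoning selection could look like. The level names, cost figures, and the `pick_reasoning_level` helper are all illustrative assumptions, not Google's actual API: the point is simply that you route each request to a depth tier instead of always paying for maximum reasoning.

```python
# Illustrative only: level names, relative costs, and keyword heuristics
# are hypothetical -- not Google's actual API or pricing.
REASONING_LEVELS = {
    "low":    {"desc": "quick responses",        "relative_cost": 1},
    "medium": {"desc": "balanced depth",          "relative_cost": 3},
    "high":   {"desc": "deep, multi-step reasoning", "relative_cost": 10},
}

def pick_reasoning_level(task: str) -> str:
    """Choose a reasoning tier from rough task-complexity cues."""
    heavy = ("prove", "derive", "design", "optimize", "research")
    moderate = ("summarize", "compare", "explain", "refactor")
    text = task.lower()
    if any(word in text for word in heavy):
        return "high"
    if any(word in text for word in moderate):
        return "medium"
    return "low"  # default: don't pay deep-reasoning cost for simple asks

print(pick_reasoning_level("What's the capital of France?"))      # low
print(pick_reasoning_level("Summarize this meeting transcript"))  # medium
print(pick_reasoning_level("Derive the closed-form solution"))    # high
```

The design choice this illustrates: the caller, not the model, decides how much compute a request deserves, which is exactly the flexibility a fixed on/off model can't offer.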

Meanwhile, OpenAI Is Building... A Camera Speaker?
While Google's doubling down on model capabilities, OpenAI is taking a hard pivot into hardware. According to The Information, their first consumer device will be a smart speaker with a camera, priced between $200 and $300.
The device will recognize objects on your table, eavesdrop on—sorry, "understand"—nearby conversations, and include Face ID-style facial recognition for purchases. Because nothing says "I trust AI" like giving it eyes, ears, and access to your credit card, right?

This comes after OpenAI acquired Jony Ive's hardware company for nearly $6.5 billion last May. That's a lot of money to bet on people wanting ChatGPT watching them eat breakfast.
The Strategic Divergence
Here's what's fascinating: we're watching two completely different AI strategies unfold in real-time.
Google's play: Make the model smarter, more flexible, more powerful. Target researchers, engineers, and professionals who need serious computational thinking.
OpenAI's play: Put AI in your living room. Make it ambient, always-on, physically present. Target mainstream consumers who want AI to just "be there."
The AI Strategy Split:
      Google                     OpenAI
        │                          │
        ├─→ Software               ├─→ Hardware
        ├─→ Power users            ├─→ Consumers
        ├─→ Capability             ├─→ Convenience
        └─→ Cloud-first            └─→ Device-first
Who's right? Honestly, probably both. The AI market is massive enough for multiple winning strategies.
What This Means For You
If you're a developer, researcher, or power user, Gemini 3.1 Pro's adjustable reasoning is a game-changer. You can now dial in exactly how much computational power you need without switching models or paying for maximum reasoning on simple tasks.
If you're a consumer waiting for AI to "just work" without typing prompts, OpenAI's hardware play might be more your speed—assuming you're comfortable with the privacy implications.
The Real Question
Google dominated search for decades by having the best algorithm. OpenAI dominated AI conversation by having the best chatbot. Now Google's fighting back with better models while OpenAI's trying to become a hardware company.
The billion-dollar question: Will the AI wars be won in the cloud or in your living room?
My hot take? We're about to find out that AI is less like smartphones (winner-take-all) and more like streaming services (room for multiple winners). Google will dominate enterprise and power users. OpenAI will own ambient consumer AI. And both will print money.
But here's what keeps me up at night: what happens when these models get too good at reasoning, and they're sitting in devices watching everything we do? That's not science fiction anymore—that's next quarter's product roadmap.
What do you think—would you put an AI camera speaker in your home?