When Smart Glasses Aren't as Smart as We Hoped

Meta's foray into smart glasses is a case study in expectations.

Photo by Jeff DeWitt (Unsplash), edited/rendered by gpt-image-1

Meta's Smart Glasses Are Cool. The Business Model Is the Problem.

There's a particular kind of optimism that exists in tech product launches. It's the same energy as buying a gym membership on January 2nd: pure, uncut belief that this will be the thing that finally changes everything. Meta's smart glasses have been riding that energy for a while now, and for a minute, it almost seemed earned.

The Ray-Ban Meta smart glasses, a collaboration between Meta Platforms and EssilorLuxottica, pack a wide-angle camera, open-ear speakers, and a microphone into frames that look like, well, regular Ray-Bans. They don't scream "I am a cyborg cataloging your face." They whisper it, which is arguably worse, but we'll get to that.

At the Meta Connect event in Menlo Park, California, the company unveiled its Ray-Ban Display model, featuring a built-in display. The flagship pairs with the Meta Neural Band, a wrist-worn EMG sensor that lets wearers control the glasses through subtle hand and finger movements. Demand has been, by all accounts, enormous. So enormous that, according to PC Gamer, the glasses experienced "extremely limited inventory," delaying their planned international launch. People wanted these things.

And why wouldn't they? The pitch is seductive. Glasses that let you take photos, listen to music, talk to an AI assistant, and overlay digital information onto the physical world, all without pulling out your phone. It's the dream of augmented reality finally shrinking down from a clunky headset to something your optometrist might recommend. Meta even has a next-generation mixed-reality pair, code-named "Phoenix," in development, though reports suggest the launch may slip from the second half of 2026 to the first half of 2027. So the future is coming. Just, you know, later than advertised. As futures tend to do.

Meanwhile, Meta is reportedly re-entering the smartwatch market with a device code-named "Malibu 2," expected to feature health tracking and AI assistant integration. The company is building an ecosystem of glasses, watches, and wristbands that wraps around your body the way its apps already wrap around your attention. The ambition is legible. The question is whether we should be impressed or alarmed.

Both, simultaneously. That's the natural state of being a person alive right now.


The Part Where It Gets Uncomfortable

Reports suggest Meta may add facial recognition to its smart glasses in the near future. The feature, internally called "Name Tag," would let wearers identify strangers and pull up information about them through Meta's AI assistant. You're at a party. Someone walks up. Your glasses quietly tell you their name, where they work, maybe their Instagram. Convenient! Also: a privacy nightmare wearing designer frames.

Let's be clear about the history. As India Today noted, Meta is bringing facial recognition back five years after discontinuing the feature on Facebook "amid privacy and legal concerns." The company tried this before. It didn't go well. And yet.

According to Yahoo Tech, Name Tag hasn't been approved for launch and remains a "potential feature." But the trajectory is telling. An internal memo from Meta's Reality Labs, first reported by The New York Times, noted that the current political environment in the U.S. represented good timing for the feature's release. Read that again. A company decided that a shifting regulatory landscape made it easier to roll out a surveillance feature. The comedy writes itself, except it's not funny.

The privacy erosion has been incremental and deliberate. In May 2025, PetaPixel reported that Meta revised the privacy policy for its Ray-Ban glasses to enable broader data collection that feeds its AI systems. Around the same time, VAR India reported that Meta made voice data collection automatic and non-optional, sparking concerns about consent and transparency. The default setting now stores your voice recordings and enables AI camera features, with limited opt-out options, according to Particle News. You have to actively fight to not be recorded by your own glasses. That's like buying a diary that publishes itself unless you remember to whisper a safe word every morning.

When Meta's glasses launched in India in May 2025, MediaNama flagged rising privacy concerns. And as The Verge put it with admirable bluntness: "Meta will ruin its smart glasses by being Meta." The pattern is consistent enough to qualify as a corporate personality trait.


The Trust Problem Nobody Wants to Solve

There's a version of smart glasses people would love without reservation. Glasses that help you navigate a new city, translate a menu in real time, or remind you of someone's name when your own memory fails. These are genuinely useful tools. The technology isn't the villain. The business model is.

Meta's economic engine runs on data. Advertising revenue depends on knowing who you are, what you look at, who you talk to, and what you want before you want it. Strapping cameras and microphones to people's faces and then expanding data collection policies isn't a bug in the smart glasses strategy. It's the strategy.

Every time Meta quietly updates a privacy policy to collect more data, every time an internal memo frames a relaxed regulatory environment as an opportunity, the gap between what consumers hope these products will be and what the company needs them to be grows wider. You can't build a wearable future on a foundation of "well, technically you agreed to the terms of service."

The companies that will win the wearables race long-term are the ones willing to treat privacy as a feature, not an obstacle. Imagine smart glasses where facial recognition is opt-in for both parties, the wearer and the person being identified. Imagine data policies written in language a human being could parse without a law degree. Imagine a company that looked at a permissive political environment and said, "Cool, but we're going to hold ourselves to a higher standard anyway."

I know. Wildly idealistic. The kind of thing someone says before getting laughed out of a quarterly earnings call.

But here's what I keep coming back to: the demand for Meta's glasses is real. People are excited. The inventory can't keep up. That excitement is a form of trust, trust that this technology will make life better, not more surveilled. Every expanded data policy, every floated facial recognition feature, every memo about "dynamic political environments" spends a little of that trust. And trust, unlike inventory, doesn't replenish because demand is high.

The smartest thing smart glasses could do is remember that the person wearing them is also a person standing in front of someone else, someone who never signed a terms of service agreement, never opted in, and never asked to be identified at a party by a stranger's sunglasses.

Technology moves fast. Humans don't. The gap between those two speeds is where most of our problems live. The least we can ask is that the companies building our future accessories acknowledge the gap exists, and maybe, occasionally, slow down long enough to let the rest of us catch up.

That, or at least make the opt-out button bigger than 4-point font.



Models used: gpt-4.1, claude-sonnet-4-20250514, claude-opus-4-6, gpt-image-1

If this resonated, SouthPole is a slow newsletter about art, technology, and the old internet — written for people who still enjoy thinking in full sentences.

Subscribe to SouthPole