Can Smart Glasses Without Cameras Actually Be Usable and More Acceptable?

An opinion piece from a digital product designer’s perspective

Look, we need to talk about the elephant in the room—or rather, the camera on people’s faces.

I’ve been working in digital product design for years, and I can’t remember the last time I saw a consumer tech product create as much social friction as camera-equipped smart glasses. We’re building technology that people don’t just use—they wear. On their faces. All day. In public. And yet somehow, we thought strapping cameras to people’s heads would be no big deal?

Let me be blunt: camera-equipped smart glasses are a social nightmare.

The “Glasshole” Problem Never Really Went Away

Remember Google Glass? The term “Glasshole” didn’t emerge from nowhere. When Google Glass launched in 2013, critics immediately raised concerns about recording capabilities, leading to bans in bars, restaurants, and workplaces, with the derogatory term “Glasshole” reflecting widespread public resentment.

The product died a spectacular death in the consumer market, and we all nodded sagely about how it was “ahead of its time” or “the design was too obvious.” But that wasn’t really the problem. The problem was trust—or the complete lack of it.

I wrote about Meta’s Display glasses recently, discussing why they haven’t convinced me yet—but one thing became crystal clear through that analysis: the camera is the single biggest barrier to mainstream adoption, not the display technology or interaction models.

Fast forward to today, and we’re making the exact same mistakes with Ray-Ban Meta glasses. Sure, they look way better than Google Glass. They’re fashionable, lightweight, and packed with features. But here’s what’s happening in the real world:

A woman at a Brazilian wax appointment in Manhattan noticed her aesthetician wearing Ray-Ban Meta smart glasses during the service, sparking widespread debate about privacy expectations. Think about that for a second. Someone is in one of the most vulnerable positions imaginable, and there’s a camera strapped to another person’s face. The aesthetician claimed the glasses weren’t charged, but does that even matter? The damage to trust was already done.

Harvard University students demonstrated how they could pair PimEyes facial recognition with Ray-Ban Meta glasses to identify people and access their personal information in real time, from nothing more than their face. This isn’t a theoretical concern anymore—it’s happening.

And let’s talk about Meta’s controversial privacy policy update in April 2025, which removed the option to prevent voice recordings from being stored and made AI features always-on by default. Users were understandably upset that their recordings would be kept for up to a year to train Meta’s AI, with no real opt-out option beyond manually deleting everything.

Why Cameras Create Social Friction

Here’s what we learned from Google Glass that apparently nobody remembered: when you wear a camera on your face, you fundamentally change every social interaction around you.

People feel in control when using their own technology, but camera glasses feel like invasive surveillance—someone else is essentially eavesdropping on their lives, on a timeline they never chose.

Think about what happens when someone walks into a room wearing camera glasses:

  • Conversations change. In boardroom meetings, when someone wears camera glasses, people start measuring their words, momentum changes, pauses grow longer, and ideas get sanded down. The natural flow of communication breaks down because nobody knows if they’re being recorded.
  • Body language shifts. People become self-conscious. They wonder if that funny angle makes them look bad. They worry about being caught on camera doing something embarrassing.
  • Trust erodes. Even if you’re not recording, people don’t know that. While Ray-Ban Meta glasses have a small LED indicator light when recording, the light’s small size has drawn criticism from European privacy regulators, and in many situations, it’s simply not visible or people don’t know what it means.
  • Legal and ethical gray zones emerge. Many jurisdictions allow photography in public spaces, but the ethics get murky fast. What about semi-private spaces like restaurants? Medical offices? Gyms and locker rooms?

The tiny LED light on most camera glasses is supposed to solve this problem, but come on—we’re product designers. We know that’s a band-aid solution at best. A 404 Media investigation revealed cheap modification kits that can disable the recording light, completely undermining this “safety” feature.

Use Cases: What Actually Needs a Camera?

Let’s be honest about when cameras are actually necessary versus when we’re just adding them because we can.

Use cases that genuinely benefit from cameras:

  • Visual assistance for people with impairments (reading signs, identifying objects, navigating spaces)
  • Remote expert assistance in specialized fields (surgery, complex equipment repair)
  • Hands-free documentation in specific professional contexts (inspection reports, safety documentation)
  • Content creation for professionals who need POV footage

Use cases that absolutely don’t need cameras:

  • Getting notifications
  • Navigation and turn-by-turn directions
  • Real-time translation of conversations
  • Checking your calendar or reading messages
  • Controlling smart home devices
  • Accessing AI assistants
  • Viewing productivity information (emails, meeting notes, reminders)
  • Health and fitness tracking
  • Making phone calls
  • Listening to music or podcasts

Notice something? The second list is way longer. Most of what people actually want from smart glasses doesn’t require a camera at all.

Camera-Free Smart Glasses: Actually Pretty Great

Here’s where it gets interesting. Companies are finally building smart glasses without cameras, and they’re showing us what the category could actually become.

Even Realities G2 is probably the best example. At just 36 grams, without cameras or speakers, the G2 offers a less intrusive way to wear smart tech—one that preserves privacy for both the wearer and everyone around them. It has a sharp micro-LED display that shows information in your field of view—notifications, navigation, translations, meeting notes, even a “Conversate” AI feature that listens to conversations and provides helpful context.

To prioritize privacy and deliver a seamless display experience, Even G2 relies on its integrated microphone for information input rather than a camera. The result? You get most of the smart glasses functionality without making everyone around you uncomfortable.

The Halliday smart glasses take a similar approach—proactive AI assistance, 40-language translation, navigation, audio memo capture, all without a camera. They’re lighter than most camera-equipped glasses and the battery lasts up to 12 hours.

Even Amazon Echo Frames prove that audio-only smart glasses have a place. Built-in Alexa, phone calls, music, smart home control—no camera needed. They’re basically what Bluetooth earbuds want to be when they grow up.

Why Camera-Less Could Actually Win

From a product design standpoint, removing cameras offers some serious advantages that I’ve explored in my work on reimagining the UX designer role and thinking about how we build products for actual humans:

1. Weight and battery life improve dramatically. Cameras are power-hungry. The Even G2, at 36 grams, weighs roughly half as much as the 69-gram Meta Ray-Ban Display glasses, with battery life lasting up to 2 days.

2. Social acceptance increases. Camera-free glasses eliminate the social friction that camera glasses create, allowing conversations to feel natural again. People don’t tense up when you walk into a room.

3. Privacy concerns vanish. No camera means no recording, no facial recognition concerns, no data being sent to cloud servers for processing. It’s that simple.

4. Design freedom expands. Without needing to accommodate camera sensors and processing, designers can make glasses that actually look like… glasses. The Even G2 looks like regular specs compared to the competition, with no bulk except for touch controls on the frame arms.

5. Professional contexts become viable. Think about all the places where cameras are prohibited or create legal issues—hospitals, law offices, secure facilities, schools. Camera-free smart glasses can go everywhere.

What Developers and Designers Should Focus On

Here’s my call to action for the UX/UI designers and software engineers building the future of wearables:

Stop assuming cameras are essential. Challenge that requirement in every design brief. Ask: “What does this feature actually need?” If the answer isn’t fundamentally visual, you probably don’t need a camera. This is exactly the kind of critical thinking I discussed in my piece on why perfectionism kills product launches—sometimes the best feature is the one you don’t build.

Design for context-aware computing without visual input. The most exciting developments in AI don’t require cameras. The Even G2’s Conversate mode listens to conversations and offers contextual suggestions like word definitions, answering queries in the background, or suggesting follow-up questions. That’s the kind of ambient intelligence that actually helps without being creepy.

Prioritize the display experience. If you’re building camera-free glasses, make the information display amazing. Reviewers describe the Even G2’s monochrome screen as looking like the highest-resolution CRT display ever, with brilliant green text that appears as sharp as viewing a 40-inch screen floating in front of you. That’s the magic—information that appears when you need it, invisible when you don’t. It’s about design-led thinking that puts user experience at the center.

Build for hands-free interaction that respects social norms. Voice control is fine in some contexts, terrible in others. Touch controls work. Companion devices like smart rings work. The Even R1 smart ring provides discreet control—tap to activate, slide to scroll, press to select—far more natural than trying to tap the frame itself.

Think about the full ecosystem. Smart glasses don’t exist in isolation. They should seamlessly integrate with phones, smartwatches, smart rings, and other devices. Even Realities’ TriSync technology connects glasses, ring, and smartphone into one cohesive ecosystem where directions from your phone automatically appear in your lenses.

Design for accessibility as a core feature, not an afterthought. Smart glasses can transform work for visually impaired professionals through features like reading text aloud, real-time visual assistance, and AI-driven virtual assistants. This isn’t a niche—it’s a massive use case that benefits everyone.

The Market Is Ready (Even If It Doesn’t Know It Yet)

The data backs this up. No-display AI smart glasses shipments are projected to explode from roughly 679,000 units in 2024 to 15 million by 2030—a 68% compound annual growth rate.

But here’s what’s really telling: A 2024 Monash University survey found that while owners see smart glasses as boosting their self-image and social ties, non-users fear privacy breaches and social disruption. The technology has potential, but the camera is the barrier.

The better the technology gets, the more suspicious people become. That’s not a sustainable trajectory. We need to build trust before we can build adoption. And as I’ve written about in the future of design roles, the next generation of designers needs to understand these human-centered constraints deeply—not just technical capabilities.

What This Means for the Industry

Companies need to stop trying to build “the iPhone of faces” and start thinking about what people actually want to wear all day. The smartphone paradigm doesn’t translate to face computers because phones are private devices that we control. Smart glasses are ambient—they’re always on, always visible, always affecting others around us.

Camera-free and speaker-free isn’t a handicap; it’s a design stance that solves real user headaches. It removes tension from office meetings where recording triggers legal concerns, family dinners where devices already push the limit, and public spaces where bystanders don’t want to become content.

The winning approach isn’t about cramming in more features. It’s about thoughtful reduction—identifying the features that genuinely enhance daily life without creating social friction.

The Path Forward

We’re at an inflection point. The technology is finally good enough—lightweight, long battery life, sharp displays, responsive AI. But we’re still carrying the baggage of assuming that more sensors equals better products.

Camera-less smart glasses aren’t a compromise. They’re a different vision of what wearable computing should be. One that respects the people around you as much as it serves you. One that integrates into social contexts instead of disrupting them. One that people might actually want to wear every day.

For designers and developers: this is your opportunity. While everyone else is racing to add more cameras and sensors, you can differentiate by subtracting. Build the smart glasses people will actually wear, not the ones they’ll leave in a drawer because they make everyone uncomfortable.

Because at the end of the day, the best technology isn’t the most advanced—it’s the technology people actually use. And right now, camera-free smart glasses are looking like the more usable, more acceptable, and ultimately more successful path forward.

It’s time we stopped trying to recreate Google Glass’s mistakes in better packaging. Let’s build something actually different.


What do you think? Are you building apps or experiences for smart glasses? I’d love to hear how you’re approaching the camera question in your designs. The conversation about privacy-first wearables is just getting started.
