DSUPOST

Independent global news · Daily, by named correspondents

AI, Privacy, and the Tensions in Emerging Technologies

Smart glasses and private AI conversations are testing privacy rights, prompting calls for tighter regulation and renewed focus on user safety.

By Jonas Lindqvist · 3 min read
Retro typewriter with 'AI Ethics' on paper.
Photo: Markus Winkler (Pexels License)

A woman outside a café becomes the subject of an unsolicited conversation. The man addressing her wears Meta's Ray-Ban smart glasses and records her reactions without her consent. Days later, she discovers the video circulating online. Incidents like this are becoming more common: Meta Platforms Inc. began selling its latest generation of smart glasses in September 2023, and privacy advocates argue the devices enable exactly this kind of covert recording.

"It's essentially legalised surveillance," said Jennifer King, Privacy and Data Policy Fellow at the Stanford Institute for Human-Centered AI. She highlighted the absence of clear rules distinguishing public from private spaces regarding wearable cameras. Current laws in many areas permit photography in public spaces, leaving individuals with limited recourse when their likeness becomes content without their knowledge.

Meta's glasses are not the only AI technology raising concerns. WhatsApp, also owned by Meta, introduced an AI private chat feature in October 2023. Dubbed 'incognito mode', it promises that conversations with the app's chatbot are end-to-end encrypted, with no record retained on the platform's servers. Will Cathcart, head of WhatsApp, stated, "Privacy is essential for trust." Experts warn, however, that the feature complicates accountability.

Professor Alan Woodward, a cybersecurity expert at the University of Surrey, expressed concerns about how privacy-preserving systems might inadvertently protect bad actors. "Without any oversight mechanism in place, we're essentially blind to potential misuse," Woodward noted. This highlights a broader tension in AI ethics: balancing user privacy with safeguards against exploitation.

The regulatory landscape surrounding these technologies lags behind their rapid growth. In the European Union, the proposed Artificial Intelligence Act, introduced in 2021 to govern AI systems alongside the General Data Protection Regulation (GDPR), remained under negotiation as of late 2023. Meanwhile, the US Federal Trade Commission (FTC) has opened inquiries into biometric data collection via wearables, yet comprehensive legislation has not materialised. This patchwork approach leaves significant gaps, especially in areas such as real-time facial recognition, which devices like Meta's glasses make increasingly feasible.

User safety also enters the debate. A 2023 study from the University of Cambridge found that many individuals were unaware they could be recorded by smart glasses in public. This lack of awareness exacerbates the power imbalance between device users and unknowing bystanders. "Any regulatory framework must include transparency obligations," said Dr. Laura Green, the lead author of the study.

The commercial incentives for companies like Meta to develop these technologies remain strong. Global shipments of augmented reality (AR) glasses are projected to exceed 10 million units annually by 2025, according to a market analysis report from IDC. The underlying AI systems powering these devices are also advancing rapidly, complicating the ethical calculus.

For technology developers, adopting privacy-by-design principles could mitigate emerging risks. This approach embeds data protection into the engineering process rather than treating it as an afterthought. However, a 2024 report from the Electronic Frontier Foundation noted that only a minority of major technology firms had implemented robust privacy safeguards in their next-generation AI products.

It remains unresolved whether consumer demand for privacy will drive self-regulation or whether governments will need to intervene more forcefully. History suggests that market-driven solutions may be insufficient: data collection practices over the past decade show companies often prioritise profit over privacy unless compelled otherwise.

The social implications of these trends raise philosophical questions about technology's role in society. When asked about AI's long-term implications before US lawmakers in 2023, Sam Altman, CEO of OpenAI, said, "We must design systems that we are willing to live with for decades." As policymakers, developers, and advocacy groups grapple with AI privacy and safety, the rapid pace of technological adoption continues to outstrip public debate. Without a sustainable model for AI governance, emerging technologies risk becoming tools of harm rather than progress.

#ai #privacy #smartglasses #technology #security
Jonas Lindqvist covers AI, semiconductors and platform regulation from Stockholm. Background in ML research at KTH; now reports on the industry's claims with the receipts.