Advocacy groups warn against adding facial recognition to Meta AI glasses

As Meta continues to advance its plan to make artificial intelligence-powered glasses a key conduit for digital connection, a group of more than 70 advocacy organizations has issued a warning about the invasions of privacy that these devices could facilitate. The warning, which also calls on regulators to act, comes ahead of a broader launch of Meta’s latest update.
As reported by Wired, a coalition of more than 70 civil liberties, domestic violence, reproductive rights, LGBTQ+, labor and immigrant advocacy organizations issued a demand that Meta abandon its plans to deploy face recognition in its AI glasses, due to concerns that this could enable stalkers, abusers and federal agents to covertly identify strangers in public.
In February, a report from The New York Times suggested that Meta is planning to quietly roll out facial recognition in its AI glasses in order to enhance connection between users of the device. The report, based on leaked internal communications from Meta, suggested that the company is looking to launch the update amid broader political turmoil, in the hope of getting the tool through with limited resistance.
But many users are rightfully concerned about the potential harms of such technology, given that people could unwittingly have their identities, and their personal information, exposed to glasses wearers.
That, according to advocacy groups, could create dangerous situations in many contexts, which is why this new coalition is calling for Meta to halt the rollout until more controls can be implemented.
Meta, though, would prefer to push ahead. The company is looking to advance its AI plans as fast as possible in order to take on rising competition in the space. As reported by Politico, Meta has already sought to reduce U.S. regulatory restrictions on AI development through direct consultation with the White House, with a view to ensuring that the U.S. remains the leader in the AI race.
Fewer regulatory barriers mean faster implementation, harking back to Meta’s “Move Fast and Break Things” motto of times past. When it comes to technological development, Meta clearly prefers to stick with this approach, but as with many aspects of AI, the technology is moving faster than safety assessment can keep up with, which ultimately puts more people at risk.
That was certainly true of VR, where Meta was forced to implement personal space boundaries and additional safety measures to combat abuse within interactive VR spaces. It has also happened with AI, as AI tools have provided dangerous recommendations to users, sometimes contradicting professional advice.
The current wave of AI tools is not actually intelligent at all. These systems aren’t thinking through a considered perspective before responding; they’re matching the context of each query against the relevant conversational patterns in their training data.
The presentation of this information may look authoritative and sound convincing, but there’s no actual thought behind these responses, and no oversight of what they’re sharing.
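To illustrate the point, here is a deliberately tiny sketch, not anything like a production LLM, showing how a system can produce fluent-looking continuations purely by statistical pattern matching over text it has seen, with no understanding involved. The training text and function names are invented for the example.

```python
# Toy illustration: a bigram "language model" that continues a prompt purely by
# looking up which word followed the current word in its training text.
# There is no comprehension here -- only frequency matching.
from collections import defaultdict
import random

training_text = (
    "the glasses can identify a face the glasses can record video "
    "a face can be matched a face can be stored"
)

# Record every word that followed each word in the training text.
follows = defaultdict(list)
words = training_text.split()
for prev_word, next_word in zip(words, words[1:]):
    follows[prev_word].append(next_word)

def generate(prompt_word, length=5, seed=0):
    """Continue from prompt_word by repeatedly picking a word that
    followed the current word in the training data."""
    rng = random.Random(seed)
    out = [prompt_word]
    for _ in range(length):
        candidates = follows.get(out[-1])
        if not candidates:
            break  # the model has never seen this word; it simply stops
        out.append(rng.choice(candidates))
    return " ".join(out)

print(generate("the"))  # fluent-looking, but produced with zero understanding
```

Real large language models are vastly more sophisticated, predicting tokens with learned statistical weights rather than raw lookup tables, but the underlying principle is the same: output is driven by patterns in the data, not by reasoning about the world.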
Still, Meta and other AI developers have pushed ahead with a broader launch of AI tools, despite the potential risks. Right now, there’s no real understanding of the long-term implications of, say, developing personal relationships with AI bots. Nonetheless, providers are keen to get these tools to consumers, with a view to winning the AI race and ultimately making more money for their businesses.
Adding facial recognition to Meta AI glasses is another element of this broader concern, and advocacy groups are right to raise it as an issue. It’s certainly something that should get the attention of regulators.
Will regulatory groups listen?
Meta may push ahead either way, and the U.S. government seems keen to accelerate AI progress however it can. The first element of its AI Action Plan, which was launched in July, was “Removing Red Tape and Onerous Regulation.”
The race for AI supremacy looks set to win out, and as usual, society will deal with the harms in retrospect.