Meta's New AI Scans Bone Structure to Catch Underage Instagram Users


Meta is deploying a new artificial intelligence system that scans photos and videos for physical traits like bone structure and height to identify and remove underage users from Instagram and Facebook. The company announced the rollout of this AI visual analysis tool, which aims to catch children under 13 who bypass age restrictions using false birthdays. This marks a significant shift from relying solely on user-provided data to actively analyzing the media uploaded to its platforms.

To address immediate privacy concerns, Meta clarified that this technology is not facial recognition. The AI does not identify specific individuals; instead, it evaluates general visual cues and physical proportions to estimate a broad age range. This visual scan operates alongside Meta's existing text-based detection systems, which flag contextual clues such as birthday mentions, school grades, and specific keywords in bios, captions, and comments. Moving forward, Meta plans to expand this text analysis to cover Instagram Reels, Instagram Live, and Facebook Groups.

If the AI flags an account as potentially belonging to an underage user, the profile is immediately deactivated. The user must then complete a formal age verification process to regain access, or the account will be permanently deleted. The visual analysis feature is currently live in select countries, with a broader global rollout planned in the coming months.

Expansion of the Teen Accounts System

Alongside the new visual scanning technology, Meta is aggressively expanding its Teen Accounts system. This automated framework places users suspected of being between 13 and 15 years old into a highly restricted platform experience. Key features of this system include:

  • Private by Default: Accounts are automatically set to private, requiring manual approval for new followers.
  • Restricted Messaging: Direct messages (DMs) are strictly limited to users they are already connected with.
  • Content Filtering: Harmful comments and sensitive content are automatically hidden from the user's feed.
  • Parental Oversight: Parents are granted visibility into their children's AI chats and interactions.

This expansion now covers Instagram users in Brazil and 27 European Union countries. Additionally, the Teen Accounts framework is launching on Facebook in the United States for the first time, with the UK and EU scheduled to follow in June.

The Privacy Tightrope of Visual Scanning

Meta’s aggressive push into AI-driven age verification is a direct response to mounting legal and regulatory pressure. With a recent $375 million penalty in New Mexico and an ongoing European Commission investigation into child safety on its platforms, the company is forced to prove it can effectively police its user base. By shifting from passive text analysis to active visual scanning, Meta is signaling to regulators that it is taking proactive, technologically advanced steps to keep children off its apps.

However, analyzing bone structure and physical proportions introduces a complex privacy dilemma. While the system avoids facial recognition in the strict sense, scanning millions of user photos for physical traits normalizes a highly intrusive form of algorithmic surveillance. The success of this initiative will depend entirely on the AI's accuracy: false positives could lock legitimate users out of their accounts, while false negatives will leave Meta vulnerable to further regulatory fines. As this technology rolls out globally, the balance between child safety and user privacy will remain under intense scrutiny.

Sources: digitaltrends.com