By Kingsley Samuel

Meta has called for a unified, industry-wide approach to age verification, even as it rolls out new artificial intelligence-powered tools aimed at strengthening safety for teenagers across its platforms.

The company said that while it continues to improve its internal systems for detecting underage users, long-term effectiveness will depend on broader collaboration across the digital ecosystem, particularly at the level of operating systems and app stores.

“Age assurance is a complex challenge that no single company can solve alone,” Meta stated, noting that fragmented approaches across apps often lead to inconsistencies and gaps in enforcement.

Meta is advocating a model where age verification is handled centrally at the device or app store level, enabling developers to apply consistent, age-appropriate protections without requiring users to repeatedly verify their age across multiple platforms.

According to the company, such an approach would not only streamline user experience but also enhance privacy by reducing the need for repeated sharing of sensitive data.

The push comes as Meta expands its own AI-powered enforcement systems designed to identify accounts that may belong to underage users.

These include tools that analyse user activity, profile information, and content signals such as posts and comments to detect age-related indicators.

In addition, the company has introduced visual analysis technology capable of estimating age ranges from photos and videos based on general characteristics, without relying on facial recognition.

Meta said accounts flagged through these systems are subjected to verification checks, and may be removed if users are unable to confirm their age.

The company also disclosed improvements to its reporting mechanisms, making it easier for users to flag suspected underage accounts, while AI-supported moderation systems are being used to improve the speed and consistency of enforcement decisions.

Beyond enforcement, Meta said it is scaling its Teen Account protections across Instagram, Facebook and Messenger, where millions of young users are automatically placed into safer, age-appropriate environments.

These protections, the company explained, include default settings that limit exposure to sensitive content and restrict unwanted interactions.

Meta also noted that it is expanding technology that can identify users who may be teenagers even when they register with adult birthdates, ensuring such accounts are automatically placed under stricter safety settings.

On parental involvement, the company said it is strengthening its Family Center resources with new guidance and notifications to help parents verify their children’s ages and better understand online safety tools.

It added that users attempting to change their age to bypass protections are required to undergo identity verification, including the use of facial age estimation technology.

Meta maintained that a coordinated, cross-industry framework remains critical to addressing the broader risks facing young users online, stressing that collaboration between tech companies, regulators, and platform providers will be key to building a safer digital environment.
