YouTube to launch AI-based age verification feature from August 13:
YouTube will introduce an artificial intelligence-powered age verification system on August 13, 2025, aiming to provide a safer, more age-appropriate viewing experience. The feature will first be tested on a limited group of users in the United States before expanding to other countries, depending on the outcome of the initial trial.
According to YouTube, the system will use advanced machine learning models to estimate a user’s age by analysing multiple data points, including account history, types of videos watched, and search activity. If the AI determines that a user may be under the minimum required age for certain content, the platform will automatically apply restrictions to limit access.
In cases where the AI system cannot accurately determine a user's age, YouTube will offer manual verification: users will be able to confirm they are over 18 by submitting a government-issued ID or by verifying with a valid credit card.
The company says the new system is designed to enhance user safety, particularly for younger audiences, by reducing exposure to inappropriate or mature content. This move comes as part of YouTube’s broader strategy to strengthen digital safety and comply with global regulatory requirements regarding child protection online.
While YouTube says the AI-powered feature will deliver an age-appropriate experience regardless of the date of birth listed on an account, it acknowledges that the system may place restrictions on younger users. Some may find these limitations inconvenient, particularly if they lose access to popular videos or features previously available to them.
The feature’s gradual rollout to other regions will depend on the results of the US trial, as YouTube evaluates the accuracy of its AI algorithms and the overall reception among users. Industry analysts note that this step is in line with a growing trend among tech giants to implement AI-driven safety measures, though questions remain about how the technology will handle false positives and privacy concerns.