YouTube has expanded access to its Likeness Detection tool to journalists, government officials, and political leaders as part of efforts to curb the spread of AI-generated deepfakes on the platform.
The tool, first unveiled in December 2024, was initially rolled out to creators in October 2025 to help detect videos that mimic a person's face or voice without consent. The platform said the latest expansion is aimed at protecting public figures whose likeness could be used to spread misinformation or stir unrest.
In a blog post, YouTube said civic leaders and journalists will be able to enrol in the programme and request the removal of videos that imitate their appearance or voice. However, the feature remains in a pilot phase and will initially be available only to a limited group within these professions.
The company said it is starting with this group to ensure the tool meets their specific needs before expanding access more widely in the coming months.
Eligible users must first complete a verification process, which requires them to submit a photo identification document and record a video of their face. These materials serve as a reference for verifying identity and for detecting videos that replicate a user's likeness.
Applications are reviewed by human moderators before approval. Once verification is complete, users receive confirmation from YouTube and can begin monitoring the platform for potential deepfakes.
YouTube said the information collected during enrolment will only be used for identity verification and will not be used to train generative artificial intelligence models developed by its parent company, Google.
The company added that the data will be stored in YouTube’s internal systems for up to three years from the user’s last sign-in and handled in accordance with Google’s privacy policies.