How Meta’s new technology aims to protect users from fake celebrity ads
Tackling the 'celeb-bait' scam
One of the most prominent scams Meta is addressing involves the misuse of images of celebrities and public figures—commonly referred to as "celeb-bait." Scammers leverage these images in ads designed to mimic legitimate promotions, luring users to fraudulent websites where they are asked to provide personal information or send money. The company’s automated ad review system already uses machine learning to scan millions of ads daily, identifying potential violations, but detecting these scams has proven challenging.
To combat this, Meta is now testing the use of facial recognition technology to compare the faces in suspicious ads with the profile pictures of public figures on Facebook and Instagram. If a match is confirmed and the ad is determined to be a scam, Meta will block it. The company is taking steps to address privacy concerns by ensuring that any facial data generated during this process is deleted immediately after the comparison, whether or not a match is found.
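At a high level, that matching step can be pictured as comparing a face embedding extracted from the ad against embeddings of public figures’ profile photos, then discarding the ad-derived data as soon as the comparison finishes. The Python sketch below is purely illustrative: the embeddings, similarity measure, and threshold are assumptions, and Meta has not published the details of its model.

```python
from typing import Dict, Optional

import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Cosine similarity between two embedding vectors."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def match_ad_face(
    ad_face_embedding: np.ndarray,
    public_figure_embeddings: Dict[str, np.ndarray],
    threshold: float = 0.85,   # assumed decision threshold
) -> Optional[str]:
    """Return the id of the matched public figure, or None.

    The ad-derived embedding is dropped before returning, match or not,
    mirroring the stated policy of deleting facial data immediately
    after the comparison.
    """
    try:
        best_id, best_sim = None, 0.0
        for figure_id, emb in public_figure_embeddings.items():
            sim = cosine_similarity(ad_face_embedding, emb)
            if sim > best_sim:
                best_id, best_sim = figure_id, sim
        return best_id if best_sim >= threshold else None
    finally:
        # Drop the local reference to the ad-derived facial data.
        del ad_face_embedding
```

In a production setting the lookup would more plausibly run against a pre-built index of enrolled public figures rather than a plain dictionary, but the shape of the decision is the same.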
Initial trials with a small group of celebrities have shown promising results, prompting Meta to expand the protection to a broader range of public figures in the coming weeks. Those who are affected will receive in-app notifications and have the option to opt out of the program at any time.
Impersonation: A persistent threat
Beyond celeb-bait, scammers often create fake accounts that impersonate public figures, hoping to deceive users into engaging with scam content or sharing sensitive information. These impersonators may claim that a celebrity endorses a specific product or investment, lending the scam added credibility.
While Meta already employs detection systems and relies on user reports to identify impersonators, the company is now exploring the use of facial recognition to compare the profile pictures of suspicious accounts with those of public figures. This added layer of defense could significantly improve the speed and accuracy of identifying and removing these fake accounts.
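One plausible way to picture that extra layer, again as an illustrative sketch rather than Meta’s actual implementation, is to combine a name-similarity check with the output of a face comparison and flag accounts that score highly on both. The face_similarity input below stands in for the facial-recognition result; only the name check is actually computed.

```python
from difflib import SequenceMatcher

def flag_possible_impersonator(
    account_name: str,
    public_figure_name: str,
    face_similarity: float,        # assumed 0..1 output of a face-matching model
    name_threshold: float = 0.8,
    face_threshold: float = 0.85,
) -> bool:
    """Flag an account for review when both its display name and its
    profile picture closely resemble those of a verified public figure."""
    name_similarity = SequenceMatcher(
        None, account_name.lower(), public_figure_name.lower()
    ).ratio()
    return name_similarity >= name_threshold and face_similarity >= face_threshold

# A look-alike handle reusing a celebrity's photo would be flagged for review:
print(flag_possible_impersonator("Jon Doe", "John Doe", face_similarity=0.93))  # True
```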
Streamlining account recovery with video selfies
In addition to protecting users from scams, Meta is also experimenting with ways to make account recovery more efficient. Users who lose access to their accounts—whether through forgotten passwords, lost devices, or phishing scams—are currently required to verify their identity by uploading official documents. However, Meta is testing an alternative: video selfies.
By uploading a video selfie, users can verify their identity: the system compares the selfie to the account’s profile picture using facial recognition technology. Meta assures users that the video will be encrypted, stored securely, used solely for verification, and deleted once the process is complete.
This new verification method, which mirrors techniques already used in other digital applications, is designed to be more secure and harder for hackers to exploit than traditional document-based verification. Early indications suggest that it offers a quicker and more convenient way for users to regain access to their accounts.
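In code terms, the recovery flow described above might look something like the following sketch: encrypt the uploaded selfie, run the comparison, then discard both the ciphertext and the key once a decision is reached. The faces_match function is a trivial placeholder for the facial-recognition step, and the use of the cryptography package’s Fernet primitive is an assumption made only to keep the example runnable.

```python
from cryptography.fernet import Fernet

def faces_match(selfie: bytes, profile_photo: bytes) -> bool:
    """Placeholder for the facial-recognition comparison; a trivial
    byte-equality check keeps this sketch self-contained."""
    return selfie == profile_photo

def verify_video_selfie(selfie_bytes: bytes, profile_photo_bytes: bytes) -> bool:
    """Encrypt the uploaded selfie, compare it with the profile photo,
    then delete the encrypted copy and key once verification completes."""
    key = Fernet.generate_key()
    encrypted_selfie = Fernet(key).encrypt(selfie_bytes)      # stored encrypted
    try:
        decrypted = Fernet(key).decrypt(encrypted_selfie)
        return faces_match(decrypted, profile_photo_bytes)
    finally:
        # Remove the encrypted selfie and key after the check, match or not.
        del encrypted_selfie, key
```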
Recognising the adversarial nature of the digital landscape, Meta acknowledges that scammers will continue to evolve their tactics. However, the company remains committed to staying ahead by refining its detection and enforcement capabilities. Meta plans to engage regulators, policymakers, and experts in ongoing discussions as it continues to invest in technologies designed to protect users and their accounts from fraud and impersonation.