03/12/2024

Meta’s Deepfake Dilemma



This week’s key terms/concepts:

Solicitors Regulation Authority (SRA): The regulator of solicitors and law firms in England & Wales.

Vendor Fraud: A type of fraud in which false information is provided by a supposed vendor to deceive a company’s accounts team into releasing illegitimate payments.

Regulatory Compliance: An organisation’s adherence to the relevant laws, regulations and guidelines that affect its business processes.

A few weeks ago, Meta announced plans to reintroduce facial recognition technology in response to a surge of deepfake scams – referred to as ‘celeb-bait’ – where realistic computer-generated images falsely depict celebrities endorsing products or services. Regulators have flagged the risks that AI and deepfakes pose to the legal sector, and whilst this is not Meta’s first use of facial recognition (Facebook discontinued it in 2021 over privacy concerns), its reintroduction is aimed specifically at combating these deepfake scams.

What is the importance of this? 


Deepfakes have reached an alarming scale, as the persistent misuse of public figures such as Martin Lewis illustrates. Lewis has featured in various deepfake scams over the years, most recently a Facebook bitcoin investment scam reported last month that used his face and name to mislead audiences into making large payments.

Meta already uses an AI-powered ad review system to detect fake celebrity endorsements, and the company is now enhancing this with facial recognition technology. Early testing has shown promising results, and Meta plans to notify public figures within the app when they are impacted by a scam. The technology’s effectiveness and its privacy safeguards remain under the same kind of scrutiny that led to Facebook’s earlier withdrawal of facial recognition, but Meta has emphasised that facial data will be encrypted, securely stored and deleted after the comparison check.

What does this reveal to the legal sector? 


Meta’s response underlines that deepfakes present growing risks to the legal profession, particularly around fraud and regulatory compliance. Last year, the Solicitors Regulation Authority (SRA) issued an updated sector risk assessment warning of fraud facilitated by deepfakes, such as identity fraud and money laundering. The SRA also noted ‘increasing numbers’ of law firms facilitating vendor frauds, and that firms not meeting clients face-to-face where those clients seem ‘unnecessarily reluctant or evasive’ could be a cause for concern. Firms that fail to mitigate deepfake-related risks may face penalties, particularly in finance and real estate, which are high-risk sectors for regulatory compliance.

There may also be broader legal challenges around IP, privacy and defamation, as deepfakes can infringe copyright and trademark rights or harm an individual’s privacy and reputation.

Meta has yet to release an exact timeline for the rollout of its facial recognition technology, so law firms must remain vigilant about the broader risks posed by deepfakes and AI technologies.


Subscribe for Lex Weekly articles in your inbox – stay commercially aware on the go.