As AI tools become more commonplace, so does their misuse. Several celebrities have recently announced that online ads featuring them endorsing various products are “deepfakes,” meaning the ads were generated by AI without the celebrities' involvement or authorization. Essentially, such an ad forges an endorsement from its putative spokesperson.
AI can be used to create such illicit content because the system draws on publicly available images, photos, recordings, and other material as it “learns” about the real world. The material it ingests “teaches” the system how to generate similar images, recordings, and other content as output, resulting in fake or manipulated content that can look remarkably authentic. Two draft bills in Congress, one in each house, would impose civil or criminal liability for creating deepfakes.
Why It Matters
As noted in the attached Wall Street Journal article, this issue is, for now, primarily one that affects celebrities, who can of course capitalize on their images and likenesses for lucrative endorsement deals. The deception harms consumers as well, though: if a spokesperson is regarded as trustworthy, a fake depiction of that person could induce buyers to purchase faulty goods or services. Finally, with each of us carrying a camera at all times and posting photos to professional and personal social media networks, there is no telling yet how deepfakes of private citizens might be used to harass them or harm others. For these reasons, some members of Congress have begun examining the issue.