In Honor of April Fools’ Day: Diving Into Deepfakes

“While the California bill is chiefly aimed at criminalizing deepfakes, it has implications for intellectual property in that it reaches conduct that may not be easily addressed by the enforcement of existing IP law.”

Deepfake technology has made headlines recently for its use in creating fake portrayals of celebrities, but the long-term implications could be much more sinister than phony renderings of Scarlett Johansson appearing in porn videos or President Barack Obama calling Trump a profanity.

One such website, ThisPersonDoesNotExist.com, includes a large gallery of faces that are entirely computer-generated. The site creates these artificial intelligence (AI)-generated faces using a generative adversarial network (GAN), which TheNextWeb.com describes this way:

A GAN is a neural network comprised of two arguing sides — a generator and an adversary — that fight among themselves until the generator wins. If you wanted to create an AI that imitates an art style, like Picasso’s for example, you could feed a GAN a bunch of his paintings.
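For readers who want to see what that adversarial "arguing" looks like in practice, below is a minimal GAN training loop sketched in PyTorch. It is illustrative only: the 64-dimensional noise vector, the tiny fully connected networks, the 28x28 image size and the generic batch of real images are assumptions made for this example, and the models behind sites like ThisPersonDoesNotExist.com are far larger and more sophisticated.

```python
# Minimal GAN sketch (illustrative assumptions: toy fully connected networks,
# 64-dim noise, 28x28 grayscale images flattened to vectors).
import torch
import torch.nn as nn

LATENT_DIM = 64
IMG_DIM = 28 * 28

# Generator: turns random noise into a candidate image.
generator = nn.Sequential(
    nn.Linear(LATENT_DIM, 256), nn.ReLU(),
    nn.Linear(256, IMG_DIM), nn.Tanh(),
)

# Discriminator (the "adversary"): scores how real an image looks.
discriminator = nn.Sequential(
    nn.Linear(IMG_DIM, 256), nn.LeakyReLU(0.2),
    nn.Linear(256, 1),  # raw logit; BCEWithLogitsLoss applies the sigmoid
)

loss_fn = nn.BCEWithLogitsLoss()
g_opt = torch.optim.Adam(generator.parameters(), lr=2e-4)
d_opt = torch.optim.Adam(discriminator.parameters(), lr=2e-4)

def train_step(real_images):
    """One round of the adversarial game.

    real_images: (batch, 784) tensor of flattened real images scaled to [-1, 1].
    """
    batch = real_images.size(0)
    real_labels = torch.ones(batch, 1)
    fake_labels = torch.zeros(batch, 1)

    # 1) Train the discriminator to separate real images from generated ones.
    noise = torch.randn(batch, LATENT_DIM)
    fake_images = generator(noise).detach()  # do not update the generator here
    d_loss = loss_fn(discriminator(real_images), real_labels) + \
             loss_fn(discriminator(fake_images), fake_labels)
    d_opt.zero_grad(); d_loss.backward(); d_opt.step()

    # 2) Train the generator to fool the discriminator into calling fakes "real".
    noise = torch.randn(batch, LATENT_DIM)
    g_loss = loss_fn(discriminator(generator(noise)), real_labels)
    g_opt.zero_grad(); g_loss.backward(); g_opt.step()
    return d_loss.item(), g_loss.item()
```

Each call to train_step plays one round of the game: the discriminator learns to tell real images from generated ones, while the generator learns to produce images the discriminator mistakes for real.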

According to Business Insider, a developer behind one of these websites explained that he took on the project to demonstrate an important point about AI and neural networks: this technology can be used to easily fool people into believing fake and doctored images. Experts have raised concerns that these sophisticated tools could be weaponized for furthering fake news and hoaxes.

“This means that just about anyone with a couple hours to kill could create something just as compelling as I did,” Chris Schmidt wrote on ThisAirbnbDoesNotExist.com. “[These tools are] now sufficiently advanced that they can often fool folks, especially if they’re not looking very hard.”

Legislative Solutions

California Assembly Member Marc Berman recently introduced a bill in the California Legislature to address the serious issue of deepfakes. Cynthia Cole, special counsel at Baker Botts, said the new bill criminalizes the creation and distribution of deepfakes while continuing to protect the internet service providers (ISPs) that would be the vehicle for distribution; notably, it contains no requirement for the distribution vehicle to remove the offending video. “It has a very clear intent and defined purpose, and I think it will prompt further analysis and laws to address deepfakes,” Cole said.

“People use deepfakes as weapons to demonize individuals, often for political and sexual revenge (i.e., revenge porn),” explained Cole. Deepfakes disproportionately affect women, who become victims of embarrassing and often horrifying acts of online assault or revenge. Because ISPs are protected under the Communications Decency Act, removing these images has proven to be a lengthy, expensive and very difficult task for victims.

Cole added, “Like data privacy regulations, individual states are acting to put in place laws to stop this activity, but the laws are disparate, slow moving and subject to preemption.”

Striking the Right Balance

There are also challenges in combating AI-altered video in an era in which technology has progressed more rapidly than the law. Aside from the issues related to protecting ISPs and other entities whose platforms are hijacked, the biggest challenge Cole sees is the ability to define and protect free speech.

Andrew J. Thomas, partner at Jenner & Block, said, “It is important that laws addressing deepfakes be narrowly drafted in order to ensure that First Amendment rights to comment on or criticize public figures are not restricted. The proposed California law attempts to do this by carving out satire and parody, which is a good start, but the law should make clear that other kinds of speech protected by the First Amendment—such as a dramatization of historical events or a biopic about a politician or entertainer—are also beyond the reach of the statute.”

The bill actually criminalizes conduct that is deceptive, regardless of whether it is used in an effort to defraud anyone and regardless of whether it defames or otherwise harms anyone. Thomas said, “Merely causing embarrassment—or in this case, likely embarrassment—has never been a crime or even a tort.”

The Upshot for IP

While the California bill is chiefly aimed at criminalizing this particular type of technological deception, it has implications for intellectual property (IP) in that it reaches conduct that may not be easily addressed by the enforcement of existing IP law.

Specifically, the owner of a photo or video could potentially have a copyright infringement claim for copying the video and creating an unauthorized derivative work. But there could be fair use defenses if the deepfake is used for parody, satire, or political commentary or criticism. The person depicted in the deepfake would only have a copyright claim if he or she happened to be the owner of the photo or video being manipulated. And the depicted person could have a trademark, misappropriation or right of publicity claim if the video falsely makes them appear to be endorsing a product.

Doug Lipstone, partner at Weinberg Gonser LLP, said deepfakes can constitute a violation of the right of privacy if the target is a private individual, or of the right of publicity if the target is a public figure. Deepfakes can also constitute an infringement of the copyright in the original work, because the deepfake is an unauthorized derivative work. Those are all civil causes of action, which can result in monetary damages.

“One thing to note is that the copyright may not reside in the person who is the target of the deepfake. For example, the copyright could be owned by the cameraman for the original film clip, and not the person appearing in the film clip. So, the remedy for copyright infringement may not be available to the target. This bill, however, makes the creation and/or willful distribution of the deepfake a misdemeanor, which exposes the defendant to criminal liability for his/her actions, including possible jail time,” he explained.

Finally, with respect to ISP liability, as long as the party uploading or posting the content is not employed by or affiliated with the ISP, then the ISP is immune from prosecution under the proposed act, said Lipstone. Additionally, the subsection on distribution limits distributor liability to those who know or reasonably should know that the content is a deepfake.

A Solution in Innovation?

“In the future, the danger is that there could be such a proliferation of these videos that no one will believe anything,” said Thomas. “Like fake news on steroids. But maybe technology will provide a solution if AI is able to detect deepfakes moving forward.”
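To make that detection idea concrete, here is a minimal sketch of the kind of binary classifier a deepfake detector might start from, again in PyTorch. The architecture, the 64x64 input size and the assumption of a labeled set of real and AI-generated images are illustrative choices, not a description of any tool mentioned in this article; real-world detectors are far more elaborate and also examine forensic and temporal cues.

```python
# Minimal real-vs-fake image classifier sketch (illustrative assumptions:
# 3x64x64 inputs and a labeled dataset of real and AI-generated frames).
import torch
import torch.nn as nn

detector = nn.Sequential(
    nn.Conv2d(3, 16, kernel_size=3, stride=2, padding=1), nn.ReLU(),   # 64 -> 32
    nn.Conv2d(16, 32, kernel_size=3, stride=2, padding=1), nn.ReLU(),  # 32 -> 16
    nn.AdaptiveAvgPool2d(1),
    nn.Flatten(),
    nn.Linear(32, 1),  # logit: > 0 suggests "fake", < 0 suggests "real"
)

loss_fn = nn.BCEWithLogitsLoss()
optimizer = torch.optim.Adam(detector.parameters(), lr=1e-3)

def train_step(images, labels):
    """images: (batch, 3, 64, 64); labels: (batch, 1) floats, 1 = fake, 0 = real."""
    logits = detector(images)
    loss = loss_fn(logits, labels)
    optimizer.zero_grad(); loss.backward(); optimizer.step()
    return loss.item()
```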

 
