Posts Tagged: "AI"

Indie Filmmakers Urge Senate IP Subcommittee to Take Caution in Considering Federal Right of Publicity

Film Independent and the International Documentary Association (IDA) sent a letter to the Senate Subcommittee on Intellectual Property Tuesday, asking the Subcommittee to ensure that any federal right of publicity it may be considering as an answer to problems raised by generative artificial intelligence (AI) include an express exemption for creative works. The letter, penned by the law firm Donaldson Callif Perez, came in response to the Subcommittee hearing held on July 12, 2023, during which witnesses floated the idea of creating a federal right of publicity or an anti-impersonation right as a solution to concerns that generative AI could mimic artistic styles.

Driving Forward: Autonomous Vehicles, Artificial Intelligence and Intellectual Property in Brazil

Autonomous vehicles are designed to minimize accidents on urban roads and to provide greater safety and comfort by assisting with, or independently performing, tasks that would otherwise be the driver’s responsibility. The Society of Automotive Engineers (SAE) has developed a classification of autonomous vehicles comprising six levels of driving automation. Level zero refers to conventional cars without any technology of this type, while at the other extreme, level five, the driver becomes a passenger, needing only to activate the vehicle and indicate the destination; the vehicle control system then drives the vehicle fully autonomously throughout the route and handles any emergency decision-making. The intermediate levels include systems already found on the market, such as parking assistance, emergency braking and lane change assistance, among others.
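For readers who prefer to see the taxonomy in code, the SAE classification can be summarized in a minimal, purely illustrative sketch like the one below. The enum member names and the helper function are our own shorthand for the categories described above, not text drawn from the SAE standard itself.

```python
from enum import IntEnum

class SAELevel(IntEnum):
    """Illustrative shorthand for the six SAE driving-automation levels (0-5)."""
    LEVEL_0 = 0  # conventional car with no driving-automation technology
    LEVEL_1 = 1  # driver assistance
    LEVEL_2 = 2  # partial automation (market systems such as parking assistance,
                 # emergency braking and lane change assistance sit in these intermediate levels)
    LEVEL_3 = 3  # conditional automation
    LEVEL_4 = 4  # high automation
    LEVEL_5 = 5  # full automation: the "driver" only activates the car and sets the destination

def system_drives_the_vehicle(level: SAELevel) -> bool:
    """Rough rule of thumb: from level 3 upward, the control system performs the driving task."""
    return level >= SAELevel.LEVEL_3

print(system_drives_the_vehicle(SAELevel.LEVEL_2))  # False
print(system_drives_the_vehicle(SAELevel.LEVEL_5))  # True
```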

Google Tells USPTO Proposed IPR Changes Would Stifle AI Innovation

On Thursday, Reuters reported that Google sent a letter to the U.S. Patent and Trademark Office (USPTO) criticizing proposed rule changes that the tech firm believes will stifle U.S. innovation. The internet giant expressly pointed to the field of artificial intelligence as a weak point for the USPTO and its patent examiners. The letter was signed by Halimah DeLaine Prado, General Counsel for Google.

DC Court Says No Copyright Registration for Works Created by Generative AI

On Friday, Judge Beryl Howell issued an opinion in Dr. Stephen Thaler’s challenge against the U.S. Copyright Office (USCO) over the denial of his application to register a work generated entirely using generative artificial intelligence (AI) technology. The opinion upholds the USCO’s refusal to register the work, for which the claimant disclosed in the application that the image was the output of an AI system called The Creativity Machine. The case is Stephen Thaler v. Shira Perlmutter and The United States Copyright Office (1:22-cv-01564, filed June 2, 2022).

Accelerated Innovation: In Less Than a Year, We’ve Seen a Decade’s Worth of AI and IP Developments

The past year has provided decades’ worth of developments across law and policy in the areas of artificial intelligence (AI) and machine learning (ML) technologies. If 2022 was the breakthrough year for accessible AI, then 2023 can so far be deemed the first of likely many years in the era of an AI inquisition. “After years of somewhat academic discourse,” reflects Dr. Ryan Abbott, “AI and copyright law have finally burst into the public consciousness—from contributing to the writers’ strike to a wave of high-profile cases alleging copyright infringement from machine learning to global public hearings on the protectability of AI-generated works.” Both the U.S. Copyright Office (USCO) and the U.S. Patent and Trademark Office (USPTO) are in active litigation over the eligibility of generative AI outputs for statutory protection. Additionally, both offices have held numerous webinars and listening sessions and have gathered public feedback in other ways as they work through policy considerations surrounding AI.

International Perspectives: R&D and AI Policies in the Global Landscape

Everyone’s talking about artificial intelligence (AI), but not everyone’s talking about it the same way. The tenor of the global conversation on AI ranges from dystopian fearmongering to evangelistic optimism. It’s vital to know the prevailing mood in the territory where you plan to launch your AI-powered service, app, or consultancy. In this article, we’ll briefly tour recent legislation, ethical conversations, and economic strategies to demonstrate how varied current thinking is on this revolutionary new technology. We’ll look at the current situation in the United States, Canada, Europe, China, Japan and beyond, as countries develop the policies, guidelines and laws necessary to regulate AI innovation without stifling creativity.

AI Voice Cloning – and Its Misuse – Has Opened a Pandora’s Box of Legal Issues: Here’s What to Know

Voice cloning, a technology that uses artificial intelligence (AI) models to replicate human voices, presents both exciting possibilities and legal challenges. Recent machine-learning advances have made it possible to imitate a person’s voice with only a few seconds of recorded speech as training data. The development opens the door to personalized and immersive experiences, such as realistic voiceovers for content, lifelike personal assistants and even preserving the voices of loved ones for future generations. But it is also rife with potential for abuse, as it could easily be used to commit fraud, spread misinformation and generate fake audio evidence.

Comedian Sarah Silverman Takes Aim at OpenAI and Meta for Copyright Infringement

Last week, comedian Sarah Silverman and authors Christopher Golden and Richard Kadrey sued OpenAI in a U.S. district court, alleging the company’s generative AI product, ChatGPT, infringes their copyrighted works. In addition to copyright infringement, the trio claimed that the AI company violated the Digital Millennium Copyright Act (DMCA) and unfair competition laws and was unjustly enriched. The lawsuit accuses OpenAI of “copying massive amounts of text” to train ChatGPT to produce new text from prompts. Language models like OpenAI’s rely on datasets of text or other media to train their generative capabilities.

Defining Data: Improving Terminology Around Generative AI Models

The generative artificial intelligence (AI) revolution the world is currently experiencing is powered by data. But what exactly are “data,” and how can we make the term fit for use in the complex landscape of generative AI? In simple terms, data in this context can be any digitally formatted information. However, the term is used and understood inconsistently when it comes to what is encompassed in a dataset used for training a generative AI model. Often, metadata or even identifiable information ends up, perhaps unintentionally, as part of the training data. There can also be legal implications linked to the data, for example where systems are trained on copyrighted or licensed works, or on visual or textual information containing personal health information.
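As a purely illustrative sketch of how metadata and identifiable information can ride along with training data, consider the snippet below. The record fields and the strip_to_training_fields() helper are hypothetical, invented for illustration; they are not drawn from any particular dataset or training pipeline.

```python
# Hypothetical example: a single "training record" often carries more than the
# content the model is meant to learn from.
raw_record = {
    "text": "Sample caption describing an image ...",  # intended training content
    "uploader_email": "jane.doe@example.com",          # identifiable information
    "image_exif": {"gps": (40.7128, -74.0060)},        # embedded metadata
    "license": "All rights reserved",                  # legal status of the underlying work
}

# Only these fields are meant to reach the model.
TRAINING_FIELDS = {"text"}

def strip_to_training_fields(record: dict) -> dict:
    """Drop metadata and identifiable information, keeping only intended training content."""
    return {key: value for key, value in record.items() if key in TRAINING_FIELDS}

print(strip_to_training_fields(raw_record))
# {'text': 'Sample caption describing an image ...'}
```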

Contemplating Intellectual Property Rights in the Metaverse: Statutory Change is Inevitable for AI Creations

In the first installment of this two-part series, we posed a question: What is at the intersection of name, image, and likeness (NIL) rights, non-fungible tokens (NFTs), artificial intelligence (AI) creations, big data, blockchain, and the metaverse? The answer is intellectual property. Our hypothetical described a high school basketball star, Sky-Freeze, who sought to leverage their name, image, and likeness (NIL) on a metaverse platform, illustrating how a digital avatar, corresponding NFTs in the metaverse, AI, and big data intersect. This second installment explores how AI impacts that intersection, giving rise to legal issues concerning intellectual property rights.

Warhol’s Ghost in the Machine: What Warhol v. Goldsmith Means for Generative AI

On May 18, 2023, the U.S. Supreme Court answered an exceedingly narrow question of copyright law with potentially sweeping impact: did the purpose and character of Andy Warhol’s ‘Orange Prince’ work—as used on a 2016 Condé Nast magazine cover—support fair use of Lynn Goldsmith’s photograph of famed musician Prince Rogers Nelson a/k/a Prince? In a 7-2 decision, the Court found that it did not, calling into question nearly 30 years of fair use jurisprudence, arguably narrowing the scope of that doctrine, and potentially threatening disciplines that rely on it, e.g., appropriation art. The decision is also sure to impact generative artificial intelligence (“AI”), an emerging technology that is likely to rely heavily on fair use.

The Ethics of Using Generative Artificial Intelligence in the Practice of Law

The use of artificial intelligence (AI) has taken center stage in popular culture thanks to the significant advances in tools like ChatGPT. Of course, the use of these new, high-powered AI tools presents real issues for businesses of all types and sizes. Notably, Samsung employees shared confidential information with ChatGPT while using the chatbot at work. Subsequently, Samsung decided to restrict the use of generative AI tools on company-owned devices and on any device with access to internal networks. Concerned about the loss of confidential information, Apple has likewise restricted employees from using ChatGPT and other external AI tools. The actual or potential loss of confidential information is a matter of critical importance to technology companies, but it must also be of the utmost concern for all attorneys, who have an ethical obligation to keep client information confidential.

Company Policy Issues and Examples Relating to Employee Use of AI-Generated Content

Artificial intelligence (AI) has become a crucial tool for organizations in various sectors, particularly in the generation of content and code by generative AI systems such as ChatGPT, GitHub Copilot, AlphaCode, Bard and DALL-E, among other tools. While the adoption of these generative tools in the corporate setting seems all but assured in the near term, there are a number of risks that need to be minimized as companies move forward. In particular, as AI applications grow increasingly sophisticated, they raise concerns involving several forms of intellectual property (IP), such as patents, copyrights and trade secrets. This article aims to discuss these issues and provide a sample company policy for using AI-generated content such as software code.

Former Copyright Office GC Tells House IP Subcommittee His Counterpart Got It Wrong on AI Fair Use

In response to last week’s hearing of the House of Representatives’ Subcommittee on Courts, Intellectual Property and the Internet on the impact of artificial intelligence (AI) on copyright law, former Copyright Office General Counsel Jon Baumgarten submitted a letter to the Subcommittee this week expressing his concerns with the testimony of one of the witnesses, Sy Damle of Latham & Watkins, who also formerly served as U.S. Copyright Office General Counsel. The letter was published in full on the Copyright Alliance website.

Artists Tell House IP Subcommittee in AI Hearing: It’s Not ‘Data’ and ‘Content’ to Us; It’s Our Livelihood

The House of Representatives’ Subcommittee on Courts, Intellectual Property and the Internet today held the first of several planned hearings about the impact of artificial intelligence (AI) on intellectual property, focusing in this initial hearing on copyright law. The witnesses included three artists, a professor, and an attorney with varying perspectives on the matter, although the artists all expressed similar concerns about the potentially dire effects of generative AI (GAI) applications on their respective industries and careers.