Artificial intelligence (AI) is a field of computer science that creates software or models that mimic human reasoning or inference. Machine learning is a subset of AI that uses algorithms trained on massive amounts of data, allowing a computer to learn with gradually improving accuracy without being explicitly programmed. The biopharmaceutical and healthcare fields produce massive amounts of data, including properties and characteristics of drug compounds; biological, genomic, and clinical data; efficacy of treatments; adverse events and risks; and electronic health records. The data may come from many sources, both public and proprietary. AI systems trained on such data can streamline and optimize the drug development process, including drug discovery, disease diagnosis, identification of treatments and risks, clinical trial design, and prediction of safety and efficacy profiles, increasing efficiency and reducing costs.
Generative artificial intelligence (AI) platforms are already reshaping work life for many professionals, including those in the legal industry. On Day 3 of IPWatchdog LIVE 2023, a panel discussion titled “Impact of Generative Artificial Intelligence on Law and Innovation” explored ways that in-house legal teams can advance their company’s use of generative AI to improve productivity while balancing the need to protect confidential data and intellectual property.
A Pulitzer Prize-winning author and a number of Tony, Grammy and Peabody award winners are the latest to sue OpenAI for copyright infringement based on the way it trains its popular chatbot, ChatGPT. In July, comedian Sarah Silverman and authors Christopher Golden and Richard Kadrey brought a similar suit against OpenAI.
Earlier this week, the Review Board of the U.S. Copyright Office published a decision denying registration of a work created using the generative artificial intelligence (GAI) system, Midjourney, highlighting the complexities such technology is introducing to the U.S. copyright system…. The decision issued this week found that Jason M. Allen’s two-dimensional artwork, titled “Théâtre D’opéra Spatial,” contained “more than a de minimis amount” of AI-created content and that the AI content must therefore be disclaimed.
On August 30, the U.S. Copyright Office issued a notice of inquiry in the Federal Register seeking public comment on a range of issues related to the intersection of copyright law and artificial intelligence (AI). The recent notice is the latest action by the Office on the myriad copyright issues that have been arising around the use of generative AI platforms, including infringement liability for training AI systems on copyrighted content and human authorship requirements.
Given current and ongoing economic realities, patent practitioners—both in-house and outside counsel—are constantly being asked to do more within existing budgets. Meanwhile, more robust patent applications thick with technical detail are necessary to satisfy courts and patent offices around the world. Working within budgetary constraints without sacrificing quality requires outside-the-box thinking and the use of available tools to streamline as much of the process as possible. Enter artificial intelligence (AI), which is taking the world by storm and recently garnered the attention of the American Bar Association, which has announced the creation of a task force that will examine the impact of AI on law practice and the ethical implications of its use for lawyers.
Film Independent and the International Documentary Association (IDA) sent a letter to the Senate Subcommittee on Intellectual Property Tuesday, asking the Subcommittee to ensure that any federal right of publicity it may be considering as an answer to problems raised by generative artificial intelligence (AI) include an express exemption for creative works. The letter, penned by the law firm Donaldson Callif Perez, came in response to the Subcommittee hearing held on July 12, 2023, during which witnesses floated the idea of creating a federal right of publicity or an anti-impersonation right as a solution to concerns that generative AI could mimic artistic styles.
Autonomous vehicles were designed to minimize accidents on urban roads and provide greater safety and comfort by assisting with, or independently performing, some tasks that are the driver’s responsibility. The Society of Automotive Engineers (SAE) has developed a classification of autonomous vehicles, defining six levels of autonomous driving. Level zero refers to conventional cars without any technology of this type, while at the other extreme, level five, the driver becomes a passenger, needing only to activate the vehicle and indicate the destination. In that case, it is up to the vehicle control system to drive the vehicle fully autonomously throughout the route and to carry out any emergency decision-making. The intermediate levels of autonomous driving include systems already found on the market, such as parking assistance, emergency braking and lane change assistance, among others.
On Thursday, Reuters reported that Google sent a letter to the U.S. Patent and Trademark Office (USPTO) criticizing proposed rule changes that the tech firm believes will stifle U.S. innovation. The internet giant expressly pointed to the field of artificial intelligence as a weak point for the USPTO and its patent examiners. The letter was signed by Halimah DeLaine Prado, General Counsel for Google.
On Friday, Judge Beryl Howell issued an opinion in Dr. Stephen Thaler’s challenge against the U.S. Copyright Office (USCO) over the denial of his application for a work generated entirely using generative artificial intelligence (AI) technology. The opinion supports the USCO’s refusal to register a work in which the claimant disclosed in the application that the image was the result of an AI system, called The Creativity Machine. The case is Stephen Thaler v. Shira Perlmutter and The United States Copyright Office (1:22-cv-01564) (June 2, 2022).
The past year has provided decades’ worth of developments across law and policy in the areas of artificial intelligence (AI) and machine learning (ML) technologies. If 2022 was the breakthrough year for accessible AI, then 2023 can so far be deemed the first year of likely many more to come in the era of an AI inquisition. “After years of somewhat academic discourse,” reflects Dr. Ryan Abbott, “AI and copyright law have finally burst into the public consciousness—from contributing to the writer’s strike to a wave of high-profile cases alleging copyright infringement from machine learning to global public hearings on the protectability of AI-generated works.” Both the U.S. Copyright Office (USCO) and the U.S. Patent and Trademark Office (USPTO) are in active litigation over the eligibility of generative AI outputs for statutory protection. Additionally, both offices have held numerous webinars and listening sessions and used other methods of collecting feedback from the public as they work through policy considerations surrounding AI.
Everyone’s talking about artificial intelligence (AI), but not everyone’s talking about it the same way. The tenor of the global conversation on AI ranges from dystopian fearmongering to evangelistic optimism. It’s vital to know the prevailing mood in the territory where you plan to launch your AI-powered service, app, or consultancy. In this article, we’ll briefly tour recent legislation, ethical conversations, and economic strategies to demonstrate how varied current thinking is on this revolutionary new technology. We’ll look at the current situation in the United States, Canada, Europe, China, Japan and beyond, as countries develop the policies, guidelines and laws necessary to regulate AI innovation without stifling creativity.
Voice cloning, a technology that uses artificial intelligence (AI) models to replicate human voices, presents both exciting possibilities and legal challenges. Recent machine-learning advances have made it possible to imitate a person’s voice with only a few short seconds of a voice sample as training data. It’s a development that opens the door to personalized and immersive experiences, such as creating realistic voiceovers for content, lifelike personal assistants and even preserving the voices of loved ones for future generations. But it’s also rife with potential for abuse, as it could easily be used to commit fraud, spread misinformation and generate fake audio evidence.
On July 12, the U.S. Senate Judiciary Committee’s Subcommittee on Intellectual Property held its second hearing in two months on the intersection of artificial intelligence (AI) developments and intellectual property rights. This most recent hearing focused on potential violations of copyright law by generative AI platforms, the impact of those platforms on human creators, and ways in which AI companies can implement technological solutions to protect copyright owners and consumers alike.
Last week, comedian Sarah Silverman and authors Christopher Golden and Richard Kadrey sued OpenAI in a U.S. district court, alleging the company’s generative AI product, ChatGPT, infringes their copyrighted content. In addition to copyright infringement, the trio claimed that the AI company violated the Digital Millennium Copyright Act (DMCA) and unfair competition laws and unjustly enriched itself. The lawsuit accuses OpenAI of “copying massive amounts of text” used to train ChatGPT to produce new text from prompts. Language models like OpenAI’s rely on datasets of text or other media to train their generative capabilities.