JTIP Blog: Five New Posts Published

By: Alessandra Fable, Jeanne Boyd, Elisabeth Bruckner, Angela Petkovic, and Shelby Yuan

Volume 20, Issue 3 now available

JTIP Blog: Legal Implications of Digital Economies in Gaming and E-Payments

By: Kathleen Denise Arteficio

JTIP Blog: Dead or Alive: Analyzing the Impact of Trademark Residual Goodwill on Entrepreneurship in Fashion

By: Rohun Reddy

Volume 20, Issue 2 now available

JTIP Blog: Collecting Student Athlete Biometric Data

By: Anthony Vitucci

JTIP Blog: Facial Recognition Technology in the Commercial Sector

By: Michael Willian

Volume 20, Issue 1 now available

JTIP Blog: Personal Privacy & Biometric Data in the COVID-19 Pandemic

By: Luke Shadley

Fair Use, Licensing, and Authors’ Rights in the Age of Generative AI

By: Shen, Celeste | November 18, 2024

The rise of generative AI technologies has introduced unprecedented challenges to copyright law, particularly around the fair use of copyrighted works in AI training processes. Generative AI tools, such as ChatGPT, are trained on vast datasets that often include copyrighted material, typically without the consent of authors or compensation for use. This widespread, unauthorized use has led to legal disputes, with plaintiffs asserting that using protected texts to train AI models constitutes copyright infringement. This Note examines the application of the fair use doctrine to generative AI, analyzing each of the four statutory factors to demonstrate that generative AI’s commercial replication of copyrighted content is not transformative, harms the market for original works, and should not qualify as fair use. To address these issues, this Note proposes a blanket licensing scheme as a policy solution to balance the interests of copyright holders and AI companies. Such a scheme would ensure compensation for authors while legally permitting AI companies to access necessary training data, thereby fostering a sustainable partnership between creators and the AI industry.

Regulating Chatbot Output via Inter-Informational Competition

By: Zhang, Jiawei | November 18, 2024

The advent of ChatGPT has sparked over a year of regulatory frenzy. Policymakers across jurisdictions have embarked on an AI regulatory “arms race,” and researchers worldwide have begun devising a potpourri of regulatory schemes to handle the content risks posed by generative AI products as represented by ChatGPT. However, few existing studies have rigorously questioned the assumption that, if left unregulated, AI chatbots’ output would inflict tangible, severe harm on human affairs. Most researchers have overlooked the critical possibility that the information market itself can effectively mitigate these risks and, as a result, have tended to reach for regulatory tools to address the issue directly.

This Article develops a yardstick for re-evaluating both AI-related content risks and corresponding regulatory proposals by focusing on inter-informational competition among various outlets. The decades-long history of regulating information and communications technologies indicates that regulators tend to err too much on the side of caution and to put forward excessive regulatory measures when encountering the uncertainties brought about by new technologies. In fact, a trove of empirical evidence has demonstrated that market competition among information outlets can effectively mitigate many risks and that overreliance on direct regulatory tools is not only unnecessary but also detrimental.

This Article argues that sufficient competition among chatbots and other outlets in the information marketplace can mitigate and even resolve some content risks posed by generative AI technologies. This may render certain loudly advocated but poorly tailored regulatory strategies—like mandatory prohibitions, licensure, curation of datasets, and notice-and-response regimes—unnecessary and even toxic to desirable competition and innovation throughout the AI industry. For privacy disclosure, copyright infringement, and any other risks that the information market might fail to satisfactorily address, proportionately designed regulatory tools can help to ensure a healthy environment for the information marketplace and to serve the long-term interests of the public. Ultimately, the ideas that I advance in this Article should pour some much-needed cold water on the regulatory frenzy over generative AI and steer the issue back onto a rational track.

Between Copyright and Computer Science: The Law and Ethics of Generative AI

By: Desai, Devin R., Riedl, Mark | November 18, 2024

Copyright and computer science continue to intersect and clash, but they can coexist. New technologies such as the digitization of visual and aural creations, sharing technologies, search engines, and social media offerings challenge copyright-based industries and reopen questions about the reach of copyright law. Breakthroughs in artificial intelligence research, especially Large Language Models that leverage copyrighted material as part of training, are the latest examples of the ongoing tension between copyright and computer science. The exuberance, rush-to-market, and edge problem cases created by a few misguided companies now raise challenges to core legal doctrines and may shift Open Internet practices for the worse. That result does not have to be, and should not be, the outcome.

This Article shows that, contrary to some scholars’ views, fair use law does not bless all the ways that someone can gain access to copyrighted material even when the purpose is fair use. Nonetheless, the scientific need for more data to advance AI research means access to large book corpora and the Open Internet is vital for the future of that research. The copyright industry claims, however, that almost all uses of copyrighted material must be compensated, even for non-expressive uses. This Article’s solution accepts that both sides need to change. This solution forces the computer science world to discipline its behaviors and, in some cases, pay for copyrighted material. It also requires the copyright industry to abandon its belief that all uses must be compensated or restricted to uses sanctioned by the copyright industry. As part of this re-balancing, this Article addresses a problem that has grown out of this clash and is undertheorized.

Legal doctrine and scholarship have not resolved what happens when a company ignores website code signals such as “robots.txt” and “do not train.” In addition, companies such as the New York Times now use terms of service asserting that their copyrighted material cannot be used to train software. Drawing on the doctrine of fair access as part of fair use, we show that the same logic indicates that such restrictive signals and terms should not be held against fair uses of copyrighted material on the Open Internet.

In short, this Article rebalances the equilibrium between copyright and computer science for the age of AI.
