Regulating Chatbot Output via Inter-Informational Competition

Zhang, Jiawei | November 18, 2024

The advent of ChatGPT has sparked over a year of regulatory frenzy. Policymakers across jurisdictions have embarked on an AI regulatory “arms race,” and researchers worldwide have begun devising a potpourri of regulatory schemes to handle the content risks posed by generative AI products exemplified by ChatGPT. However, few existing studies have rigorously questioned the assumption that, if left unregulated, AI chatbots’ output would inflict tangible, severe harm on human affairs. Most researchers have overlooked the critical possibility that the information market itself can effectively mitigate these risks and, as a result, they tend to reach for direct regulatory tools to address the issue.

This Article develops a yardstick for re-evaluating both AI-related content risks and corresponding regulatory proposals by focusing on inter-informational competition among various outlets. The decades-long history of regulating information and communications technologies indicates that regulators tend to err on the side of excessive caution, putting forward sweeping regulatory measures when confronted with the uncertainties brought about by new technologies. In fact, a trove of empirical evidence has demonstrated that market competition among information outlets can effectively mitigate many risks and that overreliance on direct regulatory tools is not only unnecessary but also detrimental.

This Article argues that robust competition among chatbots and other information outlets in the information marketplace can substantially mitigate, and in some cases resolve, the content risks posed by generative AI technologies. This may render certain loudly advocated but poorly tailored regulatory strategies—like mandatory prohibitions, licensure, curation of datasets, and notice-and-response regimes—unnecessary and even toxic to desirable competition and innovation throughout the AI industry. For privacy disclosure, copyright infringement, and any other risks that the information market might fail to satisfactorily address, proportionately designed regulatory tools can help to ensure a healthy environment for the informational marketplace and to serve the long-term interests of the public. Ultimately, the ideas that I advance in this Article should pour some much-needed cold water on the regulatory frenzy over generative AI and steer the issue back onto a rational track.