Artificial Intelligence (AI) capabilities are rapidly advancing. Highly capable AI could lead to radically different futures depending on how it is developed and deployed. We are unable to specify human goals and societal values in a way that reliably directs AI behavior. Specifying the desirability (value) of AI taking a particular action in a particular state of the world is unwieldy beyond a very limited set of state-action values. The purpose of machine learning is to train on a subset of states and have the resulting agent generalize an ability to choose high-value actions in unencountered circumstances. Inevitably, the function ascribing values to an agent’s actions during training is an incomplete encapsulation of human values, and the training process is a sparse exploration of states pertinent to all possible futures. After training, AI is therefore deployed with a coarse map of human-preferred territory and will often choose actions unaligned with our preferred paths. Law-making and legal interpretation convert opaque human goals and values into legible directives. Law Informs Code is the research agenda embedding legal processes and concepts in AI. Just as parties to a legal contract cannot foresee every potential “if-then” contingency of their future relationship, and legislators cannot predict all the circumstances under which their bills will be applied, we cannot ex ante specify “if-then” rules that provably direct good AI behavior. Legal theory and practice offer an array of tools to address these problems. For instance, legal standards allow humans to develop shared understandings and adapt them to novel situations, i.e., to generalize expectations regarding actions taken in unspecified states of the world. In contrast to more prosaic uses of the law (e.g., as a deterrent of bad behavior), when the law is leveraged as an expression of how humans communicate their goals and what society values, Law Informs Code. We describe how data generated by legal processes and the tools of law (methods of law-making, statutory interpretation, contract drafting, applications of standards, and legal reasoning) can facilitate the robust specification of inherently vague human goals to increase human-AI alignment. Toward society-AI alignment, we present a framework for understanding law as the applied philosophy of multi-agent alignment, harnessing public law as an up-to-date knowledge base of democratically endorsed values ascribed to state-action pairs. Although law is partly a reflection of historically contingent political power – and thus not a perfect aggregation of citizen preferences – if properly parsed, its distillation offers the most legitimate computational comprehension of societal values available. Other data sources suggested for AI alignment – surveys, humans labeling “ethical” situations, or (most commonly) the beliefs of the AI developers – lack an authoritative source of synthesized preference aggregation. Law is grounded in verifiable resolutions: ultimately obtained from a court opinion, but short of that, elicited from legal experts. If law informs powerful AI, engaging in the deliberative political process to improve law would take on even more meaning.
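To make the specification problem concrete, consider the following minimal sketch (the states, actions, and values are hypothetical illustrations, not drawn from the abstract): a hand-specified state-action value table can cover only a tiny slice of the states a deployed agent will encounter, leaving the agent's learned generalization, rather than the designers' intent, to decide the rest.

```python
# Minimal sketch (hypothetical states, actions, and values; illustrative only):
# a hand-written state-action value table covers only a few of the states a
# deployed agent will actually encounter.

# Designer-specified values for a handful of (state, action) pairs.
specified_values = {
    ("user_asks_for_help", "assist"): 1.0,
    ("user_asks_for_help", "ignore"): -1.0,
    ("user_reports_error", "investigate"): 1.0,
}

def specified_value(state, action):
    """Return the designer-assigned value for (state, action), if any."""
    return specified_values.get((state, action))

# A deployed agent routinely faces pairs the table never anticipated:
print(specified_value("user_requests_harmful_content", "assist"))  # None
# The specification is silent here, so whatever the model generalized during
# training, rather than the designers' intent, determines the chosen action.
```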
The video game industry is massive, with annual worldwide revenue of $180 billion, $60 billion of which comes from America alone. For context, the industry’s size is greater than that of the movie, book, and music industries combined. Yet despite this market dominance, the video game industry is relatively new. Only in the 2011 decision of Brown v. Entertainment Merchants Association did the Supreme Court extend First Amendment protection to games. Still, the Court failed to define the scope of the game medium. As understood by an average person, a video game could be something as simple as Pac-Man or as complicated as a sophisticated $200 million recreation of the American West. Taken literally, the Supreme Court’s rule in Brown would require lower courts to treat all video games—regardless of their individual characteristics, sophistication, and visuals—as equally protected under the law. Yet lower courts have not followed this Supreme Court decision to the letter. Instead, judges have scrutinized how complicated a video game is, whether it has a narrative, whether its characters are unique, and other characteristics that should be irrelevant. This Article confronts the ways that lower courts discriminate against video games compared to established mediums and argues that this violates the Supreme Court mandate in Brown. It also provides a context and legal basis for constitutionally protecting video games that the Supreme Court failed to provide in its relatively simplistic decision. Ultimately, I argue that lower courts should take the Brown decision seriously and treat video games like any other protected medium, even if the results at first seem counterintuitive.
Biased black-box algorithms have drawn increasing levels of scrutiny from the public. This is especially true for those black-box algorithms with the potential to negatively affect protected or vulnerable populations.1 One type of these black-box algorithms, a neural network, is both opaque and capable of high accuracy. However, neural networks do not provide insight into the relative importance of the predictors or covariates, or into their underlying relationships and structures with the modelled outcomes.2 There are methods to combat a neural network’s lack of transparency: globally or locally interpretable post-hoc explanatory models.3 However, the threat of such measures usually does not deter an actor from deploying a black-box algorithm that generates unfair outcomes along racial, class, or gender lines.4 Fortunately, researchers have recognized this issue and developed interpretability frameworks to better understand such black-box algorithms. One of these remedies, the Shapley Additive Explanation (“SHAP”) method, ranks the determinative factors that led to the algorithm’s final decision and measures the partial effects of the independent variables used in the model.5 Another, the Local Interpretable Model-agnostic Explanations (“LIME”) method, uses a similar approach to reverse-engineer the determinative factors harnessed by the algorithm.6 Both SHAP and LIME have the potential to shine light into even the most accurate and precise black-box algorithms. These black-box algorithms can harm people’s physical being and property interests.7 However, algorithm developers currently hide behind the nominally impenetrable nature of the algorithm to shield themselves from liability. These developers claim that black-box algorithms are the industry standard, due to the increased accuracy and precision that these algorithms typically possess. However, SHAP/LIME can ascertain which factors might cloud the judgment of the algorithm and therefore cause harm. As such, SHAP/LIME may lower the foreseeability threshold currently set by tort law and help consumer-rights advocates combat institutions that recklessly foist malevolent algorithms upon the public. Part II will provide an overview of the SHAP/LIME methods and apply them to a tort scenario involving a self-driving car accident. Part III will cover the potential tort claims that may arise out of the self-driving car accident, and how SHAP/LIME would advance each of these claims. SHAP/LIME’s output has not yet been compared to the foreseeability threshold under negligence or product/service liability. There are numerous factors that sway SHAP/LIME both toward and against reaching that threshold. The implications are severe: if the foreseeability threshold is not reached, a finder of fact might not find fault with the algorithm generator. Part IV will cover the evidentiary objections that might arise when submitting SHAP/LIME-generated evidence for admission. Reverse-engineering an algorithm mirrors crime-scene re-creation. Thus, the evidentiary issues involved in recreating crime scenes also arise when reverse-engineering algorithms.8 Important questions of relevance, authenticity, and access to the algorithm directly affect the viability of submitting evidence derived using either the SHAP or LIME methods.9 Part V will conclude by contextualizing the need for transparency within an increasingly algorithm-driven society.
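As a concrete illustration of what these methods produce, here is a minimal sketch applying the SHAP and LIME libraries to rank the features behind a single black-box prediction. The feature names, synthetic data, and stand-in model are hypothetical, chosen only to show the shape of the output, not drawn from the Article or from any real deployed system.

```python
# Minimal sketch (hypothetical features, synthetic data, stand-in model):
# ranking the factors behind one black-box prediction with SHAP and LIME.
import numpy as np
from sklearn.ensemble import RandomForestRegressor

import shap                                           # pip install shap
from lime.lime_tabular import LimeTabularExplainer    # pip install lime

rng = np.random.default_rng(0)
feature_names = ["income", "debt_ratio", "zip_code_risk", "age"]
X = rng.normal(size=(500, 4))
y = X[:, 0] - X[:, 1] + 0.5 * X[:, 2]        # synthetic "risk score" labels

model = RandomForestRegressor(n_estimators=100, random_state=0).fit(X, y)
instance = X[:1]                              # the single decision to explain

# SHAP: additive contribution of each feature to this particular prediction.
shap_values = shap.TreeExplainer(model).shap_values(instance)
print(dict(zip(feature_names, shap_values[0].round(3))))

# LIME: fit a local, interpretable surrogate model around the same instance.
lime_explainer = LimeTabularExplainer(
    X, feature_names=feature_names, mode="regression")
lime_explanation = lime_explainer.explain_instance(
    instance[0], model.predict, num_features=4)
print(lime_explanation.as_list())
```

Either output is a ranked list of feature contributions, which is the kind of post-hoc account of the algorithm's decision that the foreseeability and evidentiary analysis in Parts III and IV would be evaluating.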
I conclude that tort law’s foreseeability threshold is currently not fit for purpose when it comes to delivering justice to victims of biased black-box algorithms. As for complying with the Federal Rules of Evidence, SHAP/LIME’s admissibility depends on the statistical confidence level of the method’s results. I conclude that SHAP/LIME generally have been properly tested and accepted by the scientific community, so it is probable that statistically relevant SHAP/LIME-generated evidence can be admitted.10
In 2017, National Security Agency hacking tools were leaked on the Internet. One of these hacking tools relied on a vulnerability in Microsoft software. Its leak caused “the most destructive and costly N.S.A. breach in history.” This hacking tool took out: [the British health care system], Russian railroads and banks, Germany’s railway, French automaker Renault, Indian airlines, four thousand universities in China, Spain’s largest telecom, Telefonica, Hitachi and Nissan in Japan, the Japanese police, a hospital in Taiwan, movie theater chains in South Korea, nearly every gas station run by PetroChina, China’s state-owned oil company, and, in the United States, FedEx and small electrical companies across the country. This hacking tool was later incorporated into a different cyberweapon, where it caused an additional $10 billion in damage. Some consider this total a “gross underestimate.” The executive branch, through an internal process, had withheld this vulnerability from Microsoft for seven years. According to the executive branch, this Microsoft vulnerability was too valuable to disclose: the hacking tool using the Microsoft vulnerability “netted some of the very best counterterrorism intelligence” the NSA received. But the executive branch lacks the authority to decide unilaterally that a vulnerability’s intelligence value outweighs the cost of withholding it. Vulnerabilities like the Microsoft one that the executive branch withheld are known as zero-day vulnerabilities (“zero-days”). This Comment’s thesis is that the executive branch cannot unilaterally withhold these zero-days to conduct offensive cyber operations or surveillance. I demonstrate this thesis in three steps. First, I explain what zero-days are and why they are dangerous. Second, I show that the executive branch of the U.S. government unilaterally withholds zero-days. Third, and finally, I explain why the executive branch’s unilateral withholding of zero-days to conduct offensive cyber operations or national security surveillance is unconstitutional.
The Constitution grants patent owners exclusive rights over their inventions to “promote the Progress of Science.”1 This clause was drafted based on the belief that monetary incentives granted to the first inventor, such as the proceeds from selling and licensing the invention, would foster new ideas and accelerate innovation to the benefit of the public welfare. However, when the first inventor is the sole beneficiary of the rewards from the innovation, subsequent innovation may be stifled. For instance, the first person to invent the idea of a mobile phone, while lacking the rights to the underlying technologies essential to it, must obtain licenses from the patent owners of the phone’s low-voltage battery, keyboard, camera, operating system, and telecommunication technologies.2 In a free-market system, these deals will rarely go smoothly. If a low-voltage battery is the only battery in the market suitable for a future mobile phone, the mobile phone inventor will be forced to license the low-voltage battery from its owner. Knowing this, a battery patent owner with substantial market power will naturally demand a very high royalty (similar to the “patent holdup” issue discussed later in this article). Even if the mobile phone inventor successfully secures all necessary patents at a reasonable royalty rate, the cumulative royalties may be too high for a mobile phone product to make economic sense (similar to the “royalty stacking” issue tackled later in this article). These issues are especially prominent in the context of technology standards. For example, if a particular phone transmitter becomes the industry standard for receiving input and sending output signals, all mobile phone companies would be “forced” to license from the transmitter’s patent owner to ensure their phones’ interoperability and remain competitive with other companies. Although other transmitters in the market may work as well as the standard transmitter (the standard-essential technology), the standard-essential technology in effect becomes the only transmitter in the market, facing no competition. In other words, potential substitutes for and alternatives to the standard-essential technology are foreclosed because they are incompatible with products that implement the standard. The standard-essential technology therefore gives rise to an antitrust issue that inhibits future innovation and harms public welfare. This paper proposes a solution to antitrust issues arising from technology standard-setting. An overview of the standard-setting process is helpful to understand the issues inherent in the process, including patent holdup, patent ambush, and royalty stacking. These issues have antitrust implications. The paper next examines these issues in light of the Sherman Act and relevant antitrust case law. Two notable solutions to the technology standard antitrust issue and their limitations are briefly mentioned. Finally, we look to copyright law for solutions.
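A back-of-the-envelope illustration of the royalty stacking point above may help; the numbers are hypothetical and chosen only to show how modest per-patent royalties compound across the many patent holders a single product must license.

```python
# Hypothetical royalty-stacking arithmetic (illustrative numbers only).
sale_price = 500.00      # price of the finished phone, in dollars
patent_holders = 20      # distinct owners of essential patents
royalty_rate = 0.03      # a seemingly reasonable 3% of sale price, each

total_royalty = patent_holders * royalty_rate * sale_price
print(total_royalty)                 # 300.0 dollars per phone
print(total_royalty / sale_price)    # 0.6, i.e., 60% of the sale price
# Each individual royalty looks modest, but the stack can consume most of the
# product's revenue, making the product economically unviable.
```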