Author: Samuel Scialabba

Introduction

The emergence of Artificial Intelligence (AI) contract drafting software marks a pivotal moment in legal technology, where theoretical possibilities are transforming into market realities. As vendors compete to deliver increasingly sophisticated solutions, understanding the current state of this market becomes crucial for legal practitioners making strategic technology decisions. The landscape is particularly dynamic, with established legal tech companies and ambitious startups offering solutions that range from basic template automation to sophisticated language processing systems.

Yet beneath the marketing promises lies a more nuanced reality about what these systems can and cannot do. While some tools demonstrate remarkable capabilities in routine contract analysis and generation, others reveal the persistent challenges of encoding legal judgment into algorithmic systems. This tension between technological capability and practical limitation defines the current market moment, making it essential to examine not just who the key players are, but what their software delivers in practice.

This paper provides an analysis of the current market for AI contract drafting software, examining the capabilities and limitations of leading solutions. By focusing on specific vendors and their technologies, we aim to move beyond general discussions of AI’s potential to understand precisely where these tools succeed, where they fall short, and what this means for law firms and legal departments making technology investment decisions.

Historical Context and Technical Foundation

The rise of AI in legal practice reflects a fascinating evolution from theoretical possibility to practical reality. While early experiments with legal expert systems emerged in the 1960s at the University of Pittsburgh, marking the field’s experimental beginnings, the real transformation began with the maturation of machine learning and natural language processing (NLP) in the 21st century. These technologies fundamentally changed how computers could interpret and engage with human language, creating new possibilities for automated contract analysis and drafting that early pioneers could only imagine.

The shift from rule-based expert systems to sophisticated language models represents more than just technological progress—it marks a fundamental change in how we conceptualize the relationship between computation and legal reasoning. Early systems relied on rigid, pre-programmed rules that could only superficially engage with legal texts. Modern AI tools, by contrast, can analyze patterns and context in ways that more closely mirror human understanding of legal language, though still with significant limitations.

This technological evolution has particular significance for contract drafting, where the ability to understand and generate nuanced legal language is essential. While early systems could only handle the most basic document assembly, today’s AI tools can engage with contractual language at a more sophisticated level, analyzing patterns and suggesting context-appropriate clauses.

Contract drafting represents a complex interplay of legal reasoning and strategic foresight. At its core, the process demands not just accurate translation of parties’ intentions into binding terms, but also the anticipation of potential disputes and the careful calibration of risk allocation. Traditional drafting requires mastery of multiple elements: precise definition of terms, careful structuring of obligations and conditions, strategic design of termination provisions, and thorough implementation of boilerplate clauses that can prove crucial in dispute resolution.

AI systems use sophisticated pattern recognition to analyze existing contracts and learn standard legal language patterns, which helps ensure accuracy and precision in expressing each party’s intentions. These systems can help confirm that contract terms are legally enforceable by cross-referencing legal databases, statutes, and regulations to check compliance with relevant law. Furthermore, they excel at identifying common contractual conditions to obligations and suggesting appropriate risk mitigation clauses, such as force majeure clauses.

The technology’s analytical capabilities extend to identifying potential areas of dispute based on historical contract analysis, enabling preventive drafting approaches. By leveraging large databases of legal documents, AI systems streamline the drafting process through automated insertion of standard provisions while maintaining consistency across documents. This automation of routine tasks allows lawyers to focus on strategic aspects of contract preparation and negotiation.
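
To make the consistency-checking idea concrete, the following is a minimal illustrative sketch, not any vendor’s actual method, of the simplest form of such a check: scanning a draft for standard provisions. The clause list, phrasings, and function name are invented for illustration; commercial tools rely on trained language models rather than keyword lists.

```python
# Minimal illustrative sketch (not a real product's method): a naive
# keyword-based check for missing standard provisions in a draft.
# The clause names and phrasings below are invented for this example.

STANDARD_PROVISIONS = {
    "force majeure": ["force majeure", "act of god"],
    "governing law": ["governing law", "governed by the laws of"],
    "severability": ["severability", "held to be invalid"],
}

def missing_provisions(contract_text: str) -> list[str]:
    """Return the standard provisions whose typical phrasings never appear."""
    text = contract_text.lower()
    return [
        name
        for name, phrasings in STANDARD_PROVISIONS.items()
        if not any(phrase in text for phrase in phrasings)
    ]

draft = "This Agreement shall be governed by the laws of Delaware."
print(missing_provisions(draft))  # ['force majeure', 'severability']
```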

Principal Players in AI Contract Drafting

  • Gavel

Gavel is a standout tool for document automation, designed to simplify the creation of legal documents through customizable templates and conditional logic. Its drag-and-drop interface is intuitive, making it accessible to non-technical users, and it excels at generating complex, customized documents quickly. Gavel’s ability to integrate with other systems and automate repetitive tasks, such as populating templates with data, makes it a powerful tool for legal teams looking to streamline their workflows.

However, Gavel’s focus on automation means it lacks advanced AI capabilities for contract analysis or review. It is primarily a tool for generating documents based on predefined templates, rather than analyzing or extracting insights from contracts. Additionally, the quality of its output depends heavily on the templates and data inputs, which may require significant upfront effort to configure.

  • Ironclad

Ironclad is a leader in contract lifecycle management (CLM), offering a comprehensive platform that combines AI-powered drafting with workflow automation. Its integration with Microsoft Word and other productivity tools allows users to draft, negotiate, and approve contracts within familiar environments. Ironclad’s AI is particularly effective at generating standard contracts (e.g., NDAs, service agreements) and suggesting clauses based on predefined templates. The platform’s analytics dashboard also provides valuable insights into contract performance, helping organizations optimize their workflows.

While Ironclad excels at automating routine tasks, its AI may struggle with highly complex or bespoke agreements, requiring significant customization. Additionally, its pricing structure, often tailored for enterprise-level clients, may be prohibitive for smaller firms or solo practitioners.

  • Zuva

Zuva, spun out of Kira Systems, focuses on AI-powered document understanding and contract analysis. Its technology is designed to be embedded into other software applications via APIs, making it a versatile solution for enterprises and developers. Zuva’s AI excels at extracting key terms and clauses from contracts, enabling users to quickly identify risks and obligations. The platform also offers a robust clause library, which can be used to streamline drafting and ensure consistency across documents.

Zuva’s strength as an embeddable solution also presents a limitation: it lacks a standalone, user-friendly interface for non-technical users. Additionally, while Zuva’s AI is powerful, it may require customization to handle highly specialized legal domains or jurisdiction-specific nuances.

  • LawGeex

LawGeex specializes in AI-powered contract review, using natural language processing (NLP) to compare contracts against predefined policies and flag deviations. This makes it an invaluable tool for legal teams tasked with ensuring compliance and reducing risk. LawGeex’s AI is particularly effective at handling high-volume, routine contracts, such as NDAs and procurement agreements, where speed and accuracy are critical.

While LawGeex excels at contract review, its capabilities in contract drafting are more limited. The platform is primarily designed to identify risks and deviations rather than generate new contracts from scratch. Additionally, its effectiveness depends on the quality of the predefined policies and templates, which may require significant upfront effort to configure.

  • Kira Systems

Kira Systems, now part of Litera, is a pioneer in AI-powered contract analysis, particularly in the context of due diligence and large-scale contract review. Its machine learning models are highly effective at identifying and extracting key clauses and data points from contracts, such as termination clauses, indemnities, and payment terms. Kira’s ability to handle vast volumes of documents quickly and accurately has made it a favorite among law firms and corporate legal teams, especially in industries like M&A, real estate, and financial services.

  • Luminance: AI for Anomaly Detection and Due Diligence

Luminance is a powerful AI platform designed for contract review and due diligence, with a particular focus on identifying anomalies and risks in large datasets. Its proprietary machine learning technology, based on pattern recognition, enables it to quickly analyze and categorize contracts without the need for extensive training. Luminance’s intuitive interface and real-time collaboration features make it a popular choice for legal teams working on complex transactions.

While Luminance excels at contract review and anomaly detection, its capabilities in contract drafting are more limited. The platform’s effectiveness may also depend on customization to handle jurisdiction-specific or industry-specific requirements.

AI in Practice: Use Cases Across Industries

  • Mergers and Acquisitions

Mergers and acquisitions (M&A) are among the most complex and high-stakes transactions in the legal world, requiring meticulous due diligence and the ability to process vast volumes of contracts under tight deadlines. In this context, Kira Systems has emerged as a leading solution. Kira’s machine learning models excel at extracting key clauses—such as termination provisions, indemnities, and payment terms—from large datasets, enabling legal teams to identify risks and inconsistencies quickly. For example, Clifford Chance, a global law firm, has leveraged Kira Systems to streamline clause extraction and comparison across multiple contracts, significantly reducing the time required for due diligence. Kira’s ability to handle the nuanced language of M&A agreements makes it an indispensable tool for law firms and corporate legal departments navigating these complex transactions.

  • Real Estate

The real estate sector is characterized by a high volume of contracts, including leases, purchase agreements, and mortgages. These documents often require careful review to ensure compliance with regulatory standards and to identify potential risks. Luminance has proven particularly effective in this domain. Its proprietary machine learning technology is designed to detect anomalies and categorize contracts quickly, making it ideal for real estate transactions. Luminance’s ability to analyze large datasets and flag non-standard clauses has been instrumental in helping real estate firms review leases and purchase agreements more efficiently. By automating the review process, Luminance allows legal teams to focus on strategic aspects of real estate deals, such as negotiation and risk mitigation.

  • Finance

The finance industry deals with a wide range of contracts, from loan agreements to derivatives, all of which must comply with strict regulatory standards. In this highly regulated environment, LawGeex has established itself as a trusted tool for contract review and compliance. LawGeex uses natural language processing (NLP) to compare contracts against predefined policies, flagging deviations and ensuring compliance with regulatory requirements. Its high accuracy rate—94% in spotting risks in non-disclosure agreements (NDAs), compared to 85% for human lawyers—makes it a valuable asset for financial institutions. By automating the review of high-volume contracts, LawGeex allows legal teams to focus on strategic risk management and regulatory compliance.

Conclusion: Algorithmic Precision Meets Strategic Expertise

The analysis of leading AI contract tools reveals a clear pattern: while each platform excels in specific domains—Kira in M&A due diligence, Luminance in anomaly detection, LawGeex in compliance—none yet offers a comprehensive solution for all contract-related tasks. This specialization reflects both the complexity of legal work and the current limitations of AI technology. The industry-specific applications demonstrate that AI tools are most effective when deployed strategically, focusing on tasks that benefit from pattern recognition and large-scale data processing, while leaving nuanced legal interpretation and strategic decision-making to human experts.

This bifurcation of responsibilities suggests an emerging model of legal practice where AI serves not as a replacement for lawyers but as a force multiplier for legal expertise. The success of platforms like Kira in M&A and LawGeex in financial compliance indicates that the future of legal technology lies not in attempting to replicate human judgment, but in augmenting it by handling routine analysis and flagging potential issues for expert review. As these technologies continue to evolve, the key challenge for legal practitioners will be developing workflows that effectively leverage AI’s analytical capabilities while preserving the critical role of human expertise in strategic legal thinking and complex decision-making.


A. Introduction

The bipartite structure of the American patent system, comprising the U.S. Patent and Trademark Office and the federal Article III court system, leads to interesting interactions between rulings made in each of the distinct subsystems. This is especially relevant in the context of claim construction. Moreover, because the Federal Circuit has exclusive jurisdiction over patent appeals, its unique frameworks for analyzing procedural and substantive legal issues lead to facially surprising outcomes. Two recent cases applying Federal Circuit precedent illustrate this in relation to the judicial application of collateral estoppel, or issue preclusion.

B. The Broad Strokes of Issue Preclusion

Issue preclusion stands for the idea that, “[o]nce a matter [has been] properly litigated, that should be the end of the matter for the parties to that action.” It is similar, in that sense, to res judicata, but issue preclusion has a distinct scope. Issue preclusion applies where the issue was previously and properly litigated and the decision on the matter was material to the case in which it was decided. The more important distinction between issue preclusion and res judicata, however, is that issue preclusion does not require “mutuality” between the parties in the case where issue preclusion is being asserted. In other words, the second litigation does not need to be between the same two parties, as is the case with res judicata.

The Federal Circuit, the relevant circuit for this discussion, recognizes exceptions to issue preclusion. The exception that interacts most with rulings from the PTAB (the Patent Trial and Appeal Board of the USPTO) is that issue preclusion does not apply where the subsequent proceeding applies “a different legal standard.” This was the deciding exception in the two cases that help us understand the implications of this Federal Circuit exception to issue preclusion.

C. Standard Disparities Between PTAB and Article III Courts

The USPTO applies its own particular standards during PTAB proceedings. These are statutorily defined, further interpreted in federal regulations, and expounded upon in the Manual of Patent Examining Procedure. For the purpose of understanding how PTAB rulings interact with decisions of Article III courts, we will focus primarily on rules relevant to claim construction and patent validity.

  • Claim Construction in Front of the PTAB and the Courts

In evaluating the validity of patent claims, the USPTO applies a “broadest reasonable interpretation” when construing, or determining the meaning of, the claims of the patent under examination. This is relevant during initial examination of the patent by the USPTO’s examiners and on appeal of a final rejection to the PTAB. In other words, the USPTO and PTAB look for the broadest interpretation of the language of the claims that remains reasonable. The courts, on the other hand, apply the Phillips Standard, which construes the claim as one of ordinary skill in the field would understand it in light of the specification and the prosecution history, that is, the record produced during the original examination of the patent.

This was a potential factor in the rejection of the application of collateral estoppel in our first case, DDR Holdings, LLC v. Priceline.com LLC. Interestingly, it was not a deciding factor in the disposition of the question of issue preclusion. As issue preclusion is an affirmative defense, it must be raised in the answer to a complaint. DDR Holdings, however, failed to raise issue preclusion until its brief. The court explicitly noted that this failure was fatal in and of itself, but nevertheless evaluated the merits of the request for collateral estoppel. In this case, DDR Holdings sought to estop Priceline from arguing that “merchants providing a service” were not merchants covered under the definition of a merchant in DDR Holdings’ ‘399 patent. Priceline was arguing this based on the prosecution history of the ‘399 patent. During prosecution and initial examination, DDR Holdings had deleted any reference to “providing services” from the definition of merchant within the specification. DDR Holdings had, however, incorporated the earlier, service-inclusive definition by incorporating the provisional application containing it by reference.

In light of this, the PTAB, during Inter Partes Review initiated by Priceline over the asserted ‘399 patent, found that the portion of the specification nominally defining “merchant” was not, in fact, definitional. Instead, the PTAB applied the broadest reasonable interpretation standard during the Inter Partes Review and found “merchants” to include “producers, distributors, or resellers of the goods or services to be sold.” In other words, the specification did not limit “merchants” because the PTAB did not find sufficient evidence to show that DDR Holdings intended to define “merchant” restrictively via the specification. As such, the broadest reasonable interpretation of “merchant” within the claims would necessarily include purveyors of services.

With this particular ruling in hand, DDR Holdings asserted that the matter had been properly litigated and that, therefore, Priceline should be collaterally estopped from asserting its differing construction of “merchant” in the case before the court. The court noted, however, that it was not bound by the decision of the PTAB because that decision applied the “Broadest Reasonable Interpretation” instead of the court’s Phillips Standard. Because the Federal Circuit, and by extension the district court, must apply the Phillips Standard, issue preclusion could not apply. In other words, Priceline could assert its construction that would exclude the provision of services from the ‘399 patent’s definition of merchants.

Under this standard, the Federal Circuit found the discussion of “merchant” in the ‘399 patent’s specification to be definitional. Furthermore, the Federal Circuit found the deletion of “services” between the provisional and non-provisional applications to be material and, under the Phillips Standard, found it to explicitly exclude services from the coverage of the ‘399 patent’s claims.

  • Inter Partes Review and Collateral Estoppel

The second case, Kroy IP Holdings, LLC v. Groupon, Inc., is more narrowly focused on challenges to patent validity, in the form of Inter Partes Review, before the PTAB. During an IPR proceeding, the PTAB applies a preponderance of the evidence standard when determining whether a patent is valid or invalid. This is at odds with the standard applied in Article III courts, which instead apply the clear and convincing evidence standard.

In Kroy, Groupon had previously initiated IPR of patents asserted by Kroy IP Holdings. In the IPRs, Groupon prevailed and the asserted patents were found to be invalid. The district court then held that Kroy IP Holdings was collaterally estopped from re-litigating the patent validity issues presented. After Groupon’s motion to dismiss was granted, Kroy IP Holdings appealed, arguing, among other things, that collateral estoppel should not have applied given the differing standards.

The Federal Circuit ultimately decided in Kroy IP Holdings’ favor. First, it noted that, on its face, collateral estoppel—or issue preclusion—could not apply here. This finding was the result of the lower standard of proof required before the PTAB versus that required before the court.

The court then addressed a further exception to this exception. Groupon had argued, in favor of precluding Kroy IP Holdings from re-litigating patent validity, that the Federal Circuit’s previous decisions stated that PTAB invalidity findings were themselves preclusive. The Federal Circuit disagreed and clarified. The court explained that PTAB findings on validity only become preclusive once the Federal Circuit has affirmed them. A natural result of this, as described by the court, is that claims found invalid by the PTAB remain in existence until the decision is appealed to the Federal Circuit and affirmed, or a district court, applying its heightened standard, independently finds the claims invalid. In other words, only once patent validity has been evaluated under the clear and convincing standard, either on appeal to the Federal Circuit or as a matter of first impression before a district court, does the disposition gain preclusive effect.

Because the District Court based its dismissal on the preclusive effect of the PTAB findings and said PTAB findings had not been appealed to the Federal Circuit, the court reversed and remanded.

D. Conclusion

The cases discussed above illustrate a distinct challenge facing the bipartite American patent system. Because of the differing standards applied by the two bodies that hold sway over patent litigation, parties can get a functional second bite at the apple when moving between the USPTO and Article III courts. This system is ultimately imperfect and creates duplicative litigation, as demonstrated in both of these cases. This is counterbalanced by the increased efficiency the USPTO and PTAB ostensibly provide to the American intellectual property system. In summary, because of the structure of the American patent system, issue preclusion—or collateral estoppel—remains difficult to invoke in patent litigation.


Proponents of virtual reality (VR) as a medium for evidence in the courtroom have argued that it can bring many benefits to jurors, including enhanced empathy and better factual understanding. However, it is also speculated that VR could amplify a juror’s biases or create a false sense of accuracy. As VR technology advances, the legal field faces the challenge of balancing innovation with impartiality, paving the way for standards that will determine the future role of VR in trials. By examining VR’s speculative and actual impacts in evidence presentation, we gain insight into how this technology could further affect the legal landscape.

I. What Is VR and How Does It Relate To Evidence?

In its broadest sense, VR is “a simulated three-dimensional (3D) environment that lets users explore and interact with a virtual surrounding in a way that approximates reality, as it’s perceived through the users’ senses.” VR technology primarily utilizes headwear that covers the eyes completely, presenting a three-dimensional immersive world in a 360-degree spherical field of view. While VR technology is gaining popularity, many people don’t use or come across it in daily life. Although VR has been trendy for recreational use, such as VR video games, it has also been implemented in many professional settings for training, education, healthcare, retail, real estate, and more. The visual, auditory, and even tactile aspects of virtual reality, ranging from vibrations to full-body haptic suits, allow the immersion to feel more ‘real’ and thus enable these practical applications.

These practical applications have led to speculation and interest in using VR technology in the legal field. One of the primary ideas is that jurors can “experience” scenes of the case rather than physically going there. Jurors have shown a desire to visit crime scenes in homicide cases when the scene itself is relevant to the conviction. VR technology can help overcome the hurdles of photographs or videos, as juries can ‘virtually witness’ the scene and simulated events. The power of VR technology to transport jurors to the scene of the crime can also help make complex cases more understandable.

II. Evidentiary Concerns with VR Evidence

Before addressing VR evidence’s potential benefits and harms, it is necessary to consider its admissibility. Federal Rules of Evidence, such as hearsay and authentication, present unique challenges for the admissibility of VR evidence.

Under the Federal Rules of Evidence, hearsay is an out-of-court statement offered to prove the truth of the matter asserted. For example, if trying to prove person A loves VR, person B testifies that person A told them, “I love VR.” In this example, person B testifying to person A’s statement is hearsay. Hearsay is not admissible unless it meets an exemption or exception in the Federal Rules of Evidence. While the exact use of VR evidence containing out-of-court statements would vary on a case-by-case basis, a VR presentation offered for a secondary purpose, not for the truth of the matter asserted but to clarify other admissible evidence, would be admissible. For example, if there is an admissible recording of a witness describing a crime scene, VR evidence could help contextualize their testimony and immerse the jurors in the scene. In this case, the purpose wouldn’t be to prove the scene looked exactly as described or appeared in VR but to clarify the witness’s admissible testimony. While this may keep VR demonstrations out of jury deliberations, they can still be shown in the courtroom.

Another unique issue that comes with introducing VR presentations of evidence is authentication. According to the Federal Rules of Evidence, to introduce evidence, a proponent must produce evidence “sufficient to support a finding that the item is what the proponent claims it is.” This presents a unique problem for VR demonstrations because a proponent must show that the VR evidence is authentic. For example, with a photograph, a witness can authenticate it by testifying to taking the photo or confirming it accurately represents its contents. However, because VR is created as a simulation rather than a direct capture, VR cannot be authenticated the same way as a photograph. A proponent would instead rely on Federal Rule of Evidence 901(b)(9) for authentication. Because this rule alone would not be sufficient, a guideline for admitting VR evidence is that the proponent should “demonstrate that a qualified expert created the VR demonstration using an accurate program and equipment.” The proponent should also show that all data used to create the demonstration was accurate and that no unfounded assumptions were made. Lastly, the proponent must present witness testimony to “verify the accuracy of the final product.”

III. Speculated and Actual Benefits of VR Evidence

As VR technology has become cheaper, more mainstream, and more widely used, it is appearing in actual cases, making the potential for its wider use more achievable. One of the primary speculated benefits is the immersive nature of VR, which allows jurors to engage more deeply with evidence by experiencing crime scenes and potentially re-creating events firsthand. Another speculated benefit is VR’s potential to “appeal to a jury’s emotional and subconscious” responses through its immersive nature.

Real-life implementations of VR evidence have already illustrated some of these benefits. One example comes from Marc Lamber and James Goodnow, personal injury attorneys who have implemented VR technology in cases to “transport a jury to an accident scene.” Lamber and Goodnow work with engineers, experts, and production companies to recreate the scene where an injury or death occurred. This has allowed jurors not only to visualize the circumstances, events, and injury but also to empathize more deeply with the injured person’s suffering and the aftermath of the incident. This ability to ‘transport’ the jurors to the scene can be incredibly impactful, as it may be hard for jurors to visualize the scene in an isolated courtroom. One study in Australia focused on how VR can affect a jury’s ability to reach the ‘correct’ verdict. Researchers, legal professionals, police officers, and forensic scientists simulated a hit-and-run scene in both VR and photographs, then split mock jurors into groups to test the differences. The study found that jurors using VR required significantly less effort than those using photographs to construct a coherent narrative, and they reached the correct verdict 9.5 times more frequently than those who relied on photographs alone. The immersive technology also gave the jurors a better memory of the critical details of the case; the photograph group had difficulty visualizing the events of the case from the photographs alone. Researchers called the study “unequivocal evidence that interactive technology leads to fairer and more consistent verdicts.”

IV. Speculated and Actual Harms of VR Evidence

While the immersive nature of VR technology has prompted speculation about potential benefits for the legal field, concerns have emerged about possible harms or shortcomings of VR technology as evidence. The primary concerns are about potential biases and costs.

VR technology might cause jurors to impermissibly judge parties, especially defendants in criminal trials, differently according to underlying biases that they hold. One study found that mock jurors who used VR technology to understand a criminal trial were more likely to judge a black defendant more harshly than a white one. These studies used VR to simulate scenes from trials and, through computer generation, swapped the races of the defendants and tested the differences in guilty verdicts and sentencing. Salmanowitz’s study found that using an avatar instead of an accurate visual representation of the defendant can reduce implicit bias based on race. The avatars were represented by only the handheld controllers visible in the virtual space, and the VR technology made no substantial difference in the jury’s decisions. However, a study by Samantha Bielen et al. found that jurors using VR may be biased against non-white defendants, with non-white defendants more likely to be found guilty on the same evidence as a white defendant.

The cost of VR also presents a barrier to implementing VR technology in courts. In the Australian study, a researcher noted that using VR as an evidentiary medium is “expensive, especially in remote locations, and in some cases, the site itself has changed, making accurate viewings impossible.” VR technology is expensive, with even the cheapest consumer-grade headsets costing around $500. Further, digital recreation of a scene starts at $15,000 but can “go up to six figures depending on complexity.”

V. Conclusion

While the balance between the benefits and harms of introducing VR as a medium for evidence may vary greatly case-by-case, overall, the demonstrated advantages in improving jurors’ factual understanding tend to outweigh the drawbacks. Although speculation is a natural reaction to new technologies, as VR finds real-world application in courtrooms, its tangible benefits and harms have become clearer. This allows revisiting the initial speculation and more effectively addressing this balance and the admissibility concerns that accompany the use of VR demonstrations as evidence. Increased use and advancements in VR technology could amplify these benefits by increasing empathy and accuracy and tempering the effects of emotional bias. With this evolution in VR technology, the potential for an immersive yet balanced use of VR in the courtroom grows, offering an even greater ability for jurors to engage with evidence to enhance understanding, minimize bias, and support fairer, more informed verdicts.

I. INTRODUCTION

In May of 2024, the Federal Circuit overruled 40 years of precedent for assessing the obviousness of design patents in LKQ Corp. v. GM Global Technology Operations LLC. Already, commentators and practitioners have a wide array of opinions about the impacts of LKQ. If recent history is any guide, however, declarative statements about the impacts of LKQ are premature, and they create risks to businesses, practitioners, and courts alike. Rather, patent law observers should adopt a wait-and-see approach for the impacts of LKQ on design patent obviousness. 

II. THE LKQ DECISION 

In LKQ, the Federal Circuit addressed the standard for assessing design patent obviousness under 35 U.S.C. § 103. Before this decision, to find a claimed design unpatentable as obvious, the two-part Rosen-Durling test required a primary reference that was “basically the same” as the claimed design and secondary references “so related” to the primary reference that their features suggested combination with the primary reference.

In this case, the Federal Circuit held that framework to be too rigid under the Patent Act. Instead, the court ruled that the obviousness of a claimed design is to be determined through the application of the familiar Graham four-part test used to assess the obviousness of utility patents. 

A. EARLY OPINIONS ABOUT LKQ

In the months since LKQ, opinions about the impacts of the decision have poured in from academics, practitioners, and commentators alike. Some predict a seismic shift, stating that the “far-” and “wide-reaching consequences” of LKQ will likely make design patents harder to obtain and easier to invalidate. Others predict little change at all, stating that the obviousness test “is largely the same as before” and that the expected changes from LKQ are primarily procedural. Still others seem to have landed on a middle ground, expecting “noticeable differences” in the law, with “examiners [having] more freedom to establish that the prior art is properly usable in an obviousness rejection.” 

B. PARALLELS WITH KSR 

LKQ is not the only recent decision dealing with obviousness that evoked immediate and wide-ranging reactions. In 2007, the Supreme Court issued KSR International Co. v. Teleflex Inc., a decision addressing the obviousness standard for patents. Notably, the Court rejected the Federal Circuit’s rigid application of its “teaching, suggestion, or motivation” test for obviousness to a utility patent in that case. 

In the immediate aftermath of that case, commentators and practitioners were “divided on whether the decision of the Supreme Court in KSR [was] (a) a radical departure from the Federal Circuit’s approach, or (b) unlikely to change much.” Even after the Federal Circuit began to issue decisions under KSR, some argued that the case had only a “modest impact” on the Federal Circuit, and others even questioned “whether the Supreme Court achieved anything in KSR other than giving the Federal Circuit a slap on the wrist.”

Experts were also divided on the likely business impacts of KSR in its immediate aftermath. In the summer after the decision came down, two distinguished patent law experts speaking on a panel were asked if KSR would drive up the cost of preparing and prosecuting a patent. One said yes, and the other said no. 

C. CAUTIONARY TALES FROM KSR 

As time went on, however, the impacts of KSR became clear. Empirical studies from years after the decision consistently showed that the impacts of KSR were anything but modest, contradicting “a commonly held belief that KSR did not change the law of obviousness significantly.” Various empirical studies revealed “strong evidence that KSR has indeed altered the outcomes of the Federal Circuit’s obviousness determinations,” “a remarkable shift in the Federal Circuit’s willingness to uphold findings of obviousness below,” and that “the benefit of retrospection shows KSR did change the rate of obviousness findings.”

Thus, KSR should serve as a cautionary tale against jumping to conclusions about the impacts of obviousness decisions. In the months following KSR, any declarative statements about its impacts were mere speculation. Even after the Federal Circuit began issuing decisions under KSR, the sample size remained too small to draw conclusions. Only years after the decision could researchers illuminate the impacts of KSR through empirical studies and show which of those early opinions were right and wrong. 

III. THE WISDOM OF A WAIT-AND-SEE APPROACH FOR LKQ

Since the Federal Circuit only issued LKQ in May of 2024, we remain in the window where any declarative statements about its impacts are premature. Indeed, the Federal Circuit acknowledged that “there may be some degree of uncertainty for at least a brief period” in its LKQ opinion. While the urge to jump to conclusions is understandable, a wait-and-see approach offers many advantages. 

First, as KSR demonstrated, early predictions may be inaccurate and may influence practitioners to adopt misguided design patent prosecution strategies. Overstating the impacts of LKQ may lead to overly cautious design patent applications, leaving intellectual property unprotected. A wait-and-see approach will allow prosecution strategies to develop based on reliable trends, reducing the risk of costly errors. 

Second, the Federal Circuit almost certainly has more to say about design patent obviousness than it included in its LKQ opinion. Faulty strategy changes based on an incomplete picture may later need to be undone at great expense. Waiting allows the courts to solidify the impacts of LKQ so that practitioners and businesses can adjust their approaches – if that is necessary – with greater certainty and lower risk. 

Third, overreacting to speculative predictions could cause companies to shift their design-around strategies, leading to unnecessary and wasteful changes in product lines. A wait-and-see approach allows companies to maintain their creative momentum and keep their design strategies consistent until the impacts of LKQ are better understood. 

Fourth, design patents have experienced a boom in recent years. Premature predictions about LKQ risk skewing the perceptions of business leaders and the public about the continued value in pursuing design patent protections. By waiting to confirm the impacts of LKQ, commentators avoid this risk. 

Fifth, predictions about LKQ could become self-fulfilling prophecies. Widespread speculation could unintentionally influence how courts evaluate obviousness in future cases. A wait-and-see approach allows courts to evaluate obviousness free from the noise of speculative predictions, focusing exclusively on the application of the law to the facts of each case. 

Lastly, practitioners face potential backlash from clients if they offer advice that turns out to be too aggressive or pessimistic. By advocating patience to their clients, practitioners can maintain client trust and offer more measured and thoughtful advice once the implications of LKQ become clear. 

IV. WHEN WILL WE KNOW? 

This all raises the question: when will we understand LKQ well enough that declarative statements about its impacts are appropriate? Again, we can turn to KSR for guidance.

More than a year after KSR was handed down, some were still questioning if the decision had any impact at all. The first empirical studies of its impacts seemed to emerge about two to three years after the decision, uniformly finding that it altered the law of obviousness. Therefore, it seems safe to assume that empirical studies will begin to illuminate the impacts of LKQ around 2026 or 2027. Until then, patent law observers should wait and see.

V. CONCLUSION

With the recent history of KSR as our guide, patent law observers should adopt a wait-and-see approach for the impacts of the Federal Circuit’s recent decision in LKQ. At this early stage, improper speculation and declarative statements about the impacts of the case create risks for businesses, practitioners, and courts. Instead, a wait-and-see approach allows reliable trends to guide prosecution strategies and allows design patent momentum to continue. In due time, empirical studies will emerge and make the impacts of LKQ clear to all.

Trending Uses of Deepfakes

Deepfake technology, leveraging sophisticated artificial intelligence, is rapidly reshaping the entertainment industry by enabling the creation of hyper-realistic video and audio content. This technology can convincingly depict well-known personalities saying or doing things they never actually did, creating entirely new content that never really occurred. The revived Star Wars franchise used deepfake technology in “Rogue One: A Star Wars Story” to reintroduce characters like Grand Moff Tarkin and Princess Leia, skillfully bringing back these roles despite the original actors, including Peter Cushing, having passed away. Similarly, in the music industry, deepfakes have been employed creatively, as illustrated by Paul Shales’ project for The Strokes’ music video “Bad Decisions.” Shales used deepfake technology to make the band members appear as their younger selves without them physically appearing in the video.

While deepfakes offer promising avenues for innovation, such as rejuvenating actors or reviving deceased ones, they simultaneously pose unprecedented challenges to traditional copyright and privacy norms.

Protections for Deepfakes

While deepfakes generate significant concerns, particularly about protecting individuals against deepfake creations, there is also controversy over whether the creators of deepfake works can secure copyright protection for their creations.

Copyrightability of Deepfake Creations

Current copyright laws fall short in addressing the unique challenges posed by deepfakes. These laws are primarily designed to protect original works of authorship that are fixed in a tangible medium of expression. However, they do not readily apply to the intangible, yet creative and recognizable, expressions that deepfake technology replicates. This gap exposes a crucial need for legal reforms that can address the nuances of AI-generated content and protect the rights of original creators and the public figures depicted.

Under U.S. copyright law, human authorship is an essential requirement for a valid copyright claim. In the 2023 case Thaler v. Perlmutter, plaintiff Stephen Thaler attempted to register a copyright for a visual artwork produced by his “Creativity Machine,” listing the computer system as the author. However, the Copyright Office rejected this claim due to the absence of human authorship, a decision later affirmed by the court. According to the Copyright Act of 1976, a work must have a human “author” to be copyrightable. The court further held that providing copyright protection to works produced exclusively by AI systems, without any human involvement, would contradict the primary objective of copyright law, which is to promote human creativity—a cornerstone of U.S. copyright law since its beginning. Non-human actors need no incentivization with the promise of exclusive rights, and copyright was therefore not designed to reach them.

However, the court acknowledged ongoing uncertainties surrounding AI authorship and copyright. Judge Howell highlighted that future developments in AI would prompt intricate questions. These include determining the degree of human involvement necessary for someone using an AI system to be recognized as the ‘author’ of the produced work, the level of protection afforded the resultant image, ways to assess the originality of AI-generated works based on non-disclosed pre-existing content, the best application of copyright to foster AI-involved creativity, and other associated concerns.

Protections Against Deepfakes

The exploration of copyright issues in the realm of deepfakes is partially driven by the inadequacies of other legal doctrines to fully address the unique challenges posed by this technology. For example, defamation law focuses on false factual allegations and fails to cover deepfakes lacking clear false assertions, like a manipulated video without specific claims. Trademark infringement, with its commercial use requirement, does not protect against non-commercial deepfakes, such as political propaganda. Right of publicity laws mainly protect commercial images rather than personal dignity, leaving non-celebrities and non-human entities like animated characters without recourse. False light requires proving substantial emotional distress from misleading representations, a high legal bar. Moreover, common law fraud demands proof of intentional misrepresentation and tangible harm, which may not always align with the harms caused by deepfakes.

Given these shortcomings, it is essential to discuss issues in other legal areas, such as copyright issues, to enhance protection against the misuse of deepfake technology. In particular, the following sections will explore unauthorized uses of likeness and voice and the impacts of deepfakes on original works. These discussions are critical because they aim to address gaps left by other legal doctrines, which may not fully capture the challenges posed by deepfakes, thereby providing a broader scope for protection. 

Unauthorized Use of Likeness and Voice

Deepfakes’ capacity to precisely replicate an individual’s likeness and voice may raise intricate legal issues. AI-generated deepfakes, while sometimes satirical or artistic, can also be harmful. For example, Taylor Swift has repeatedly become a target of deepfakes, including instances where Donald Trump’s supporters circulated AI-generated videos that falsely depict her endorsing Trump and participating in election denialism. This represents just one of several occasions where her likeness has been manipulated, underscoring the broader issue of unauthorized deepfake usage.

The Tennessee ELVIS Act updates personal rights protection laws to cover the unauthorized use of an individual’s image or voice, adding liabilities for those who distribute technology used for such infringements. In addition, on January 10, 2024, Reps. María Elvira Salazar and Madeleine Dean introduced the No Artificial Intelligence Fake Replicas And Unauthorized Duplications (No AI FRAUD) Act (H.R. 6943). This bill is designed to create a federal framework to protect individual rights to one’s likeness and voice against AI-generated counterfeits and fabrications. Under this bill, digitally created content using an individual’s likeness or voice would only be permissible if the person is over 18 and has provided written consent through a legal agreement or a valid collective bargaining agreement. The bill specifies that sufficient grounds for seeking relief from unauthorized use include financial or physical harm, severe emotional distress to the content’s subject, or potential public deception or confusion. Violations of these rights could lead individuals to pursue legal action against providers of “personalized cloning services” — including algorithms and software primarily used to produce digital voice replicas or depictions. Plaintiffs could seek $50,000 per violation or actual damages, along with any profits made from the unauthorized use.

Impact on Original Work

The creation of deepfakes can impact the copyright of original works. It is unclear whether deepfakes should be considered derivative works or entirely new creations.

In the U.S., a significant issue is the broad application of the fair use doctrine. Under § 107 of the Copyright Act (17 U.S.C. § 107), fair use is determined by a four-factor test assessing (1) the purpose and character of the use, (2) the nature of the copyrighted work, (3) the amount and substantiality of the portion used, and (4) the impact on the work’s market potential. This doctrine includes protection for deepfakes deemed “transformative use,” a concept from the Campbell v. Acuff-Rose decision, where the new work significantly alters the original with a new expression, meaning, or message. In such cases, even if a deepfake significantly copies from the original, it may still qualify for fair use protection if it is transformative and does not impact the original’s market value.

However, this broad application of the fair use doctrine and liberal interpretation of transformative use do not work in favor of original creators. They may protect deepfake content even when it is created with malicious intent, which makes it difficult for original creators to bring claims under § 512 of the DMCA and § 230 of the Communications Decency Act.

Federal and State Deepfake Legislation

“Copyright is designed to adapt with the times.” At present, although the United States lacks comprehensive federal legislation that specifically bans or regulates deepfakes, several federal proposals and state laws target deepfakes.

In Congress, a few proposed bills aim to regulate AI-generated content by requiring specific disclosures. The AI Disclosure Act of 2023 (H.R. 3831) requires any content created by AI to include a notice stating, “Disclaimer: this output has been generated by artificial intelligence.” The AI Labeling Act of 2023 (S. 2691) also demands a similar notice, with additional requirements for the disclaimer to be clear and difficult to alter. The REAL Political Advertisements Act (H.R. 3044 and S. 1596) demands disclaimers for any political ads that are wholly or partly produced by AI. Furthermore, the DEEPFAKES Accountability Act (H.R. 5586) requires that any deepfake video, whether of a political figure or not, must carry a disclaimer. It is designed to defend national security from the risks associated with deepfakes and to offer legal remedies to individuals harmed by such content. The DEFIANCE Act of 2024 aims to enhance the rights to legal recourse for individuals impacted by non-consensual intimate digital forgeries, among other objectives.

On the state level, several states have passed legislation to regulate deepfakes, addressing various aspects of this technology through specific legal measures. For example, Texas SB 751 criminalizes the creation of deceptive videos with the intent to damage political candidates or influence elections. In Florida, SB 1798 targets the protection of minors by prohibiting the digital alteration of images to depict minors in sexual acts. Washington HB 1999 provides both civil and criminal remedies for victims of fabricated sexually explicit images. 

This year, California enacted AB 2839, targeting the distribution of “materially deceptive” AI-generated deepfakes on social media that mimic political candidates and are known by the poster to be false, as the deepfakes could mislead voters. However, a California judge recently decided that the state cannot yet compel individuals to remove such election-related deepfakes, since AB 2839 facially violates the First Amendment. 

These developments highlight the diverse strategies that states are employing to address the challenges presented by deepfake technology. Despite these efforts, the laws remain incomplete and continue to face challenges, such as concerns over First Amendment rights.

Conclusion

As deepfake technology evolves, it challenges copyright laws, prompting a need for robust legal responses. Federal and state legislation is crucial in protecting individual rights and the integrity of original works against unauthorized use and manipulation. As deepfake technology advances, continuous refinement of these laws will be crucial to balance innovation with ethical and legal boundaries, ensuring protection against the potential harms of deepfakes.

Introduction 

Algorithmic bias is AI’s Achilles heel, revealing how machines are only as unbiased as the humans behind them. 

The most prevalent real-world stage for human versus machine bias is the job search process. What started out as newspaper ads and flyers at local coffee shops is now a completely digital process with click-through ads, interactive chatbots, resume data translation, and computer-screened candidate interviews.

Artificial intelligence encompasses a wide variety of tools, but in the HR context specifically, common AI tools include machine learning algorithms that conduct complex, layered statistical analysis modeled on human cognition (neural networks), computer vision that classifies and labels content in images or video, and large language models.

AI-enabled employment tools are powerful gatekeepers that determine the future of natural persons. With over 70% of companies using this technology, investing in its promise of efficiency and neutrality, those promises have recently come into question because these technologies have the potential to discriminate against protected classes.

Anecdote 

On February 20, 2024, Plaintiff Derek Mobley initiated a class action lawsuit against an AI-enabled HR organization, WorkDay, Inc., for engaging in a “pattern and practice” of discrimination based on race, age, and disability in violation of the Civil Rights Act of 1964, the Civil Rights Act of 1866, the Age Discrimination in Employment Act of 1967, and the ADA Amendments Act of 2008. WorkDay, according to the complaint, disproportionately disqualifies African-Americans, individuals over the age of 40, and individuals with disabilities from securing gainful employment.

WorkDay provides subscription-based AI HR solutions to medium- and large-sized firms in a variety of industries. The system screens candidates based on human inputs and algorithms, and according to the complaint, WorkDay employs an automated system, in lieu of human judgment, to determine how high volumes of applicants should be processed on behalf of its business clients.

The plaintiff and members of the class have applied for numerous jobs at companies that use WorkDay’s platforms and received several rejections. This process has deterred the plaintiff and members of the class from applying to companies that use WorkDay’s platform.

Legal History of AI Employment Discrimination 

Mobley v. WorkDay is the first class action lawsuit against an AI solution company for employment discrimination, but it is not the first time an AI organization has been sued for employment discrimination.

In August 2023, the EEOC settled the first-of-its-kind employment discrimination lawsuit against a virtual tutoring company that programmed its recruitment software to automatically reject older candidates. The company was required to pay $365,000 and, if it were to resume hiring efforts in the U.S., to invite all applicants rejected based on age during the April-May 2020 period to re-apply.

Prior to this settlement, the EEOC issued guidance to employers about their use of artificial intelligence tools that extends existing employee selection guidelines to AI-assisted selections. Under this guidance, employers, not third-party vendors, ultimately bear the risk of unintended adverse discrimination from such tools.

How Do HR AI Solutions Introduce Bias?

There are several steps in the job search process, and AI is integrated throughout. These steps include the initial search, narrowing candidates, and screening.

Initial search

The job search process starts with targeted ads reaching the right people. Algorithms in hiring can steer job ads toward specific candidates and help assess their competencies using new and novel data. HR professionals have found these tools helpful in drafting precise language and designing ads around position elements, content, and requirements. But these platforms can inadvertently reinforce gender and racial stereotypes by delivering ads to candidates who fit certain job stereotypes.

For instance, ads delivered on Facebook for stereotypically male jobs are overwhelmingly targeted at male users even when the advertising was intended to reach a gender-neutral audience. Essentially, at this step of the job search process, algorithms can prevent capable candidates from even seeing the job posting in the first place, which further creates a barrier to employment.

Narrowing Candidates

Once candidates have viewed and applied for the job through an ad or another source, the next step AI streamlines is candidate narrowing. At this step, the system narrows the field by identifying the resumes that best match the company’s historical hiring data or the model’s training data. Applicants found the resume-to-application-form data transfers helpful and accurate at this step, but were concerned that the model could miss necessary information.

From the company’s perspective, the client company’s hiring practices are still incorporated into the hiring criteria of the licensed model. While the algorithm is helpful in parsing vast numbers of resumes and streamlining this laborious process for professionals, it can also replicate and amplify existing biases in the company’s data.

For example, a manager’s past decisions can introduce anchoring bias. If biases around gender, education, race, or age shaped past hiring and are reflected in the current high-performing employees the company uses as a benchmark, those biases can be incorporated into the outcomes at this stage of the employment search process.
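To make the mechanism concrete, the brief sketch below (using entirely invented data) trains a simple screening model on historical hiring decisions that were skewed against one group. Even though the protected attribute is never given to the model as a feature, a correlated proxy lets the model reproduce the disparity:

```python
# Minimal sketch (invented data): a screening model trained on biased
# historical hiring decisions learns to reproduce that bias, even when the
# protected attribute itself is not a feature, via a correlated proxy.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 5000

skill = rng.normal(size=n)                      # true job-relevant signal
group = rng.integers(0, 2, size=n)              # protected attribute (0/1)
proxy = group + rng.normal(scale=0.3, size=n)   # e.g., zip code correlated with group

# Historical labels: past hires depended on skill AND (unlawfully) on group.
hired = (skill + 1.0 * group + rng.normal(scale=0.5, size=n)) > 0.5

# Train only on facially neutral features: skill and the proxy.
model = LogisticRegression().fit(np.column_stack([skill, proxy]), hired)

# Selection rates by group at a fixed score threshold reveal the inherited bias.
scores = model.predict_proba(np.column_stack([skill, proxy]))[:, 1]
selected = scores > 0.5
for g in (0, 1):
    print(f"group {g}: selection rate {selected[group == g].mean():.2f}")
```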

Screening

Some organizations subscribe to AI tools whose computer-vision-powered virtual interview process analyzes candidates’ expressions to determine whether they fit the “ideal candidate” profile, while other tools, such as behavior and skills games, are used to screen candidates prior to an in-person interview.

Computer vision models that analyze candidate expressions to assess candidacy have been found to perpetuate preexisting biases against people of color. For instance, one study evaluating such tools found that the taxonomies used to label social and behavioral traits create and sustain the same biased observations one human would make of another, because models trained on those labels inherit the power hierarchies behind them. In this sense, computer vision hiring tools are not neutral: they reflect the humans who train and rely on them.

Similarly, skill games are another popular tool used to screen candidates. However, there are some relationships AI cannot perceive in its analysis. For instance, candidates who are not adept at online games perform poorly on them, not because they lack job skills, but because they lack familiarity with the games’ features. Algorithms, however much data they are trained on, still fall short when it comes to accounting for confounds like the relationship between online gaming experience and performance on employment skills tests.

At each step of the employment search process, then, AI tools can fall short of accurately capturing candidates’ potential.

Discrimination Theories and AI

Given that the potential for bias is embedded throughout the employment search process, legal scholars speculate that courts are more likely to scrutinize discriminatory outcomes under the disparate impact theory of discrimination.

As a recap, under Title VII there are two theories of discrimination: disparate treatment and disparate impact. Disparate treatment means a person is treated differently “because of” their membership in a protected class (e.g., race, sex). For example, if a manager intentionally used a biased algorithm to screen out candidates of a certain race, that conduct would constitute disparate treatment. (This scenario is for illustrative purposes only.)

Disparate impact applies to facially neutral processes that have a discriminatory effect. The discriminatory-effect element can be complex because the plaintiff must identify the specific employer practice that has a disparate impact on a protected group. The employer can then defend the practice by showing it is “job related” and consistent with “business necessity.” Even so, the plaintiff can prevail by showing that a less discriminatory alternative selection process existed and the business failed to adopt it. Under this theory, when AI selection tools disproportionately screen women and/or racial minorities out of the applicant pool, disparate impact liability could follow.
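In practice, the threshold question of discriminatory effect is often screened quantitatively. The EEOC’s Uniform Guidelines on Employee Selection Procedures articulate a “four-fifths rule”: a selection rate for any group that is less than 80% of the rate for the most-selected group is generally regarded as evidence of adverse impact. The sketch below illustrates the arithmetic with hypothetical applicant counts:

```python
# Minimal sketch of the EEOC four-fifths (80%) rule with hypothetical counts.
applicants = {"group_a": 200, "group_b": 150}   # applicants per group (made up)
selected   = {"group_a": 60,  "group_b": 20}    # selections per group (made up)

rates = {g: selected[g] / applicants[g] for g in applicants}
best = max(rates.values())

for group, rate in rates.items():
    ratio = rate / best
    flag = "adverse impact indicated" if ratio < 0.8 else "within four-fifths"
    print(f"{group}: selection rate {rate:.2%}, impact ratio {ratio:.2f} ({flag})")
```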

Existing Methods to Mitigate Bias 

Algorithmic bias in AI tools has serious implications for members of protected classes. 

However, developers currently employ various techniques to de-bias algorithms and improve their accuracy. One method is debiased word embeddings, in which neutral associations of a word are reinforced to correct the model’s representation of it. For instance, a common stereotype, that men are doctors and women are nurses, appears in algorithmic terms as “doctor – man + woman = nurse.” After debiasing, the model instead learns “doctor – man + woman = doctor.”
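One way such debiasing is implemented, in the spirit of Bolukbasi et al.’s (2016) hard-debiasing work on word embeddings, is to project the gendered component out of occupation vectors. The sketch below uses invented three-dimensional vectors purely for illustration:

```python
# Toy sketch of embedding debiasing: remove the component of a word vector
# that lies along a learned "gender direction" (hard debiasing, in the
# spirit of Bolukbasi et al. 2016). All vectors here are invented.
import numpy as np

def neutralize(v, direction):
    """Remove v's projection onto the (unit-normalized) bias direction."""
    direction = direction / np.linalg.norm(direction)
    return v - np.dot(v, direction) * direction

# Invented 3-d "embeddings" where the first axis encodes gender.
man    = np.array([ 1.0, 0.2, 0.1])
woman  = np.array([-1.0, 0.2, 0.1])
doctor = np.array([ 0.6, 0.9, 0.3])   # biased: leans toward "man"

gender_direction = man - woman

doctor_debiased = neutralize(doctor, gender_direction)
print(doctor_debiased)  # first (gendered) component is now ~0

# After neutralization, "doctor - man + woman" no longer drifts toward
# a gendered neighbor such as "nurse".
```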

Another practice, currently employed by OpenAI, is external red teaming, in which outside stakeholders interact with the product, probe it for weaknesses, potential bias, or other adverse consequences, and provide feedback so the developer can mitigate those harms before they occur.

But there are limitations to these enhancements. To start, bias mitigation is not a one-size-fits-all exercise: bias is specific to its geographic and cultural context. For instance, a model deployed in India may need to consider caste-based discrimination. Additionally, precision is required to identify where bias can arise, and relying solely on the biases foreseeable from the developers’ perspective is limiting. Some form of collaborative design is needed, in which relevant stakeholders contribute both to identifying what is biased and to defining what is not.

Lastly, a debiased model is not a panacea. A recent study, in which users interacted with a debiased machine learning model that recommended college majors, found that regardless of the model’s output, users relied on their own biases when choosing majors, often motivated by the gender stereotypes associated with them.

Essentially, developer-side solutions alone are not enough to resolve algorithmic bias.

Efforts to Regulate AI Employment Discrimination

Federal law does not specifically govern artificial intelligence. However, existing laws, including Title VII, extend to uses of AI. At this point, regulatory efforts sit largely at the state and local level.

Additionally, the EEOC’s employer guidance is a start toward shifting the onus onto employers to investigate the capabilities and outcomes of the technologies incorporated into their hiring practices.

New York City was the first local government to enact a law regulating AI-powered employment decision tools. The statute requires organizations to notify candidates, before the screening device is used, that AI plays a role in the hiring process. If a candidate does not consent to the AI-based process, the organization must use an alternative method.

Like New York’s statute, Connecticut’s applies specifically to state agencies’ use of AI and machine learning hiring tools. It requires an annual review of each tool’s performance, including a status update on whether the tool has undergone some form of bias mitigation training to prevent unlawful discrimination.

New Jersey, California, and Washington, D.C. currently have pending bills intended to prevent discrimination by AI hiring systems.

Employer Considerations

With the possibility of bias embedded throughout each step of the recruiting process, employers must do their part to gather information about the performance of the AI systems they ultimately invest in.

To start, recruiters and managers alike stress the need for AI systems to explain why an applicant was rejected or selected, so that the model’s performance can be accurately assessed. This need speaks to AI models’ tendency to find proxies or shortcuts in the data that reach the intended outcome only superficially. For instance, a model might favor candidates who graduated from universities in the Midwest simply because most of upper management attended such schools. Employers should therefore ask vendors for accuracy reports and for ways to identify and correct such proxies in their hiring pools.
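One simple diagnostic employers can request, sketched below with invented data, is a report of how strongly each input feature correlates with a protected attribute; a strong correlation flags a potential proxy worth interrogating:

```python
# Minimal sketch (invented data): flag features that correlate strongly with
# a protected attribute and may act as proxies in a screening model.
import numpy as np

rng = np.random.default_rng(1)
n = 1000

protected = rng.integers(0, 2, size=n).astype(float)  # protected attribute (0/1)
features = {
    "midwest_university": protected * 0.8 + rng.normal(scale=0.3, size=n),  # proxy
    "years_experience":   rng.normal(loc=5, scale=2, size=n),               # unrelated
}

for name, values in features.items():
    r = np.corrcoef(values, protected)[0, 1]
    status = "potential proxy -- investigate" if abs(r) > 0.5 else "ok"
    print(f"{name}: corr with protected attribute = {r:.2f} ({status})")
```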

Similarly, models can latch onto candidate traits that are unrelated to the job and reflect nothing more than unexplained correlations. For example, one model in the UK linked liking “curly fries” on Facebook with higher intelligence. Employers need processes to analyze whether a model’s output is “job related” or otherwise tied to carrying out the functions of the business.

Lastly, employers must continue to invest in robust diversity training. Algorithmic bias reflects the biases of the humans behind the computer. While AI tools enhance productivity and take over the laborious parts of work, they also increase the pressure on humans to do more cognitively intensive work. Managers therefore need robust diversity training to scrutinize AI model outputs, to investigate whether the model measured what it was supposed to, and to confirm that the skills required in the posting accurately reflect the expectations and culture of the organization.

Relatedly, these AI solutions often incorporate “culture fit” as a criterion. Leaders need to be intentional about defining that culture precisely and promoting the defined culture in their hiring practices.

Conclusion

A machine does not know its output is biased. Humans interact with context: culture dictates norms and expectations, and shared social and cultural history informs bias. Humans, whether we like to admit it or not, know when our output is biased.

To effectively mitigate unintentional bias in AI-driven hiring, stakeholders ranging from HR professionals to developers and candidates must understand the technology’s limitations, ensure the accuracy of its job-related decision-making, and promote transparent, informed use, while maintaining robust DEI initiatives and awareness of candidates’ rights.