Tag: artificial intelligence

Introduction

The emergence of Artificial Intelligence (AI) contract drafting software marks a pivotal moment in legal technology, where theoretical possibilities are transforming into market realities. As vendors compete to deliver increasingly sophisticated solutions, understanding the current state of this market becomes crucial for legal practitioners making strategic technology decisions. The landscape is particularly dynamic, with established legal tech companies and ambitious startups offering solutions that range from basic template automation to sophisticated language processing systems.

Yet beneath the marketing promises lies a more nuanced reality about what these systems can and cannot do. While some tools demonstrate remarkable capabilities in routine contract analysis and generation, others reveal the persistent challenges of encoding legal judgment into algorithmic systems. This tension between technological capability and practical limitation defines the current market moment, making it essential to examine not just who the key players are, but what their software delivers in practice.

This paper provides an analysis of the current market for AI contract drafting software, examining the capabilities and limitations of leading solutions. By focusing on specific vendors and their technologies, we aim to move beyond general discussions of AI’s potential to understand precisely where these tools succeed, where they fall short, and what this means for law firms and legal departments making technology investment decisions.

Historical Context and Technical Foundation

The rise of AI in legal practice reflects a fascinating evolution from theoretical possibility to practical reality. While early experiments with legal expert systems emerged in the 1960s at the University of Pittsburgh, marking the field's experimental beginnings, the real transformation began with the maturation of machine learning and natural language processing (NLP) in the 21st century. These technologies fundamentally changed how computers could interpret and engage with human language, creating new possibilities for automated contract analysis and drafting that early pioneers could only imagine.

The shift from rule-based expert systems to sophisticated language models represents more than just technological progress—it marks a fundamental change in how we conceptualize the relationship between computation and legal reasoning. Early systems relied on rigid, pre-programmed rules that could only superficially engage with legal texts. Modern AI tools, by contrast, can analyze patterns and context in ways that more closely mirror human understanding of legal language, though still with significant limitations.

This technological evolution has particular significance for contract drafting, where the ability to understand and generate nuanced legal language is essential. While early systems could only handle the most basic document assembly, today's AI tools can engage with contractual language at a more sophisticated level, analyzing patterns and suggesting context-appropriate clauses.

Contract drafting represents a complex interplay of legal reasoning and strategic foresight. At its core, the process demands not just accurate translation of parties’ intentions into binding terms, but also the anticipation of potential disputes and the careful calibration of risk allocation. Traditional drafting requires mastery of multiple elements: precise definition of terms, careful structuring of obligations and conditions, strategic design of termination provisions, and thorough implementation of boilerplate clauses that can prove crucial in dispute resolution.

AI systems use sophisticated pattern recognition to analyze existing contracts and learn standard legal language patterns, which helps ensure accuracy and precision in expressing each party's intentions. These systems can help confirm that contract terms are legally enforceable by cross-referencing legal databases, statutes, and regulations to verify compliance with relevant law. Furthermore, they excel at identifying common conditions attached to contractual obligations and suggesting appropriate risk mitigation clauses, such as force majeure provisions.

The technology's analytical capabilities extend to identifying potential areas of dispute based on historical contract analysis, enabling preventive drafting approaches. By leveraging large databases of legal documents, AI systems streamline the drafting process through automated insertion of standard provisions while maintaining consistency across documents. This automation of routine tasks allows lawyers to focus on strategic aspects of contract preparation and negotiation.
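To make the pattern-recognition claim concrete, the sketch below shows, in deliberately simplified form, how a drafting assistant might flag standard provisions missing from a draft. The clause list, regular expressions, and sample text are illustrative assumptions; commercial tools rely on trained language models rather than keyword matching.

```python
import re

# Hypothetical checklist of standard provisions; commercial drafting tools use
# trained models rather than simple keyword patterns like these.
STANDARD_CLAUSES = {
    "force_majeure": r"\bforce\s+majeure\b",
    "termination": r"\bterminat(?:e|ed|ion)\b",
    "indemnification": r"\bindemnif(?:y|ication)\b",
    "governing_law": r"\bgoverning\s+law\b",
}

def find_missing_clauses(contract_text: str) -> list[str]:
    """Return the standard provisions not found anywhere in the draft."""
    return [name for name, pattern in STANDARD_CLAUSES.items()
            if not re.search(pattern, contract_text, flags=re.IGNORECASE)]

draft = ("This Agreement may be terminated by either party on thirty days' notice. "
         "The governing law of this Agreement is the law of the State of New York.")
print(find_missing_clauses(draft))  # ['force_majeure', 'indemnification']
```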

Principal Players in AI Contract Drafting

  • Gavel

Gavel is a standout tool for document automation, designed to simplify the creation of legal documents through customizable templates and conditional logic. Its drag-and-drop interface is intuitive, making it accessible to non-technical users, and it excels at generating complex, customized documents quickly. Gavel’s ability to integrate with other systems and automate repetitive tasks, such as populating templates with data, makes it a powerful tool for legal teams looking to streamline their workflows.

However, Gavel’s focus on automation means it lacks advanced AI capabilities for contract analysis or review. It is primarily a tool for generating documents based on predefined templates, rather than analyzing or extracting insights from contracts. Additionally, the quality of its output depends heavily on the templates and data inputs, which may require significant upfront effort to configure.
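The snippet below is a minimal, generic sketch of the template-plus-conditional-logic approach that tools in this category automate. The intake fields and clause wording are invented for illustration and are not Gavel's actual product or API.

```python
# Generic sketch of template-driven document assembly with conditional logic;
# the field names and clause text below are assumptions, not vendor code.
def assemble_nda(answers: dict) -> str:
    paragraphs = [
        f"This Non-Disclosure Agreement is made between {answers['disclosing_party']} "
        f"and {answers['receiving_party']}.",
        f"The confidentiality obligations last {answers['term_years']} years.",
    ]
    # Conditional logic: optional clauses appear only when the intake answers call for them.
    if answers.get("mutual"):
        paragraphs.append("Each party may act as both a disclosing and a receiving party.")
    if answers.get("include_non_solicit"):
        paragraphs.append("Neither party shall solicit the other party's employees during the term.")
    return "\n\n".join(paragraphs)

print(assemble_nda({
    "disclosing_party": "Acme Corp.",
    "receiving_party": "Jane Doe",
    "term_years": 3,
    "mutual": True,
}))
```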

  • Ironclad

Ironclad is a leader in contract lifecycle management (CLM), offering a comprehensive platform that combines AI-powered drafting with workflow automation. Its integration with Microsoft Word and other productivity tools allows users to draft, negotiate, and approve contracts within familiar environments. Ironclad’s AI is particularly effective at generating standard contracts (e.g., NDAs, service agreements) and suggesting clauses based on predefined templates. The platform’s analytics dashboard also provides valuable insights into contract performance, helping organizations optimize their workflows.

While Ironclad excels at automating routine tasks, its AI may struggle with highly complex or bespoke agreements, requiring significant customization. Additionally, its pricing structure, often tailored for enterprise-level clients, may be prohibitive for smaller firms or solo practitioners.

  • Zuva

Zuva, spun out of Kira Systems, focuses on AI-powered document understanding and contract analysis. Its technology is designed to be embedded into other software applications via APIs, making it a versatile solution for enterprises and developers. Zuva’s AI excels at extracting key terms and clauses from contracts, enabling users to quickly identify risks and obligations. The platform also offers a robust clause library, which can be used to streamline drafting and ensure consistency across documents.

Zuva’s strength as an embeddable solution also presents a limitation: it lacks a standalone, user-friendly interface for non-technical users. Additionally, while Zuva’s AI is powerful, it may require customization to handle highly specialized legal domains or jurisdiction-specific nuances.

  • LawGeex

LawGeex specializes in AI-powered contract review, using natural language processing (NLP) to compare contracts against predefined policies and flag deviations. This makes it an invaluable tool for legal teams tasked with ensuring compliance and reducing risk. LawGeex’s AI is particularly effective at handling high-volume, routine contracts, such as NDAs and procurement agreements, where speed and accuracy are critical.

While LawGeex excels at contract review, its capabilities in contract drafting are more limited. The platform is primarily designed to identify risks and deviations rather than generate new contracts from scratch. Additionally, its effectiveness depends on the quality of the predefined policies and templates, which may require significant upfront effort to configure.
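For illustration, the following sketch captures the general shape of policy-based review described above: comparing contract language against a predefined playbook and flagging deviations. The playbook rules, patterns, and sample text are assumptions made for the example; LawGeex's actual NLP pipeline is far more sophisticated than pattern matching.

```python
import re

# Invented playbook: each entry pairs an expected position with a pattern that
# locates the relevant clause in the contract text.
POLICY = {
    "governing_law": ("New York", r"governed by the laws of ([A-Za-z ]+?)[.,]"),
    "liability_cap": ("12 months of fees", r"liability.*?exceed\s+(.+?)[.,]"),
}

def review(contract_text: str) -> list[str]:
    """Flag clauses that deviate from the predefined playbook."""
    flags = []
    for name, (expected, pattern) in POLICY.items():
        match = re.search(pattern, contract_text, flags=re.IGNORECASE)
        if match is None:
            flags.append(f"{name}: clause not found")
        elif expected.lower() not in match.group(1).lower():
            flags.append(f"{name}: found '{match.group(1).strip()}', expected '{expected}'")
    return flags

sample = ("This Agreement is governed by the laws of Delaware. "
          "In no event shall liability exceed the fees paid in the prior 3 months.")
print(review(sample))  # both clauses deviate from the playbook
```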

  • Kira Systems

Kira Systems, now part of Litera, is a pioneer in AI-powered contract analysis, particularly in the context of due diligence and large-scale contract review. Its machine learning models are highly effective at identifying and extracting key clauses and data points from contracts, such as termination clauses, indemnities, and payment terms. Kira’s ability to handle vast volumes of documents quickly and accurately has made it a favorite among law firms and corporate legal teams, especially in industries like M&A, real estate, and financial services.

  • Luminance

Luminance is a powerful AI platform designed for contract review and due diligence, with a particular focus on identifying anomalies and risks in large datasets. Its proprietary machine learning technology, based on pattern recognition, enables it to quickly analyze and categorize contracts without the need for extensive training. Luminance’s intuitive interface and real-time collaboration features make it a popular choice for legal teams working on complex transactions.

While Luminance excels at contract review and anomaly detection, its capabilities in contract drafting are more limited. The platform’s effectiveness may also depend on customization to handle jurisdiction-specific or industry-specific requirements.
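As a rough illustration of anomaly flagging, the sketch below compares incoming clauses against a small library of standard language and flags anything with low overlap. The clause library, similarity measure, and threshold are invented for the example; Luminance's proprietary technology uses learned representations rather than word overlap.

```python
def jaccard(a: str, b: str) -> float:
    """Word-overlap similarity between two clauses."""
    wa, wb = set(a.lower().split()), set(b.lower().split())
    return len(wa & wb) / len(wa | wb) if wa | wb else 0.0

# Invented baseline of "standard" clause language.
STANDARD_LIBRARY = [
    "either party may terminate this agreement with thirty days written notice",
    "this agreement is governed by the laws of the state of new york",
]

def flag_anomalies(clauses: list[str], threshold: float = 0.3) -> list[str]:
    """Flag clauses whose best match in the standard library falls below the threshold."""
    return [c for c in clauses
            if max(jaccard(c, s) for s in STANDARD_LIBRARY) < threshold]

incoming = [
    "either party may terminate this agreement with sixty days written notice",
    "the tenant irrevocably waives all rights to any remedy whatsoever",
]
print(flag_anomalies(incoming))  # flags only the unusual waiver clause
```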

AI in Practice: Use Cases Across Industries

  • Mergers and Acquisitions

Mergers and acquisitions (M&A) are among the most complex and high-stakes transactions in the legal world, requiring meticulous due diligence and the ability to process vast volumes of contracts under tight deadlines. In this context, Kira Systems has emerged as a leading solution. Kira’s machine learning models excel at extracting key clauses—such as termination provisions, indemnities, and payment terms—from large datasets, enabling legal teams to identify risks and inconsistencies quickly. For example, Clifford Chance, a global law firm, has leveraged Kira Systems to streamline clause extraction and comparison across multiple contracts, significantly reducing the time required for due diligence. Kira’s ability to handle the nuanced language of M&A agreements makes it an indispensable tool for law firms and corporate legal departments navigating these complex transactions.

  • Real Estate

The real estate sector is characterized by a high volume of contracts, including leases, purchase agreements, and mortgages. These documents often require careful review to ensure compliance with regulatory standards and to identify potential risks. Luminance has proven particularly effective in this domain. Its proprietary machine learning technology is designed to detect anomalies and categorize contracts quickly, making it ideal for real estate transactions. Luminance’s ability to analyze large datasets and flag non-standard clauses has been instrumental in helping real estate firms review leases and purchase agreements more efficiently. By automating the review process, Luminance allows legal teams to focus on strategic aspects of real estate deals, such as negotiation and risk mitigation.

  • Finance

The finance industry deals with a wide range of contracts, from loan agreements to derivatives, all of which must comply with strict regulatory standards. In this highly regulated environment, LawGeex has established itself as a trusted tool for contract review and compliance. LawGeex uses natural language processing (NLP) to compare contracts against predefined policies, flagging deviations and ensuring compliance with regulatory requirements. Its high accuracy rate—94% in spotting risks in non-disclosure agreements (NDAs), compared to 85% for human lawyers—makes it a valuable asset for financial institutions. By automating the review of high-volume contracts, LawGeex allows legal teams to focus on strategic risk management and regulatory compliance.

Conclusion: Algorithmic Precision Meets Strategic Expertise

The analysis of leading AI contract tools reveals a clear pattern: while each platform excels in specific domains—Kira in M&A due diligence, Luminance in anomaly detection, LawGeex in compliance—none yet offers a comprehensive solution for all contract-related tasks. This specialization reflects both the complexity of legal work and the current limitations of AI technology. The industry-specific applications demonstrate that AI tools are most effective when deployed strategically, focusing on tasks that benefit from pattern recognition and large-scale data processing, while leaving nuanced legal interpretation and strategic decision-making to human experts.

This bifurcation of responsibilities suggests an emerging model of legal practice where AI serves not as a replacement for lawyers but as a force multiplier for legal expertise. The success of platforms like Kira in M&A and LawGeex in financial compliance indicates that the future of legal technology lies not in attempting to replicate human judgment, but in augmenting it by handling routine analysis and flagging potential issues for expert review. As these technologies continue to evolve, the key challenge for legal practitioners will be developing workflows that effectively leverage AI’s analytical capabilities while preserving the critical role of human expertise in strategic legal thinking and complex decision-making.


Introduction 

Algorithmic bias is AI’s Achilles heel, revealing how machines are only as unbiased as the humans behind them. 

The most prevalent real-world stage for human versus machine bias is the job search process. What started out as newspaper ads and flyers at local coffee shops is now a completely digital process with click-through ads, interactive chatbots, resume data translation, and computer-screened candidate interviews.

Artificial intelligence encompasses a wide variety of tools, but in the HR context specifically, common AI tools include machine learning algorithms that conduct complex, layered statistical analysis modeled on human cognition (neural networks), computer vision systems that classify and label content in images or video, and large language models.

AI-enabled employment tools are powerful gatekeepers that determine the futures of natural persons. With over 70% of companies using this technology and investing in its promise of efficiency and neutrality, those promised abilities have recently come into question, as these technologies have the potential to discriminate against protected classes.

Anecdote 

On February 20, 2024, Plaintiff Derek Mobley initiated a class action lawsuit against WorkDay, Inc., an AI-enabled HR company, for engaging in a "pattern and practice" of discrimination based on race, age, and disability in violation of the Civil Rights Act of 1964, the Civil Rights Act of 1866, the Age Discrimination in Employment Act of 1967, and the ADA Amendments Act of 2008. According to the complaint, WorkDay's tools disproportionately disqualify African-Americans, individuals over the age of 40, and individuals with disabilities from securing gainful employment.

WorkDay provides subscription-based AI HR solutions to medium and large sized firms in a variety of industries. The system screens candidates based on human inputs and algorithms, and according to the complaint, WorkDay employs an automated system, in lieu of human judgment, to determine how high volumes of applicants should be processed on behalf of its business clients.

The plaintiff and members of the class have applied for numerous jobs through WorkDay's platforms and received repeated rejections. This experience has deterred them from applying to companies that use WorkDay's platform.

Legal History of AI Employment Discrimination 

Mobley v. WorkDay is the first class action lawsuit against an AI solutions company for employment discrimination, but it is not the first time an AI organization has been sued for employment discrimination.

In August 2023, the EEOC settled a first-of-its-kind employment discrimination lawsuit against a virtual tutoring company that had programmed its recruitment software to automatically reject older candidates. The company was required to pay $325,000 and, if it resumes hiring efforts in the US, must invite all applicants who were rejected based on age during the April-May 2020 period to re-apply.

Prior to this settlement, the EEOC had issued guidance to employers about their use of artificial intelligence tools, extending existing employee selection guidelines to AI-assisted selection. Under this guidance, employers, not third-party vendors, ultimately bear the risk of unintended adverse discrimination from such tools.

How Do HR AI Solutions Introduce Bias?

There are several steps in the job search process, and AI is integrated throughout. These steps include the initial search, narrowing candidates, and screening.

Initial search

The job search process starts with targeted ads reaching the right people. Hiring algorithms can steer job ads toward specific candidates and help assess their competencies using novel data. HR professionals found these tools helpful in drafting precise language and designing ads around a position's elements, content, and requirements. But the same platforms can inadvertently reinforce gender and racial stereotypes by delivering ads only to candidates who match certain job stereotypes.

For instance, ads delivered on Facebook for stereotypically male jobs are overwhelmingly targeted at male users even when the advertising was intended to reach a gender-neutral audience. At this step of the job search process, algorithms can prevent capable candidates from ever seeing the posting in the first place, creating a further barrier to employment.

Narrowing Candidates

After candidates have viewed and applied for the job through an ad or another source, the next step AI streamlines is narrowing the candidate pool. At this step, the system narrows candidates by identifying the resumes that best match the company's historical hiring data or the model's training data. Applicants found the resume-to-application-form data transfers helpful and accurate at this stage of the process, but they were concerned that the model could miss necessary information.

From the company's perspective, the client company's hiring practices are still incorporated into the hiring criteria of the licensed model. While the algorithm is helpful in parsing vast numbers of resumes and streamlining this laborious process for professionals, it can also replicate and amplify existing biases in the company's data.

For example, a manager's past decisions may introduce anchoring bias. If biases around gender, education, race, or age existed in past hiring and are reflected in the employer's current high-performing employees that the company uses as a benchmark, those biases can be carried into the outcomes at this stage of the employment search process.
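A small synthetic example can make this concrete. In the sketch below, applicants are scored by their similarity to a company's current "high performers"; because the invented benchmark group skews toward one school and one gender, an equally experienced applicant from outside that profile scores lower. All data and the scoring rule are assumptions for illustration only.

```python
# Invented benchmark of past "high performers"; note it skews toward one school and one gender.
HIGH_PERFORMERS = [
    {"school": "State U", "years_experience": 10, "gender": "M"},
    {"school": "State U", "years_experience": 8,  "gender": "M"},
    {"school": "Tech Institute", "years_experience": 9, "gender": "M"},
]

def similarity(applicant: dict, benchmark: list[dict]) -> float:
    """Score an applicant by overlap with the benchmark profile (a naive stand-in for a trained model)."""
    score = 0.0
    for peer in benchmark:
        if applicant["school"] == peer["school"]:
            score += 1.0
        score += 1.0 - min(abs(applicant["years_experience"] - peer["years_experience"]), 10) / 10
    return score / len(benchmark)

applicants = [
    {"name": "A", "school": "State U",      "years_experience": 9, "gender": "M"},
    {"name": "B", "school": "City College", "years_experience": 9, "gender": "F"},
]
for a in applicants:
    print(a["name"], round(similarity(a, HIGH_PERFORMERS), 2))
# Applicant B scores lower despite identical experience, because the benchmark
# reflects who was hired in the past, not who can do the job.
```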

Screening

Some organizations subscribe to AI tools that offer a computer vision-powered virtual interview process, analyzing candidates' expressions to determine whether they fit the "ideal candidate" profile, while other tools, such as behavior and skills games, are used to screen candidates prior to an in-person interview.

Computer vision models that analyze candidate expressions to assess candidacy have been found to perpetuate preexisting biases against people of color. For instance, one study evaluating such tools found that their taxonomies of social and behavioral traits create and sustain the same biased observations one human would make about another, because the model is trained on labels and taxonomies shaped by existing power hierarchies. In this sense, computer vision AI hiring tools are not neutral: they reflect the humans who train and rely on them.

Similarly, skills games are another popular tool used to screen candidates. However, there are some relationships AI cannot perceive in its analysis. For instance, candidates who are not adept with online games perform poorly on them, not because they lack the skills, but because they lack an understanding of the game's features. Algorithms, even when trained on vast data to assess candidate ability, still fall short when it comes to assessing general human capabilities, such as the relationship between online gaming experience and performance on employment skills tests.

Throughout each step of the employment search process, AI tools fall short of accurately capturing candidates' potential capabilities.

Discrimination Theories and AI

Given that the potential for bias is embedded throughout the employment search process, legal scholars speculate courts are more likely to scrutinize discriminatory outcomes under the disparate impact theory of discrimination.

As a recap, under Title VII there are two theories of discrimination: disparate treatment and disparate impact. Disparate treatment means a person is treated differently "because of" their status as a member of a protected class (e.g., race, sex). For example, if a manager were to intentionally use a biased algorithm to screen out candidates of a certain race, that behavior would be considered disparate treatment. Note, this scenario is for illustrative purposes only.

Disparate impact applies to facially neutral practices that have a discriminatory effect. The discriminatory-effect aspect of this theory can be complex because the plaintiff must identify the specific employer practice that has a disparate impact on a protected group. The employer can then defend the practice by showing it is "job related" and consistent with "business necessity." However, the plaintiff can still show that an alternative selection process existed and the business failed to adopt it. Based on this framework, when AI selection tools disproportionately screen out women and/or racial minorities from the applicant pool, disparate impact theory could apply.
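A brief, hedged illustration of how such an effect is typically quantified: compare selection rates across groups, as in the invented numbers below. The EEOC's "four-fifths rule" treats an impact ratio below 0.8 as a warning sign, though it is a rule of thumb rather than a legal threshold.

```python
# Invented applicant and screening numbers for illustration only.
applicants = {"group_a": 200, "group_b": 180}
advanced_by_ai_screen = {"group_a": 90, "group_b": 45}

# Selection rate per group, then the ratio of the lowest to the highest rate.
rates = {g: advanced_by_ai_screen[g] / applicants[g] for g in applicants}
impact_ratio = min(rates.values()) / max(rates.values())

print(rates)                   # {'group_a': 0.45, 'group_b': 0.25}
print(round(impact_ratio, 2))  # 0.56, well below the 0.8 benchmark
```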

Existing Methods to Mitigate Bias 

Algorithmic bias in AI tools has serious implications for members of protected classes. 

However, developers currently employ various tools to de-bias algorithms and improve their accuracy. One method is de-biased word embedding, in which neutral associations of a word are supplemented to expand the model's understanding of that word. For instance, a common stereotype is that men are doctors and women are nurses, or in algorithmic terms, "doctor – man + woman = nurse." With the de-biased word embedding process, the model is instead trained to understand "doctor – man + woman = doctor."
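The sketch below is a toy version of this idea, in the spirit of the published word-embedding debiasing literature: it removes the component of an occupation vector that lies along a "gender direction" so that the analogy no longer drifts toward a stereotype. The three-dimensional vectors are invented for illustration; real embeddings have hundreds of dimensions and the production techniques are more involved.

```python
def subtract(a, b): return [x - y for x, y in zip(a, b)]
def dot(a, b): return sum(x * y for x, y in zip(a, b))
def scale(a, s): return [x * s for x in a]

# Tiny invented vectors; real embeddings have hundreds of dimensions.
man    = [0.9, 0.1, 0.3]
woman  = [0.1, 0.9, 0.3]
doctor = [0.7, 0.3, 0.8]   # skewed toward "man" along the gender axis

gender_direction = subtract(man, woman)

def neutralize(vec, direction):
    """Remove the projection of vec onto direction, making it neutral along that axis."""
    coeff = dot(vec, direction) / dot(direction, direction)
    return subtract(vec, scale(direction, coeff))

doctor_debiased = neutralize(doctor, gender_direction)
print([round(x, 2) for x in doctor_debiased])  # [0.5, 0.5, 0.8]
# dot(doctor_debiased, gender_direction) is now ~0, so the analogy
# "doctor - man + woman" no longer drifts toward a gendered answer.
```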

Another practice, currently employed by OpenAI, is external red teaming, in which external stakeholders interact with the product, assess its weaknesses, potential for bias, or other adverse consequences, and provide feedback to OpenAI to improve the system and mitigate adverse events.

But there are limitations to these enhancements. To start, bias mitigation is not a one-size-fits-all issue: bias is specific to its geographic and cultural context. For instance, a model deployed in India may need to consider caste-based discrimination. Additionally, precision is required to capture every setting where bias is possible, and relying solely on the biases developers themselves can foresee is limiting. What is needed instead is some form of collaborative design in which relevant stakeholders contribute to identifying both what is biased and what is not.

Lastly, a debiased model is not a panacea. In a recent study, users interacted with a debiased model that used machine learning and deep learning to recommend college majors; regardless of the debiased model's output, users relied on their own biases to choose their majors, often motivated by the gender stereotypes associated with those majors.

Essentially, solutions from the developer side are not enough to resolve algorithmic bias issues. 

Efforts to Regulate AI Employment Discrimination

Federal law does not specifically govern artificial intelligence. However, existing laws, including Title VII, extend to applications that involve AI. At this point, regulation efforts are largely at the state and local government level.

Additionally, the EEOC's employer guidance is a start toward shifting the onus onto employers to investigate the capabilities and outcomes of the technologies incorporated into their hiring practices.

New York City is the first local government to pass a law regulating AI-powered employment decision tools. The statute requires organizations to inform candidates that AI is used in their hiring process and to notify potential candidates before using the screening device. If candidates do not consent to the AI-based process, the organization is required to use an alternative method.

Like New York's statute, Connecticut has passed a statute specific to state agencies' use of AI and machine learning hiring tools. Connecticut requires an annual review of each tool's performance and a status update on whether the tool has undergone some form of bias mitigation training, in an effort to prevent unlawful discrimination.

New Jersey, California, and Washington, D.C. currently have bills that are intended to prevent discrimination with AI hiring systems.

Employer Considerations

With the possibility of bias embedded throughout each step of the recruiting process, employers must do their part to gather information about the performance of the AI system they ultimately invest in. 

To start, recruiters and managers alike stressed the need for AI systems to provide some explanation of why an applicant was rejected or selected, so that the model's performance can be accurately assessed. This need speaks specifically to AI models' tendency to find proxies or shortcuts in the data that reach the intended outcome only on a superficial level. For instance, a model might favor candidates who graduated from universities in the Midwest simply because most of upper management attended such schools. Employers should therefore look for accuracy reports and ask vendors how they identify and correct this kind of issue in the hiring pool.
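The sketch below illustrates the kind of audit an employer might run or request from a vendor: checking whether screening outcomes track a suspected proxy feature more closely than a job-related signal. The feature names and data are invented for the example.

```python
# Invented audit data: which candidates the screening tool advanced.
candidates = [
    {"midwest_school": True,  "passed_skills_test": False, "advanced": True},
    {"midwest_school": True,  "passed_skills_test": True,  "advanced": True},
    {"midwest_school": False, "passed_skills_test": True,  "advanced": False},
    {"midwest_school": False, "passed_skills_test": True,  "advanced": False},
]

def advance_rate(rows, key):
    """Advancement rate among candidates for whom `key` is True."""
    subset = [r for r in rows if r[key]]
    return sum(r["advanced"] for r in subset) / len(subset)

print("advanced | midwest_school:", advance_rate(candidates, "midwest_school"))          # 1.0
print("advanced | passed_skills_test:", advance_rate(candidates, "passed_skills_test"))  # ~0.33
# When advancement tracks the proxy more closely than the job-related signal,
# the employer should ask the vendor how that feature entered the model.
```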

Similarly, models can focus on candidate traits that are unrelated to the job and are simply unexplained correlations. For example, one model in the UK linked people who "liked" curly fries on Facebook with higher levels of intelligence. In such cases, employers need to develop processes to analyze whether the model's output is "job related" or related to carrying out the functions of the business.

Lastly, employers must continue to invest in robust diversity training. Algorithmic bias reflects the biases of the humans behind the computer. While AI tools enhance productivity and alleviate the laborious parts of work, they also increase the pressure on humans to do more cognitively intensive work. In this sense, managers need robust diversity training to scrutinize outputs from AI models, to investigate whether the model measured what it was supposed to measure, and to assess whether the skills required in the posting accurately reflect the expectations and culture of the organization.

Along with robust managerial training, employers should recognize that these AI solutions often incorporate "culture fit" as a criterion. Leaders need to be intentional about precisely defining their culture and promoting that defined culture in their hiring practices.

Conclusion

A machine does not know its output is biased. Humans interact with context—culture dictates norms and expectations, shared social/cultural history informs bias. Humans, whether we like to admit it or not, know when our output is biased. 

To effectively mitigate unintentional bias in AI-driven hiring, stakeholders, ranging from HR professionals to developers and candidates, must understand the technology’s limitations, ensure its job-related decision-making accuracy, and promote transparent, informed use, while also maintaining robust DEI initiatives and awareness of candidates’ rights.