Last Updated on January 7, 2026

These days AI influences every stage of the hiring process: HR professionals use AI to create job descriptions, applicants write their resumes and cover letters with AI, AI tools screen applications, AI avatars conduct interviews, and hackers even create deep-faked job candidates to gain access to sensitive data.

New York City’s Local Law 144 (LL 144) is among the first of many emerging laws and bills designed to protect job applicants from AI bias, a widespread problem affecting automated employment decision tools (AEDTs) used in hiring and promotion.

What does this groundbreaking law cover and why? What entities does it apply to and what does compliance require? This article gives business leaders a comprehensive breakdown of all the key points.

Key takeaways

  • Bias in AEDT results is a prevalent problem drawing widespread concern from jobseekers, employers, regulators, and developers alike.
  • LL 144 makes AEDT users hiring for in-person or remote NYC jobs accountable for demonstrating that the AI’s recommendations are unbiased.
  • LL 144 mandates annual, independent race/gender bias audits on AEDTs, as well as public disclosure of audit results. It also requires advance notification to candidates that an AEDT is being used in the hiring process.
  • Any business using AI to screen applicants for jobs performed in or associated with offices in NYC is subject to LL 144 requirements.

What is NYC LL 144 on AI bias in HR?

NYC LL 144 seeks to uphold human rights in the workplace by making firms that use AI-powered hiring tools accountable for demonstrating that the AI’s recommendations are unbiased. This groundbreaking legislation, in force since July 2023, requires:

  • Mandatory independent audits of AEDTs for race/gender bias on a yearly basis.
  • Public posting of audit summaries and dates on employer websites.
  • A notice within job listings at least ten business days before an AEDT is used.
  • A choice for jobseekers to opt out of the automated hiring/screening process.

Along with similar statutes in the US and elsewhere, LL 144’s main purpose is to reduce hiring discrimination by ensuring AI-based HR tools don’t propagate race or gender prejudices. It also seeks to improve transparency and accountability in AI-driven employment decisions.

LL 144 is enforced by the New York City Department of Consumer and Worker Protection (DCWP). Penalties range from $500 up to $1,500 per violation, or more for ongoing non-compliance. The law does not specify actions to be taken if bias is found, however.

Is my company subject to LL 144 and/or other AI bias laws?

NYC Local Law 144 applies to employers, employment agencies, recruiters, or any other business, regardless of location, that:

  • Makes hiring/promotion decisions.
  • Hires people for jobs in NYC.
  • Uses an AEDT for selection or promotion.

The law covers any company, regardless of industry, size, or location, that uses AI, machine learning, or related statistical modeling tools to screen, rank, select, and/or promote candidates for jobs “in the city.” A job is considered “in the city” if any of the following is true:

  • The job will be performed in-person at an NYC location, at least part-time.
  • The work is fully remote, but the location associated with it is in NYC.
  • The location of the entity using the AEDT (e.g., a staffing agency) is in NYC.

Basically, any organization using AI to screen applicants for jobs in or associated with offices in NYC is subject to LL 144 requirements. For example, a SaaS provider with 40 employees based in California and using an AEDT for hiring would be subject to LL 144 if it had an NYC office.

With up to 50% of organizations already using AI for hiring, regulatory scrutiny around discrimination risk is growing rapidly. Employers should anticipate that other laws with similar intent to LL 144 may apply or soon will apply based on work location or other specifics.

US states that have enacted, soon will enact, or are currently considering AI bias legislation include Colorado, Utah, Illinois, New Jersey, Massachusetts, Maine, Pennsylvania, Connecticut, New York, New Mexico, Texas, and Virginia. US firms doing business in the EU are also potentially subject to the comprehensive EU AI Act, which classifies AI in HR as “high risk” and places significant restrictions on training data.

What is an AI bias impact assessment?

A bias impact assessment or bias audit is an objective, independent test to determine if an AEDT could negatively impact a jobseeker’s hiring or promotion based on race, ethnicity, and/or gender.

LL 144 defines a bias audit as:

“An impartial evaluation by an independent auditor. Such bias audit shall include but not be limited to the testing of an automated employment decision tool to assess the tool’s disparate impact on persons of any component 1 category required to be reported by employers pursuant to subsection (c) of section 2000e-8 of title 42 of the United States code as specified in part 1602.7 of title 29 of the code of federal regulations.”

Under LL 144, an audit must be performed annually, starting before the tool is first used. The organization using the AEDT is responsible for ensuring the audit happens but cannot conduct it itself. Likewise, the tool vendor cannot perform the audit, and its assertions about the tool’s performance cannot take the place of an audit.

An LL 144-compliant audit analyzes how the AEDT selects or scores applicants from different race/ethnicity, gender, and intersectional groups (e.g., Black men, Asian women).

Importantly, AEDT users must publish a summary of the bias audit results. This must include the date AEDT use started, the most recent audit date, the source of the data used for the audit, the number of applicants subject to the AEDT, and several audit metrics (e.g., selection rate, impact ratio).  The summary must remain posted for at least six months following the most recent AEDT use.
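To make those metrics concrete, here is a minimal sketch in Python of how a selection rate and impact ratio are typically computed per category, using entirely hypothetical applicant counts: a category’s selection rate is the share of its applicants the tool selects, and its impact ratio is that rate divided by the highest category’s rate.

```python
# Minimal sketch of LL 144-style audit metrics (hypothetical data, not from any real audit).
# Selection rate = candidates selected / candidates screened in the category.
# Impact ratio  = a category's selection rate / the highest category's selection rate.

applicants = {  # category: (candidates screened by the AEDT, candidates selected)
    "Hispanic or Latino": (200, 40),
    "White": (400, 120),
    "Black or African American": (250, 45),
    "Asian": (150, 42),
}

selection_rates = {
    group: selected / screened for group, (screened, selected) in applicants.items()
}
highest_rate = max(selection_rates.values())

for group, rate in selection_rates.items():
    print(f"{group}: selection rate {rate:.2f}, impact ratio {rate / highest_rate:.2f}")
```

Under the EEOC’s four-fifths guideline, an impact ratio below 0.8 is commonly treated as a red flag for disparate impact; LL 144 itself requires publishing the ratios rather than setting a pass/fail threshold.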

How does AI bias impact AEDT results?

AI bias is common and problematic in AEDTs because the AI’s training often relies on historical data that reflects prejudicial cultural views, especially around race and gender. AI models that learn these views not only propagate but also tend to amplify them, to the obvious detriment of hiring practices and outcomes, never mind the legal risks. AI can also exhibit algorithmic bias, where the model is coded to look for specific terms that certain demographics may be more or less likely to use.
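As a toy illustration (the keyword list and resumes below are hypothetical, not drawn from any real AEDT), a screener that rewards specific terms can rank two comparably qualified candidates very differently based purely on vocabulary:

```python
# Toy illustration of algorithmic bias: a screener that rewards specific terms
# systematically favors resumes written in one vocabulary or style.

PREFERRED_TERMS = {"executed", "captured", "dominated"}  # hypothetical keyword list

def score_resume(text: str) -> int:
    """Count how many preferred terms appear; higher scores rank higher."""
    return len(PREFERRED_TERMS & set(text.lower().split()))

# Two comparably qualified candidates described with different vocabulary:
resume_a = "executed product launch and captured 30% market share"
resume_b = "led product launch and grew market share by 30%"

print(score_resume(resume_a))  # 2 -> ranked higher
print(score_resume(resume_b))  # 0 -> screened out
```

If one writing style happens to be more common among a particular demographic, scoring like this quietly converts a vocabulary difference into a demographic one.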

Some widely publicized examples of AI bias in AEDTs include:

  • The Amazon in-house hiring tool that was withdrawn after it learned to downgrade resumes containing “female-associated” words, such as “women’s,” leading to gender discrimination. The AI was trained on resumes dominated by male applicants.
  • Various research showing that many large language models (LLMs), including Gemini and ChatGPT, exhibit bias against various races, genders, and people with disabilities. This pervasive AI bias similarly impacts hiring using AEDTs.
  • Studies corroborating that the overwhelming majority (perhaps 90%) of humans reviewing AI-generated HR outcomes are “perfectly willing to accept” biased AI recommendations unless the bias is “obvious.”

Another reason bias is problematic in AEDTs is their prevailing lack of transparency. Many AEDTs run a “black box” algorithm that is not comprehensively documented for users, making it hard for humans to recognize patterns or prove results are biased.

All this evidence of bias, and the difficulty of detecting it, makes independent, qualified AI bias audits like those mandated by NYC LL 144 an important mitigation strategy.

What steps can we take to meet AI bias audit requirements?

Employers using AEDTs should consider alignment with LL 144 and similar laws as a competitive necessity to promote trust in hiring practices and reduce legal and reputational risk.

Steps you can take today to move toward compliance with LL 144, the EU AI Act, and similar legislation coming your way include:

  • Seek legal advice on what legislation currently applies or is likely to apply to your company.
  • Identify a qualified independent entity to conduct your AI bias audits.
  • Work with experts to prepare the data for the audit (e.g., gathering historical or test data) and share it with the auditor beforehand.
  • Complete the audit and publish a compliant summary of the results.
  • Notify applicants and other stakeholders about the audit based on mandated or best-practice notification procedures.

What’s next?

For organizations seeking support on the strategic use of AI, CBIZ Pivot Point Security offers AI governance and advisory services tailored to your needs. If your business is adopting, integrating, or outsourcing the use of AI systems that interact with or inform your employees, your clients, their data, and/or society at large, we can help you identify areas of AI compliance risk and other critical business considerations.

Contact us to speak with an AI governance expert about your unique situation and goals.
