New York City passed legislation that protects individuals from unlawful bias by employers when automated employment decision tools are used. Pursuant to the law, employers must conduct audits of their AI tools to confirm that such tools are not biased. The new law was initially slated to take effect on January 1, 2023. Recently, the New York City Department of Consumer and Worker Protection announced that enforcement of the law has been deferred to April 15, 2023. Notably, New York City is not changing the January 1, 2023 effective date; it is only deferring enforcement of the law.
Automated employment decision tools that fall within the scope of the law are defined as computational processes that issue simplified outputs used to “substantially assist or replace discretionary decision making for making employment decisions . . . .”
Pursuant to the new law, employers must conduct a bias audit within one year before an AI tool is used. A bias audit is defined as an impartial evaluation by an independent auditor that assesses the relevant tool for disparate impact on the basis of race/ethnicity and sex. Employers must then publish on their public websites the results of the audit and the distribution date of the tool subject to the audit.
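The provisions summarized here do not themselves spell out how disparate impact is to be measured; that remains for the independent auditor. As a rough illustration only, the Python sketch below (using hypothetical candidate data and a hypothetical impact_ratios helper) shows one common way such an analysis is framed: computing selection rates by race/ethnicity or sex category and comparing each category’s rate against the highest-rate category. It is not a substitute for the audit the law requires.

```python
from collections import defaultdict

def impact_ratios(records):
    """Compute selection rates and impact ratios by category.

    records: iterable of (category, selected) pairs, where `category` is a
    race/ethnicity or sex grouping and `selected` is True if the tool
    advanced the candidate. The input format is hypothetical and for
    illustration only.
    """
    totals = defaultdict(int)
    advanced = defaultdict(int)
    for category, was_selected in records:
        totals[category] += 1
        if was_selected:
            advanced[category] += 1

    # Selection rate: share of candidates in each category advanced by the tool.
    rates = {c: advanced[c] / totals[c] for c in totals}

    # Impact ratio: each category's selection rate relative to the
    # highest-rate category (a common disparate-impact framing).
    best = max(rates.values())
    return {c: (rates[c], rates[c] / best) for c in rates}

if __name__ == "__main__":
    sample = [("A", True), ("A", True), ("A", False),
              ("B", True), ("B", False), ("B", False)]
    for category, (rate, ratio) in impact_ratios(sample).items():
        print(f"{category}: selection rate {rate:.2f}, impact ratio {ratio:.2f}")
```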
Employers will also be required to notify “employees and candidates that reside in the City” who apply for a position or promotion, at least 10 business days before the tool’s use, that such a tool will be used in their assessment or evaluation, and to allow a candidate to request an alternative selection process or accommodation.