
DOJ warns that misuse of algorithmic hiring tools could violate accessibility laws

AI tools for the hiring process have become a hot category, but the Department of Justice warns that careless use of them could lead to violations of U.S. laws protecting equal access for people with disabilities. If your company uses algorithmic sorting, facial tracking or other high-tech methods for rating and filtering applicants, you may want to take a closer look at what they’re doing.

The Equal Employment Opportunity Commission, which monitors and advises on industry trends and practices pertaining to equal employment opportunity, has issued guidance on how companies can safely use algorithm-based tools without risking the systematic exclusion of people with disabilities.

“New technologies should not become new ways to discriminate. If employers are aware of the ways AI and other technologies can discriminate against persons with disabilities, they can take steps to prevent it,” said EEOC Chair Charlotte A. Burrows in the press release announcing the guidance.

The general sense of the guidance is to think hard (and solicit the opinions of affected groups) about whether these filters, tests, metrics and so on measure qualities or quantities actually relevant to doing the job. The guidance offers a few examples:

  • An applicant with a visual impairment must complete a test or task with a visual component, such as a game, to qualify for an interview. Unless the job itself has a visual component, this unfairly cuts out blind applicants.
  • A chatbot screener asks questions that are poorly phrased or designed, like whether a person can stand for several hours straight, with “no” answers disqualifying the applicant. A person who uses a wheelchair could certainly do many jobs that others do standing, just from a seated position (a sketch of this failure mode follows the list).
  • An AI-based resume analysis service downranks an application due to a gap in employment, but that gap may exist for reasons related to a disability or a condition it is improper to penalize.
  • An automated voice-based screener requires applicants to respond to questions or test problems vocally. Naturally this excludes deaf and hard-of-hearing applicants, as well as anyone with a speech disorder. Unless the job involves a great deal of speaking, this is improper.
  • A facial recognition algorithm evaluates someone’s emotions during a video interview. But if the person is neurodivergent, or has facial paralysis from a stroke, their scores will be outliers.
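
The chatbot example is easy to picture in code. Below is a minimal, hypothetical Python sketch (none of it comes from the guidance itself, and every function and field name is invented) contrasting a screener that hard-codes a disqualifying “can you stand for hours” rule with one that asks about the essential job function, with or without a reasonable accommodation, and routes “no” answers to a human reviewer instead of auto-rejecting.

```python
# Hypothetical sketch of the chatbot-screener failure mode described
# in the EEOC guidance. Names and fields are invented for illustration.

def naive_screener(answers: dict) -> str:
    # Pattern the guidance warns against: a "no" to standing is an
    # automatic rejection, even though many such jobs can be done seated.
    if answers.get("can_stand_for_hours") == "no":
        return "rejected"
    return "advance"

def revised_screener(answers: dict) -> str:
    # Ask whether the applicant can perform the essential function,
    # with or without a reasonable accommodation.
    if answers.get("can_do_essential_function_with_accommodation") == "no":
        # Never auto-reject on an answer that may reflect a disability;
        # flag for human review and an accommodation conversation instead.
        return "human_review"
    return "advance"

print(naive_screener({"can_stand_for_hours": "no"}))  # rejected
print(revised_screener({"can_do_essential_function_with_accommodation": "yes"}))  # advance
```

The design change is small but decisive: no single answer that may reflect a disability ever triggers an automatic rejection.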

This is not to say that these tools or methods are inherently wrong or discriminatory in a way that violates the law. But companies that use them must recognize their limitations and offer reasonable accommodations in case an algorithm, machine learning model or other automated process is inappropriate for a given candidate.

Having accessible alternatives is part of it, but so is being transparent about the hiring process and declaring up front which skills will be tested and how. People with disabilities are the best judges of their own needs and of what accommodations, if any, to request.

If a company does not or cannot provide reasonable accommodations for these processes — and yes, that includes processes built and operated by third parties — it can be sued or otherwise held accountable for this failure.

As usual, the earlier this kind of thing is brought into consideration, the better; if your company hasn’t consulted with an accessibility expert on matters like recruiting, website and app access, and internal tools and policies, get to it.

Meanwhile, you can read the full guidance from the DOJ here, with a brief version aimed at workers who feel they may be discriminated against here, and for some reason there is another truncated version of the guidance here.



source https://techcrunch.com/2022/05/13/doj-warns-that-misuse-of-algorithmic-hiring-tools-could-violate-accessibility-laws/
