By Laura M. Raisty

Artificial intelligence (AI) is changing the way companies across the country – and here in Massachusetts – hire new employees. Many employers are turning to technology like automated resume screeners, chatbots, and video interview tools to help sort through job applications and choose the best candidates more quickly. These tools promise speed and convenience, and in some cases, they can reduce human bias. But they also come with significant risks – especially when it comes to fairness and discrimination.

At the federal level, the Trump administration has taken swift steps to reverse several Biden-era initiatives related to AI in federal hiring. In response to these policy shifts, federal agencies including the Equal Employment Opportunity Commission (“EEOC”) and the Department of Labor have withdrawn their prior guidance on AI and workplace discrimination, including the EEOC’s 2023 guidance on responsible AI in hiring. Even though this federal guidance has been revoked or removed, employers must still comply with existing federal, state, and local anti-discrimination laws and ensure their use of AI does not violate them.

In Massachusetts, the Attorney General’s Office issued an advisory emphasizing that users of AI in the state must adhere to anti-discrimination laws. Specifically, the advisory states that Massachusetts anti-discrimination law prohibits employers from using AI that discriminates on the basis of a legally protected characteristic, including “algorithmic decision-making that relies on or uses discriminatory inputs and that produces discriminatory results, such as those that have the purpose or effect of disfavoring or disadvantaging a person or group of people based on a legally protected characteristic.”

This focus on AI and hiring is not just a local issue. In recent years, legislative action related to the use of AI in employment has substantially increased. Across the country, states like Colorado, Maryland, and Illinois have already passed laws to address the risks of AI in employment. New York City has also passed a law requiring a bias audit for any automated tool used by employers in hiring or other employment-related decisions. Like several other states, Massachusetts is now considering similar laws, and if they pass, employers here would have to follow new rules for using AI in recruitment and promotion decisions.

So, what does this mean for Massachusetts employers today? You don’t have to wait for new laws to take effect to start making changes. If your company is already using – or thinking about using – AI in any part of the hiring process, now is a good time to review how those tools work and whether they could be causing unintended results. Here are a few steps you can take:

  • Look for signs of bias: Review your hiring data. Are certain groups of applicants rejected more often than others? If so, the AI might be reinforcing bias rather than reducing it.
  • Be upfront with applicants: Let candidates know if AI is being used in their application process and explain how it works. Transparency builds trust – and it might soon be legally required.
  • Keep humans in the loop: Even if AI is used to help with decision-making, make sure a real person reviews important choices. Don’t let software make final hiring decisions without human oversight.
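As a rough illustration of the first step above, one common screen for adverse impact is the EEOC’s traditional “four-fifths rule,” which compares selection rates across applicant groups. Below is a minimal sketch in Python, assuming a simplified dataset of (group, advanced?) records; the group labels, numbers, and function names are hypothetical, and this is a back-of-the-envelope check, not a substitute for a formal bias audit or legal advice.

```python
from collections import Counter

def selection_rates(records):
    """Compute the selection (pass-through) rate per group
    from (group, was_selected) records."""
    applied = Counter(group for group, _ in records)
    selected = Counter(group for group, passed in records if passed)
    return {g: selected[g] / applied[g] for g in applied}

def four_fifths_flags(rates):
    """Flag groups whose selection rate is below 80% of the
    highest group's rate -- the traditional four-fifths screen."""
    top = max(rates.values())
    return {g: (r / top) < 0.8 for g, r in rates.items()}

# Hypothetical outcomes from an AI resume screen:
# Group A: 60 of 100 advanced; Group B: 35 of 100 advanced.
records = ([("A", True)] * 60 + [("A", False)] * 40
           + [("B", True)] * 35 + [("B", False)] * 65)

rates = selection_rates(records)   # A: 0.60, B: 0.35
flags = four_fifths_flags(rates)   # B flagged: 0.35 / 0.60 is below 0.8
```

A flagged group does not prove discrimination on its own, but it is a signal that the tool’s outputs deserve closer review before the employer keeps relying on them.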

AI can be a helpful tool, but it isn’t perfect. It can sometimes reflect and repeat the same unfair patterns that already exist in society. Employers must stay alert and informed to make sure these technologies support nondiscriminatory hiring – not undermine it. Any employer with questions about using AI in connection with recruitment, hiring, and promotions is encouraged to contact any of Kenney & Sams’s employment attorneys.

******

This alert is for informational purposes only and may be considered advertising. It does not constitute the rendering of legal, tax, or professional advice or services. You should seek specific detailed legal advice prior to taking any definitive actions.