In a recent interview, EEOC Chair Charlotte Burrows emphasized that existing civil rights laws still apply to AI and called for the EEOC and the human resources sector to take a leading role in addressing the implications of this new technology. Burrows acknowledged that the introduction of AI has had a tremendous impact on the EEOC and its functions. As Chair, her role is to inform all parties about the laws and standards that will apply to the use of AI and to ensure compliance. She highlighted the scalability of AI: where a biased individual decision-maker affects one applicant at a time, a biased AI system can potentially impact thousands or even millions of applicants.
Burrows discussed the various groups the EEOC is now engaging with regarding AI, including venture capitalists, investors, computer programmers, entrepreneurs, companies seeking to deploy AI products, and employees who will be subject to this technology. She emphasized that nobody wants to invest in, build, or use a product that violates civil rights. Burrows also mentioned the importance of working with legislators who may not be familiar with AI technology and providing them with assistance and information.
When asked about the EEOC’s resources to deal with the emergence of AI and the potential for scaled-up discrimination, Burrows clarified that AI itself does not automatically discriminate; it depends on the design and use of the systems. She expressed confidence in the EEOC’s ability to investigate discrimination, regardless of whether it originates from AI or humans. However, she acknowledged that investigating the technology and algorithms would require broader discussions and potentially more authority, funding, or specialized experts, which would be determined by Congress.
Regarding employees’ ability to know if they were not hired due to AI discrimination, Burrows highlighted the lack of transparency in the hiring process. Without employers disclosing the use of AI tools and their impact on interviews, employees have no way of knowing if AI was a factor. She drew a parallel with the black box of human decision-making, stating that it has always been challenging to discern the factors influencing employment decisions. Burrows mentioned proposals that suggest the need for consent and transparency in AI interviews to protect employees’ rights.
When asked about the accountability of AI vendors for hiring decisions that violate the law, Burrows acknowledged the complexity of the issue. From the EEOC’s perspective, the focus is on holding employers liable for biased terminations, regardless of whether AI was involved. However, she acknowledged the potential debate around vendors’ liability in state or foreign law proposals and private litigation, as it has not been extensively tested in court.
Burrows concluded by emphasizing the importance of federal agencies providing guidance and assistance to employers willing to comply with AI regulations. She stated that every agency should be proactive in this regard, regardless of the specific context in which AI is being used.