Source: https://techcrunch.com/2023/07/05/nycs-anti-bias-law-for-hiring-algorithms-goes-into-effect/

After months of delays, New York City today began enforcing a law that requires employers using algorithms to recruit, hire or promote employees to submit those algorithms for an independent audit, and to make the results public. The first of its kind in the country, the legislation, New York City Local Law 144, also mandates that companies using these types of algorithms make disclosures to employees or job candidates.

At a minimum, the reports companies must make public have to list the algorithms they're using, as well as an "average score" that candidates of different races, ethnicities and genders are likely to receive from those algorithms, in the form of a score, classification or recommendation. They must also list the algorithms' "impact ratios," which the law defines as the average algorithm-given score of all people in a specific category (e.g., Black male candidates) divided by the average score of people in the highest-scoring category.

Companies found not to be in compliance face penalties of $375 for a first violation, $1,350 for a second violation and $1,500 for a third and any subsequent violations. Each day a company uses an algorithm in noncompliance with the law constitutes a separate violation, as does failure to provide sufficient disclosure.

Importantly, the scope of Local Law 144, which was approved by the City Council and will be enforced by the NYC Department of Consumer and Worker Protection, extends beyond NYC-based workers.
As long as a person's performing or applying for a job in the city, they're eligible for protections under the new law.

Many see it as overdue. Khyati Sundaram, the CEO of Applied, a recruitment tech vendor, pointed out that recruitment AI in particular has the potential to amplify existing biases, worsening both employment and pay gaps in the process.

"Employers should avoid the use of AI to independently score or rank candidates," Sundaram told TechCrunch via email. "We're not yet at a place where algorithms can or should be trusted to make these decisions on their own without mirroring and perpetuating biases that already exist in the world of work."

One needn't look far for evidence of bias seeping into hiring algorithms. Amazon scrapped a recruiting engine in 2018 after it was found to discriminate against women candidates. And a 2019 academic study showed AI-enabled anti-Black bias in recruiting.

Elsewhere, algorithms have been found to assign job candidates different scores based on criteria like whether they wear glasses or a headscarf; penalize applicants for having a Black-sounding name, mentioning a women's college, or submitting their résumé using certain file types; and disadvantage people who have a physical disability that limits their ability to interact with a keyboard.

The biases can run deep. An October 2022 study by the University of Cambridge argues that AI companies' claims to offer objective, meritocratic assessments are false, positing that anti-bias measures to remove gender and race are ineffective because the notion of the ideal employee has historically been shaped by gender and race.

But the risks aren't slowing adoption. Nearly one in four organizations already leverage AI to support their hiring processes, according to a February 2022 survey from the Society for Human Resource Management.
The percentage is even higher, 42%, among employers with 5,000 or more employees.

So what forms of algorithms are employers using, exactly? It varies. Some of the more common are text analyzers that sort résumés and cover letters based on keywords. But there are also chatbots that conduct online interviews to screen out applicants with certain traits, and interviewing software designed to predict a candidate's problem-solving skills, aptitudes and "cultural fit" from their speech patterns and facial expressions.

The range of hiring and recruitment algorithms is so vast, in fact, that some organizations don't believe Local Law 144 goes far enough.

The NYCLU, the New York branch of the American Civil Liberties Union, asserts that the law falls "far short" of providing protections for candidates and workers. Daniel Schwarz, senior privacy and technology strategist at the NYCLU, notes in a policy memo that Local Law 144 could, as written, be understood to cover only a subset of hiring algorithms, for example excluding tools that transcribe text from video and audio interviews. (Given that speech recognition tools have a well-known bias problem, that's obviously problematic.)

"The … proposed rules [must be strengthened to] ensure broad coverage of [hiring algorithms], expand the bias audit requirements and provide transparency and meaningful notice to affected people in order to ensure that [algorithms] don't operate to digitally circumvent New York City's laws against discrimination," Schwarz wrote.
"Candidates and workers should not need to worry about being screened by a discriminatory algorithm."

In parallel, the industry is embarking on preliminary efforts to self-regulate.

December 2021 saw the launch of the Data & Trust Alliance, which aims to develop an evaluation and scoring system for AI to detect and combat algorithmic bias, particularly bias in hiring. The group at one point counted CVS Health, Deloitte, General Motors, Humana, IBM, Mastercard, Meta, Nike and Walmart among its members, and garnered significant press coverage.

Unsurprisingly, Sundaram is in favor of this approach.

"Rather than hoping regulators catch up and curb the worst excesses of recruitment AI, it's down to employers to be vigilant and exercise caution when using AI in hiring processes," he said. "AI is evolving more rapidly than laws can be passed to regulate its use. Laws that are eventually passed, New York City's included, are likely to be hugely complicated for this reason. This will leave companies at risk of misinterpreting or overlooking various legal intricacies and, in turn, see marginalized candidates continue to be overlooked for roles."

Of course, many would argue that having companies develop a certification system for the AI products they're using or developing is problematic off the bat.

While imperfect in certain areas, according to critics, Local Law 144 does require that audits be conducted by independent entities that haven't been involved in using, developing or distributing the algorithm they're testing, and that don't have a relationship with the company submitting the algorithm for testing.

Will Local Law 144 effect change, ultimately? It's too early to tell. But certainly, the success or failure of its implementation will affect laws to come elsewhere.
As noted in a recent piece for NerdWallet, Washington, D.C., is considering a rule that would hold employers accountable for preventing bias in automated decision-making algorithms. Two bills that aim to regulate AI in hiring were introduced in California within the last few years. And in late December, a bill was introduced in New Jersey that would regulate the use of AI in hiring decisions to minimize discrimination.
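As a concrete illustration of the "impact ratio" metric described earlier (the average algorithm-given score for each demographic category divided by the average score of the highest-scoring category), here is a minimal sketch in Python. The function name and the scores are hypothetical; this is not a compliance tool, just the arithmetic the law describes.

```python
def impact_ratios(scores_by_category):
    """Compute Local Law 144-style impact ratios.

    scores_by_category: dict mapping a category name (e.g. a
    race/ethnicity-by-gender group) to a list of algorithm-given scores.
    Returns a dict mapping each category to its average score divided by
    the average score of the highest-scoring category.
    """
    averages = {cat: sum(s) / len(s) for cat, s in scores_by_category.items()}
    top = max(averages.values())  # average of the highest-scoring category
    return {cat: avg / top for cat, avg in averages.items()}

# Hypothetical audit data: per-candidate scores grouped by category.
scores = {
    "category_a": [80, 85, 90],  # average 85.0 (highest-scoring group)
    "category_b": [60, 70, 74],  # average 68.0
}
ratios = impact_ratios(scores)
print(ratios["category_a"])            # 1.0 by construction
print(round(ratios["category_b"], 2))  # 0.8
```

A ratio well below 1.0 for some category is the kind of disparity the mandated public reports are meant to surface; the law itself sets no numeric threshold, only the disclosure requirement.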
NYC's anti-bias law for hiring algorithms goes into effect
2023-07-05 21:52:34