New AI tool detects hiring discrimination against ethnic minorities and women
Research shows that various industries have big ethnic and gender pay gaps, but the extent to which discrimination affects these inequalities is tricky to assess.
A new AI tool developed at the London School of Economics has shed some light on how recruitment prejudices influence these outcomes.
The system uses supervised machine learning algorithms to analyze the search behavior of recruiters on employment websites.
The researchers applied the algorithms to the online recruitment platform of the Swiss public employment service.
The tool used data from 452,729 searches by 43,352 recruiters, 17.4 million profiles that appeared in the search lists, and 3.4 million profile views. The researchers then analyzed how much time the recruiters spent looking at each profile, and whether or not they decided to contact a jobseeker.
They found that recruiters were up to 19% less likely to follow up with job seekers from immigrant and ethnic minority backgrounds than with equally qualified candidates from the majority population.
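The core comparison behind that finding can be illustrated with a minimal sketch. All data and field names below are hypothetical, and the actual study is far more rigorous — it compares equally qualified candidates appearing in the same search list — but the basic quantity is a relative gap in contact rates:

```python
# Minimal sketch of comparing recruiter contact rates across groups.
# The data and field names are invented for illustration; the study
# itself controls for qualifications and compares candidates who
# appear in the same search list.

profile_views = [
    # (minority_background, was_contacted)
    (False, True), (False, True), (False, False), (False, True),
    (True, True), (True, False), (True, False), (True, True),
]

def contact_rate(views, minority):
    """Share of viewed profiles in a group that were contacted."""
    group = [contacted for m, contacted in views if m == minority]
    return sum(group) / len(group)

majority_rate = contact_rate(profile_views, minority=False)
minority_rate = contact_rate(profile_views, minority=True)

# Relative penalty: how much less likely minority candidates
# are to be contacted than majority candidates.
penalty = 1 - minority_rate / majority_rate
```

On this toy data the majority contact rate is 0.75 and the minority rate is 0.5, giving a relative penalty of about 33%; the study's reported figure of up to 19% comes from the same kind of comparison applied to millions of real profile views with statistical controls.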
The study also showed that women experienced a penalty of 7% in professions that are dominated by men, while the opposite pattern was detected for men in industries that are dominated by women.
“Our results demonstrate that recruiters treat otherwise identical job seekers who appear in the same search list differently, depending on their immigrant or minority ethnic background,” said study co-author Dr Dominik Hangartner. “Unsurprisingly, this has a real impact on who gets employed.”
Interestingly, the level of bias varied at different times of the day. Just before lunch or near the end of the workday, recruiters reviewed CVs more quickly, leading immigrant and minority ethnic groups to experience up to 20% higher levels of discrimination.
“These results suggest that unconscious biases, such as stereotypes about minorities, have a larger impact when recruiters are more tired and fall back on ‘intuitive decision-making’,” said Dr Hangartner.
The researchers believe the bias can be reduced by redesigning recruitment platforms to place details such as name and nationality lower down the CV. But their tool could also help, by continuously monitoring hiring discrimination and informing approaches to counter it.
You can read the study paper in the journal Nature.
C-list celebs slammed for promoting digital blackface app
Former cast members of Keeping Up with the Kardashians have been slated for promoting an app that uses AI to alter profile photos so they look like different races.
Brody Jenner and Scott Disick — as well as influencer Danielle Cohn — recently posted profile photos that were fed through Gradient’s “AI Face” feature.
Disick’s initial tweet showed his face modified to appear that he’s from Europe, Asia, and India. “Tried new filters in the Gradient app,” he wrote. “Which one is better?”
A raft of critics on social media accused them of dehumanizing Black people and stereotyping entire continents, while others wondered whether the app’s developers know that India is in Asia.
Disick and Jenner have now deleted their original tweets and replaced them with new posts hashtagged #ad but with comments turned off, while Cohn has kept her video on TikTok.
The duo aren’t the first members of the Kardashian-Jenner clan to promote apps accused of propagating racism.
In 2016, Kylie Jenner used Snapchat’s “Bob Marley” filter to make her look like a Black man with dreadlocks, while in April, Kim and Khloe Kardashian advertised Gradient’s “Ethnicity Estimate.” The feature’s been criticized for equating nationality with ethnicity and using flags of countries to represent entire continents or religions. In the case of the Caribbean, the region gets a Jolly Roger flag.
Gradient isn’t the first AI-powered app accused of promoting digital blackface. FaceApp previously had a similar feature, but wisely ditched the filter hours after its launch.
Hopefully, Gradient does the same and sticks to its more harmless features, such as finding celebrity lookalikes and turning photos into 15th-century paintings. But users should be advised that the app has sparked other concerns before.
Alphabet X has built a ‘plant buggy’ — a cute AI farmer
Alphabet’s X lab has unveiled its latest moonshot project: a crop-inspecting robot named the “plant buggy.”
The solar-powered prototype roams autonomously through fields, using GPS software to identify the location of plants. When it finds them, it uses cameras and machine perception tools to study their traits and any issues in the field.
The cart combines data collected from the field, such as plant height and fruit size, with environmental factors including weather forecasts and soil information. This is all analyzed by machine learning to evaluate how the crops are growing and interacting with their surroundings.
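Conceptually, that pipeline pairs per-plant measurements with shared environmental features before handing them to a learned model. The toy sketch below illustrates only that data-fusion step — every field name and value is invented for illustration, since Mineral’s actual models and data formats are not public:

```python
# Toy sketch of combining per-plant traits with environmental data,
# as described above. All names and numbers are hypothetical.

plants = [
    {"id": 1, "height_cm": 42.0, "fruit_size_cm": 3.1},
    {"id": 2, "height_cm": 35.5, "fruit_size_cm": 2.4},
]

# Environmental factors shared by every plant in the field.
environment = {"soil_moisture": 0.28, "forecast_rain_mm": 5.0}

def feature_vector(plant, env):
    """Merge one plant's traits with field-level environmental features."""
    return [plant["height_cm"], plant["fruit_size_cm"],
            env["soil_moisture"], env["forecast_rain_mm"]]

features = [feature_vector(p, environment) for p in plants]
# Each row could now be scored by a trained growth or yield model.
```

The design point is simply that per-plant and field-level data live at different granularities, so the field-level features are broadcast onto every plant row before modeling.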
The buggy comes in a variety of shapes and sizes so it can adapt to different fields. Alphabet says it can help farmers treat individual plants and predict how different crops will respond to their environments. They could use these insights to forecast the size and yield of a harvest or spot diseases before they ruin a whole crop.
The buggy has been tested in strawberry fields in California and soybean fields in Illinois. It’s already analyzed the life cycles of a range of crops, including melons, berries, lettuce, oilseeds, oats, and barley.
The device is part of an X project called Mineral, which was formed to develop “computational agriculture” that analyzes information about the plant world to help make farming more sustainable.
With farming workforces rapidly aging and climate change affecting crop yields in ways that can be hard to anticipate, such AI initiatives could play a key role in securing future food supplies.