Lack of diversity in data science perpetuates AI bias

Data privacy measures such as the General Data Protection Regulation and the California Consumer Privacy Act are expanding the definition and protection of private sensitive data. Anonymization efforts, though valiant, can only go so far.

“You can only manage what you measure, right?” said Hannah Sperling, business process intelligence, academic and research alliances at SAP SE. “But if everybody is afraid to touch sensitive data, we might not get to where we want to be. I’ve been getting into data anonymization procedures, because if we could render more workforce data usable, especially when it comes to increasing diversity in STEM or in technology jobs, we should really be letting the data speak.”

Sperling spoke with Lisa Martin, host of theCUBE, SiliconANGLE Media’s livestreaming studio, during the Women in Data Science (WiDS) event. They discussed data anonymization and the inherent bias of human-generated analysis.

Complete objectivity is logically impossible

Taking the human factor out of analysis is not only idealistic, it’s the wrong path, according to Sperling. Since analysis is inherently a backward-looking effort shaped by the people who conduct it, she believes that recognizing and adjusting for human biases is the model to follow.

“I’m sometimes amazed at how many people still seem to think that data can be unbiased,” Sperling said. “The sooner that we realize that we need to take into account certain biases, the closer we’re going to get to something that represents reality better and might help us to change reality for the better as well.”

Lack of diversity in data science has perpetuated bias in artificial intelligence decisions, from soap dispensers that only recognize light-colored skin to decisions on hiring, financial applications and parole approvals.

“There is a big trend around explainability, interpretability in AI worldwide because awareness around those topics is increasing,” Sperling explained. “That will show you the blind spots that you may have, no matter how much you think about the context. We need to get better at including everybody; otherwise you’re always going to have a certain selection bias.”


Source: https://siliconangle.com/2022/03/09/lack-diversity-data-science-perpetuates-ai-bias-wids2022/

Donovan Larsen

Donovan is a columnist and associate editor at the Dark News. He has written on everything from politics to diversity issues in the workplace.
