It’s easy to decry cancel culture, but hard to turn it back. Thankfully, recent developments in my area of academic specialty—artificial intelligence (AI)—show that fighting cancel culture isn’t impossible. And as I explain below, the lessons that members of the AI community have learned in this regard can be generalized to other professional subcultures.
To understand the flash point at issue, it’s necessary to delve briefly into how AI functions. In many cases, AI algorithms have partly replaced both formal and informal human decision-making systems that pick who gets hired or promoted within organizations. Financial institutions use AI to determine who gets a loan. And some police agencies use AI to anticipate which neighborhoods will be afflicted by crime. As such, there has been great focus on ensuring that algorithms won’t replicate their coders’ implicit biases against, say, women or visible minorities. Citing evidence that, for instance, “commercial face recognition systems have much higher error rates for dark-skinned women while having minimal errors on light skinned men,” computer scientist Timnit Gebru, formerly the co-lead of Google’s ethical AI team, has argued that AI systems are contaminated by the biases of the mostly white male programmers who created them. In a paper authored with colleagues at Google and my university, she warned that large language-based AI systems in particular encourage a “hegemonic worldview” that serves to perpetuate hate speech and bigotry.
These issues have also been taken up by the Conference on Neural Information Processing Systems (NeurIPS), the leading conference in the AI community. As of this writing, the NeurIPS home page is dominated by a statement attesting to the organizers’ commitment to “principles of ethics, fairness, and inclusivity.” This year, NeurIPS began requiring paper authors to include a section describing the “broader impacts” on society that the underlying science might present, no matter how narrowly technical the content. There is also an ethics board to evaluate whether any paper runs afoul of such concerns. “Regardless of scientific quality or contribution,” the organizers have announced, “a submission may be rejected for ethical considerations, including methods, applications, or data that create or reinforce unfair bias” (or, less controversially, “that have a primary purpose of harm or injury”). [ … ]
The post Beating Back Cancel Culture: A Case Study from the Field of Artificial Intelligence appeared first on NewsCetera.