Artificial intelligence

Duke Professor Wins New Nobel Award in Artificial Intelligence

While many scholars in the developing field of machine learning were focused on improving algorithms, Rudin's contributions may allow humans to see inside those machine learning algorithms.

This contribution to the field has earned her the $1 million Squirrel AI Award for Artificial Intelligence for the Benefit of Humanity from the Association for the Advancement of Artificial Intelligence (AAAI).

Founded in 1979, AAAI is the prominent international scientific society serving AI researchers, practitioners and educators.

Her first applied project used machine learning to predict which manholes were at risk of exploding due to degrading and overloaded electrical circuitry.

Soon she discovered that no matter how many newly published academic bells and whistles she added, her models struggled to improve performance when confronted by the challenges of working with handwritten notes.

She found that when users could understand what information the predictive models were using, feedback on the whole process improved. She decided to work on that problem, which built the foundation for her lab.

Over the next decade, Rudin developed techniques for interpretable machine learning: predictive models that explain themselves in ways that humans can understand.

While the code for designing these formulas is complex and sophisticated, the formulas themselves can be small enough to be written in a few lines on an index card. Rudin has applied her brand of interpretable machine learning to numerous impactful projects.
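To make the "index card" idea concrete, here is a minimal sketch of an interpretable point-based scoring model in the spirit of this work. The feature names and point values are hypothetical, invented for illustration; they are not taken from any of Rudin's actual models.

```python
# Illustrative interpretable scoring model: each feature that is present
# contributes a small integer number of points, and the total score maps
# to a risk estimate via a logistic curve. The entire model is just this
# point table plus two tiny functions -- small enough for an index card.
# All feature names and weights below are hypothetical.

import math

POINTS = {
    "age_over_60": 2,
    "prior_incidents": 3,
    "abnormal_reading": 1,
}

def risk_score(record):
    """Sum the points for every feature flagged True in the record."""
    return sum(pts for feat, pts in POINTS.items() if record.get(feat))

def risk_probability(score, intercept=-4.0, scale=1.0):
    """Convert an integer score into a probability (assumed calibration)."""
    return 1.0 / (1.0 + math.exp(-(intercept + scale * score)))

record = {"age_over_60": True, "abnormal_reading": True}
score = risk_score(record)            # 2 + 1 = 3 points
print(score, round(risk_probability(score), 3))
```

Because the model is just a short list of point values, a domain expert can audit exactly which factors drive any prediction, which is the property the article describes.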

“Cynthia’s commitment to solving important real-world problems, desire to work closely with domain experts, and ability to distill and explain complex models is unparalleled,” said Daniel Wagner, deputy superintendent of the Cambridge Police Department.

Her research has resulted in significant contributions to the field of crime analysis and policing. She is also a vocal critic of potentially unjust 'black box' models in criminal justice and other high-stakes fields.

Black-box models are the opposite of Rudin's transparent code. The methods applied in these AI algorithms make it impossible for humans to understand what factors the models depend on, which data the models are focusing on, and how they are using it.

She is an intense advocate for transparent, interpretable models wherever accurate, just and bias-free results are essential.

As Rudin continues to help people and publish her interpretable designs, concerns about black-box code continue to mount, and her influence is finally beginning to turn the ship.

To have a 'Nobel Prize' for AI makes it finally clear, without a doubt, that AI work for the benefit of society is important.

Source: Medindia


Donovan Larsen

Donovan is a columnist and associate editor at the Dark News. He has written on everything from the politics to diversity issues in the workplace.
