BrandPost: A Hidden Trap for CIOs: Data-set Bias in Machine Learning

In today’s enterprises, machine learning has come of age. No longer a niche technology, ML now underpins mission-critical applications and executive decision-making. Business leaders, for example, are using ML algorithms to determine which new markets to enter, understand how best to support their customers, and identify opportunities for strategic financial investments.

Given the growing importance of ML to the enterprise, CIOs need to be sure their ML algorithms produce accurate, trustworthy insights that are free from data-set bias. Unfortunately, this is often not the case.

A growing body of research has identified cases where ML algorithms discriminate on the basis of attributes such as race and gender. In a 2018 study, for instance, researchers found that a facial recognition tool used by law enforcement misidentified 35 percent of dark-skinned women as men, while the error rate for light-skinned men was only 0.8 percent.[1]
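In practice, an audit for this kind of disparity boils down to comparing error rates across demographic groups in a labeled evaluation set. Below is a minimal sketch in Python; the record layout and field names (group, label, pred) are hypothetical stand-ins for whatever annotations your own evaluation pipeline produces.

```python
# Minimal sketch of a per-group error-rate audit.
# Assumes each evaluation record carries a demographic group annotation,
# a ground-truth label, and the model's prediction. All field names here
# are illustrative, not from any particular library.
from collections import defaultdict

def error_rates_by_group(records):
    """Return the misclassification rate for each demographic group.

    `records` is an iterable of dicts such as:
        {"group": "dark-skinned women", "label": "female", "pred": "male"}
    """
    totals = defaultdict(int)
    errors = defaultdict(int)
    for r in records:
        totals[r["group"]] += 1
        if r["pred"] != r["label"]:
            errors[r["group"]] += 1
    return {g: errors[g] / totals[g] for g in totals}

# A disparity like the one reported in the 2018 study would surface as
# sharply different rates, e.g.:
# {"dark-skinned women": 0.35, "light-skinned men": 0.008}
```

Large gaps between groups in a report like this are the signal that the training data, not just the model, needs scrutiny.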
