MUMBAI, INDIA - FEBRUARY 24: Microsoft CEO Satya Nadella addresses the Microsoft Future Decoded Summit at St Regis, on February 24, 2020 in Mumbai, India. (Photo by Anshuman Poyrekar/Hindustan Times via Getty Images)

One of the chief issues with machine learning and artificial intelligence systems is that they, like the data scientists who create them, often have their own built-in biases. 

Whether that’s favoring one portion of the population over another when deciding who deserves a loan, or misidentifying people of different ethnicities via facial recognition algorithms, machine learning programs have generated problematic outcomes and shaken trust in the technology.

Microsoft (MSFT) is attempting to address the issue of bias in machine learning with its new Fairlearn toolkit. The kit, which the tech giant announced will be made available via its Azure Machine Learning platform in June, will let companies developing machine learning models in Azure test their systems for biases that could dramatically impact people’s lives.

The announcement came during Microsoft’s annual Build developers conference this week. Rather than its normal live event held in Seattle, the company hosted a virtual version of the show.

NEW YORK, NY - APRIL 30: The Microsoft store is seen on April 30, 2020 in New York City. The company said the effects of the coronavirus may not be fully understood until future periods but it has seen an increase in the Cloud business as more people work from home. (Photo by Eduardo MunozAlvarez/VIEWpress via Getty Images)

The Fairlearn toolkit debuted at Microsoft’s Ignite event in November and is being made generally available next month.

In explaining the importance of such tools, Microsoft used the example of EY, which tested Fairlearn on a machine learning model designed to automate loan decisions.

When the firm ran Fairlearn against the model, the toolkit revealed a significant gender bias in the loan algorithm: in testing, men were approved for loans at a rate 15.3 percentage points higher than women.

The algorithm was built using loan approval data from banks, including information such as transaction, payment, and credit history.

But Microsoft says that historical data can introduce biases against applicants from certain demographics. And if those biases bleed into loan approvals, they can have a dramatic impact on individuals’ lives.

According to Microsoft, when EY used the Fairlearn toolkit to train new machine learning models, it was able to cut the difference in loan approvals to 0.43 percentage points.

“Increasingly we’re seeing regulators looking closely at these models,” Eric Boyd, Microsoft corporate vice president of Azure AI, said in a statement. “Being able to document and demonstrate that they followed the leading practices and have worked very hard to improve the fairness of the datasets are essential to being able to continue to operate.”

With machine learning algorithms used across an increasingly wide range of applications, from facial recognition for law enforcement agencies to loan approvals at banks, ensuring bias isn’t part of the equation will only become more important moving forward.


Got a tip? Email Daniel Howley at [email protected] or [email protected], and follow him on Twitter at @DanielHowley.
