The Critical Link Between Successful AI and Human Diversity

Is the link between AI and human diversity as critical in financial services as in other disciplines? Artificial intelligence (AI) and machine learning in financial services range from chatbot assistants to fraud detection and task automation. Banks using AI can streamline tedious processes and vastly improve the customer experience by offering 24/7 access to accounts and financial advisory services.

AI adoption is expected to accelerate as the technology advances and users and regulators grow more accepting, but perhaps the biggest hurdle to successful adoption is eliminating bias in the data.

Two promises of AI are unimaginable speed and the removal of human bias from decision-making. While it is possible to design algorithms that achieve both, bias can still creep in through the data that humans select, refine, and input.

[Image: AI and human diversity can work together to remove human bias in decisions]

It’s the human factor that worries me…and it’s not just that errors occur when humans are involved! We know that diversity in data is key to successful AI, and by extension, human diversity is just as crucial among the people who supervise which data is selected and how it is refined and input.

AI and Human Diversity: A Critical Link in Supervised Machine Learning

With supervised machine learning, algorithms analyze labeled data and learn how to map input data to an output label, often using a neural network. Neural networks operate loosely the way we understand our brains to work, with input flowing through many layers of “neurons” and eventually leading to an output.
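As a minimal sketch of that input-to-label mapping (using scikit-learn with synthetic data; the dataset and layer sizes here are invented purely for illustration):

```python
# Minimal sketch of supervised learning: a small neural network learns
# to map labeled inputs to output labels. All data here is synthetic.
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

# Synthetic labeled dataset: 1,000 examples, 20 input features, 2 classes
X, y = make_classification(n_samples=1000, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Two hidden layers of "neurons"; input flows through them to an output label
model = MLPClassifier(hidden_layer_sizes=(32, 16), max_iter=500, random_state=0)
model.fit(X_train, y_train)

print("Accuracy on held-out data:", model.score(X_test, y_test))
```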

The accuracy of a neural network is highly dependent on its training data, both its volume and its diversity. In the case of object identification, has the network seen the object from multiple angles and lighting conditions? Has it seen the object in a variety of colors and against many different backgrounds? Has it really seen all varieties of that object?
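Practitioners often approximate that variety with data augmentation. Here is a sketch using torchvision; the transforms are standard, but the image file is a hypothetical placeholder:

```python
# Sketch: simulating "many angles, lighting conditions, and colors" with
# data augmentation. The input image path is a hypothetical placeholder.
from PIL import Image
from torchvision import transforms

augment = transforms.Compose([
    transforms.RandomRotation(degrees=30),            # varied angles
    transforms.ColorJitter(brightness=0.5,
                           saturation=0.5, hue=0.1),  # varied lighting and color
    transforms.RandomHorizontalFlip(),
    transforms.RandomResizedCrop(size=224),           # varied framing
])

image = Image.open("example_object.jpg")        # hypothetical input image
variants = [augment(image) for _ in range(8)]   # eight synthetic views
```

Augmentation stretches a dataset, but it cannot conjure genuinely unseen varieties of an object; collecting diverse source data still matters.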

In the case of making forecasts, a machine learning algorithm predicts the future based on the historical data on which it has been trained. But when that training data comes from a world filled with bias and inaccuracies, the algorithm will likely be “taught” to perpetuate those very features.

A machine learning algorithm is only as intelligent as its training data. If the training data is biased, then the algorithm is biased. And unfortunately, training data is biased more often than not. If we want a neural network to truly understand the world, we need to expose it to the huge diversity of our world, which is easier if the link between AI and human diversity becomes a higher priority when training data is assembled.
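A toy demonstration of how that happens (the data is synthetic and deliberately exaggerated; a real pipeline is far messier, but the mechanism is the same):

```python
# Sketch: a model trained on biased historical decisions reproduces the
# bias. All data is synthetic and exaggerated for illustration.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 5000
group = rng.integers(0, 2, n)    # 0 = group A, 1 = group B
skill = rng.normal(0, 1, n)      # true ability, independent of group

# Historical approvals favored group A regardless of skill
approved = (skill + 1.5 * (group == 0) + rng.normal(0, 1, n)) > 1.0

model = LogisticRegression().fit(np.column_stack([group, skill]), approved)

# Two applicants with identical skill, different groups
same_skill = [[0, 0.0], [1, 0.0]]
print(model.predict_proba(same_skill)[:, 1])  # group A scores far higher
```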

Neural Machine Translation (NMT)

Gender bias in AI language translation is a simple example of how easily bias is injected into training data. Language training data often exhibits gender bias because, in the volumes of historical text, fewer sentences refer to women than to men. In Neural Machine Translation (NMT), gender bias has been shown to reduce translation quality, particularly when the target language has grammatical gender.

When Google Translate started using NMT, people noticed a bias when translating from non-gendered to gendered languages. For example, here’s how it translated four gender-neutral Turkish phrases:

[Image: Google Translate rendering gender-neutral Turkish phrases as “She is a cook”, “He is an engineer”, “He is a doctor”, and “She is a nurse”]

In this case, the translation algorithm was spitting out the pronoun most frequently associated with each profession throughout history, and as a result, it “learned” a sexist view of the world. To someone living in 1940, the phrases “She is a cook”, “He is an engineer”, “He is a doctor”, and “She is a nurse” might seem natural and consistent with their experience, but not to someone living today.
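A toy sketch of that frequency effect (the corpus is invented, and real NMT systems learn these associations statistically rather than by literal counting, but the failure mode is the same):

```python
# Sketch: why the most frequent pronoun in historical text becomes "the"
# translation for a gender-neutral source pronoun. Corpus is invented.
from collections import Counter

corpus = [
    "he is an engineer", "he is an engineer", "she is an engineer",
    "she is a nurse", "she is a nurse", "he is a nurse",
]

counts = Counter()
for sentence in corpus:
    pronoun, *rest = sentence.split()
    counts[(rest[-1], pronoun)] += 1

# For a gender-neutral source pronoun, pick the historically most common target
for profession in ("engineer", "nurse"):
    best = max(("he", "she"), key=lambda p: counts[(profession, p)])
    print(f"gender-neutral source -> '{best} is a(n) {profession}'")
```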

Applicant Tracking Systems (ATS)

Not only is training data often corrupted by historical bias, but every one of us also carries implicit bias (no matter how hard we strive to remove it). This can lead HR teams to unintentionally overlook candidates of particular races, genders, physical abilities, and so on.

For example, let’s say Candidate Jones and Candidate Smith are both applying for the same position (even this example exhibits bias, as Smith and Jones are viewed as “common names”, lol). Candidate Jones went to MIT while Candidate Smith went to a technical/trade college. Without even realizing it, a recruiter might favor Candidate Jones because of their prestigious education, even though Candidate Smith has more relevant hands-on experience and skills.

The supposed promise of an applicant tracking system is that by replacing human bias with unbiased artificial intelligence, an organization can counteract such displays of implicit bias, treat every candidate equitably, and hire more diverse talent.

However, Amazon’s experiment with ATS technology in 2014 showed that while an algorithm may be unbiased, the training data too often perpetuates the very bias it was hoped to avoid. Amazon discovered that the software preferred male candidates over female candidates, penalizing résumés that contained the word “women’s” (as in “women’s chess club”) and downgrading graduates of all-women colleges.

In trying to understand how the software became sexist, Amazon discovered that the training data consisted of a decade of résumés that employees had previously rated as part of the hiring process.

…And, in 2014, Amazon’s employees were largely male. The chart below shows the gender breakdown across job roles at Amazon:

[Image: bar chart of the gender breakdown in job roles at Amazon]

Even if the male employees weren’t intentionally sexist, they were rating the résumés based on their own personal experience. Plus, many résumés come from referrals, and men have generally worked with other men. The result is a training data set with relatively little representation of female résumés, and biased scoring of the ones it does have.
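A toy version of that failure (the résumés and ratings are invented; Amazon’s actual system was far more sophisticated, but the trap it fell into looks like this):

```python
# Sketch: a screening model trained on historically biased ratings learns
# to penalize a token like "women's". Résumés and labels are invented.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression

resumes = [
    "captain chess club software engineer",
    "software engineer hackathon winner",
    "captain women's chess club software engineer",
    "women's coding society software engineer",
]
historical_rating = [1, 1, 0, 0]  # 1 = advanced by past (mostly male) reviewers

vec = CountVectorizer()
model = LogisticRegression().fit(vec.fit_transform(resumes), historical_rating)

# Inspect what the model learned about each token
for token, coef in zip(vec.get_feature_names_out(), model.coef_[0]):
    print(f"{token:12s} {coef:+.2f}")  # "women" gets a negative weight
```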

Additionally, text parsing algorithms often rely on a library of word vectors that rank the similarity of words to one another based on how often they co-occur in digitized texts. A 2018 study of one of the most popular word-vector libraries revealed that terms related to science and math were more closely associated with males, while terms related to the arts were more closely associated with females, a bias baked into the libraries themselves.

[Image: scatter plot of the association of subject-discipline terms with gender]

The scatter plot shows the association of subject-discipline terms with gender: the arts lean female and the sciences lean male.
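You can probe this yourself. Below is a simplified sketch of such an association test using gensim’s pretrained GloVe vectors (it requires a one-time download, the word lists are my own, and the exact scores will vary by library and version):

```python
# Sketch: measuring gendered associations in a pretrained word-vector
# library. Requires gensim and a one-time download of GloVe vectors.
import gensim.downloader as api

vectors = api.load("glove-wiki-gigaword-50")  # a popular word-vector set

def gender_lean(word):
    # Positive = closer to "he", negative = closer to "she"
    return vectors.similarity(word, "he") - vectors.similarity(word, "she")

for word in ["science", "math", "physics", "arts", "poetry", "dance"]:
    print(f"{word:10s} {gender_lean(word):+.3f}")
```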

Amazon’s attempt at automatically screening applicants failed, but some companies are still working to build automated hiring solutions that are free from human bias.

Diversity, Equity, and Inclusion (DEI)

One great advancement to come out of 2020 was a heightened focus on how organizations (and their followers) view diversity, equity, and inclusion (DEI).

Yet, for too many organizations, DEI has become little more than a framework that brings about no actual, substantive change. Frameworks are a starting point, not the end product, in any effort for change.

Additionally, AI and human diversity need to be linked as part of these frameworks.

There are many people in the trenches, with more experience than I, who are heading up successful efforts in organizations. My intent isn’t to elaborate on their efforts, but I have listed below 15 well-known benefits of DEI in any organization. I’ve also added a 16th benefit that I think has been highly overlooked: the imperative of human diversity in developing successful AI.

1. All Employees are Welcomed and Encouraged to Thrive – Culture becomes life

2. DEI Helps Employees Feel Safe, Respected, and Connected – Increased productivity

3. Inclusion is Critical for Employee Engagement – Combat Work-From-Home Burnout

4. Employees Will Feel a Sense of Belonging – Feel safe to contribute their ideas

5. Increased Empathy Equals Increased Team-Building – Positive co-worker relationships

6. Inclusion Creates a Sense of Psychological Safety – Key to high-performing teams

7. Diverse Teams Innovate Faster – Diversity of thought fuels new product development

8. DEI Drives Better Results – Creates and nourishes innovation and then lets it thrive

9. The Innovation that Comes From Diversity is a Competitive Advantage – Open Minded

10. Diverse Cultures Reach a Wider Audience – Diversity of thought

11. Diversity Equals Excellence – More representative of what the customer base looks like

12. DEI Drives Improved Business Outcomes – Few failures and greater adoption

13. A Diverse, Inclusive Environment Retains Employees – Sense of belonging

14. Inclusive Companies are More Likely to Hit Financial Goals

15. DEI Efforts Define a Company’s Values – All people are of equal importance

16. AI and Human Diversity

AI’s role in business will only continue to grow. Fraud detection, process automation, process controls, product determination and production, market targeting, and pricing are just some of the areas where accurate, unbiased AI can make improvements. Or, failing the human diversity criteria, where it can exponentially increase the very bias and discrepancies it promises to erase.

“An AI Now study revealed that women make up only 15% of AI researchers at Facebook and just 10% at Google. The study also found that less than 5% of the staff at Facebook, Google, and Microsoft are Black, while Black workers in the U.S. as a whole represent roughly 12% of the workforce. Due to the lack of diverse engineers and researchers, the products that are developed and used by billions of users may result in the propagation of bias on a large scale. Hence, inclusion and diversity in AI are crucial.” (Forbes, “Diversity And Inclusion In AI,” March 16, 2021)

So, while Google, Facebook, and Microsoft tout AI systems meant to dispense equality accurately, their actual human diversity and inclusion accomplishments seem to lag far behind their programs.

Since AI systems are designed to assist and amplify human behavior (planning, learning, and problem-solving), the risk of amplifying biased behavior is real. Even more concerning, the aggressive growth in AI is already straining the supply of competent people needed to help build these systems. This shortage only exacerbates the problem of a talent pool not diverse enough to reflect the concerns of the populations these systems impact.

Companies (and government entities) looking to implement AI must ensure that those developing AI technologies don’t perpetuate or accelerate bias, whether within the data or in the human supervision of the data. By addressing diversity gaps today, companies can better mitigate the bias in the systems that promise to bring a better tomorrow.
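One practical starting point is to audit models before they ship. Here is a minimal sketch of one such check, comparing selection rates across groups (sometimes called demographic parity); real audits use richer metrics and tooling such as Fairlearn or AIF360, and the data below is synthetic:

```python
# Sketch: a basic pre-deployment audit comparing selection rates across
# groups (demographic parity). Decisions and groups are synthetic.
import numpy as np

def selection_rates(decisions, groups):
    """Fraction of positive decisions per group."""
    return {g: decisions[groups == g].mean() for g in np.unique(groups)}

decisions = np.array([1, 0, 1, 1, 0, 1, 0, 0, 1, 0])  # model outputs
groups    = np.array(["A", "A", "A", "A", "A", "B", "B", "B", "B", "B"])

print(selection_rates(decisions, groups))
# A large gap between groups is a red flag worth investigating before launch.
```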
