BIAS IN MACHINE LEARNING AND AI
Presenting new ethical challenges in business: AI as both perpetrator and champion for diversity and inclusion
Uncovering Bias in Business
The meteoric growth in the availability and use of data over the past 20 years has led to the pervasive use of rules-driven decision making and AI models across many aspects of our lives. For businesses in particular, the benefits have been huge, through both enhanced sales and marketing capabilities and increased cost efficiencies.
For the consumer, data and AI have in many cases created a fairer, more equal society and opened up access to previously unavailable products and services. However, what is less well understood is how these tools are built and how the decisions they take for us are made.
This matters hugely, because somewhat unwittingly this growth in data has led to human bias effectively becoming hard-coded into these systems, undermining society's efforts to become more diverse and inclusive as well as potentially exposing businesses to significant commercial and reputational risk.
Bias can be easy to recognise in many cases, as we have seen recently, but what do you do about it when it is invisible, systemic and no one knows or realises it is there?
Is this a situation you would be comfortable for your business or shareholders to be in?
This paper addresses the issues above with a simple purpose: to raise awareness amongst our clients and peers, explaining in non-technical terms how bias happens and what you should be doing to address it, both now and as an ongoing practice in your business.
We will take the reader through a simple explanation of what bias is and why it matters for your business on many levels. Using real-life examples from well-known brands, we will explain how bias can creep into your business through data and show you the steps you need to take to start addressing it and to build the foundations of an ongoing programme.
We are offering all of our readers the opportunity to arrange a short 30-minute briefing session with our in-house experts to discuss your own situation and answer any questions you may have.
Beyond Analysis recently presented this topic to the Institute of Travel and Tourism members. A recording of this webinar has been made available at the end of this paper.
What is Bias and Why Does it Matter?
DEFINITION
The action of supporting or opposing a particular person or thing in an unfair way, because of allowing personal opinions to influence your judgment.
Unconscious bias (= that the person with the bias is not aware of) can influence decisions in recruitment, promotion and performance management.
(Cambridge Dictionary)
"Recent events highlight very publicly the extent to which diversity and inclusion are still a part of everyday life. What is often less understood is how systemic and pervasive this issue can be without us even knowing."
Bias is a disproportionate weight in favour of or against an idea or thing, usually in a way that is closed-minded, prejudicial, or unfair. Some biases are positively beneficial, such as making good choices about only eating healthy foods or avoiding dangerous situations or people.
Biases are usually based on lived experiences or stereotypes rather than on sound fact. Be they positive or negative, these biases can lead to poor choices and discriminatory practices. Bias is often characterised as stereotyping people based on the tribe or group they belong to, and these are usually quite visible and noticeable groups such as race, ethnicity, gender or age.
But bias can occur in many less obvious places that may not be so outwardly visible, such as sexual orientation, values, marital status or a person's physical qualities.
People may be aware they hold biases, often as a result of a lived experience, but more often than not bias is hidden below the surface, quite unconscious or unintentional. This can make it even harder to find and recognise.
Bias is not a term we typically use in our day-to-day lives, but it exists everywhere, and never has it been a more important topic of debate than today, as we address many decades of society's failure to treat each other fairly, with kindness and mutual respect.
In our survey of C-suite executives:
Less than 30% had considered data as a negative source of bias in their board level discussions about diversity and inclusion and their response to recent events
In business, as in society in general, bias has existed since the dawn of time. However, thanks to the numerous movements that have grown over the past few years, it has become a primary responsibility of the C-suite to understand it, build awareness of its causes and implications across the business, and take positive action for change.
The imperative to address bias has never been greater. The pressure from the coverage of recent events will surely have motivated many executive boards to sit up and ask some painful questions about how they operate, recruit, or manage the progression and pay of their workforce up the ranks.
But how many are looking deeply enough into the issue, searching out the underlying causes and fully appreciating the scope of bias's impact on their business?
Bias can be easy to recognise when it sits on the surface, but far harder to tackle when it is invisible and systemic. Building awareness of the issues and their underlying causes is the first step.
Responsible data-driven businesses have much more to gain from understanding bias than simply doing the right thing. Tackling what are essentially fatal flaws in your data and business systems could also reap a huge financial return.
Moral and Legal
We are morally and legally obligated not to discriminate on the basis of protected characteristics.
Customer Experience & Brand
Bias in AI systems could erode trust between humans and machines that learn.
Regulatory
Regulators want to ensure that citizens are treated fairly by regulated entities to ensure equality, i.e. no individual or group is unfairly treated based on their sex, race, etc.
Financial Reward
By allowing bias in our systems we may be unwittingly missing out on great business opportunities.
Quality & Accuracy
The more we come to rely on models to make decisions, the greater the imperative to know how they work and confirm that they work as planned.
How Does Bias Creep into Your Business?
Your data is most often the source of the issue.
In many cases, Artificial Intelligence (AI) or Machine Learning (ML) models can be a force for good, working to reduce humans' subjective interpretation of data, helping to eliminate bias and treat everyone fairly. They can do this because AI and ML models learn to consider only the variables that improve their predictive accuracy, based on the training data they are given to learn from.
The essence of these models, be they automated or simply coded rules, is that they are nearly always built on historical data, whether through rear-view analysis to create a segmentation or attrition model, or a credit approval algorithm that continuously looks for patterns in data to optimise which applications it approves.
So it is easy to see how, when the wrong data is used to train models (data that somehow contains implicit racial, gender or ideological biases), the outputs do little more than replicate the flawed decisions of the humans that came before them.
This is usually unintentional and most often occurs indirectly, as a result of historical institutional bias that has gone unnoticed, even unknown, for years. It can also result from simply poor data governance practices and quality controls in the design and development of solutions and systems.
So everything is fine if the training data is fine. But that is a big "if". The reality is that many models end up being trained on data containing human decisions, or on data that reflects second-order effects of societal or historical inequities.
Although these outcomes are probably unintentional, the bias still occurs, and the risks and damaging results for those at the sharp end of it remain the same. A growing number of examples of where and how this has happened at well-known institutions and companies may help bring the issues to life.
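For the more technically minded reader, the short Python sketch below illustrates this mechanism on entirely synthetic data: a model trained on historical hiring decisions that favoured one group keeps favouring it through a correlated "proxy" feature, even though the protected attribute itself is never shown to the model. The feature names and numbers are hypothetical, chosen purely for illustration.

```python
# A minimal, hypothetical sketch: all data is synthetic and the feature
# names are illustrative assumptions, not taken from any real system.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 10_000

# Protected attribute (e.g. gender), deliberately NOT given to the model.
group = rng.integers(0, 2, n)

# A seemingly neutral feature that happens to correlate with the group,
# e.g. membership of clubs historically dominated by one group.
proxy = group + rng.normal(0, 0.5, n)
skill = rng.normal(0, 1, n)

# Historical hiring decisions: the humans favoured group 1 as well as skill.
hired = (skill + 1.5 * group + rng.normal(0, 0.5, n)) > 1.0

# Train only on the "neutral" features; the protected attribute is excluded.
X = np.column_stack([proxy, skill])
model = LogisticRegression().fit(X, hired)
preds = model.predict(X)

# The model still recommends group 1 far more often, via the proxy feature.
for g in (0, 1):
    print(f"group {g}: predicted hire rate {preds[group == g].mean():.2%}")
```

In other words, simply deleting the protected attribute from the data does not remove the bias: the model rediscovers it through whatever correlates with it.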
How many times have you been asked to participate in a review of the design of one of the algorithms that choose who your company will or will not do business with?
Training data containing ethnic imbalances
Empirical studies have found that various CV screening systems have resulted in employers granting interviews at different rates to candidates with identical CVs but with names that reflect different racial or ethnic groups.
Source: Are Emily and Greg More Employable Than Lakisha and Jamal? A Field Experiment on Labor Market Discrimination. Marianne Bertrand and Sendhil Mullainathan. American Economic Review, Vol. 94, No. 4, September 2004 (pp. 991-1013).
+50%
More call-backs for interviews for CVs with white-sounding names
Insidious outcomes of misunderstanding data
Amazon designed an artificial intelligence recruitment system to screen CVs and recommend shortlisted candidates, reviewing job applications and giving each candidate a score ranging from one to five stars. Unwittingly, the solution was trained on CVs submitted to the firm over the previous ten years, most of which came from men. By 2015 it was clear that the system was not rating candidates in a gender-neutral way.

Training data incomplete
Facial recognition has become increasingly used in law enforcement across the globe. Some of the leading solutions, from companies like IBM and Microsoft, were tested and found to be 99% accurate at determining gender for white men. For dark-skinned women, however, the error rate rose to as much as 35%.
The COMPAS algorithm is widely used in the United States to guide sentencing decisions and outcomes by predicting the likelihood of a criminal re-offending. After a number of investigations, in May 2016 it was reported that COMPAS was racially biased, over-predicting the risk of black defendants re-offending.

User generated data
A Google Images search for "CEO" returns just 11% women, even though 27% of CEOs in the US are female. When users click on Google ads, those click responses influence the targeting algorithms and the decisions on which ads to serve to which user. Likewise, Google's ads tool for targeted advertising was found to serve significantly fewer ads for highly paid jobs to women than to men.
(Datta, Tschantz, & Datta, 2015)

Bias in the travel industry
In a detailed academic study of travel behaviours over the past decade, it was observed that travellers suffer from the influence of common biases at all stages of travel: pre-trip, on-site and post-trip. Major drivers cited include the nature of the images used to market destinations and products, the ranking of search results for holiday queries and the use of feedback rankings.

Fixing and Managing Bias in Your Business
The first step is an independent assessment of your data and the decisions it informs.
We believe passionately in the power of data to do good. Ensuring our clients are fully aligned with how their models operate and perform and have the information to take the best ethical decisions is a critical part of our value to them.
Our bias solution supports businesses to create the required transparency of their models through independent validation. We look to address the main challenges: ensuring the right stakeholders fully understand how the model works and fixing the underlying data.
Data bias: The underlying data is often the source of the issue, and so our solution interrogates all aspects of the data inputs for bias. We consider the following issues (a simple code illustration follows the list):
- Training data contains human decisions and reflects second-order effects of societal or historical inequalities.
- Poor sampling techniques.
- User generated data.
- Statistical correlations that are unacceptable or illegal.
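As a simple illustration of two of these checks, the sketch below compares each group's share of a dataset against an external benchmark (for example, census proportions) to spot under-sampled groups, and flags numeric features so strongly correlated with a protected attribute that they may act as proxies for it. The DataFrame, the "ethnicity" column name and the 0.4 threshold are hypothetical assumptions, not a description of our solution.

```python
# A minimal sketch of two data-bias checks; column names and the
# correlation threshold are hypothetical.
import pandas as pd

def representation_report(df: pd.DataFrame, protected: str,
                          benchmark: dict) -> pd.DataFrame:
    """Compare each group's share of the data against an external
    benchmark (e.g. census proportions) to spot under-sampled groups."""
    observed = df[protected].value_counts(normalize=True)
    report = pd.DataFrame({"observed": observed,
                           "benchmark": pd.Series(benchmark)})
    report["shortfall"] = report["benchmark"] - report["observed"]
    return report.sort_values("shortfall", ascending=False)

def proxy_features(df: pd.DataFrame, protected: str,
                   threshold: float = 0.4) -> pd.Series:
    """Flag numeric features so correlated with the protected attribute
    that they may act as proxies for it (possibly 'tainted' features)."""
    encoded = df[protected].astype("category").cat.codes
    corr = df.select_dtypes("number").corrwith(encoded).abs()
    return corr[corr > threshold].sort_values(ascending=False)

# e.g. representation_report(df, "ethnicity", {"A": 0.6, "B": 0.3, "C": 0.1})
```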
Individual understanding: Enabling non-technical employees to build a working understanding of AI is important, so they can see how unintentional bias occurs in their models.
Growing awareness and giving everyone an accurate and measurable view of the relative importance and significance of features is a first step. An additional layer of independently verified testing of the model's function and outputs provides the validation required internally to mitigate internal/institutional bias.
Transparency in how the model functions is at the root of the challenge
Consensus
Mutual level of understanding of AI and purpose of the models between technical and non-technical employees.
Weight
Accurate and measurable view on the relative importance and significance of features of the model.
Independence
Independently verified testing of the model function and outputs to mitigate internal/institutional bias.
Essentially it is about reviewing bias before, during and after modelling
Before Modelling
Input data is fair. Proportional representation across all groups.
During Modelling
Model performance is fair for all groups for protected/sensitive features.
After Modelling
Predictions and model output maintain an equal probability of false positives or negatives for all groups with protected/sensitive features (a simple illustration follows below).
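A minimal sketch of this "after modelling" check, under the assumption of binary (0/1) labels and predictions, might look as follows; the array names and values are purely illustrative:

```python
# A minimal sketch, assuming aligned arrays of binary (0/1) true labels,
# binary predictions and group membership. All names are illustrative.
import numpy as np

def error_rates_by_group(y_true, y_pred, group):
    """Print false positive and false negative rates per group; large gaps
    between groups suggest the model treats them unequally."""
    y_true, y_pred, group = map(np.asarray, (y_true, y_pred, group))
    for g in np.unique(group):
        mask = group == g
        fpr = y_pred[mask & (y_true == 0)].mean()      # false positive rate
        fnr = 1 - y_pred[mask & (y_true == 1)].mean()  # false negative rate
        print(f"group {g}: FPR={fpr:.2%}  FNR={fnr:.2%}")

# Tiny synthetic illustration: group "b" suffers far more false positives.
y_true = np.array([0, 0, 1, 1, 0, 0, 1, 1])
y_pred = np.array([0, 0, 1, 1, 1, 1, 1, 1])
group  = np.array(["a", "a", "a", "a", "b", "b", "b", "b"])
error_rates_by_group(y_true, y_pred, group)
```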
Our model validation approach answers the critical questions about your models
Data staging
- Are any particular groups suffering from systematic data error or omission?
- Have you intentionally or unintentionally ignored any group?
- Are all groups represented proportionally? For example, when it comes to the protected feature of race, are all races being identified, or merely one or two?
- Do you have enough features to explain minority groups?
Model build
- Are you sure you aren't using or creating features that are tainted?
- Have you considered stereotyping features?
- Are your models meeting the criteria of the use case they were designed for?
- Is your model accuracy similar for all groups? (See the sketch after this list.)
- Can you be sure that predictions are not skewed towards certain groups?
- Are the models optimising all required metrics and not just those that suit the business?
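To illustrate the accuracy question above, a minimal sketch might compute accuracy separately for each group and flag any gap between the best- and worst-served group. The arrays here are tiny synthetic stand-ins, not real model output.

```python
# A minimal sketch of a per-group accuracy check; all data is synthetic.
import numpy as np

def accuracy_by_group(y_true, y_pred, group) -> dict:
    """Per-group accuracy, given aligned arrays of 0/1 labels, 0/1
    predictions and group membership."""
    y_true, y_pred, group = map(np.asarray, (y_true, y_pred, group))
    return {g: float((y_pred[group == g] == y_true[group == g]).mean())
            for g in np.unique(group)}

# Tiny synthetic illustration: the model is far less accurate for group "b".
y_true = np.array([1, 0, 1, 0, 1, 0, 1, 0])
y_pred = np.array([1, 0, 1, 0, 0, 1, 0, 1])
group  = np.array(["a", "a", "a", "a", "b", "b", "b", "b"])
scores = accuracy_by_group(y_true, y_pred, group)
print(scores)  # {'a': 1.0, 'b': 0.0}
print(f"accuracy gap: {max(scores.values()) - min(scores.values()):.2%}")
```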
Finding Hidden Customer Opportunities
As we have seen, AI and Machine Learning (ML) technology have become a major part of the armoury for many industries, whether private companies, financial services, travel, healthcare or governments. These data-driven tools are used to make ever more important decisions that can have far-reaching impacts on individuals and societies, both positive and negative.
As these solutions have evolved and become more widely used, new human rights issues have come to light as biases are uncovered within the decision-making systems we design. This also creates legal, ethical and brand reputation issues for the entity involved.
Yet, by uncovering bias we can uncover hidden customer opportunities. Beyond Group CEO Paul Alexander and Bias Expert Jordan Browne-Moore were invited by the Institute of Travel and Tourism to consider how the travel industry can take on bias and begin positive disruption.
Uncovering accidental bias to find hidden customer opportunities
To understand how to implement data analytics solutions to tackle bias in your business, contact our team of strategy experts. You may also be interested in reading more of our experts' insights.