AI has the potential to revolutionise workplace ED&I by mitigating unconscious bias, identifying patterns of bias and enhancing communication.
But can we completely trust AI? Is it really unbiased? And what are the governance issues in AI that we should be aware of – and address?
It was fantastic to welcome Jo Stansfield to our signatory Knowledge Share forum recently. Jo is founder of Inclusioneering, an inclusive innovation consultancy, and was previously Director of People, Data and Insights for AVEVA.
Our quarterly Knowledge Share Forums bring our signatory ED&I leads together to exchange knowledge and experience around a particular topic, often guided by a guest speaker.
Here are the session highlights, in Jo’s words, on the questions we need to be asking when using AI in ED&I.
A (very) short history of AI
AI is everywhere right now, so it’s easy to forget that research in AI and neural networks has been around for some seventy years. It was back in 1950 that Alan Turing proposed what became known as the Turing Test: a way to determine whether a machine could exhibit intelligent behaviour indistinguishable from that of a human.
What we’re seeing in AI today, though, is an explosion of computing power, enabling systems to process enormous amounts of data and build intelligent capability in, for example, security, policing, health and the workplace.
How algorithms learn
I’ve come across quite a few firms whose marketing leans on people’s (false) assumption that their AI system can’t be biased because it’s automated, or because it’s ‘just an algorithm’. But this simply isn’t true.
Why? Because algorithms learn from people, are made by people, and are deployed by people.
This means that all AI systems are biased to some extent, and the bias they learn is the bias inherent in society.
AI and bias
In theory, AI could revolutionise workplace ED&I by identifying patterns of bias and enhancing communication. It could promote fairness by treating individuals with greater impartiality, based on data-driven insights.
But in reality, there’s a risk that rather than helping remove some of that bias, AI tools can actually amplify and scale it.
Because bias isn’t just a technical issue. Bias starts in the real world as inequality and discrimination. That bias is captured in training data, in model design and deployment decisions, and is then amplified in the outputs the AI produces.
Putting our trust in AI
So how much trust should we be putting in our workplace AI systems?
You may have heard the term ‘automation bias’. This is when people put their trust in automated decisions and recommendations because the systems feel intelligent. But often those decisions aren’t any cleverer than the human judgement they replace.
Generative AI tools such as ChatGPT, Midjourney and DALL-E create representations of people in which we can observe the embedded biases. The exaggerated biases these systems produce are known as ‘representational harms’: harms that degrade certain social groups by amplifying stereotypes and reinforcing the status quo.
In a recent article [LINK], Bloomberg used AI to produce thousands of images relating to different job titles, in order to gauge bias in generative AI. The analysis found that the image sets generated for high-status jobs, like CEO, were dominated by people with lighter skin tones, while lower-status, lower-paid jobs (like fast-food worker) were dominated by people with darker skin tones.
All of this underscores the importance of awareness and intentionality in our use of AI in the workplace. As computer scientist and digital activist Joy Buolamwini puts it, ‘whether AI will help us reach our aspirations or reinforce unjust inequalities is ultimately up to us.’
Assessing the risks
So what risks should we be considering when we’re implementing our AI systems?
Firstly, we need to think carefully about what the system is and what it’s doing. And most importantly, what’s the power relationship between us and the people it’s acting upon? Do they know it’s running? Do they have access to the data? And can they contest any of the decisions that are made?
We should also consider the legal implications of using AI systems. For example, if we use software to help us make employment decisions, the Equality Act still applies: if that software discriminates, the organisation is still liable for those decisions.
What’s more, the EU AI Act is currently going through the European Parliament; it’s expected to come into force in 2025/26, bringing with it a whole host of new legislation around AI.
Governance
Effective governance in AI is crucial to ensure its ethical, transparent and accountable use. And it’s not just about having great developers – it’s about having human oversight of the systems to make sure they’re working reliably, accurately and consistently.
If you’re still at an early stage of using AI, make sure you have experts you can call on to highlight risks, assess outcomes and plan out scenarios of what might happen if the system fails.
And if you’re more established in using AI, consider setting up an ethics committee to oversee it: a committee with knowledge of data, ethics and tech, made up of people with different lived experiences, so that diverse input is built in early on.
About Jo Stansfield
Jo is an engineer turned ED&I consultant, and throughout her career she has focused on making tech more inclusive and accessible.
She founded Inclusioneering™ in 2021, drawing on her experience and qualifications in engineering, software development, diversity, equity and inclusion, and organisational psychology.
Working in male-dominated industries sparked Jo’s curiosity about the lack of diversity she encountered, and inspired her to pivot her focus from the technical to the human dimensions of engineering.
Read more about Inclusioneering and the work it does here.