By Corporate English Solutions

30 October 2023 - 14:35

Minimising AI bias - Best practices for organisations

The power and impact of Artificial Intelligence (AI) is ground-breaking and far-reaching. It can influence decisions that can forever change the course of our professional (and personal) lives. That’s why it’s so important to minimise AI bias. 

We share essential strategies to tackle the ethical challenges of AI in the workplace – and beyond.


Reading time: 5 minutes

Artificial intelligence is proving to be a revolutionary force in the workplace. Its capacity to transform huge amounts of data and automate repetitive tasks is (so far) unmatched. Its other superpowers? AI can streamline hiring and provide data-driven insights into employee performance; deliver speedy responses to customer inquiries; personalise marketing campaigns and assess and monitor regulatory compliance.

Despite the benefits of AI in the workplace, there is a growing concern we need to discuss: AI bias.

Why are people so concerned about bias in AI?

AI in the workplace has incredible power. It can influence decisions that impact our professional lives – from the interviews, jobs and performance evaluations we receive to the types of training and career development opportunities we’re offered. But if those decisions are made by biased AI systems, they can lead to discrimination against particular groups of people.

How does AI bias happen? 

AI systems don’t design themselves. Humans play a significant role in AI development. Unconscious bias in the workplace is common. And this can cause unconscious bias to be introduced into the data and algorithms used in AI systems. The result is the potential for AI to unintentionally discriminate against certain groups, perpetuating inequalities that exist in society. 

Is this concern justified?

Simply put, yes. The apprehension around AI bias is deeply rooted in real-world examples of bias in artificial intelligence, leading to discrimination and unfair treatment. Think about recent cases questioning the ethics of AI in recruitment tools, customer service chatbots, online marketing and compliance and risk assessment in financial services. If left unchecked, bias in AI systems can have significant and far-reaching consequences on our lives, livelihoods and wellbeing.

So, what can be done? Keep reading for three effective strategies your organisation can use to navigate the ethics of AI and minimise bias in an AI-driven workplace.

1. Regularly test and audit AI systems

One global HR team for a major retailer recently learned an invaluable lesson: that artificial intelligence screening systems need to be routinely tested for bias. Particularly when they are adopted in diverse local markets. After an internal recruitment process for UX designers in Australasia, local hiring managers noticed a disturbing pattern. 80% of ‘top’ candidates were male, despite a reasonably balanced representation of female candidates. 

This outcome led the local HR team to pause its use of company-wide AI screening tools. A comprehensive audit revealed bias in the data used to assess candidates – mostly because the data used to train the screening tool was drawn predominantly from male applicants’ resumes in North America. The audit was an eye-opening experience that led to routine testing of AI recruitment tools across all markets.
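The kind of check the audit above performed can be sketched in a few lines. This is a minimal, hypothetical illustration: the figures and the use of the four-fifths (80%) rule – a common screening test for disparate impact in hiring – are assumptions for illustration, not details from the retailer’s actual audit.

```python
# Hypothetical sketch: the four-fifths (80%) rule, a common screening
# check for disparate impact in hiring outcomes. All figures invented.

def selection_rate(selected, applicants):
    """Share of applicants an AI tool marked as 'top' candidates."""
    return selected / applicants

def four_fifths_check(rates):
    """Flag groups whose selection rate is below 80% of the highest rate."""
    highest = max(rates.values())
    return {group: rate / highest >= 0.8 for group, rate in rates.items()}

# Illustrative audit data for an AI screening tool (not real figures)
rates = {
    "male": selection_rate(40, 100),   # 0.40
    "female": selection_rate(10, 95),  # roughly 0.11
}

result = four_fifths_check(rates)
print(result)  # the female group falls well below the 80% threshold
```

A failed check like this doesn’t prove discrimination on its own, but it is exactly the kind of red flag that should trigger a deeper audit of the training data.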

How can organisations minimise similar bias in AI systems? Follow these practical steps:

  • Confirm that AI vendors include Key Performance Indicators (KPIs) focused on fairness, accuracy and transparency, sharing their results regularly.
  • Set up periodic reporting, highlighting efforts to mitigate bias in artificial intelligence as well as measurable improvements.
  • Establish clear procedures for addressing identified issues or biases, including a set process for modifications and continuous improvement.
  • Ensure transparency and accountability in testing AI systems, making results accessible to all relevant stakeholders.


When procuring AI solutions:

  • Include clauses in contracts that allow testing and audit rights so that your organisation can conduct independent assessments or use third-party auditors.
  • Establish penalties for non-compliance with testing and assessment requirements, but also incentives for meeting or exceeding them.
  • Encourage collaboration with the vendor to develop a mutually agreed-upon testing and assessment methodology and a regular audit schedule.


Remember, AI systems need to evolve with shifting data, user behaviour and ethical standards. You’ll need to regularly test them to prevent AI bias and discrimination in sensitive areas like hiring, financial services and compliance.

2. Cultivate diversity in AI development teams

What do diverse teams have in common? They’re more innovative, creative and resilient, and they drive higher financial performance. And when it comes to developing AI systems, diverse teams play an influential role. They help challenge assumptions and stereotypes that affect certain data features, algorithms or decision criteria in AI systems. However, the AI industry lacks diversity, which helps explain how AI bias arises.

Consider the experience of Nora, an AI developer for a mobile banking start-up. As a non-native English speaker, she was able to identify AI bias and discrimination in the bank’s AI-powered chatbot. Based on natural language processing (NLP), the chatbot could understand and respond to human speech. But it couldn’t understand and interact well with everyone.

Nora uncovered why non-native English speakers exited the chatbot at much higher rates: the app’s initial training data was based on the sounds and vocabulary of native English speakers. She sounded the alarm, which not only fixed this language bias but also eventually led to multilingual chatbot support. The result? An increase in new customers (and more diverse team members like Nora). 
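A first step towards the kind of finding Nora made is simply comparing abandonment rates across user groups. The sketch below is purely illustrative – the session counts and group labels are invented, not data from the start-up’s chatbot.

```python
# Hypothetical sketch: comparing chatbot early-exit rates across user
# language groups to surface possible language bias. Figures invented.

from collections import Counter

# Each session records the user's language group and whether they exited early
sessions = (
    [("native", True)] * 12 + [("native", False)] * 88 +
    [("non_native", True)] * 47 + [("non_native", False)] * 53
)

exits = Counter(group for group, exited in sessions if exited)
totals = Counter(group for group, _ in sessions)

exit_rates = {group: exits[group] / totals[group] for group in totals}
print(exit_rates)  # {'native': 0.12, 'non_native': 0.47}

# A gap this large is a strong signal to audit the training data
ratio = exit_rates["non_native"] / exit_rates["native"]
```

In practice you would follow a raw comparison like this with a proper statistical test, but even a simple rate gap is often enough to justify pausing and auditing the system.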

In the context of AI-driven customer service, diverse AI teams can spot biases that a more homogeneous team might miss. In addition to establishing and implementing bias mitigation protocols to actively detect unintentional biases introduced by AI developers, you can take the following actions.


When procuring AI solutions:

  • Include diversity criteria in your procurement guidelines and RFPs. Request vendors disclose team composition, such as gender, ethnicity, age and more.
  • Check if vendors have supplier diversity programmes and examine their diversity policies. 
  • Ask for case studies illustrating vendor commitment to diversity and inclusion. 
  • Establish diversity metrics and reporting requirements within contracts. 


Diverse teams are set to lead the way in the next phase of AI development and adoption. As the AI landscape evolves, they will not only uncover biases but also drive innovations, developing new ideas, technologies, and solutions that consider and accommodate the needs and preferences of a wider range of people.

3. Enhance accountability and transparent, bias-aware communication

Everyone makes mistakes. But who is responsible when AI makes bad decisions?

A rising star among online rental platforms recently had a lot of explaining to do. A media investigation into housing discrimination revealed AI bias toward certain racial and ethnic groups. The platform’s AI model negatively segmented customers based on personal information.

What the company did next was a masterclass in accountability and transparency – as well as in addressing the ethics of AI. After its own internal investigation, the company released a public report outlining exactly how its AI model made biased decisions on rental application assessments. And how this led customer service agents to unintentionally discriminate against rental inquiries from specific racial and ethnic groups. 

The company later published a detailed guide on mitigating bias in artificial intelligence, which led to industry-wide changes. Their acknowledgment of accountability and commitment to transparency eventually increased participation among a more diverse customer base in their major markets. 

Communicating how AI arrives at specific decisions makes it easier to trace potential sources of bias. And to openly question the ethics of artificial intelligence. This level of transparency also empowers users to assess the validity and fairness of those decisions. 

To effectively enhance transparency, accountability and bias-aware communication in AI systems, consider the following strategies:

Transparent explanations

Clearly communicate the factors and data that influence key decisions, such as sharing data sources, algorithms and key variables used. Explain how ethical considerations are factored into the decision-making process. Highlight efforts to identify and mitigate bias, emphasising accountability measures.
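The elements of a transparent explanation listed above can be captured in a simple structured record published alongside each AI-assisted decision. This is only a sketch of one possible format – every field name and value here is a hypothetical example, not a standard or a real system’s schema.

```python
# Hypothetical sketch of a 'decision explanation' record an organisation
# might publish alongside each AI-assisted decision. Fields are invented.

from dataclasses import dataclass, field

@dataclass
class DecisionExplanation:
    decision: str           # outcome communicated to the user
    data_sources: list      # where the input data came from
    key_variables: dict     # variables and their weight in the decision
    bias_checks: list = field(default_factory=list)  # mitigation steps applied

explanation = DecisionExplanation(
    decision="application shortlisted",
    data_sources=["application form", "skills assessment"],
    key_variables={"skills_match": 0.6, "experience_years": 0.4},
    bias_checks=["selection-rate audit", "protected attributes excluded"],
)

# The stated weights should account for the whole decision
assert abs(sum(explanation.key_variables.values()) - 1.0) < 1e-9
```

A record like this gives users and auditors something concrete to question: which data sources were used, how each variable was weighted, and which bias checks were actually run.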

Interactive interfaces

Allow users to explore decision factors on their own through interactive interfaces. This empowers users, enhances transparency and contributes to greater accountability by involving them in the decision-making process. Implement features for questions and user feedback.

Consistent documentation process

Ensure there is a clear record of AI decision processes across all applications and systems. Publish and widely share transparency reports detailing how AI systems reach key decisions, including both positive and biased examples.


When procuring AI solutions: 

  • Require vendors to follow certain transparency standards, provide clear communication and implement bias mitigation measures. Ensure you can audit AI systems for bias-related issues. 
  • Collaborate with the vendor to establish expectations for transparency, accountability and bias mitigation throughout the procurement process.


It’s important to remember that bias isn’t just a human problem; it’s a challenge AI systems face as well (largely because of human input). Implementing these strategies can help to ensure fairer and more inclusive practices in the workplace. And all aspects where AI impacts our lives. We must first, though, be aware of the potential for AI bias and take proactive steps to minimise it in all its forms.

British Council has over 80 years’ experience of partnering with organisations and individuals in over 200 countries. Founded in 1934, we are a UK charity and are committed to upholding ethical practices, transparency, fairness, and the greater good of the global community.

Our holistic approach to learning and assessment is based on research-driven innovation and empowers growth, positively impacting individuals and organisations. Partner with us to upskill your workforce and develop skills for success in 2023 and beyond.