While generative AI (artificial intelligence) has been around since the 1960s, it has been making more and more headlines as of late. In fact, according to some generative AI experts, up to 90% of the material on the internet may be artificially generated within a few years. In this blog, we will explore what generative AI is in the context of DE&I (diversity, equity and inclusion) and analyse whether this tool will be a friend or foe for DE&I initiatives in the near future.
First, it is important to establish the key differences between “AI” and “generative AI”. Generative AI is a branch of artificial intelligence dedicated to producing new content, such as communications, images, or even music, using patterns and data it has been trained on. Rather than simply copying that data, generative AI models can synthesise new information in a way that resembles human creativity.
In contrast, artificial intelligence is a broader term that refers to a variety of tools and algorithms intended to replicate human cognitive processes. Generative AI is one subset of artificial intelligence, alongside others such as natural language processing models, expert systems, and machine learning techniques. In essence, while generative AI specifically focuses on the creation of new content, AI as a whole covers a much wider range of applications and technologies beyond generative capabilities.
Generative AI will likely become the norm among businesses very soon. In an October 2023 survey by Gartner, Inc. of more than 1,400 corporate leaders, 45% said they were currently piloting generative AI, while 10% had already implemented the technology in production. By comparison, a Gartner survey taken in March and April of 2023 found that only 15% of participants were piloting generative AI, showing notable growth.
The market is already realising some of the benefits of generative AI, including the potential to automate data management and uncover previously hidden insights. Similarly, thanks to recent advancements in natural language processing (NLP), new innovators are emerging to assist organisations in analysing their content and communications through a DE&I lens.
Most of these solutions relate to HR processes; naturally, there is a close relationship between DE&I and HR operations, including hiring, retaining, and promoting employees. Any company that wants to advance in DE&I must monitor these facets of human capital. This is particularly true for sensitive topics, such as gender and racial pay disparities, which large organisations can now track in real time thanks to data analytics tools.
Diversio, a pioneer in the utilisation of AI, has established itself as a leader in this field. With prominent clients such as Honda and Unilever, and an extensive database covering over 20,000 businesses, Diversio can analyse an organisation's HR data and provide valuable recommendations for fostering a more inclusive workforce. By integrating seamlessly with existing HR platforms and offering benchmarking tools and "inclusion scores", Diversio streamlines the process for businesses to evaluate their performance.
Through its AI recommendation engine, which identifies patterns in data, Diversio suggests changes to enhance inclusivity. These suggestions have been formulated based on a comprehensive analysis of 1,200 meticulously examined DE&I policies and programmes implemented by businesses worldwide.
Generative AI has also been used in the communication and content fields to assist businesses in identifying and reducing unconscious bias. AI has become well known for its capacity to assist with real-time transcription and for automatically condensing the content on a page, but it can also provide voice-to-text functionality, image descriptions, gesture recognition, and more.
Such tools contribute to a more inclusive work environment, capable of accommodating a diverse range of talents. They also overcome language barriers through instantaneous translation, facilitating seamless collaboration among teams from different regions. One of the best examples of these technologies is Textio. It functions as a kind of performance management tool and screening service, giving anyone in an organisation an AI assistant that can determine whether the language they are using is biased.
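The kind of language screening such tools perform can be illustrated with a short sketch. The word lists and function below are hypothetical examples for illustration only, not Textio's actual lexicon or API:

```python
import re

# A toy illustration of biased-language screening: flag gender-coded words
# in a job advert. The word lists are short, hypothetical examples.
MASCULINE_CODED = {"dominant", "competitive", "rockstar", "ninja"}
FEMININE_CODED = {"supportive", "nurturing", "collaborative"}

def flag_coded_language(text: str) -> dict:
    """Return the gender-coded words found in the given text."""
    words = set(re.findall(r"[a-z]+", text.lower()))
    return {
        "masculine": sorted(words & MASCULINE_CODED),
        "feminine": sorted(words & FEMININE_CODED),
    }

advert = "We want a competitive rockstar to join our supportive team."
print(flag_coded_language(advert))
# {'masculine': ['competitive', 'rockstar'], 'feminine': ['supportive']}
```

Real tools go much further, scoring tone and suggesting neutral alternatives, but the underlying idea of screening text against research-backed word lists is the same.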
It is worth noting that, at the rate AI is advancing, the next few years will likely introduce more innovators in the field, furthering the use of generative AI in the workplace. Whilst the DE&I benefits of this technology are apparent, concerns regarding the utilisation of generative AI also need to be addressed.
The possibility of bias amplification with generative AI is one of the main issues within the workplace. The generated content may reinforce pre-existing biases if the training data used to create these AI models is biased or lacks diversity. For instance, a generative AI model may produce biased text outputs that support gender stereotypes if it is trained on text containing these preconceptions. This is vital to keep in mind, especially with the introduction of ChatGPT and other generative AI technologies. The information that is "fed" into ChatGPT comes from the internet, which includes content that is xenophobic, homophobic, sexist, and biased.
For example, in 2016, Microsoft's chatbot Tay became an unfortunate example of how generative AI can learn bias. Tay was programmed to learn from real people's comments on Twitter (now known as “X”), absorbing their language and style. However, it quickly became a target for offensive and inappropriate content. Within a short span of time, Tay started mimicking the negative aspects commonly found on social media platforms – in turn, posting offensive content that it learnt from other users.
The problem of AI bias is further highlighted in a 2023 study by Bloomberg. In this study, Bloomberg generated hundreds of images linked to crime and job titles using AI software, Stable Diffusion, in order to assess the extent of biases in generative AI.
For the job-title portion of the study, Bloomberg used the text-to-image model to generate representations of workers for 14 job titles: 300 images for each of seven jobs typically regarded as "high-paying" in the US and seven regarded as "low-paying". For the majority of the jobs, however, the tool was not accurate or representative of the broader population; in particular, it overrepresented people with darker skin tones in low-paying sectors.
For instance, the model depicted 70% of "fast-food workers" as having darker skin tones, despite the fact that 70% of fast-food workers in the US are white, according to the study. Similarly, 68% of social workers were depicted as having darker skin tones, despite 65% of social workers in the US being white.
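The logic of such a representation audit can be sketched in a few lines. The helper function below is illustrative, using the shares reported in the study, and the treatment of "not darker-skinned" as a rough proxy for the white share of generated images is a simplifying assumption:

```python
def representation_gap(generated_share: float, actual_share: float) -> float:
    """Percentage-point gap between a group's share in AI-generated
    images and its share in the real workforce."""
    return round((generated_share - actual_share) * 100, 1)

# Shares reported in the study: proportion of generated images depicting
# workers with darker skin tones, versus the actual share of white workers
# in the US workforce for that job.
audits = {
    "fast-food worker": {"generated_darker": 0.70, "actual_white": 0.70},
    "social worker": {"generated_darker": 0.68, "actual_white": 0.65},
}

for job, stats in audits.items():
    # Simplifying assumption: treat images without darker skin tones as a
    # rough proxy for the white share of the generated set.
    generated_white = 1 - stats["generated_darker"]
    gap = representation_gap(generated_white, stats["actual_white"])
    print(f"{job}: generated white share {generated_white:.0%}, "
          f"actual {stats['actual_white']:.0%}, gap {gap} points")
```

Negative gaps of 35 to 40 percentage points, as in these two jobs, are what the study describes as overrepresentation of darker skin tones relative to the real workforce.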
As noted in the paper 'AI hype as a cyber security risk: the moral responsibility of implementing generative AI in business' by Humphreys et al., the tendency of generative AI models to "hallucinate", or make up names, dates, and numbers, is one of their fundamental characteristics and a potential barrier to their broader usage. This occurs because the software works by identifying patterns and predicting the most likely next element in a sequence. Due to this predictive nature, the text and graphics these models produce may be flawed or even problematic.
Recently, generative AI has been making more headlines in the DE&I space with the controversy surrounding Google’s AI model, Gemini. Social media users have shared numerous examples of images created by Gemini's image generator, showing historical figures such as the Pope and US Founding Fathers.
The images lacked historical accuracy, as the model portrayed white, male historical figures, including German WW2 soldiers and Vikings, as women or people of colour. This led to Google pausing Gemini's ability to produce these images and further demonstrated a current limitation of AI: it is unable to place the images it produces in historical context.
To conclude, as generative AI continues to evolve and develop, DE&I professionals and others can utilise AI in a few concrete ways across the employee lifecycle:
DE&I professionals can take generative AI a step further and utilise it to reach out to a more diverse talent pool and ensure that their career websites are accessible. For instance, the Page Group used Microsoft Azure AI to support their recruiters with wider outreach.
It is recommended that HR and DE&I professionals support ongoing assessment and monitoring of AI systems in order to identify and rectify biases promptly. A thorough bias mitigation plan must include regular audits of AI algorithms, employee and stakeholder input methods, and continuous training on bias identification. Organisations can maintain justice and equity in the application of AI technologies by remaining watchful and attentive to new biases.
To effectively address bias in AI, collaboration between HR, DE&I specialists and data scientists is essential. While data scientists can use their technical skills to build bias detection tools, assess results, and fine-tune algorithms, HR and DE&I experts can offer beneficial insights on organisational values, diversity programmes, and potential causes of bias. Through interdisciplinary collaboration, organisations can create more effective plans for reducing bias in artificial intelligence.
To dive further into the subject of artificial intelligence and its impact on DE&I, explore our webinar, How AI impacts DE&I in the world of work. Alternatively, browse our range of training and consultation services, including unconscious bias training and conscious inclusion training.
Can’t find what you’re looking for? Get in touch for a complimentary one-to-one session to discuss your bespoke needs.