Is AI Discrimination the Biggest Reputational Risk of the Large Language Model Revolution?

AI discrimination and bias are increasingly on the risk radar of companies looking to realise the potential of large language models – or they should be.

From vetting CVs to risk-assessing criminal defendants, the use cases of AI have grown rapidly in the last few years, and so too has awareness of the risk of biased or discriminatory AI outputs.

In June last year, the EU competition commissioner argued that the risk of discrimination by AI is far more pressing than apocalyptic notions of human extinction at the hands of machines.

Moreover, an increasing number of organisations are using GenAI in everyday business operations. The opportunity cost of failing to respond to the AI revolution is potentially significant, and a brand’s position on AI is fast becoming a reputational hygiene factor.

If, say, a targeted marketing strategy is conceived on the basis of flawed data, particular groups could be ostracised; if a generative tool is used recklessly, its creative output may reinforce harmful stereotypes, or even be just plain racist. Just ask Google.

Scrutiny from customers and clients, from internal talent and potential recruits, and from wider stakeholders, not least regulators and the media, is only going to increase. And, as with any emerging and disruptive trend, corporate missteps will be quickly seized on – risking the erosion of trust in the brand in question, as well as in the technology itself.

Put simply, accepting an AI output without human supervision, effective due diligence and due regard for the risk of discriminatory bias could have serious consequences.

Read the full article on infiniteglobal.com.
