3 points on ethical ... and not so ethical AI

Alan Pelz-Sharpe

It’s no longer a question of artificial intelligence (AI) or not. Rather, the question is what kind of AI. On the one hand, we want AI to adhere to universal human values concerning individual rights and equality, and we need its decisions to remain within the grasp of our influence. On the other hand, we don’t want AI that reproduces our own prejudices, such as racism and sexism. This was already reported to be the case back in 2016, when software used by the US court system for prisoner risk assessment was found to be biased against black people.

In April this year, the European Union published seven guidelines on developing AI, the joint effort of 52 AI experts. These guidelines can be seen as a response to the fact that AI is already having a massive, yet often not fully understood, impact on our societies, and that this impact will only increase in the years to come. AI is becoming a democratic and human question as universal as climate change.

At the Boye 19 Conference in Brooklyn in May, we held a roundtable on the topic, where industry analyst Alan Pelz-Sharpe submitted a few discussion topics in advance. Shaped by his insights and the discussion that followed, here are three points on ethical AI.

1. Artificial intelligence is biased

There is a perception that AI is somehow neutral and free of bias, but that is certainly not the case. AI is dependent on data, and data is full of biases. There is a saying in the programming world: garbage in, garbage out. So, if we are not thoughtful about the input, AI can reflect sexism, racism, ageism and all kinds of other isms (none of them being idealism).

If you use AI in your decision making, whether for hiring, approving a loan, or targeting a customer, you need to address those biases by putting your data under scrutiny. But of course, we can only detect the biases we are aware of ourselves, which means that developing ethical AI is also about personal, and indeed society-wide, reflection.
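
As a concrete illustration, here is a minimal sketch of what putting your data under scrutiny can look like in practice. It assumes a hypothetical loan dataset ("loan_decisions.csv" with "gender" and "approved" columns, both names illustrative) and applies the common "four-fifths rule" heuristic for disparate impact. It is a starting point, not a complete fairness audit.

```python
# A minimal bias audit, assuming a hypothetical loan dataset
# ("loan_decisions.csv") with a "gender" column and a binary
# "approved" column. Both names are illustrative.
import pandas as pd

df = pd.read_csv("loan_decisions.csv")

# Approval rate per group
rates = df.groupby("gender")["approved"].mean()
print(rates)

# "Four-fifths rule" heuristic: flag any group whose approval rate
# falls below 80% of the most favoured group's rate.
ratio = rates / rates.max()
for group, r in ratio.items():
    if r < 0.8:
        print(f"Potential disparate impact against {group}: ratio {r:.2f}")
```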

And even then, we will in all likelihood not be able to create AI that is completely free of bias.

2. AI and transparency

When you use AI to make decisions, you should in theory know how the AI came to its decision. 

This is important for a number of reasons: for improving the system continuously, for purely ethical reasons, and even for meeting regulatory requirements. However, many AI systems run on complex neural networks, making it humanly impossible to unravel how they came to their decisions.

So how do we deal with these black boxes? According to Alan Pelz-Sharpe, you may need to put transparency ahead of speed:

  • You may need to use simpler machine learning methods that are transparent in how they came to a decision; you may have to sacrifice some speed, complexity and power to meet transparency requirements (a minimal sketch of this trade-off follows below).
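
To make the trade-off concrete, here is a minimal sketch using scikit-learn (as a stand-in; the roundtable did not prescribe a tool): a small decision tree is less powerful than a deep neural network, but its complete decision logic can be printed and audited line by line.

```python
# A small decision tree is weaker than a deep neural network, but its
# complete decision logic can be printed and audited. Uses a bundled
# scikit-learn dataset purely as a stand-in for your own data.
from sklearn.datasets import load_breast_cancer
from sklearn.tree import DecisionTreeClassifier, export_text

data = load_breast_cancer()
X, y = data.data, data.target

# Cap the depth so every decision path stays human-readable.
model = DecisionTreeClassifier(max_depth=3, random_state=0)
model.fit(X, y)

# Print the full, inspectable decision logic.
print(export_text(model, feature_names=list(data.feature_names)))
```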

But seeing how AI is becoming a major topic, and is now increasingly being addressed by players such as the EU, more transparency might become a regulatory standard in the not-too-distant future. And if public attention on the topic keeps growing, transparency might become an integral part of companies’ CSR policies and be used as a differentiator in the marketplace.

3. Ownership

If you use AI to target customers or to make any kind of impactful decision, then you are responsible for that decision, even if you don’t know how it was made. The bottom line is that AI makes mistakes, regularly, and we don’t want to live in a world where no one is held accountable for them.

But to run an ethical AI system, you not only have to take ownership when it makes mistakes; you also have to monitor it to try to avoid those mistakes in the first place.
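
What that monitoring looks like will vary, but here is a minimal, hypothetical sketch: periodically score the model’s recent decisions against the outcomes that are now known, and alert the owning team when accuracy drifts below the level measured at deployment. The baseline and threshold values are assumptions for illustration.

```python
# A minimal monitoring sketch: score the model's recent decisions
# against outcomes that are now known, and alert when accuracy drifts
# below the level measured at deployment. Baseline and threshold
# values here are assumptions for illustration.
from sklearn.metrics import accuracy_score

BASELINE_ACCURACY = 0.92  # assumed: measured when the model went live
MAX_DRIFT = 0.05          # assumed: alert beyond a 5-point drop

def check_model_health(y_true, y_pred):
    """Compare recent performance against the deployment baseline."""
    current = accuracy_score(y_true, y_pred)
    drift = BASELINE_ACCURACY - current
    if drift > MAX_DRIFT:
        # In practice this would notify the team that owns the model.
        print(f"ALERT: accuracy fell to {current:.2f} (drift {drift:.2f})")
    return current

# Hypothetical recent decisions and their actual outcomes:
check_model_health(y_true=[1, 0, 1, 1, 0, 1], y_pred=[1, 0, 0, 0, 0, 1])
```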

AI is not the same as traditional IT systems (WCM, CX, CRM, etc.), and if you walk away after deploying it, the consequences could be far more catastrophic than, say, a dip in sales when no one is using the new CRM.

You may need to dedicate resources, reducing some of the immediate financial gains from implementing the AI in the first place. However, it will most likely prove a sound investment in the long run, as AI and machine learning are at their core about continuous learning.

Over time, small mistakes could compound, and in many cases the only remedy is to shut down the system, which would certainly cascade into a substantial financial loss.

In conclusion, AI is not only here to stay, but here to spread and develop. It is our responsibility to steer that development in a way that will prove both financially and ethically sound.