Ethics are not relevant to AI

I hear a lot about ethics and AI: we must have ethical guidelines for AI, and we must be aware of bias in the data that affects certain social groups.

I have to admit I get a little tired of hearing about it. First, I do not think that in practice ethics and AI have any particular interrelation, and second, it is a topic more often raised by inexperienced talkers than by experienced doers in the AI world.

Why is there no interrelation?

You're probably thinking I'm going astray here. Aren't ethics important when we talk about court rulings and, for example, loan applications? Isn't it essential that an AI tool does not judge ethnicities differently? Isn't it essential to be able to get answers regarding a loan rejection?

Yes, it is. In fact, it is extremely important, and something we should take very seriously. For me, the only problem is that it does not matter whether AI is in the picture or not. Whatever the underlying technologies and processes behind a court ruling or a loan application, it is essential to have the ethics in order. So it is the domain that determines the need for ethics, not the underlying technology, which may in some cases happen to be AI.

Often, ethics and AI are also mentioned together in domains where ethics already has its challenges. There is already a huge problem in the United States, where people of colour are judged significantly more harshly. I often find that new technology is held up against a perfect standard that does not exist and cannot be achieved. AI will never be able to deliver a court ruling without any kind of bias, just as humans cannot.

Killer robots and war

When it comes to killer robots and war, I'm not particularly nervous either. In fact, I think they could even offer a better solution than the one we have today. 

But isn't it a disaster if killer robots hit civilians?

From 2003 to 2019, approximately 200,000 Iraqi civilians lost their lives in the war on terror. During the same period, only about 27,000 actual terrorists were killed. In other words, roughly seven civilians died for every terrorist wiped out (200,000 / 27,000 ≈ 7.4). That, I feel, is a rather low benchmark. So from a consequentialist standpoint, a killer robot could be lousy and hit several civilians, yet still do better than the coalition forces historically have done.

It is of course more complicated than this, and there are many important issues involved, but again, war is a domain with considerable existing ethical challenges.

AI in practice

I wrote at the beginning of this post that ethics are most often talked about by those who have no experience with AI. This may be worded slightly harshly, but ethics is simply the easiest topic to weigh in on when you have no real experience.

I have visited and spoken with a lot of people who actually work with AI. The applications of AI I encounter in the real world are rarely in particularly ethics-heavy domains. Therefore, ethics are actually of very little relevance in practice, and if you are working with AI yourself, the probability that you will spend considerable time on ethics is also quite small.

To conclude: ethics are always important. When the subject of ethics comes up in a conversation about AI, I take it as a sign that an existing problem is being articulated. Perhaps these problems should be articulated regardless of the technology.
