AI Causes Fatality!

OK, I made up that headline, but how long before we start seeing headlines about AI leading to a negative or even catastrophic outcome? The obvious use case is the autonomous vehicle, but in practice we are using machine learning to make predictions, recommendations, and decisions wherever the machine can plausibly perform these tasks faster or better (or both) than humans, and so we implicitly trust it.

I have sat in meetings where ML use cases are presented and observed a split room: one group in the audience wants to understand the source of the training data and the algorithms behind the model, while the other simply accepts the science without needing to understand it and just wants to know when it can go into production. This is understandable. Studies have shown that humans are hard-wired to follow the path of least resistance, and the effort required to understand the algorithm or training method, particularly for a non-technical audience, can be by far the more challenging option, leading to blind trust.

This is one of the key reasons why AI regulation is important. We can act on a prediction or recommendation, or even let the AI make the decision for us, but if it goes wrong, who is at fault? The end user? The AI company? The data scientist? The source of the training data? Or even the inherent bias of humankind from days gone by? This quandary is not a new one, and I remember having similar discussions over a decade ago around the use of decision support systems in healthcare and how far they should be allowed to go. What is new is the impending explosion of such systems and use cases across all industries.

The big question for me is how far regulation can go without stifling innovation and restricting the true benefits we can get from AI (and yes, I am a huge advocate of AI!). You can put the responsibility on developers to ensure their creations are ethical, safe, and trustworthy, but surely there is also an onus on consumers to use them responsibly? As ever, there is the challenge of expert technologists lacking domain expertise and domain specialists not being technology savvy; what I think is required is a joint approach between these groups, leading to sensible regulation. What are your thoughts on AI regulation?
