
The ethics of explainability in human and AI decision-making

Dr Carlo Cordasco, Alliance Manchester Business School, discusses the lack of explainability in machine learning techniques, also known as the black-box problem. Some theorists and regulators support a Right to Explanation, which would underpin a set of correlative duties for organisations developing and/or adopting AI. In this paper, Carlo argues against such a right and illustrates that a meaningful approach to the Right to Explanation requires a commitment to ex-ante, rules-based decision-making procedures, which may entail large costs in accuracy for both human and AI decision-making. He concludes by suggesting that a Right to Explanation is warranted for Public Administration decisions, as explainability is a key component of predictability, which in turn shapes adherence to the Rule of Law.

This Oxford Future of Professionals online seminar series is co-convened by Mari Sako, Professor of Management Studies at Saïd Business School, University of Oxford, and Julian Corj, PhD candidate in Management at Oxford.

The OxFOP series is intended to provide a forum for rigorous discussion of how professionals – such as accountants, auditors, journalists, lawyers, and physicians – are responding to the opportunities and challenges of adopting artificial intelligence (AI) in their work. This year’s series will address the topic of responsible AI from various perspectives, including law, computer science, management, and philosophy.