Dr Carlo Cordasco, Alliance Manchester Business School, discusses the lack of explainability in machine learning techniques, also known as the black-box problem. Some theorists and regulators support a Right to Explanation, which would underpin a set of correlative duties for organisations developing and/or adopting AI. In this paper, Carlo argues against such a right and illustrates that a meaningful approach to the Right to Explanation requires a commitment to ex-ante, rules-based decision-making procedures, which may entail large costs in terms of accuracy in both human and AI decision-making. He concludes by suggesting that a Right to Explanation is warranted for Public Administration decisions, as explainability is a key component of predictability which, in turn, shapes adherence to the Rule of Law.
This Oxford Future of Professionals online seminar series is co-convened by Mari Sako, Professor of Management Studies at Saïd Business School, University of Oxford, and Julian Corj, a PhD candidate in Management at Oxford.