A report from the Universities of Oxford and Bologna will help protect society from unethical AI by helping organisations meet future EU regulations.
A world-first approach to helping organisations comply with future AI regulations in Europe has been published today in a report by the University of Oxford and the University of Bologna. It has been developed in response to the proposed EU Artificial Intelligence Act (AIA) of 2021, which seeks to coordinate a European approach to tackling the human and ethical implications of AI.
The one-of-a-kind ‘capAI’ (conformity assessment procedure for AI) will help businesses comply with the proposed AIA and prevent or minimise the risk of AI behaving unethically and damaging individuals, communities, wider society, and the environment.
Produced by a team of experts at Oxford University’s Saïd Business School and Oxford Internet Institute, and at the Centre for Digital Ethics of the University of Bologna, capAI will help organisations assess their current AI systems to prevent privacy violations and data bias. Additionally, it will support the explanation of AI-driven outcomes, and the development and running of systems that are trustworthy and AIA compliant.
Thanks to capAI, organisations will be able to produce a scorecard for each of their AI systems, which can be shared with customers to demonstrate good practice and conscious management of ethical AI issues. This scorecard covers the purpose of the system, the organisational values that underpin it, and the data that has been used to inform it. It also includes information on who is responsible for the system, along with their contact details, should customers wish to get in touch with any queries or concerns.