An AI ethics label as a decision-making aid for ethical AI systems
For years, the European energy efficiency label has helped consumers choose electronic devices. The AI Ethics Label now fulfills the same task for ethical AI systems.
Everyone agrees that so-called AI and algorithmic systems should be “ethical”. Catalogues of values for ethical design already exist — one example is the Algo.Rules, developed by the Bertelsmann Stiftung in cooperation with the iRights.Lab.
But how do you translate abstract principles into practice? And above all: how do you balance the high complexity of the topic with user-friendliness for technically less experienced consumers?
The AI Ethics Impact Group, in which the iRights.Lab participates, has now designed an AI Ethics Label for this purpose. It shows the ethical evaluation of an AI system in a concrete, simple representation, similar to the European energy efficiency label. It creates clarity at first glance and reduces complexity without losing substance.
For the evaluation, “ethics” is recorded with six values: transparency, accountability, privacy, justice, reliability and sustainability.
With the help of the so-called VCIO model (values, criteria, indicators and observables), these abstract values are broken down and made measurable. For each value, the model assesses on several levels how “transparent” or “reliable” an AI system is.
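The VCIO hierarchy described above can be pictured as a tree: each value is assessed via criteria, each criterion via indicators, and each indicator via concrete observables. The following Python sketch is purely illustrative — the names, the 1-to-5 scale, and the simple averaging are assumptions for demonstration, not the scheme defined in the working paper:

```python
from dataclasses import dataclass

# Hypothetical sketch of the VCIO hierarchy (values -> criteria ->
# indicators -> observables). Names and the 1-5 scale are illustrative.

@dataclass
class Indicator:
    name: str
    observables: dict[str, int]  # observable -> observed score (1 low, 5 high)

    def score(self) -> float:
        return sum(self.observables.values()) / len(self.observables)

@dataclass
class Criterion:
    name: str
    indicators: list[Indicator]

    def score(self) -> float:
        return sum(i.score() for i in self.indicators) / len(self.indicators)

@dataclass
class Value:
    name: str
    criteria: list[Criterion]

    def score(self) -> float:
        return sum(c.score() for c in self.criteria) / len(self.criteria)

# Example: assessing "transparency" via one invented criterion.
transparency = Value(
    "transparency",
    [Criterion(
        "explainability",
        [Indicator("documentation",
                   {"model card published": 4, "decision logic described": 3})],
    )],
)
print(f"{transparency.name}: {transparency.score():.1f}")  # transparency: 3.5
```

The point of the structure is that each abstract value is only ever scored through concrete, checkable observables at the bottom of the tree.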
Michael Puntschuh, Policy Advisor at iRights.Lab, explains how this measurement works:
The label offers AI-developing organisations the opportunity to determine the quality of their products and to communicate it to the outside world.
It also reduces complexity for consumers, industry and regulators by making the products available on the market easier to compare, and it provides a quick overview of whether an algorithmic system meets the ethical requirements necessary in a specific case.
Which requirements are necessary depends on the application context. To determine and evaluate this context, the working paper presents the “risk matrix”. It classifies the application contexts of AI systems along two dimensions: the intensity of the potential damage and the degree to which the affected person(s) depend on the respective decision. Based on this risk, it defines five classes. Systems with a higher risk should then also meet higher requirements.
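The classification step above can be sketched as a small function mapping the two dimensions to one of five classes. The scales, thresholds, and the max-dominates rule here are invented for illustration; the working paper defines its own matrix:

```python
# Illustrative sketch of the risk-matrix idea: two dimensions
# (potential damage intensity, dependency of the affected person
# on the decision) map an application context to one of five
# risk classes. Scales and the combining rule are assumptions.

def risk_class(damage: int, dependency: int) -> int:
    """Both inputs on an assumed 0-4 scale; returns class 0 (lowest) to 4."""
    if not (0 <= damage <= 4 and 0 <= dependency <= 4):
        raise ValueError("both dimensions must lie on the 0-4 scale")
    # Simple rule for the sketch: the higher dimension dominates.
    return max(damage, dependency)

# A film-recommendation system: low damage, easy to ignore.
print(risk_class(damage=1, dependency=0))  # 1
# Credit scoring: serious damage, affected person cannot opt out.
print(risk_class(damage=3, dependency=4))  # 4
```

The design choice worth noting is that either dimension alone can push a system into a higher class: a decision that is hard to escape is risky even when the individual damage seems small.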
Both elements — the VCIO model and the risk matrix — are combined in the AI ethics label.
The label is presented in the working paper “From Principles to Practice — An interdisciplinary framework to operationalise AI ethics”, which is available for download. It takes up the proposal in the EU Commission’s White Paper on Artificial Intelligence to use such a risk-based regulatory approach.
About the AI Ethics Impact Group (AIEIG):
The AIEIG is an interdisciplinary group, initiated by the VDE Verband der Elektrotechnik Elektronik Informationstechnik e.V. and the Bertelsmann Stiftung, which developed the approach presented here. The following additional organizations participate in the group:
Phone: +49 30 40 36 77 230
Fax: +49 30 40 36 77 260