Ethics at first glance: The AI Ethics Label

An AI ethics label as a decision-making aid for ethical AI systems

For years, the European energy efficiency label has been helping consumers choose electronic devices. The AI Ethics Label now fulfils this task for ethical AI systems.

Everyone agrees that so-called AI and algorithmic systems should be “ethical”. Catalogues of values for ethical design already exist; one example is the Algo.Rules, developed by the Bertelsmann Stiftung in cooperation with iRights.Lab.

But how do you translate abstract principles into practice? And above all: how do you manage the balancing act between the topic's high complexity and user-friendliness for technically less experienced consumers?

The AI Ethics Impact Group, in which iRights.Lab participates, has now designed an AI Ethics Label for this purpose. It presents the ethical evaluation of an AI system in a concrete, simple form, similar to the European energy efficiency label. It creates clarity at first glance and reduces complexity without discarding the underlying detail.

For the evaluation, “ethics” is captured through six values: transparency, accountability, privacy, justice, reliability and sustainability.

With the help of the VCIO model (values, criteria, indicators and observables), these abstract values are broken down and made measurable. For each value, the model assesses on several levels how “transparent” or “reliable” an AI system is.

The label offers AI-developing organisations the opportunity to determine the quality of their products and to communicate it to the outside world.

It also reduces complexity for consumers, industry and regulators by making products on the market easier to compare, and it offers a quick overview of whether an algorithmic system meets the ethical requirements necessary in a specific case.

Which requirements are necessary depends on the application context. To determine and evaluate this context, the working paper presents the “risk matrix”. It classifies the application contexts of AI systems along two dimensions: the intensity of the potential harm and the dependency of the affected person(s) on the respective decision. On this basis it defines five risk classes; systems in a higher class must also meet higher requirements.

Both elements, the VCIO model and the risk matrix, are combined in the AI Ethics Label.

The label is presented in the working paper “From Principles to Practice — An interdisciplinary framework to operationalise AI ethics”, which can be downloaded here. It ties in with the EU Commission's proposal in its White Paper on Artificial Intelligence to use such a regulatory approach.

About the AI Ethics Impact Group (AIEIG):
The AIEIG is an interdisciplinary group, initiated by the VDE Verband der Elektrotechnik Elektronik Informationstechnik e.V. and the Bertelsmann Stiftung, which developed the approach now presented. The following additional organisations participate in the group:

  • Think Tank iRights.Lab
  • TU Kaiserslautern
  • ITAS / KIT
  • TU Darmstadt – Philosophy
  • IZEW Tübingen
  • Hochleistungsrechenzentrum Stuttgart
