Towards greater comprehensibility and transparency

Artificial intelligence applications are widely used. They inform human decision-making and often make decisions autonomously. A new ITA study asks how AI can be explained comprehensibly and how risks can be identified in order to address public concerns.

Digital systems are a dominant force in many areas of everyday life. One major ingredient is artificial intelligence (AI). The trend towards more AI is obvious, but it is accompanied by concerns, e.g. regarding privacy and individual decision-making power. AI systems can efficiently support data preparation and decision-making in many areas; on the other hand, automated decisions taken by systems without the possibility of human intervention raise red flags. In particular, the complexity and lack of transparency of AI systems create unease.

In April 2021, the European Commission presented a proposal for an AI Act intended to establish a legal framework for the development and use of trustworthy AI systems. A key factor in assessing the potential impact of the increased use of AI systems is the ability to understand them; technical processing operations are therefore to be made transparent. Transparency has been shown to be an essential factor in building trust in AI systems and in the institutions that use and regulate them.

The short study, conducted by the Institute of Technology Assessment (ITA) of the Austrian Academy of Sciences in cooperation with the Austrian Chamber of Labour, investigates the extent to which AI systems can be made comprehensible and transparent for consumers, which measures could contribute to this, and which basic demands regarding AI systems can be formulated from a consumer perspective.

Duration

10/2021 - 01/2022

Project team

  • Titus Udrea

Funding