Towards more comprehensibility and transparency

Artificial intelligence applications are in widespread use. They inform decision-making and often make decisions themselves. A new ITA study asks how AI can be explained comprehensibly and how risks can be identified in order to address public concerns.

Digital systems are a dominant force in many areas of everyday life, and applications of artificial intelligence (AI) are a major ingredient. The trend towards more AI is obvious, but it is accompanied by concerns, e.g. regarding our privacy and our individual decision-making power. On the one hand, AI systems can efficiently support data preparation and decision-making in many areas; on the other, automated decisions made by these systems without any possibility of intervention by individuals raise red flags. In particular, the complexity and lack of transparency of AI systems create unease.

In April 2021, the EU Commission presented a proposal for an AI Act intended to establish a legal framework for the development and use of trustworthy AI systems. A key factor in assessing the potential impact of the increased use of AI systems is the ability to understand them; furthermore, technical processing operations are to be made transparent. Transparency has been shown to be an essential factor in building trust in AI systems and in the institutions that use and regulate them.

The short study, conducted by the Institute of Technology Assessment (ITA) of the Austrian Academy of Sciences in cooperation with the Austrian Chamber of Labour, investigates the extent to which AI systems can be made comprehensible and transparent for consumers, which measures could contribute to this, and which basic demands regarding AI systems can be formulated from a consumer perspective.



  • ITA [Hrsg.]. (2022). Understanding AI. ITA Dossier No 62en (May 2022, Author: Walter Peissl). Wien. doi:10.1553/ita-doss-062en
  • Riedlinger, D. (2022). ITA-Studie für Arbeiterkammer: Entmündigung durch Künstliche Intelligenz?. Ita-Newsfeed. Retrieved from
  • Udrea, T., Fuchs, D., & Peissl, W. (2022). Künstliche Intelligenz. Verstehbarkeit und Transparenz – Endbericht (p. 77). Wien. doi:10.1553/ITA-pb-2022-01
  • ITA [Hrsg.]. (2022). KI verstehen? ITA-Dossier Nr. 62 (April 2022; Autor*innen: Titus Udrea, Daniela Fuchs, Walter Peissl). Wien. doi:10.1553/ita-doss-062
  • Riedlinger, D. (2021). KI verstehen?. Ita-Newsfeed. Retrieved from

Conference Papers/Speeches

  • 24/11/2022 , Feldkirch
    Walter Peissl,  Daniela Fuchs,  Titus-Ionut Udrea: 
    Transparente KI – geht das? Was meinen wir damit?
    Technikfolgenabschätzung aus Arbeitnehmer:innenperspektive
  • 28/04/2022 , online/Wien
    Walter Peissl,  Daniela Fuchs,  Titus-Ionut Udrea: 
    Künstliche Intelligenz - Verstehbarkeit und Transparenz
    Pressegespräch AK13
  • 21/04/2022 , Wien
    Walter Peissl,  Daniela Fuchs,  Titus-Ionut Udrea: 
    Künstliche Intelligenz – Verstehbarkeit und Transparenz
    BAT-Klausur 2022: AI am Arbeitsplatz
  • 15/03/2022 , online
    Walter Peissl,  Daniela Fuchs,  Titus-Ionut Udrea: 
    Artificial intelligence - Explainability and transparency
    Trustworthy AI? Not without Consumer Protection!


Project duration: 10/2021 - 01/2022

Project team