Feasibility study for the development of a self-assessment tool to identify social, legal and ethical risks for AI companies
Artificial intelligence is currently booming: Big Data applications and the digitalisation of our everyday lives are opening up new possibilities for making machines capable of learning. Will machines soon outdo us with their abilities?
Research on so-called artificial intelligence (AI) is making great progress. AI is a sub-discipline of computer science that aims to make machines capable of learning so that they can perform tasks partially or completely autonomously. An enormous increase in computing power has made this possible, and digitalisation in general fosters automation. Stimulated by new developments in the field of big data and the use of algorithms that process enormous amounts of data from various domains, more and more data is being interpreted by machines.
A central field of AI is machine learning, which already permeates many applications today: from robotics, search algorithms, and text, image, speech and even face recognition (e.g., in social media) to digital assistance systems and self-driving cars. Novel approaches such as "Deep Learning" are even expected to enable self-learning machines that optimise themselves.
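The core idea behind machine learning described above — a system derives its behaviour from examples rather than from hand-written rules — can be sketched in a few lines. The following is a minimal, illustrative 1-nearest-neighbour classifier; the feature vectors and labels are invented for the example and stand in for real training data.

```python
# Minimal sketch of supervised machine learning: a 1-nearest-neighbour
# classifier "learns" by storing labelled examples and classifies a new
# input by returning the label of the most similar stored example.
# All data below is invented purely for illustration.

def euclidean(a, b):
    """Straight-line distance between two feature vectors."""
    return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5

def nearest_neighbour(training_data, point):
    """Return the label of the stored example closest to `point`."""
    _features, label = min(training_data, key=lambda ex: euclidean(ex[0], point))
    return label

# Toy training set: (feature vector, label)
training_data = [
    ((1.0, 1.0), "cat"),
    ((1.2, 0.9), "cat"),
    ((5.0, 5.1), "dog"),
    ((4.8, 5.3), "dog"),
]

print(nearest_neighbour(training_data, (1.1, 1.0)))  # → cat
print(nearest_neighbour(training_data, (5.2, 5.0)))  # → dog
```

The point of the sketch is that no rule for "cat" or "dog" is ever programmed: the behaviour emerges entirely from the data, which is both the strength and, as discussed below, a source of risk.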
AI alone is no guarantee of success
AI is already well advanced in some areas; in many others it is still in its infancy. Although AI will certainly change the world we live in, it is still unclear in what way exactly. Recent developments definitely hold enormous potential, e.g., for process automation and optimisation. AI-based systems can already be used today as tools to support decision-making on complex problems, e.g., in medicine, the energy sector, or the geosciences, to predict weather phenomena or earthquakes.
These opportunities, though, entail serious challenges and important ethical questions. They include increasing technological dependence, a lack of transparency and verifiability of the technology, as well as numerous risks to security, privacy and human autonomy. Processes are not necessarily more efficient or better wherever AI is involved. On the contrary, depending on the area of application, increased proneness to errors, inefficient processes and corresponding problems can be expected. Machines do not have intuition; they work with probabilities. This can lead to tensions in the interaction between humans and machines.
But even seemingly efficient AI involves enormous risks: in particular, the aggravation of social inequality through discriminatory algorithms, and dangers arising from the misuse of AI systems, such as manipulation through bots, are problem areas with serious risks. Furthermore, these systems tend to be "black boxes", i.e., they are very opaque in their structure and functionality. Embedded algorithms capable of self-optimisation exacerbate the already existing difficulty of verifying that they function properly.
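How discriminatory algorithms arise from skewed data, as described above, can be made concrete with a deliberately simplified sketch. The "model" below merely learns historical hiring rates per group and thresholds them; the records, group names and threshold are all invented for illustration, not taken from any real system.

```python
# Hypothetical illustration: a learning system reproduces the bias that is
# present in its training data. The model learns nothing but the historical
# hire rate for each group, then recommends hiring when that rate is high.
from collections import defaultdict

def train(history):
    """Learn per-group hire rates from (group, hired) records."""
    counts = defaultdict(lambda: [0, 0])  # group -> [hired, total]
    for group, hired in history:
        counts[group][0] += hired
        counts[group][1] += 1
    return {g: hired / total for g, (hired, total) in counts.items()}

def predict(rates, group, threshold=0.5):
    """Recommend hiring if the learned rate for this group exceeds the threshold."""
    return rates[group] > threshold

# Skewed historical data: group "A" was hired far more often than group "B".
history = [("A", 1)] * 8 + [("A", 0)] * 2 + [("B", 1)] * 2 + [("B", 0)] * 8

rates = train(history)
print(predict(rates, "A"))  # → True
print(predict(rates, "B"))  # → False: past discrimination becomes future policy
```

Real systems are vastly more complex, which is precisely the "black box" problem: when the learned rule is distributed over millions of parameters instead of two rates, the bias is just as present but far harder to detect and verify.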
How not to lose the human factor
It is thus essential to establish societal mechanisms that ensure responsible research, development and use of these technologies for the benefit of society. This requires profound knowledge about realistic opportunities, limits and threats.
Technology Assessment deals with developments in the realm of AI along these fields of tension. Crucial questions include: Where do the advantages outweigh the dangers for society, and where is it the other way around? What do learning machines mean for human learning? What are the consequences of automated decisions for security, privacy and autonomy? How can highly complex, AI-based systems remain comprehensible and verifiable for humans? What happens in the event of deficient or dangerous behaviour? To what extent are ethical behaviour and values programmable at all — or are tensions between machine and human autonomy ultimately "pre-programmed"? A number of ITA projects address these questions.