Feasibility study for the development of a self-assessment tool to identify social, legal and ethical risks for AI companies
What consequences does the use of new technologies such as algorithms and artificial intelligence have for society and companies? Numerous Austrian start-ups develop technologies and products based on methods of artificial intelligence. However, they often lack the know-how and methods to assess the consequences of the technologies they develop. This creates risks for society as well as for the start-ups themselves. The aim of this exploratory project is therefore to assess the basic feasibility of technology assessment for Austrian AI start-ups and to develop a prototype of a self-assessment tool.
Austrian AI start-ups develop innovative technologies and highly scalable business models. These can have a major impact on society and enterprises. At the same time, social awareness of the ethical aspects of products and services is increasing. The success of companies therefore also depends to a large extent on the trust of external stakeholders such as customers, investors, sponsors or future employees. If, for example, an AI start-up makes headlines because of an unintentional and unanticipated gender bias in the data and calculations it uses, it will carry a stigma with long-term negative effects on its stakeholder relations.
Technical systems based on the methods of artificial intelligence have gained enormously in importance in recent years. Large-scale funding programmes for the development of artificial intelligence indicate their growing significance for European industries. At the same time, the EU and national governments are focusing on issues such as 'trustworthy' AI, responsible research and innovation, and the ethical consequences of AI, publishing AI guidelines with the aim of developing AI systems that serve the common good.
Numerous methods exist for the systematic assessment of technology impacts. However, given the dynamic development of start-ups, their high scalability, and the limited resources available in the AI field, new methods are needed that take into account the corporate culture and the very specific organisational forms of this field. The aim of this exploratory project is to determine the basic feasibility of technology assessment for Austrian AI start-ups and to develop a prototype of a self-assessment tool. The broadest possible introduction of such a tool should enable AI start-ups to identify and anticipate their risk areas in social, legal and ethical questions. These measures have a confidence-building effect and stabilise relationships with all stakeholders. They also contribute to sensitising the Austrian AI start-up scene to possible consequences and risks.
Technology assessment mainly deals with the social, political and economic consequences of technology development beyond the company level. In practice, it is often difficult or even impossible to derive concrete actions at the company level from the guidelines available so far, because these are typically situated at the level of a sector or technology, not at the level of individual companies, products and services. The present project addresses this challenge by shifting the perspective to the level of the individual company.