A sociotechnical analysis of the Austrian Public Employment Service’s profiling system
In 2020, the Austrian Public Employment Service (AMS) will introduce an algorithm to help allocate subsidies for the training of jobseekers. The so-called "AMS algorithm" is controversial. On behalf of the Chamber of Labor, the ITA Wien and TU Wien are analyzing the technical specifics and social consequences of the system.
In Austria, the (semi-)automated profiling of jobseekers is becoming reality. Using statistics from past years, the system calculates the prospects of jobseekers on the labor market. Based on these forecasts, AMS clients are divided into three groups, which will receive varying resources for training. The algorithmic system searches for correlations between characteristics of jobseekers and successful employment. The characteristics include age, citizenship, gender, education, care responsibilities and health impairments, as well as past employment, contacts with the AMS, and labor market data for the place of residence. The purpose of the system is to invest primarily in those jobseekers for whom support measures are most likely to lead to reintegration into the labor market.
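The scoring-and-segmentation logic described above can be sketched as a simple thresholded probability model. This is a minimal illustration only: the feature names, weights, and cut-off values below are invented assumptions, not the actual AMS model.

```python
import math

# Hypothetical sketch of the profiling logic: a logistic score over
# jobseeker characteristics, mapped to one of three support groups.
# Weights and thresholds are illustrative, not the real AMS parameters.

def integration_score(features, weights, bias=0.0):
    """Logistic score in [0, 1]: estimated chance of reintegration."""
    z = bias + sum(weights[k] * v for k, v in features.items())
    return 1.0 / (1.0 + math.exp(-z))

def segment(score, low=0.25, high=0.66):
    """Map a score to one of three groups (cut-offs are assumptions)."""
    if score >= high:
        return "high prospects"    # less additional support foreseen
    if score >= low:
        return "medium prospects"  # main target group for training subsidies
    return "low prospects"         # referred to alternative support

# Invented example jobseeker and weights:
weights = {"age_over_50": -0.8, "health_impairment": -0.6,
           "recent_employment": 1.2}
jobseeker = {"age_over_50": 1, "health_impairment": 0, "recent_employment": 1}
print(segment(integration_score(jobseeker, weights)))  # → medium prospects
```

The sketch makes one point of the analysis concrete: group membership, and with it access to training resources, hinges entirely on which characteristics enter the score and how they are weighted.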
The so-called "AMS algorithm" is controversial. Critical voices see it as an algorithmic manifestation of discrimination on the labor market. Johannes Kopf, member of the AMS executive board, replies with a remarkable statement: "Our new assistance system takes this reality into account, but can of course not discriminate by itself." This reflects the myth of technologies as value-neutral tools, even though countless studies have shown that the design and use of technologies establish certain values, norms and interests in society. Moreover, big data analyses are surrounded by an aura of truth, objectivity and accuracy. Such beliefs are repeatedly reproduced in public debates without empirical proof. The question thus is whether such assumptions could play a role in the case of the AMS algorithm too.
The above considerations are the starting points for our analysis of the AMS algorithm from a socio-technical perspective. We investigate the following questions:
- How do values and social objectives (e.g. principles of equality, solidarity or increased efficiency) shape the AMS algorithm?
- Which career data are used for the algorithm and how do they shape it?
- Which forms of bias, discrimination and error rates play a role, and how are these taken into account? Is past discrimination objectified through automation and thus possibly consolidated?
- How will the 'success' of the algorithm be measured and will the algorithm be adapted based on the results of the test phase in 2019?
- What insight and rights of appeal do those affected by the system have? Is special training for AMS staff planned? What measures need to be taken to ensure the transparency and explainability of the algorithmic classification?
Our study draws on comparative studies and evaluations from other countries that have used algorithms in labor market policy. In order to examine basic effects, challenges and impacts, it further links these to research results from the field of "fairness in machine learning". We analyze the available documentation of data sets, models, materials, etc., and focus on whether past discrimination is likely to be perpetuated or can be remedied. This leads to conclusions on sociotechnical effects such as possible bias or error rates. A further focus on the social embedding of the algorithm in AMS practice investigates how much insight and agency different groups of people (AMS employees, jobseekers) are expected to have. The results of the overall study will lead to specific policy recommendations.
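One standard diagnostic from the "fairness in machine learning" literature is to compare error rates across demographic groups, for instance the rate at which jobseekers who actually found work were classified as low-prospect. A minimal sketch, with invented records and group labels:

```python
# Hypothetical group-wise error-rate check. All records, group labels,
# and predictions below are invented for illustration.

def false_negative_rate(records):
    """Share of actually re-employed jobseekers classified as low-prospect."""
    positives = [r for r in records if r["reemployed"]]
    misses = [r for r in positives if r["predicted"] == "low"]
    return len(misses) / len(positives) if positives else 0.0

records = [
    {"group": "A", "reemployed": True, "predicted": "high"},
    {"group": "A", "reemployed": True, "predicted": "high"},
    {"group": "B", "reemployed": True, "predicted": "low"},
    {"group": "B", "reemployed": True, "predicted": "high"},
]

for g in ("A", "B"):
    subset = [r for r in records if r["group"] == g]
    print(g, false_negative_rate(subset))  # A 0.0, B 0.5
```

A disparity like the one above (0.0 vs. 0.5) would mean that one group bears a disproportionate share of the model's errors, and with them the cost of withheld training resources, even if no group attribute is used explicitly.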
The project is carried out in interdisciplinary cooperation with the Centre for Informatics & Society at the TU Wien.