Below Human Error: Designing Automation with AI

Abstract for AI@Work – Reshaping Work

Anne Henriksen, PhD Fellow & Anja Bechmann, Professor, School of Communication and Culture, Aarhus University (Organization & Management)

Introduction

Automation has long been a phenomenon in industrial work (Hirsch-Kreinsen 2016), and it is now moving into knowledge work in the form of Artificial Intelligence (AI). This happens as organizations in various sectors adopt AI software systems to automate work practices along with the decisions and assessments that they involve. The general assumption is that such AI systems can reduce costs and perform tasks with greater accuracy and at a faster rate than human professionals can. In light of this development, we argue for the need for an in-depth analysis of how automation is created, and thus of how professionals are being outperformed by the Machine Learning (ML) models built into AI systems. This paper goes beyond the smooth and seamless processes of automation and studies, from the perspective of AI developers, how automation with AI is produced in practice:

How do AI developers sociotechnically design and construct automation, and with what aim, ideals, and strategies? 

We examine this question in a specific case of applied AI by drawing on empirical data from a qualitative case study. The case in question is a Scandinavian AI company that develops AI systems for healthcare and accounting. The goal of the developers in the case is to produce automation with a performance that is “below human error”. We analyze how developers strive to achieve this, what obstacles they encounter, and how they attempt to overcome these obstacles. Furthermore, we discuss the potential implications of producing automation with AI from a sociocultural perspective. In doing so, the article adds to state-of-the-art understandings of human-machine collaboration in automation systems (e.g. Woods, 1996; Woods, 2010) and knowledge production in AI (e.g. Bechmann and Bowker, 2019; Jaton, 2017).

Theory, Case Study & Methods

To examine the research question, we employ a framework of theories from the AI literature (e.g. Russell, Norvig, and Davis, 2010; Norvig, 1992; Goodfellow, Bengio, and Courville, 2016) along with theories from the STS literature (Star, 1990; Latour, 2003; Fujimura, 1987) that foreground the (invisible) work that goes into creating ideal cases for sophisticated technologies and making technologies applicable and doable. Methodologically, we employ a work-process- and practice-oriented approach (Bowker and Star, 1999) to account for the strategies used to overcome obstacles in order to reach the desired performance level.

Data collection in the qualitative case study was carried out in the Scandinavian AI company over a period of 10 months in 2019. The methods applied are participant observation, spontaneous interviews, and planned in-depth interviews (Hammersley and Atkinson, 1995) with AI developers, including but not limited to Machine Learning developers. Data has been collected

on the work performed by AI developers in relation to two AI systems: a system for healthcare under development, and a system for accounting launched in the spring of 2019. The differences between the two systems and their application areas allow us to study (1) how automation is done and practiced before as well as after the implementation and continuous stabilization of an AI system; and (2) how obstacles to creating automation with AI are similar or dissimilar from system to system.

Preliminary Findings & Conclusion

We empirically find that automation with AI is highly dependent on the knowledge of the human professionals that underlies the work practices subjected to automation. In the case, we see that the collective behavior of human professionals is used as the core knowledge ground in the ML models developed for the two AI systems. However, this use of human knowledge is also what creates one of the major obstacles to achieving automation with a performance level that is “below human error”. AI developers struggle to break with the closed learning cycle that occurs when using training data produced by the very same work practices that ML models are trained to outmatch. The developers in the case apply various strategies to overcome such knowledge shortcomings in the ML models, the major one being the detection of deviant behavior in ML models. Such behavior is interpreted as a proxy for an error in the core knowledge of the ML models, and thus as a proxy for a faulty work practice that should not be automated. In this way, the understandings that AI developers hold of, for example, errors are key in the design of automation and are therefore highly critical. These insights are an important supplement to critical algorithmic and AI studies, which often take place at a more generic level, primarily due to limited access to the work processes behind outcomes.
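To make the proxy-for-error strategy concrete, a minimal sketch of how confident model–human disagreement might be flagged is given below. This is a hypothetical illustration, not the company's actual pipeline: the function name, record format, and confidence threshold are all assumptions introduced for the example.

```python
# Illustrative sketch (assumed design, not the studied company's system):
# flag cases where a model trained on human decisions disagrees confidently
# with the human label. Such "deviant" predictions are treated as candidates
# for review rather than automation, mirroring the idea that a confident
# disagreement may point to a faulty human work practice in the training data.

def flag_deviant_cases(records, confidence_threshold=0.9):
    """records: iterable of (case_id, human_label, model_label, model_confidence)."""
    flagged = []
    for case_id, human_label, model_label, confidence in records:
        # Confident disagreement: possible error in the human-derived
        # "core knowledge" rather than a model mistake.
        if model_label != human_label and confidence >= confidence_threshold:
            flagged.append(case_id)
    return flagged

records = [
    ("a", "approve", "approve", 0.97),
    ("b", "approve", "reject", 0.95),   # confident disagreement -> review
    ("c", "reject", "approve", 0.55),   # disagreement, but low confidence
]
print(flag_deviant_cases(records))  # -> ['b']
```

The threshold encodes a design judgment about when deviance counts as a signal rather than noise, which is precisely the kind of developer understanding of "error" the abstract argues is critical.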

Words: 782 


References

Bechmann A and Bowker GC (2019) Unsupervised by Any Other Name: Hidden Layers of Knowledge Production. Big Data & Society 6(1): 2053951718819569.

Bowker GC and Star SL (1999) Sorting Things Out: Classification and Its Consequences. First paperback edition. Inside Technology. Cambridge, Massachusetts; London, England: The MIT Press.

Fujimura JH (1987) Constructing 'Do-Able' Problems in Cancer Research: Articulating Alignment. Social Studies of Science 17(2): 257–293.

Goodfellow I, Bengio Y, and Courville A (2016) Deep Learning. Adaptive Computation and Machine Learning. Cambridge, Massachusetts: The MIT Press.

Hammersley M and Atkinson P (1995) Ethnography: Principles in Practice. 2nd ed. London; New York: Routledge.

Hirsch-Kreinsen H (2016) Digitization of Industrial Work: Development Paths and Prospects. Journal for Labour Market Research 49(1): 1–14.

Jaton F (2017) We Get the Algorithms of Our Ground Truths: Designing Referential Databases in Digital Image Processing. Social Studies of Science 47(6): 811–840.

Latour B (2003) Science in Action: How to Follow Scientists and Engineers through Society. 11th printing. Cambridge, Massachusetts: Harvard University Press.

Norvig P (1992) Paradigms of Artificial Intelligence Programming: Case Studies in Common Lisp. 1st edition. Morgan Kaufmann.

Russell SJ, Norvig P, and Davis E (2010) Artificial Intelligence: A Modern Approach. 3rd ed. Prentice Hall Series in Artificial Intelligence. Upper Saddle River: Prentice Hall.

Star SL (1990) Power, Technology and the Phenomenology of Conventions: On Being Allergic to Onions. The Sociological Review 38(1_suppl): 26–56.

Woods DD (1996) Decomposing Automation: Apparent Simplicity, Real Complexity. In: Parasuraman R and Mouloua M (eds) Automation and Human Performance: Theory and Applications. Routledge, pp. 3–17.

Woods DD (ed.) (2010) Behind Human Error. 2nd ed. Farnham; Burlington, VT: Ashgate.