Narrow Intelligence: a socio-technical perspective of the narrowness of artificial intelligence at work

Mohammad H. Rezazade Mehrizi 

Vrije Universiteit Amsterdam 

KIN center for digital innovation 

m.rezazademehrizi@vu.nl 

Despite bold claims and visionary hopes, artificial intelligence (AI) still has a
limited practical presence at work. In particular, many AI applications are best
described as “narrow intelligence”: algorithms trained to tackle one very specific
task, compared to the wide range of interconnected tasks in regular organizational
practice (Topol 2019; Wong et al. 2019). To understand how AI enters the work of
professionals and organizations, we need to better understand the different aspects
of this narrowness and their (unintended) consequences for the work of individuals
and organizations.
We empirically explored this question through ethnographic observations,
systematic analysis of AI developments, and interviews with practitioners in the
context of ‘medical diagnosis’, particularly ‘radiology work’. Our data, collected
through 40 hours of observation, 80 interviews with radiologists, and an analysis of
250 AI applications in the domain of radiology, enabled us to identify three types of
narrow intelligence: ‘technological narrowness’, ‘attentional narrowness’, and
‘organizational narrowness’.
Technological narrowness. First, and despite the expectation of the majority
of radiologists that AI offers insights into ‘most’ of their medical decisions, more
than 80% of the existing AI applications work with images taken from only one
specific machine (e.g., only MRI), examine only one specific body organ (e.g., only
the lung), and offer insights regarding only one specific medical question (e.g.,
“whether there is any lung nodule”). To put this into perspective, a simple chest
scan can be clinically checked for more than 75 medical questions, of which only
one concerns the presence of lung nodules. Only recently have a few AI applications
attempted to integrate several algorithms into more comprehensive suites, yet
these applications still work with only one specific type of input and body organ.

Attentional narrowness. Second, our observations of radiologists working with
AI applications show that their attention can become overly focused on the few
specific medical questions that the algorithm examines, leading them to overlook
the wider range of medical questions that they are supposed to check and report
when examining a patient’s image. To illustrate, in an observation of a senior
radiologist working with an AI application developed for ‘detecting lung nodules’,
we observed that he spent more than two-thirds of his case-examination time
checking the results of the AI application regarding lung nodules, which is only one
of the 35 medical issues that a radiologist typically has to check and report.
Organizational narrowness. Finally, zooming out on the way the algorithms are
integrated with other information systems and into the radiology workflow, we
identified the organizational narrowness of AI applications: AI applications are used
for very short periods of time, with low regularity, and by a small portion of the
users. Our observations across three radiology departments show that their AI
applications are practically relevant for less than 5% of the total cases that a
radiologist examines during a typical day, and these cases are often assigned to a
special group of radiologists who are specialized in a specific domain, such as
neuro-radiology. Hence, organizationally, limited experience is developed with the
algorithms, working routines emerge only occasionally, and the integration of the
algorithms with daily work remains sporadic.
Considering these three types of narrowness, we discuss how they can influence
the way professionals develop practical knowledge (e.g., becoming overly narrow
and specialized) and the way knowledge work is structured within organizations
(e.g., the risk of over-specialization and fragmentation of knowledge work). We
discuss the theoretical implications of our findings for understanding the role of AI
at work, as well as their implications for the design and implementation of AI
systems in general, and for knowledge work in particular.

References
Topol, Eric. 2019. Deep Medicine: How Artificial Intelligence Can Make Healthcare
Human Again. New York: Basic Books.
Wong, S. H., H. Al-Hasani, Z. Alam, and A. Alam. 2019. “Artificial Intelligence in
Radiology: How Will We Be Affected?” European Radiology 29 (1): 141–43.