Does machine learning have unintended organization-level consequences?
Brian T. Pentland
Michigan State University
Auckland University of Technology

Krishna Pothugunta
Michigan State University
In this paper, we theorize about the unintended consequences that could occur when firms use machine learning (ML) to improve decision-making and operations. We focus on one particular class of ML technology: neural networks (NN) with on-going adaptive training using labeled data. These kinds of NNs are trained for specific tasks. For example, a firm might train a NN to assess the credit-worthiness of customers. Then, once the NN is in use, the firm will continue to train the network with additional data about new customers. In this way, the network adapts and improves over time. Figure 1 shows the basic model of what might happen as an organization uses more of these task-specific NNs.
Figure 1: How ML utilization can result in organizational inertia
1) ML Utilization. We begin by observing that as ML is used for a task, the performance of that task is likely to improve; if task performance were not improving, we assume the ML would not be put into service. We focus on neural networks (NN) trained through supervised learning (with labeled data), but the general argument probably applies to other classes of automated learning algorithms as well. In our model, ML Utilization simply refers to the number of such NNs in use.
2) On-going training. As operations continue, the NN will continue to be fed a stream of labeled data. The NN will use this stream of data to continue adapting and improving for this specific task in this specific context.
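The on-going training described in points 1 and 2 can be sketched as a simple loop (a minimal illustration, not a claim about any particular firm's system: a one-layer logistic classifier stands in for a task-specific NN, and the simulated credit decisions stand in for the stream of newly labeled operational data):

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

class OnlineClassifier:
    """A task-specific model updated continuously as labeled data arrives."""

    def __init__(self, n_features, lr=0.1):
        self.w = np.zeros(n_features)
        self.lr = lr

    def predict_proba(self, x):
        return sigmoid(x @ self.w)

    def update(self, x, y):
        # One gradient step per new labeled record: this is the
        # "on-going adaptive training" in the model.
        p = self.predict_proba(x)
        self.w -= self.lr * (p - y) * x

# Simulate a stream of labeled credit decisions: the (hidden) rule is
# y = 1 when a linear score of the customer's features is positive.
true_w = np.array([1.5, -2.0, 0.5])
model = OnlineClassifier(n_features=3)
for _ in range(2000):
    x = rng.normal(size=3)
    y = float(x @ true_w > 0)
    model.update(x, y)

# After training on the stream, the model agrees with the hidden rule
# on most new cases -- task performance improves with use.
test_x = rng.normal(size=(500, 3))
acc = np.mean((sigmoid(test_x @ model.w) > 0.5) == (test_x @ true_w > 0))
```

Note that the loop never terminates in practice: as long as the task runs, the model keeps consuming labeled records, which is what creates the data requirements discussed below.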
3) Competency traps. It is difficult to imagine a more precise example of what March (1991) referred to as a “competency trap.” The NN is trained on past data, with the assumption that it will work in the future. Strictly, a competency trap refers to the false belief that what worked in the past will work in the future. However, the effect we hypothesize should occur even if this belief is true (that is, if the NN is working as expected).
4) Data requirements. On-going training of an NN requires a stream of labeled data that is cleaned and prepared in the same way as the original training data. For each new NN in use, the information infrastructure and routines of the organization must be capable of providing this data in a timely manner.
5) Interdependence. Sequential interdependence will arise between data sources (e.g., transactions) and the NN training routines. If NN training routines share any data elements (e.g., information about contextual factors that help inform a decision), pooled interdependence will arise between those NNs. This problem is combinatoric and dependencies can be difficult to detect (Malone et al. 1999).
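The pooled interdependence in point 5 can be made concrete with a small sketch (the pipeline names and data elements are hypothetical): each NN training routine declares the data elements it consumes, and a dependency exists wherever two routines share an element. Detecting them requires checking every pair, which is where the combinatoric growth comes from.

```python
from itertools import combinations

# Hypothetical deployed NNs and the data elements their on-going
# training routines consume.
pipelines = {
    "credit_scoring":   {"customer_id", "payment_history", "region"},
    "churn_prediction": {"customer_id", "usage", "region"},
    "fraud_detection":  {"customer_id", "transaction_log"},
}

def pooled_dependencies(pipelines):
    """Return every pair of routines that shares at least one data element."""
    deps = {}
    # n*(n-1)/2 pairwise checks: the problem is combinatoric in the
    # number of deployed NNs.
    for a, b in combinations(sorted(pipelines), 2):
        shared = pipelines[a] & pipelines[b]
        if shared:
            deps[(a, b)] = shared
    return deps

deps = pooled_dependencies(pipelines)
```

In this toy case every pair is coupled through `customer_id`, so a change to how that element is cleaned or defined propagates to all three training routines at once.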
Organizational inertia arises from two mechanisms
First, each NN creates an automated competency trap. We expect that NN applications will work well and improve over time. To the extent that they do, human participants may come to trust and rely on them more over time. This increases inertia by decreasing both the willingness and the ability to change.
Second, increased usage of NN applications that incorporate on-going adaptive training will increase the need for clean, reliable flows of information across the organization, coupling its routines and data sources more tightly. Increased interdependence tends to reduce flexibility for the organization as a whole (Levinthal, 1997). Nearly all NN research is conducted on single tasks (e.g., recognizing faces), so the quintessential organizational problem of interdependence between tasks is not taken into consideration. The implicit assumption is that if one NN is good, then more NNs are better.
Another way to approach this topic is to conceptualize organizations as searching on a rugged landscape for solutions to their economic and competitive problems (Levinthal, 1997; Levinthal and Warglien, 1999). However, each task-specific NN application is searching for the optimal solution on a small, task-specific piece of that overall landscape. A NN may become extremely good at determining which customers should get credit. However, it cannot consider the larger landscape: should we be in the business of offering customers credit in the first place? On a rugged landscape, local search (task level) will not lead to a global optimum (organization level).
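The local-versus-global search argument can be illustrated with a toy rugged landscape (a crude stand-in for Levinthal's NK landscapes, assuming a maximally rugged random fitness over bitstrings): hill-climbing from most starting points halts at a local optimum below the global one.

```python
import random

random.seed(42)
N = 10  # each state is an N-bit configuration of task-level choices
fitness = {s: random.random() for s in range(2 ** N)}  # rugged: uncorrelated

def neighbors(state):
    # Local moves: flip one bit, i.e., change one task-level choice.
    return [state ^ (1 << b) for b in range(N)]

def local_search(start):
    """Hill-climb: accept single-bit moves only while they improve fitness."""
    current = start
    while True:
        best = max(neighbors(current), key=fitness.get)
        if fitness[best] <= fitness[current]:
            return current  # local optimum: no neighbor is better
        current = best

global_opt = max(fitness, key=fitness.get)
# Count starting points whose local search fails to reach the global optimum.
stuck = sum(local_search(s) != global_opt for s in range(0, 2 ** N, 17))
```

On this landscape, nearly every starting point ends at some local peak rather than the global one, which is the task-level analogue of a NN optimizing credit decisions without ever asking whether to offer credit at all.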
While machine learning fosters efficiency at the task level, it may tend to foster inertia and a lack of flexibility at the organizational level. Organizational inertia is present when the speed of change in the core features of an organization is lower than the rate of environmental change. Each of the side effects of machine learning that we have identified (competency traps and increased interdependence) is directly associated with organizational inertia and an increased risk of organizational failure, especially in turbulent environments (Benner & Tushman, 2003; Levinthal, 1997). Of course, in a perfectly stable environment, the issues raised here might not be problematic.
Benner, M. J., and Tushman, M. L. 2003. “Exploitation, Exploration, and Process Management: The Productivity Dilemma Revisited,” Academy of Management Review (28), pp. 238-256.
Levinthal, D. A. 1997. “Adaptation on Rugged Landscapes,” Management Science (43), pp. 934-950.
Levinthal, D. A., and Warglien, M. 1999. “Landscape Design: Designing for Local Action in Complex Worlds,” Organization Science (10), pp. 342-357.
Malone, T. W., Crowston, K., Lee, J., Pentland, B., Dellarocas, C., Wyner, G., Quimby, J., Osborne, C., Bernstein, A., Herman, G., and Klein, M. 1999. “Tools for Inventing Organizations: Toward a Handbook of Organizational Processes,” Management Science (45:3), pp. 425-443.
March, J. G. 1991. “Exploration and Exploitation in Organizational Learning,” Organization Science (2:1), pp. 71-87.