Executive Summary
Markets, bureaucracies, and machines are inventions designed to process information at speeds, in quantities, and with accuracies that surpass human capabilities. In all three systems this information processing is made possible by reducing reality to narrow inputs (e.g., bits, prices, entries on bureaucratic forms) and then detecting patterns and pattern conformance from these inputs. Recognizing this commonality, this paper treats these systems as members of a set of artificial intelligences and uses the experience with markets and bureaucracies to suggest descriptive, predictive, and prescriptive insights about machine intelligence.
The resulting insights complement and sometimes modify observations derived by comparing machine intelligence with the “natural” intelligence of the human mind. They normalize machine intelligence—it is not as unique as the human-machine contrasts alone suggest. Moreover, considering markets, bureaucracies, and machines together highlights that many concerns about machine intelligence are about how that intelligence amplifies the powers of bureaucracies and markets. Finally, this approach illuminates ways in which machines, bureaucracies, and markets function as an ecosystem, exchanging data, co-evolving, posing analogous challenges of control, and intensifying and complicating requirements for regulation.
Section 1 describes inadequacies in prevalent definitions of intelligence, “artificiality,” and the composite concept of “artificial intelligence,” then proposes that a different approach to these concepts can broaden and deepen understanding of machine intelligence. It describes how machines, bureaucracies, and markets can usefully be regarded as a set of artificial intelligences invented to complement the limited abilities of individual human minds to discern patterns in large amounts of data. This section identifies the foundational reductionist and correlative character of these three systems.
Section 2 applies this perspective to show that the “artificiality” and related characteristics often described as singularly disquieting and alienating attributes of machine intelligence are better understood when considered alongside the histories of bureaucracies and markets. When bureaucracies and markets were introduced, they too were seen as artificial and alienating. The fact that over time these concerns dissipated suggests that artificiality is predominantly a marker for unfamiliarity and not likely to endure as a concern about machine intelligence.
Section 3 addresses the widespread concerns about machine opacity and unpredictability. While noting that stock markets pursue transparency through methods such as standardized accounting principles and required disclosures, the discussion points to widespread acceptance of opacity and unpredictability in these markets. It observes that if markets were predictable, we would not need them. Moreover, in general, intelligence is an emergent attribute. If a system is not opaque, it is commonly described as “just computation.” Drawing on human experiences with markets, bureaucracies, and machines, this section distinguishes four contexts in which opacity ranges from completely acceptable to unacceptable and suggests that the design of machine intelligence and policies to control that intelligence should be focused accordingly.
Section 4 examines dependency on data as a shared attribute of machines, markets, and bureaucracies. Though the shibboleth that “data is the new oil” is commonly recited with respect to machines, data is also the lifeblood of bureaucracies and markets. All three systems depend on obtaining, processing, and thwarting distortion (or “poisoning”) of information. However, their most characteristic modes of obtaining information differ: bureaucracies have historically secured information by command, markets by seduction, and machine systems by simulation or scavenging. As a result, complex patterns have emerged in which these systems share and compete for data. These patterns will change as simulation, the construction of virtual worlds, and the flow of information from the Internet of Things become more significant as sources of data for machines. Understanding this and other likely developments is helped by seeing that machines, bureaucracies, and markets function as an ecosystem, each co-evolving with the others.
Section 5 explores the nature of this ecosystem and shows how, because of their commonalities, the commingling of markets, bureaucracies, and machines amplifies characteristics in each. In addition, this commingling gives rise to hybrids such as Uber that combine these intelligences in new ways. The discussion observes that, when introduced, all three systems were often described as reliable, value-free processes for efficient allocation and decision-making. However, experience with markets and bureaucracies shows that as failures occur and values underpinning these systems are revealed and challenged, efforts at regulation result. This section argues that this history is a likely harbinger of crisis and regulation of machine intelligence. However, while bureaucracies and markets had no competition as artificial intelligences at the time of their introduction, the future of machine intelligence will be affected by the pre-existing power of bureaucracies and markets. In fact, the most immediate and important concerns about machine intelligence will not be about machine intelligence, but rather about how bureaucracies and markets utilize machine intelligence and then about how society regulates the three systems together.
In sum, this paper argues that the presently dominant tendency to compare machine intelligence to the intelligence of the human mind narrows and distorts understanding. Present understanding should be complemented by consideration of bureaucracies and markets. Widening the aperture of comparison opens an array of possibilities for present insight and future investigation.