
Stop Scaring the Kids!



Of two approaches to design—1) reacting to the urgent and 2) responding to the important—this is about the second:

Please stop anthropomorphizing plumbing systems! Frankly, you’re scaring the kids! We need to talk.

Computer systems are plumbing systems for electrons and photons, behaving much as plumbing systems do for water and waste. During my training to become a guided missile technician in the Navy, I was told to use my understanding of plumbing systems to understand the more complex electronic systems I would be working with. It worked. Both kinds of systems were logical, following the laws of physics. I was soon troubleshooting, fixing and maintaining complex electronic systems that at first had appeared to be incomprehensible.

‘Artificial intelligence’ systems, like plumbing systems, have proven to be immensely useful in the lives of human beings. Despite the ever-growing hype around artificial intelligence systems—AI—these systems are not sentient. They are actually lifeless. They are not composed of living matter. Physicists and biologists are very clear about the difference between dead matter and living matter. Computers are not organic brains or nervous systems, and they are certainly not minds. They can be said to be ‘like’ (analogous to) some aspects of biological systems, but they do not attain any higher order of human capacity. Some AI researchers have begun to try to provide a more balanced perspective on their technology, but obviously much more needs to be done.

Computer systems do not think, have consciousness, make judgments, display intelligence, engage in creativity, learn, communicate knowledge or wisdom, or have intuitions or insights. They do not harbor grudges, eat too much, or look at themselves in the mirror. They cannot make an ‘ought’ from an ‘is’ or navigate ‘epistemic freedom’. They do not have ‘free will’. They are composed of dead matter—silicon, plastic, metal, rubber...

 
The Banana Junior 6000 Computer

The integrated actions of these functional assemblies of dead matter are impressive, even awe-inspiring, and they have proven to be invaluable assets to the human enterprise. The contributions of these complex computer assemblies to human endeavors have proven essential to improving the human condition, increasing well-being and promoting progress. However, this assistance has not come from the agency of another human-like being.

Customers and consumers are the purported beneficiaries of AI, while the electronic plumbers are the technical designers behind it. Artificial intelligence forms an interface between the two groups while serving the interests of its owners.


AI Interface

AI digital assemblies are animated by human ‘design intelligence’ that creates rules of sequencing—algorithms—which, on occasion, display a functional capacity called artificial intelligence. But the outcomes of animated computer algorithms are the direct results of the design decisions made by electronic plumbers—their visions of what is desirable—on behalf of owners, not of any independent machine ‘intelligence’.
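To make the point concrete, here is a minimal sketch (the scenario, thresholds and phrasings are invented for illustration, not drawn from any real product): a toy ‘assistant’ whose seemingly attentive behavior is nothing but sequencing rules a designer wrote down.

```python
# A toy 'smart' thermostat assistant. Every response below is a human
# design decision encoded as a rule; the machine neither judges nor intends.

def thermostat_reply(temp_f: float) -> str:
    # Each threshold and each phrase is the designer's vision of what is
    # desirable. Change the rules and the apparent 'personality' changes.
    if temp_f < 60:
        return "It's chilly. Turning the heat on."
    elif temp_f < 75:
        return "Temperature looks comfortable. Holding steady."
    else:
        return "It's warm in here. Starting the fan."

if __name__ == "__main__":
    for reading in (55.0, 70.0, 82.0):
        print(f"{reading:.0f}F -> {thermostat_reply(reading)}")
```

Statistical systems are more elaborate, since their rules are fitted from data rather than written by hand, but the chain of accountability is the same: the objectives, the data and the thresholds all trace back to human designers and owners.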

We—society in all of its manifestations—need to have a serious conversation about who gets to formulate the design briefs for AI systems. We need to engage in serious conversations about the development of performance specifications for AI systems and the concomitant prescriptive specifications as they relate to stakeholders, customers, future generations et al. We need to talk about who loses and who gains in the production and actualization of AI systems. We need to talk about how these systems are designed and for whom. We need to talk about who is responsible and accountable for unintended consequences. We need to talk realistically about the true nature of artificial intelligence systems.


AI is being hyped using terms and meanings that do not realistically apply to it. This mode of hyping—a form of marketing on steroids in the computer age—was more tolerable in the days of vaporware. Back then, it was understood that this kind of hype was essential for attracting money—investments or grants that would facilitate selling the ideas undergirding high-tech services or the hardware needed to develop AI. Today, however, AI has created enough wealth to afford to let serious discussions emerge about its realistic nature and potential. It will be difficult to initiate such discussions given the power, prestige, wealth and control that come from exploiting AI’s selected potentials—but the conversations need to take place.

The hype around AI has not supported the emergence of discussions about serious issues concerning the relationship between human agency and technology, or this technology’s impact on individuals, societies and the world at large. Although we are hearing more and more apologies from AI billionaires for the damage their technologies have inflicted, what we need right now are more serious discussions with them—without the hype. We need to spend less time and energy dealing with apologies from successful technological innovators about the damage their products have done unintentionally, and more time engaged in critical discussions about how to design AI systems that work for the benefit of living matter—e.g., human beings. In order to have these serious conversations, we first need to stop attributing the qualities and attributes of human nature to dead matter.

A branch of my ancestral tree includes people who, at one time, believed that natural features and forms were dwelling places for spirits. The enlightened world forcibly dissuaded them from their set of spiritual beliefs. Early missionaries zealously offered a more realistic perspective that replaced animism with rationalism. Anthropomorphizing rocks, trees, or mountains was considered pagan and uncivilized. Where are the rational missionaries in today’s digital communities of artificial intelligence—offering objective and reasoned discourse on why one should not anthropomorphize dead matter such as electronic plumbing systems?

The challenge for the champions of anthropomorphized AI is the same as that shared by anyone working with complicated and complex systems—including human systems. The analysis of a complex system reveals its components and the relationships, links, and connections between components. But given that the emergent whole is often something new to human experience, synthesis—in contrast to analysis—is more difficult to describe and name. A complex machine often shows synthetic qualities and activities that are extremely difficult to name, because there are no meanings based on past experience that apply. Thus, the names and meanings of known and familiar activities are borrowed from other domains. Naming and framing new emergent realities is an extremely difficult and powerful challenge. Historically, we have leaned on compounding things we know from experience to make sense of things that are new to us.

For instance, Native Americans living on the Great Plains called the horse a ‘sun dog’ when it first appeared in their landscape. They then called the steam engine, used by early railroads, an ‘iron horse’. A familiarity and understanding of ‘horseness’ was used by Europeans as well. The early automobile was called a ‘horseless carriage’ and the first bicycle was called a ‘dandy horse’. After some experience with these new things, the borrowed horse-related descriptors were dropped and more appropriate terms and meanings were applied to them. It is time for promoters of artificial intelligence to do the same.

An ask:
Please don’t equate human capacities with the emergent behaviors of complex technologies and functional assemblies. Be real about what AI is and what its potential is. Help promote realistic conversations around what would be desirable, prudent and possible for artificial intelligence to become.

Freud called humans ‘prosthetic gods’:

Man has, as it were, become a kind of prosthetic God. When he puts on all his auxiliary organs he is truly magnificent; but those organs have not grown on to him and they still give him much trouble at times.
—Sigmund Freud, Civilization and Its Discontents (1930)

Remember that a robot’s visions are actually our own visions. The robots—AI—are our prostheses; they are a part of us, not apart from us.



 

Comments

  1. I agree that there is a lot of hype and mystery about AI that causes extreme arguments. I suggest learning how it works and then using it on a practice set of data to understand what is happening. I was curious about AI, learned it online and played with some code. To explain it to myself in language I can understand (and to keep it real, because you need "good" data sets and you need to really think about how AI can help), I used philosophy from Hegel’s/Fichte’s dialectic: thesis, antithesis, synthesis. Another way to look at AI is to simplify it to what it is actually doing. For engineers, I would say it reminds me of curve fitting: interpolating values from engineering tables that don’t have the exact values required, iterating rebar sizes for concrete on the computer, or computing the deflection of beams when there are more unknowns than knowns (a sketch of this analogy follows below). You still have to use judgment, see your results in action and check with others in the field. I’m interested in how to use AI to conduct research and how to make reliability and validity claims about the results; we can’t accept the results as the right answer, so we need a way to validate AI’s claims. I haven’t seen anything about how to use AI for research, only claims that AI creates business solutions, and I haven’t really noticed any earth-shattering advances in the business world because of AI.
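A minimal sketch of that curve-fitting analogy, assuming NumPy is available (the table values are invented for illustration):

```python
# Fit a curve to sparse 'engineering table' values, then interpolate a
# value the table doesn't contain -- the commenter's analogy for what
# statistical AI systems do with their training data.
import numpy as np

# Hypothetical table: load (kips) vs. measured beam deflection (inches).
loads = np.array([0.0, 5.0, 10.0, 15.0, 20.0])
deflections = np.array([0.00, 0.12, 0.26, 0.43, 0.61])

# Fit a quadratic. The model 'knows' nothing beyond these five points.
model = np.poly1d(np.polyfit(loads, deflections, deg=2))

print(f"Predicted deflection at 12.5 kips: {model(12.5):.3f} in")
```

As with any fit, the prediction is only as trustworthy as the data behind it, which is exactly why the results still demand human judgment and independent validation.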


