The Allure of Technology
The 18th century ushered in the industrial age and with it came an undetected dogma that has permeated our society and culture. The dogmatic belief was this: technology is synonymous with progress. This is the myth of the modern society. As technology develops and advances, it invariably provides solutions to age-old problems, and our future gets brighter as a result. Carried on the wings of technology, we advance further towards our true humanity. So sayeth the myth of our age.
By and large, this myth held true through the 18th, 19th, and early 20th centuries, in large part because of technology’s ability to pull us out of subsistence living and into a life of plenty. But now, in the 21st century, technology no longer brings us closer to an ideal humanity; it only adds abundance on top of overabundance, consumption on top of overconsumption. For present-day users of technology, this is a vexing obstacle. For professionals who code, design, engineer, manufacture, integrate, and use technology in their work, the problem is even more acute. To solve problems, to make work easier, to make a company more profitable, to enhance a user experience: technology qua technology is no longer a surefire way to achieve these good ends. Despite noble motives, technology poses a growing threat of creating a world full of technological emptiness or, worse, advances that are downright hostile to human thriving. The saying “We were promised hoverboards and instead got 140 characters” points us to this reality. We need not look any further than our own daily experiences to see the hollow, superfluous applications and uses of our smartphones. Is this the “progress” promised by technology? Perhaps the allure of technology, and its danger, is best summarized by MIT professor and AI pioneer Joseph Weizenbaum:
“Science [Technology] promised man power. But as so often happens when people are seduced by promises of power, the price exacted in advance and all along the path, and the price actually paid, is servitude and impotence. Power is nothing if it is not the power to choose.” (“Computer Power and Human Reason,” Joseph Weizenbaum, 1976, W. H. Freeman & Co.)
Despite the manifest deficiencies and wanton use of technology, voices still ring out for an optimistic future, a future predicated on the progress of technology. You’re already familiar with the buzzwords: generative AI, virtual and augmented reality, autonomous vehicles. These dazzling, shiny, bright technologies possess an allure that is difficult to resist. They are the undisputed technologies of the future because they are the nascent technologies of the present. However, if we look to the future with optimism, it should only be because of an increase in human flourishing, not simply because of an increase in technology. Human flourishing will be achieved not because of technological progress but despite it. To ensure the thriving of individuals, communities, and societies, rules and principles ought to exist to govern and limit the development, design, implementation, and use of technology.
Section 1: The Automation Paradox, Algorithms, and a Dream Unfulfilled
At the heart of the technological allure lies a promise. Through some technology, app, automation, software, or algorithm, we will be able to solve problems and perform the activities of our personal and professional lives more efficiently and effectively. Yet, this dream remains largely unfulfilled, the promise broken. The attribute of technology that embodies this promise best is automation.
Automation means letting a machine do the work for you: solve an advanced math problem with a calculator, heat your home with a furnace, run an advanced computer simulation of a manufacturing facility. There is nothing inherently bad or wrong about automation. But automation always and necessarily entails handing full control, including and especially decision making, over to a technology. Whether it’s a steam engine or a computer program, automation is the act of machines taking the place of humans. When automation is employed, greater efficiency and safety are usually achieved than by relying on individual actors alone. But these results are achieved only when it works. When automation fails to perform correctly, it creates new problems and challenges that humans are not prepared or equipped to manage. In his book “The Design of Everyday Things,” Don Norman explains what he calls the Paradox of Automation:
“When the automation works, the tasks are usually done as well as or better than by people; moreover, it saves people from the dull, dreary routine tasks, allowing more useful, productive use of time, reducing fatigue and error. But when the task gets too complex, automation tends to give up. This, of course, is precisely when it is needed the most.” (“The Design of Everyday Things,” Don Norman, 2013, Basic Books, p. 213)
Norman goes on to discuss many of the other issues surrounding automation, specifically the propensity for automation to fail abruptly and without warning, and for individuals to have insufficient knowledge to deal with the crisis when the automation stops working. Indeed, as automation gets better and failures become rarer (though never completely eliminated), human actors will have less experience and situational knowledge than at any time in the past. This means that, increasingly, when it’s time to intervene, we won’t know how.
One glaring example of the Paradox of Automation is the Boeing 737 MAX crashes. Rather than use technology to craft a better plane, Boeing’s engineers produced a design with known aerodynamic flaws. The poor design was known, but it was cheap. To overcome it, software (MCAS, the Maneuvering Characteristics Augmentation System) was installed to automatically compensate for the plane’s bad aerodynamics: the design caused the plane’s nose to pitch up, and the software automatically pushed the nose back down.
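To make the failure mode concrete, here is a deliberately simplified sketch in Python. It is not Boeing’s actual MCAS logic; the threshold, trim step, and function names are illustrative assumptions. It shows how an automation loop that trusts a single sensor, with no cross-check and no deference to the pilot, turns one stuck reading into relentless nose-down commands.

```python
# Deliberately simplified sketch of the failure mode -- NOT Boeing's
# actual MCAS code. It shows how an automation loop that trusts a
# single sensor can turn one stuck reading into endless corrections.

AOA_LIMIT_DEG = 15.0   # hypothetical angle-of-attack threshold
NOSE_DOWN_STEP = 2.5   # hypothetical trim applied per control cycle

def trim_command(sensor_aoa_deg: float) -> float:
    """Nose-down trim for one cycle, based on a single sensor."""
    # No cross-check against a second sensor or the pilot's inputs:
    # a sensor stuck at a high reading keeps triggering nose-down trim.
    return NOSE_DOWN_STEP if sensor_aoa_deg > AOA_LIMIT_DEG else 0.0

# A failed sensor pinned at 25 degrees commands nose-down trim on
# every cycle, regardless of the aircraft's real attitude.
total_trim = sum(trim_command(25.0) for _ in range(10))
print(total_trim)  # 25.0 degrees of cumulative nose-down trim
```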
When the causes of the 2018 Lion Air Flight 610 and 2019 Ethiopian Airlines Flight 302 crashes were determined, the failure of this automation was ruled a key contributing factor. Moreover, despite years of experience and training, the pilots were unable to respond adequately in the crisis because they had never encountered such an issue. Between the two flights, the Paradox of Automation took 346 lives.
One person who has appreciated this paradox is the American lawyer, consumer rights advocate, and former presidential candidate Ralph Nader. Even before his grandniece was killed on Ethiopian Airlines Flight 302, Nader railed against what he calls the “arrogance in the algorithm.” As far back as 2016, he called for restraint on the hubris inherent in some of today’s most ambitious technological undertakings: a delusion of control, the belief that through some technological advancement we can finally achieve total control. Warning about algorithms and driverless vehicles in 2016, Nader said:
“Too many complex variables cannot be anticipated with driverless cars, especially when the vast majority of cars on the road are traditional driver-directed cars, who behave in a whole variety of ways that aren’t predictable. Not to mention bicycles and puddles and potholes, and shiny sides of trucks, that’s why I call it the arrogance of the algorithm.”
The dream of technology as the ultimate means of control, safety, and human flourishing remains and will remain unfulfilled. If anything, our technological history should clearly direct us to curb our enthusiasm for technological advances. Moreover, it should temper our beliefs regarding the limits of technology and its uses. The Paradox of Automation should compel us to seek rational principles and rules for the creation and use of technology and not pursue “advancement” and “progress” with reckless abandon.
Section 2: Technology as Ideology
To consider technology responsibly, we must go beyond the risks of malicious and negligent use. Technology also creates and influences numerous dynamics that affect individuals, communities, and societies. One of the most important of these is the issue of technological legacy and maintenance, which will befall future generations. This consideration is best expressed by the question: “How hard should future generations have to work to circumvent the technologies we’ve invented today?”
Humans evolve, needs change, priorities shift, and societal norms and taboos alter. When we create and use technology, whether at a personal or commercial level, we are not just implementing a neutral device. We are integrating our ideas, priorities, and values into the world, or at the very least making decisions about the use of technology based on those same attributes. Through technology we create a world that reflects our values.
To act responsibly, we must appreciate this reality with an eye toward the future: not merely trying to predict what future generations will want or need, but allowing them the flexibility to choose for themselves based on their own values and priorities, unencumbered by the technological legacy of yesteryear.
An innocuous anecdote may suffice as an example. I was recently at the hospital for the birth of my son. Because it was the middle of the night and I had been up all day, I had a headache and asked the attending nurse for a Tylenol. The nurse, however, could not give me one because the pharmacy wouldn’t allow drugs to be administered to non-patients. Despite being a licensed nurse, and despite the request being for an over-the-counter medication, she was constrained from acting in the way that (I suppose) she would have wanted to act. Technology is not merely an electronic device, mind you, but the collected knowledge of current best practices, standards, and procedural norms. This small, simple vignette demonstrates what’s at stake. In a prior time, maybe just a generation ago, a nurse would’ve had the latitude to see me as a human being, make a decision, and offer me 500mg of an over-the-counter medication. Now, we are in a time when an increased focus on hospital expenses, medical liabilities, lawsuits, and billing cycles governs the way technology is used and what procedures are followed. In light of this, the constraint on the nurse’s autonomy makes sense.
If a future society seeks to undo some of these procedures, to what extent will our own technology, riddled with our own values, impede the transition? Will our children be able to free themselves from an internet experience mediated by malign algorithms? Will they be able to disentangle a large, centralized federal government if they deem it worth doing? Through the way we’ve implemented technology, have we already made these decisions for them and stripped them of their autonomy and right to self-determination?
Section 3: Conviviality — The Gold Standard of Technological Use
The above two sections, I hope, have sufficiently demonstrated the need for a principles-based approach to the development and use of technology. Though examples abound, the Paradox of Automation provides just one illustration of the inscrutability of technology and its potential. Moreover, by recognizing the way that technology is entangled with our own ideology (values, priorities, taboos), we can mindfully develop and use technology in a way that is more accommodating to future generations. Future generations ought to be able to alter, change, and respond to the challenges and problems of their times, based on their own values, unencumbered by ours. (Indeed, don’t we wish that past generations had afforded us this same luxury?) What remains to be discussed is the telos of technology: to what end should technology be developed, implemented, and used?
The gold standard for technological innovation should be “conviviality” — the harmonious coexistence of individuals and communities with technology. This idea was first presented by Ivan Illich in his 1973 book “Tools for Conviviality”:
“Convivial tools [technology] are those which give each person who uses them the greatest opportunity to enrich the environment with the fruits of his or her vision.” (“Tools for Conviviality,” Ivan Illich, 1973, Harper & Row, p. 34)
Conviviality should remain the chief aim of technological pursuits for several reasons. First, this conception of what technology should do is open to a variety of technologies, modes of operating, procedures, standards, and applications. Second, the ethos of conviviality is not loaded with any particular priorities, use cases, taboos, or values; it is merely an expression that technology should aim to benefit human beings and should do so in a harmonious way. Third, it recognizes the natural scales and limits of technology that we have thus far uncovered. “Only within limits can machines take the place of slaves,” Illich declares. Without limits, we become slaves to technology through various forms of overconsumption, irresponsibility, and indulgence.
Section 4: Practical Rules
We now have everything in place to prescribe rules that lead to desirable outcomes in the pursuit, use, and creation of technology. Thus far we have elucidated the risks of unbounded technology and the hubris that many of today’s leading voices hold. We have illustrated the ways in which technology is never a neutral device, mode, or enterprise, but is deeply enmeshed with our own ideas, priorities, and values. And finally, we have put forth a purpose for technology: what it ought to do. With these in mind, the rules presented hereafter strive to bound and limit technology while keeping the aim of conviviality in view.
Rule #1: Productivity Over Counterproductivity: One of the greatest assets of technology is its ability to make us more productive; indeed, productivity is the benefit most often appealed to. But as Michael Simmons has argued, over the last 50+ years productivity growth (GDP per hour worked) has declined despite technological advances. How come? Productivity growth didn’t decline despite these innovations; it declined because of them. In the last 50 years, technologies have focused less on productivity and more on the things that are good for business. Planned obsolescence, model-year design alterations, subscriptions, and superfluous add-ons are good for business but not for conviviality. Any technology that cannot demonstrably improve productivity should not be pursued.
Rule #2: Decentralization Over Centralization: Technology should empower individuals and communities rather than centralizing power in the hands of a few. Centralization breeds alienation, obstructs transparency, and creates complexity. The Cambridge Analytica scandal is a well-known case study and cautionary tale about the importance of data security and privacy. But it never would have happened without Facebook’s most important feature: the centralization of online interaction. By creating a single platform for all people to interact, connect, and advertise on, Facebook gained access to the personal information of more than one billion users. This created the opportunity for a massive data breach (the Paradox of Automation is alive and well). The scandal also exposed the way that technology can obfuscate transparency and be used to alienate and exploit a site’s users. In this particular example, the breach demonstrated the predatory and biased practices of news sites and advertisers that were enabled by a centralized system. Decentralization foils many of these bad practices and drastically mitigates the risks. In fact, movements such as open source and the IndieWeb are making great strides in decoupling user experiences from centralized “corporate” websites. Technological decisions that consolidate rather than distribute authority and knowledge should be avoided.
Rule #3: Design for Humans: The prime movers of technology are humans. In every technological endeavor there is, at some level, an interaction with a human being. This interaction should be one of the primary focuses of the design, use, and implementation of a technology. The strengths and limitations of human action and decision making are well known. In the design of human-machine interactions, human strengths should be exploited and human weaknesses supported. In particular, technology should only be pursued if it incorporates good human-centered design principles like those below (a short code sketch after the list illustrates a few of them):
1) Prefer simplicity to complexity — Structure tasks to minimize an individual’s cognitive load, especially around working memory, planning, and problem solving.
2) Embrace transparency — The way a human-machine interaction is executed and evaluated (judged correct or incorrect) should be transparent. Visibility into these processes lets the user know what is possible and how things should be done, while also enabling the user to gauge the effects of their actions.
3) Exploit natural connections — Exploit the natural connections between intentions and actions, and between actions and their effects on the system.
4) Recognize the power of constraints — Constraints guide the user to the next appropriate action or decision.
5) Anticipate errors — Humans will make mistakes. Good technological design should try to prevent errors from happening, but it should also accept that they will happen anyway. Build, use, and implement technologies that are flexible enough to handle and recover from human error.
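The sketch below is a hypothetical illustration, not drawn from Norman or any other source cited here; the dose values and function names are invented. It shows how constraints (principle 4), transparency (principle 2), and anticipating errors (principle 5) can surface in even the smallest piece of software:

```python
# Hypothetical sketch of a few human-centered design principles in
# ordinary code: constraints, transparency, and anticipating errors.
# All names and values are illustrative assumptions.

from dataclasses import dataclass

# Principle 4 (constraints): only a small set of valid doses exists,
# guiding the caller toward appropriate actions.
ALLOWED_DOSES_MG = (250, 500)

@dataclass
class DoseRequest:
    dose_mg: int

def administer(request: DoseRequest) -> str:
    # Principle 5 (anticipate errors): validate the input and recover
    # with a clear, actionable message instead of failing silently.
    if request.dose_mg not in ALLOWED_DOSES_MG:
        return (f"Cannot administer {request.dose_mg} mg; "
                f"allowed doses are {ALLOWED_DOSES_MG} mg.")
    # Principle 2 (transparency): report what happened and why, so the
    # user can gauge the effect of their action.
    return f"Administered {request.dose_mg} mg and logged the event."

print(administer(DoseRequest(dose_mg=500)))  # a constrained, valid action
print(administer(DoseRequest(dose_mg=750)))  # an anticipated, recoverable error
```

The point is not the specific domain but the posture: the code guides, explains, and recovers rather than assuming a flawless user.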
Rule #4: Self-Limitation: Technology should be used to create new means of self-limitation. This means not only that new technologies should be developed to curb current problems of overconsumption and the proliferation of harmful technologies, but that all future technologies should be deployed with mechanisms for bounded and responsible use. Lewis Mumford, the historian and philosopher of technology, expressed it this way:
“A sound and viable technology, firmly related to human needs, cannot be one that has a maximum productivity as its supreme goal: it must rather seek, as in an organic system, to provide the right quantity of the right quality at the right time and the right place in the right order for the right purpose… To this end regulation and direction, in order to ensure continued growth and creativity of the human personalities and groups concerned, must govern our plans in the future as indefinite expansion and multiplication have done during the last few centuries.” (“The Myth of the Machine: The Pentagon of Power,” Lewis Mumford, 1970, Mariner Books, Chapter 5: Science as Technology)
Consider the current crisis of smartphone addiction, which is the result of deliberate dopamine hacking and gamification by developers. Overcoming it requires more than a conscious effort by users to curb aimless and gluttonous use of technology; future technologies should refuse to exploit such vulnerabilities and should proactively limit and prevent exploitative designs.
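As a hedged illustration of what a built-in limit might look like, here is a minimal sketch of an app that enforces a usage budget on itself. The class, method names, and thresholds are assumptions for illustration, not an existing API:

```python
# A hypothetical self-limiting mechanism: an app-side usage budget
# that the software enforces on itself, rather than leaving the limit
# to the user's willpower. All names and numbers are illustrative.

class UsageBudget:
    """Tracks session time and refuses service past a daily cap."""

    def __init__(self, daily_cap_seconds: float = 30 * 60):
        self.daily_cap = daily_cap_seconds
        self.used = 0.0

    def record(self, seconds: float) -> None:
        """Accumulate time the user has already spent today."""
        self.used += seconds

    def allow_session(self) -> bool:
        # The limit lives in the technology itself: past the cap,
        # the app simply declines to run until tomorrow.
        return self.used < self.daily_cap

budget = UsageBudget(daily_cap_seconds=1800)
budget.record(1700)
print(budget.allow_session())  # True: still under the cap
budget.record(200)
print(budget.allow_session())  # False: the app limits itself
```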
Rule #5: Prevent Irreversible Changes: Finally, technology must not be implemented in such a way as to prevent its own undoing. Future generations ought to have free will, unencumbered by our generation’s decisions about technology. Again, Joseph Weizenbaum best expresses the threat of ignoring this principle:
“[Technology] becomes an indispensable component of any structure once it is so thoroughly integrated with the structure, so enmeshed in various vital substructures, that it can no longer be factored out without fatally impairing the whole structure.” (“Computer Power and Human Reason,” Joseph Weizenbaum, 1976, W. H. Freeman & Co., Chapter 1)
Technology must be used and implemented in a way that makes it possible to alter or remove it at a future date. This means not only reducing the interconnected complexity between systems, but also maintaining a memory of how things can be accomplished without any specific technology.
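A minimal sketch of what designing for removal can look like in software, assuming invented names throughout: the system depends on an abstract interface rather than any one technology, so the concrete implementation can be swapped out, or factored out entirely, without fatally impairing the whole structure.

```python
# Hypothetical sketch of designing for reversibility: business logic
# depends on an abstract interface, so the technology behind it can
# be replaced or retired later. All names are illustrative.

from abc import ABC, abstractmethod

class RecordStore(ABC):
    """What the system needs, independent of any one technology."""

    @abstractmethod
    def save(self, key: str, value: str) -> None: ...

    @abstractmethod
    def load(self, key: str) -> str: ...

class CloudStore(RecordStore):
    """Today's choice: a stand-in for cloud-backed storage."""
    def __init__(self):
        self._data = {}
    def save(self, key: str, value: str) -> None:
        self._data[key] = value
    def load(self, key: str) -> str:
        return self._data[key]

class PaperLedger(RecordStore):
    """The remembered, low-tech way the work was done before."""
    def __init__(self):
        self._pages = {}
    def save(self, key: str, value: str) -> None:
        self._pages[key] = value
    def load(self, key: str) -> str:
        return self._pages[key]

def archive(store: RecordStore) -> None:
    # The caller never learns which technology sits underneath, so a
    # future generation can swap or retire it without rewriting this.
    store.save("note-1", "completed without any specific technology")
    print(store.load("note-1"))

archive(CloudStore())   # today's implementation
archive(PaperLedger())  # still works if the first is factored out
```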
Technology is not an unmitigated good. The value that individuals, communities, and society extract from technological progress and innovation is only in proportion to the degree that technology aids in the conviviality of life. The modern myth of progress through science and technology for their own sake has time and again proved false, particularly in an age when science and technology reap abundance on top of overabundance and fill our world with the new and superfluous instead of the more productive, bespoke, and humane.
Technology may not be an unmitigated good, but it isn’t innately bad either. If I am aware of any risk in this essay, it is the risk of sounding like a Luddite. Technology has a place in society and will continue to be a principal driving factor for many generations to come. Nevertheless, technology begets great responsibility that should not and cannot be dismissed. Technology comes loaded with complexity and paradoxes. Among other things, these complexities create technology that acts or fails in unexpected, rogue fashion. It is pure hubris to think that further technological advancement, software innovation, or an algorithm can bring this characteristic of technology to an end.
Technology is neither innately bad nor an unmitigated good, but that does not mean it is neutral either. Any decision to pursue, develop, design, create, or employ technology is itself mediated by a myriad of priorities and beliefs. These beliefs necessarily entail that every technology is deeply intertwined with ideology and values that may not be shared across the centuries and among future generations.
Lastly, technology has a purpose. It does not make sense to blindly pursue technology for its own sake, especially in a world filled with scarcity of time, money, and energy. The telos of technology ought to be one that promotes human flourishing and harmony: conviviality.
With these considerations in mind, the only responsible course of action is to guide our own use of technology. The presentation of practical principles for the design and use of technology is a necessary starting point. The principles presented here aim to guide and educate all users of technology, particularly prompting them to think mindfully about the way they employ various technologies in their own lives. Moreover, it is my hope that these principles will have a greater impact on engineers and designers who face more numerous decisions about how to design, create and implement new and emerging technologies.