Features

Whom Do You Kill?

By Ray Gallon | STC Associate Fellow

“We shape our tools and thereafter our tools shape us.”
—John M. Culkin, often attributed to Marshall McLuhan

Anyone who has seen Stanley Kubrick’s 2001: A Space Odyssey would have a hard time believing the notion, once taught in engineering schools—especially in the United States—that engineering solutions are socially neutral. Our constructed environment shapes and conditions us at the same time that we’re engaged in inventing and reinventing it. The advent of a connected world via the Internet has clearly shown how technological innovations affect social behavior, and how that social behavior then leads to the creation of new utilities that, once again, modify the way we live and interact with each other.

Enter the world of connected objects, otherwise known as the Internet of Things (IoT), and its enormous capacity to capture Big Data. Add the power of artificial intelligence (AI) into the mix, and we have to start asking, “What does it mean—not just in economic or labor terms, but for human society—if we have inanimate but ‘intelligent’ machines interacting with us, just like other members of our work teams, communities, or even our families?”

The so-called “fourth industrial revolution” is marked by the fact that, for the first time, machines not only help us make decisions; they decide in our place. Those of us who labor in the information trenches are finding that our jobs are changing rapidly. We must learn to write in a different way, we need to know how to find and curate information from disparate sources and concatenate it in meaningful ways, and we need to tailor our information to specific personal needs and contexts, even adjusting for users’ emotions.

The New Reality of Content

These seemingly science-fictional requirements are already becoming day-to-day reality for some of our colleagues, and will likely become the norm within a very few years. The delivery of this kind of highly contextualized, highly personalized information at any kind of scale presumes that artificial intelligence will be in charge to a large extent. And that suggests that one of our most important tasks, as specialists in information design, creation, and delivery, will be to ensure that what gets delivered is appropriate, accurate, and useful, and that it makes sense to the user.

Above all, we cannot let machine-generated errors become responsible for creating gender, ethnic, racial, or ability-based biases, or for endangering life, health, or safety. In short, we carry a heavy ethical responsibility on our shoulders. The way in which we carry out that responsibility will have a huge impact on the role we play, and its importance, inside our organizations.

The recent problems with the Boeing 737 MAX 8 aircraft have shown how the absence of a simple warning light that alerts pilots to sensor malfunctions affecting the anti-stall system can prove fatal (CBS News 2019). As we depend more and more on automated systems, it is important to understand that information has multiple roles. It continues to be important in helping users learn how a product works and eventually become experts in its use. But information also serves product managers and design engineers; it is critical to the interpretation of Big Data pools, and it is key to the proper sharing of responsibility between humans and machines.

So Whom Do You Kill?

As an example, let’s take a question that has become almost a cliché in the post-AI world, the “whom do you kill” conundrum for self-driving cars. This problem is usually posed as a binary choice: if a vehicle is faced with the choice between striking a group of pedestrians, almost certainly killing them, or striking an obstacle, almost certainly killing those aboard, which should it choose? Researchers at the Massachusetts Institute of Technology (Awad et al. 2018) set up a “moral machine” to investigate whether any global ethical principles could be determined, based on how people responded to a series of binary “dilemmas” (one is sketched in code after the list below) in which they could choose according to criteria such as:

  • Age
  • Gender
  • Physical condition
  • Human or animal
  • Many or few
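
To make the binary framing concrete, here is a minimal, purely illustrative Python sketch of how one such forced-choice scenario might be encoded and answered. The class names, attributes, and decision rule are invented for illustration only; they do not reflect the researchers’ actual implementation.

```python
# A hypothetical encoding of one "moral machine"-style binary dilemma.
# Names, attributes, and the decision rule are invented for illustration;
# they do not represent the MIT researchers' actual implementation.
from dataclasses import dataclass


@dataclass
class Party:
    count: int      # many or few
    age: str        # e.g., "child", "adult", "elderly"
    gender: str     # e.g., "female", "male"
    condition: str  # physical condition, e.g., "fit", "infirm"
    species: str    # "human" or "animal"


@dataclass
class Dilemma:
    stay_course: Party  # struck if the car holds its course
    swerve: Party       # struck (or sacrificed) if the car swerves


def respond(dilemma: Dilemma) -> str:
    """One respondent's rule of thumb: spare humans over animals,
    then spare the larger group. Purely illustrative."""
    stay, swerve = dilemma.stay_course, dilemma.swerve
    if stay.species != swerve.species:
        return "swerve" if stay.species == "human" else "stay"
    return "swerve" if stay.count > swerve.count else "stay"


example = Dilemma(
    stay_course=Party(count=3, age="adult", gender="female",
                      condition="fit", species="human"),
    swerve=Party(count=1, age="adult", gender="male",
                 condition="fit", species="human"),
)
print(respond(example))  # "swerve": spare the larger group
```

Even this toy model shows how much the binary framing leaves out: there is no option to brake, no uncertainty, and no attempt to avoid harm altogether, which is precisely the objection raised below.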

The results provide an interesting cultural study, which I recommend to readers. But it strikes me that the binary fashion in which these dilemmas are posed is fundamentally mistaken. Most human drivers, faced with one of these hypothetical situations, would seek to avoid killing anyone at all, regardless of what preferences their cultures might have for any of the criteria above. The human brain is a very powerful instrument that responds intuitively, with reflex actions we have not “reasoned” out the way participants in the “moral machine” did. The sad truth is that sometimes, even when we seek to avoid killing anyone, deaths still occur.

Can we program a machine to avoid killing anyone? The most powerful deep learning algorithms, coupled with their machine hosts, have roughly the complexity of a bee’s brain—nowhere close to our own capacities. However, if restricted to a narrow domain of action, their ability to calculate and extrapolate makes them capable of feats far beyond our own—but only in that domain. A specialized AI should therefore be better than we humans are at avoiding automobile-related deaths. If it can’t do that, we shouldn’t be letting driverless cars onto the street.

One might argue that once the majority of cars are autonomous and interconnected, the intelligent IoT network will reduce the number of situations in which such a call needs to be made, and that’s almost certainly true—cars will be more evenly spaced, speeds more regular, overall traffic flows more fluid, and emergencies less frequent. But no matter how good our machines are, some injuries and deaths will still occur on the roads, and human error will continue to play its part in them. How willing are we to accept this fact? Who will bear liability for insurance claims? Will it be the car manufacturer? The software publisher? The programmers? The car owner? We don’t have clear answers to these questions at this time, but we can be sure that insurance companies are working out their own approaches right now.

When we have surgery in a hospital, we recognize that sometimes patients die—their condition was too far gone, or, on occasion, a surgeon makes a mistake. When we are operated on by medical robots, which exist today, are we prepared to accept anything less than a 100 percent success rate? What criteria will we apply to medical malpractice by a robot?

Ethics and Social Impact in Technical Communication

Above all, whatever directions society decides to take, how will we explain them to the new owner of a driverless car? How will we prepare patients before they are operated on by medical robots? What information should we be giving them, how should it be written, and to what extent should technical communicators be discussing subjective issues like ethics in their documents? Are we prepared to tackle this new responsibility?

At the time of this writing, few, if any, of the technical communication programs in universities and other post-secondary institutions have a unit on ethics. I believe they should all have one, and that ethics should be an essential part of every technical communicator’s education.

The authors of the “moral machine” study offer this thought:

Never in the history of humanity have we allowed a machine to autonomously decide who should live and who should die, in a fraction of a second, without real-time supervision. We are going to cross that bridge any time now, and it will not happen in a distant theatre of military operations; it will happen in that most mundane aspect of our lives, everyday transportation. Before we allow our cars to make ethical decisions, we need to have a global conversation to express our preferences to the companies that will design moral algorithms, and to the policymakers that will regulate them (Awad et al. 2018).

The blogger known as “Writingprincess” takes a different tack. She is a design research lead at IDEO and has been working on AI for a long time. She thinks we should stop using the word “ethics”:

By definition ethics is transient, it changes with the make-up of the group creating it. So by definition “ethics,” [sic] are going to be different culturally, which is why it’s a bad framework for trying to tell people how to design future technology that’s to be used universally…

[Autonomous vehicles] do not “recognize,” individual pedestrians as much as they recognize the speed of every object in their path and judge based upon criteria like rate of speed, height, distance traveled, etc., to determine if an “object” is a person or a car. Right now, it doesn’t care if it’s a black person, a woman, a cat, or a dog. And it’s probably good to keep it that simple (Writingprincess 2018).

She goes on to suggest that the best way to proceed is with “mindful” or “human centered” AI design that combines humans and AI in a hybrid pair, with the AI designed first and foremost to serve human needs. “When you follow that line of thinking you don’t have to slice and dice all those different dilemma scenarios” (Writingprincess 2018). The assumption that “autonomous” means that humans never interact with, or override, AI decisions is mistaken. Humans and machines must, and most certainly will, collaborate, and when there is doubt, the humans should take the lead.
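
As a purely illustrative sketch of that hybrid pairing, the following Python fragment shows an AI that acts on its own only when its confidence is high and otherwise hands the decision to a person. The names, threshold, and structure are assumptions made for this example; they do not describe any real autonomous-vehicle system.

```python
# A minimal sketch of "human-centered" hybrid decision making: the AI acts
# only when it is confident; when there is doubt, the human takes the lead.
# All names and the threshold value are hypothetical illustrations.
from dataclasses import dataclass
from typing import Callable


@dataclass
class Proposal:
    action: str        # what the AI proposes to do, e.g., "brake hard"
    confidence: float  # the AI's own estimate of certainty, 0.0 to 1.0


def decide(proposal: Proposal,
           ask_human: Callable[[Proposal], str],
           threshold: float = 0.95) -> str:
    """Carry out the AI's proposal only when its confidence clears the
    threshold; otherwise defer to the human in the loop."""
    if proposal.confidence >= threshold:
        return proposal.action
    return ask_human(proposal)


# Low confidence routes the decision to a person.
chosen = decide(Proposal(action="swerve left", confidence=0.62),
                ask_human=lambda p: "slow down and alert the driver")
print(chosen)  # "slow down and alert the driver"
```

The interesting design questions, of course, live in that threshold and in what asking the human means when only a fraction of a second is available; those are exactly the points our documentation will need to explain.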

As technical communicators, content strategists, information architects, knowledge managers, and other specialists in the information domain, we must also be prepared to take the human role—to face, and to explain to our users, the difficult debates, tough decisions, and multidimensional dilemmas that new technologies are throwing in our paths—and to help them use our products well. That means using them for some good purpose, in the service of humans, and without doing humans any harm.

References

Awad, Edmond, Sohan Dsouza, Richard Kim, Jonathan Schulz, Joseph Henrich, Azim Shariff, Jean-François Bonnefon, and Iyad Rahwan. “The Moral Machine Experiment.” Nature 563 (2018): 59–64. https://www.nature.com/articles/s41586-018-0637-6.

“Boeing to Make Standard an $80,000 Warning Light That Was Not On Doomed Planes.” CBS News. Retrieved 4 May 2019. https://www.cbsnews.com/news/boeing-737-max-plane-crash-company-to-make-standard-light-warning-pilots-of-sensor-malfunction/.

Writingprincess. “Ethical Dilemmas May Not Be the Best Way to Explore Designing Better AI.” Medium, 25 October 2018. Retrieved 13 May 2019. https://medium.com/@writingprincess/stop-using-ethical-dilemmas-to-explore-ways-to-create-better-ai-its-stupid-8fbd4c6ecfe5.

RAY GALLON (rgallon@me.com) is President and Cofounder of the Transformation Society, which provides training and consulting around digital transformation and organizational learning, and currently teaches at the universities of Barcelona and Strasbourg. He is Co-Chair of the Transformation and Information 4.0 Research and Development group of the World Federation of Associations for Teacher Education (WorldFATE). He is an Associate Fellow of STC and formerly an STC Board Member and President of STC France. Ray has over 40 years of experience as a communicator, first as an award-winning radio producer and journalist (CBC, NPR, France Culture, Radio Netherlands International, Westdeutscher Rundfunk, Deutsche Welle), then in the content industries. Ray shares his life between the south of France and Barcelona, Spain, and you can follow him on Twitter @RayGallon.