Ethical Questions: When Users Can’t Tell You What They Want

By Amanda Krauss | Guest Columnist

This column features ethics scenarios and issues that may affect technical communicators in the many aspects of their jobs. If you have a possible solution to a scenario, your own case, or feedback in general, please contact column editor Russell Willerton at russell.willerton@gmail.com.

In a 2018 Interaction Design Association talk, Fiona McAndrew made a challenging observation: “For the products we design, we cannot rely on consumers to accurately describe their ethical and privacy needs.”

This may not be a welcome thought for those of us who make our living talking to users; after all, isn’t the lion’s share of our job to find out what users really want? We assume that if we use the right interview and usability testing techniques, we will get to the bottom of our users’ needs, wants, and desires.

McAndrew is right, however, and her point is easily demonstrated with other research. She cites, for example, Mozilla’s 2017 survey “How Connected Are You?”, which shows that the more tech-savvy people are, the more concerned they are about the loss of privacy. The implication is that those who don’t actually build technology—that is, the majority of users—do not have the base knowledge or mental models needed to understand the current digital landscape, let alone to tell us what they need in scenarios they don’t know they’re encountering.

In a similar vein, Digital Content Next runs consumer research asking users what kind of treatment they expect from organizations like Facebook and Google. Consumers regularly report that they do not expect their behavior to be tracked across the Web, despite this being a well-established and documented practice. In her New York Times editorial, techno-sociologist Zeynep Tufekci wonders if there is such a thing as informed consent in the modern age. In Tufekci’s view, multi-page, jargon-filled privacy policies, combined with the fact that these policies are attached to software that many people must use in their jobs, mean that being informed, let alone choosing to consent, is not a realistic option for consumers:

“Given this confusing and rapidly changing state of affairs about what the data may reveal and how it may be used, consent to ongoing and extensive data collection can be neither fully informed nor truly consensual—especially since it is practically irrevocable.”

The overall problem, then, is that people engaging with the products we design do not have the mental models to inform their expectations; much of what affects their broad Web experience happens so far behind the scenes that it is invisible. At the same time, while consumers may not be able to describe their needs, a recent Pew study shows that their trust in digital institutions is declining. All of this is to say that consumers may not know exactly how to describe what kinds of privacy protections they want, but they do know that they want something better.

What ethical frameworks can we use to address this problem?

McAndrew’s observation places a heavy burden on those who participate in making Web content or products. She further notes that, when it comes to designing ethically, there is not yet an easy set of established patterns we can follow, nor a simple checklist that will solve our problems. It is at this point, then, that people participating in any sort of design decision need to think about the ethical framework they should use.

The article I recommend to my UX colleagues is “Using Ethics in Web Design,” by Morten Rand-Hendriksen. Rand-Hendriksen, an experienced Web developer and designer, combines a realistic picture of Web development processes with an admirably thorough exploration of the four main branches of traditional Western ethics. Importantly, he encourages his audience to make ethical decisions by asking questions from several different branches, rather than seeing the world through a single lens.

The Greater Good

As Rand-Hendriksen notes, the typical framework used in industry settings is broadly utilitarian and requires that a decision maker weigh potential harms against the “greater good” in the results. The framework is popular in part because it is relatively easy to understand; one might frame it simplistically as a type of “pros and cons” list. In my experience, weighing harms and goods also appeals to the more mathematically minded stakeholder: If I want to talk about potential harms, for example, I can do so using reassuringly concrete metrics such as NPS (net promoter score) or CSAT (customer satisfaction).
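
For readers who haven’t worked with these metrics, here is a minimal Python sketch of the arithmetic behind them; the survey data and function names are hypothetical, and real teams use established survey tooling rather than hand-rolled scoring. Conventionally, NPS is the percentage of promoters minus the percentage of detractors, and CSAT is the percentage of satisfied responses.

    # Minimal sketch of the two metrics named above; all data is hypothetical.

    def nps(scores):
        """Net promoter score: % promoters (9-10) minus % detractors (0-6)
        on the conventional 0-10 'would you recommend us?' scale."""
        promoters = sum(1 for s in scores if s >= 9)
        detractors = sum(1 for s in scores if s <= 6)
        return 100 * (promoters - detractors) / len(scores)

    def csat(ratings):
        """Customer satisfaction: share of 4s and 5s on a 1-5 scale."""
        satisfied = sum(1 for r in ratings if r >= 4)
        return 100 * satisfied / len(ratings)

    # Comparing hypothetical survey results before and after a design change
    # makes "harm" look reassuringly numeric, which is exactly the appeal.
    print(nps([10, 9, 9, 8, 6, 3, 10, 7]))  # 25.0
    print(csat([5, 4, 3, 5, 2, 4]))         # ~66.7

Numbers like these make the utilitarian trade-off easy to present, which is part of why the framework dominates; the paragraphs that follow explain what the arithmetic leaves out.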

This framework, however, has limits. As Rand-Hendriksen and others have acknowledged, “the greater good” immediately discounts the experience of numerical minorities. The most obvious failing here is the need for Web accessibility standards: If the majority of a website’s users are not disabled, does that mean we can ignore accessibility requirements? Most designers I know would answer with a resounding “no.” In a similar vein, groups who are under-represented in tech settings, such as women and people of color, may seem like a minority to those developing products; thus, their viewpoint is often excluded when considering who might benefit or suffer from design decisions.

Furthermore, in business contexts, the “greater good” does not reflect the needs of a single group; often, the calculations include what is good for the business, rather than simply focusing on what is good for users. Too often the good of the users is pitted against (for example) the bottom line: In this scenario, as design researcher Erika Hall notes in her thoughtful post for Mule Design, your design is only as ethical as your business model allows it to be.

Other Approaches

To combat the utilitarian habits we’ve fallen into, Rand-Hendriksen proposes a “four-pillar” approach to ethical choices and starts with questions based on non-utilitarian frameworks. For this column, I’ll focus on a few questions that are most pertinent:

  • What kind of world are you building for your end user?
  • What kind of person do you become in the process?
  • Are you upholding your duties of care?

These are excellent questions to ask about privacy and ethics, as they require us to turn our attention to ourselves, both as ethical beings and as actors whose work affects others. Do we want to build a world in which our users aren’t fully aware of what they’re agreeing to? Are we upholding our duty of care when we assume that users have the same technological knowledge that we do? If we ignore the fact that, for example, people never read long privacy policies, what does that mean about our own personal role in design choices?

Recommendations

When defining “the greater good,” examine your own team’s composition, and be sure to include people whose experiences differ from your own. As McAndrew suggests, this should include both individuals who represent user problems and subject matter experts who can weigh in on the potential harms of a feature that an individual product team might not see.

Accept the human in human behavior. One of McAndrew’s key points is that people will often sacrifice privacy for convenience—but that doesn’t mean we should treat this action as a simple choice or as evidence of what the consumer genuinely wants. If we use a framework that considers a “duty of care,” we must recognize the complexity of the technological landscape, as well as our users’ place in it.

Experiment with different frameworks by asking questions. Questions are a great tool for ethical thinking. In my experience, the pitfall of a traditional classroom-style ethics approach is that it requires laborious explanations: first you must outline the base ideas for each stakeholder, and then you must ask them to apply a newly learned abstract framework immediately, without time for reflection. Starting with questions speeds discussion, and if anyone does ask about theoretical underpinnings, you can then proceed with more traditional explanations and educational resources.

Understand your own values. While asking what sort of person you become may seem like an introspective exercise in a business environment, it’s really the key to all other decision making. Ethics thought leaders such as Samvith Srinivas have noted the importance of identifying your own purpose and values before starting to think about your company’s values. In many cases, you will not have a lot of time to deliberate before your company expects you to act. It is important to understand your own values and ethics before you have to apply them.

Resources

Caltrider, Jen. “10 Fascinating Things We Learned When We Asked The World ‘How Connected Are You?’” The Mozilla Blog. 2017. https://blog.mozilla.org/blog/2017/11/01/10-fascinating-things-we-learned-when-we-asked-the-world-how-connected-are-you/.

Doherty, Carroll, and Jocelyn Kiley. “Americans have become much less positive about tech companies’ impact on the U.S.” Pew Research Fact Tank blog. 2019. https://www.pewresearch.org/fact-tank/2019/07/29/americans-have-become-much-less-positive-about-tech-companies-impact-on-the-u-s/.

Hall, Erika. “Thinking in Triplicate.” Mule Design Studio blog. 2018. https://medium.com/mule-design/a-three-part-plan-to-save-the-world-98653a20a12f.

McAndrew, Fiona. “Designing for Privacy and Ethics.” Video. Dublin: IxDA, 2018. https://interaction18.ixda.org/program/talk-designing-for-privacy-ethics-mc-andrew-fiona/.

Rand-Hendriksen, Morten. “Using Ethics in Web Design.” Smashing Magazine. 2018. https://www.smashingmagazine.com/2018/03/using-ethics-in-web-design/.

Srinivas, Samvith. “A Strategy for Ethical Design in the Attention Economy.” 2018. https://www.uxbooth.com/articles/a-strategy-for-ethical-design-in-the-attention-economy/.

Tufekci, Zeynep. “Facebook’s Surveillance Machine.” The New York Times. 19 March 2018. https://www.nytimes.com/2018/03/19/opinion/facebook-cambridge-analytica.html.

AMANDA KRAUSS, PH.D., (amanda.n.krauss@gmail.com) is a Senior User Experience Researcher at Indeed.com. Most recently, she has given talks on privacy, design ethics, and persuasive research presentations. She’s also done research and product development for the Texas Tribune, taught at Vanderbilt University, and consulted for the News Revenue Hub. In her free time, she likes playing with structured data and creating Python Twitter bots.