By Michael Trice | STC Member
One of the overriding concerns for knowledge workers across all fields is the unsteady state of online conversation and moderation. Spaces like Twitter, Facebook, Reddit, and YouTube dominate these conversations. But public forums for customer engagement have been vital in technical communication, from MathWorks and Apple’s public knowledge bases to Stack Overflow and GitHub collaborations. And while most of these spaces remain relatively safe from the most disruptive aspects of popular social media for now, GitHub has certainly found itself susceptible to use as a space for organizing harassment. It’s also true that Twitter and Facebook have become increasingly popular spaces for providing customer support, handling complaints, and offering other forms of service outreach. Thus, the same failures of moderation that lead to cultural and political breakdown and disruption on social media are technical communication problems for both institutions and society.
The disruption of social media takes numerous forms: the rise of conspiracy, the unsteady relationship between private and public reputation, the harassment, the sudden surges of activism, and many more difficult-to-predict eddies and swirls. This disruption is highly unlikely to abate anytime soon—maybe not for decades. The medium of print offered an incredible long-term boon after the development of the printing press, but it also destabilized society for generations. How can industry navigate prolonged instability?
Perhaps, in industry, the massive disruption wrought by social media most resembles the impact of World Wars I and II upon professional ethics in engineering and the sciences. Some ethicists have started to return to that shift in discussing digital accountability. In calling for increased ethical scrutiny of large image datasets used in AI training, Vinay Uday Prabhu and Abeba Birhane show how concepts of informed consent trace back to the 1947 Nuremberg Code and the 1964 Declaration of Helsinki. In our recent work, Liza Potts, Rebekah Small, and I have also discussed the value in understanding how industry codes of ethics differ before and after the World Wars.
Before the post-war shift, most engineering codes of ethics resembled that in Figure 1. They focused heavily upon the values of integrity and personal trust as the main principles of the professional, who also served peers and customers. These codes paid markedly little attention to public welfare or to communities beyond peer interactions—a concern that modern codes now frequently address beyond the bounds of individual companies.
Yet, after World Wars I and II, the codes of ethics shifted significantly, as seen in Figure 2. These shifts occurred for many complicated reasons. Nationalization and globalization of industry were both key. The rise of corporations, regulations, and formalized training as part of the professional process played a role. But as Prabhu and Birhane noted with the advent of new medical rules, industries that had played a destructive role during the war embraced ethical reform to help rebuild and stabilize public trust.
Perhaps Vannevar Bush’s article “As We May Think” most directly wrestled with the question of what scientists and engineers must do after having “left academic pursuits for the making of strange destructive gadgets.” Industry needed citizens to trust and believe in the good will of science and engineering again. This need for public trust increased following the destruction of war, the inhumanities of engineering as a tool of the Holocaust, and the eventual, persistent threat of global nuclear war. During a time of great disruption, it is incumbent upon industry to hold itself to account and demonstrate a capacity for care toward humanity.
Consider what a typical code of ethics now looks like in Figure 2. Note the inclusion of both principles and canons: principles tend to be a group of 3–6 fundamental values supported by canons, or guiding rules. While shown here in part, the American Society of Mechanical Engineers (ASME) code includes ten canons in total as of the last update in 2012.
The National Society of Professional Engineers (NSPE) goes even deeper, with six principles, fourteen canons, and dozens of clarifying points added within those canons. I note that both ASME and NSPE begin with a fundamental principle highlighting that their fields serve the “welfare of humanity” or “the public.” Honesty is not replaced as a value, but public care is placed above and in support of that trust.
One mechanism of this care was industry reorienting codes of ethics around universal values based upon the welfare of all humanity. This reorientation went beyond the single value of personal integrity championed in the pre-war era.
It is similarly vital that industry now demonstrate an ethic of care. Jared Colton, Steve Holmes, and Josephine Walwema note that particular tactics of technical communication are best judged upon whether they enact harm or “horror” upon the most vulnerable. Much like the concepts of informed consent and seeking the welfare of the public, this type of guiding principle asks members of industry to demonstrate first to the public, especially the marginalized public, that we place their well-being before any other goal. Our sense of online community moderation can reflect values of clarity and openness. Such a principle might even inspire technical communicators, as Miriam F. Williams and Natasha Jones suggest, to develop new, imaginative responses to address the social injustices and ethical issues of our unstable times.
The public understands that the internet has supported a deeply unstable time of disruption. Numerous industries and institutions have seen the public’s trust degrade over recent decades. While technology and industry are not solely to blame, industry can move to rebuild public trust. Just as industry used post-war codes of ethics to change hearts and minds, industry can now use codes of ethics to embrace transparency and accountability to promote the public good in online platforms. An ethic of public care can inform principles of open communication and free speech on social media.
MICHAEL TRICE (firstname.lastname@example.org) is a Lecturer at Massachusetts Institute of Technology, U.S.A., in the Writing, Rhetoric, and Professional Communication program. He holds a doctorate in Technical Communication and Rhetoric from Texas Tech University. Michael’s career in industry includes working for Apple Computer, Hart InterCivic, and Deutsche Bank.
AAE Code of Ethics. 1922. Accessed 7 April 2021. http://ethics.iit.edu/codes/AAE%201922.pdf.
ASME Code of Ethics. n.d. Accessed 7 April 2021. https://www.asme.org/wwwasmeorg/media/resourcefiles/aboutasme/get%20involved/advocacy/policy-publications/p-15-7-ethics.pdf.
Birhane, A., & Prabhu, V. U. 2021. “Large Image Datasets: A Pyrrhic Win for Computer Vision?” In Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision (pp. 1537–1547).
Bush, V. 1945. “As We May Think.” The Atlantic Monthly, 176, no. 1: 101–108.
Colton, J. S., Holmes, S., & Walwema, J. 2017. “From NoobGuides to #OpKKK: Ethics of Anonymous’ Tactical Technical Communication.” Technical Communication Quarterly, 26, no. 1: 59–75.
Jones, N. N., & Williams, M. 2020. “The Just Use of Imagination: A Call to Action.” Teacher Scholar Activist. Accessed 7 April 2021. https://teacher-scholar-activist.org/type/gallery/.
NSPE Code of Ethics. n.d. Accessed 7 April 2021. https://www.nspe.org/sites/default/files/resources/pdfs/Ethics/CodeofEthics/NSPECodeofEthicsforEngineers.pdf.
Trice, M., Potts, L., & Small, R. 2019. “Values Versus Rules in Social Media Communities.” In J. Reyman & E. M. Sparby (Eds.), Digital Ethics: Rhetoric and Responsibility in Online Aggression. Routledge.
This column features ethics scenarios and issues that may affect technical communicators in the many aspects of their jobs. If you have a possible solution to a scenario, your own case, or feedback in general, please contact column editor Russell Willerton at email@example.com.