70.4 November 2023

Reporting Online Aggression: A Transnational Comparative Interface Analysis of Sina Weibo and Twitter

doi.org/10.55177/tc934647

By Chen Chen and Xiaobo Wang

ABSTRACT

Purpose: This study investigates Sina Weibo’s and Twitter’s reporting interfaces from the perspective of transnational, multilingual users whose experiences challenge mononational and monocultural technology designs. Using two cases of online aggression, we analyze how these interfaces marginalize transnational feminist users. The purpose of this project is to call for social justice-oriented interface design that can better support transnational users on global social media platforms.

Method: Drawing from comparative rhetorical studies, critical interface analysis, and virtue ethics, we develop a social justice-oriented comparative critical framework for interface analysis. We then apply this framework to our experiences reporting aggression on Sina Weibo and Twitter through two case studies.

Results: In both cases (direct attacks and misinformation against women and feminists, one arising from attacks on feminists in China and the other from the debate over women’s reproductive rights in the US), we find that Weibo and Twitter offer limited options for reporting online aggression toward transnational feminist users. Both platforms designed their reporting interfaces for efficiency, which reduces the complexities of how one might interpret the violation categories on the interfaces. For transnational users who report such attacks in a cross-cultural context, however, the cultural or social values imparted by the interface may not acknowledge the complexity of their experiences.

Conclusion: The limited reporting options on both platforms reveal the constraints of monocultural and monolingual interface design as well as of these platforms’ nation-based policies.

Keywords: Reporting Online Aggression; Comparative Critical Interface Analysis; Weibo; Twitter; Transnational; Feminist

Practitioner’s Takeaway:

We offer some suggestions for interface designers to better support transnational users, especially for reporting interfaces:

  • including multiple languages on the interface;
  • making the reporting mechanisms more apparent;
  • providing opportunities for users to reflect on what value systems drive their reporting decisions;
  • giving users options within the reporting interface to directly challenge or give feedback on the options/policies provided on the interface, granting them more agency; and
  • developing more appropriate policies and operational interfaces that encourage, rather than limit, users’ ability to engage their political rights.

The impetus for this project came from the authors’ experiences as Chinese scholars living in the United States who have witnessed or experienced online aggression in the forms of direct attacks or misinformation against women and feminists, prompted by attacks on feminists in China (Xiao, 2021) and on women’s reproductive rights in the US (Totenberg & McCammon, 2022). As transnational, multilingual users of social media platforms from both China and the US, operating in the “borderlands” (Mao, 2006) of the “liminal space” (Sun, 2020), we have become weary of the increasingly toxic online environments in both countries. When engaging with transnational politics in these spaces, we are frustrated with the lack of consideration for complex transnational positionalities in the design of these technologies, especially in their reporting features for purportedly addressing such aggression. As our study of China’s Sina Weibo and the U.S.’s Twitter (two microblogging technologies) later shows, these platforms are often designed based on the cultural and legal parameters of their home countries and those countries’ dominant political and cultural values.

Situated in scholarly conversations on the rhetorics and ethics of social media platforms and interfaces and human-centered, cross-cultural technology design, we investigate Weibo’s and Twitter’s reporting interfaces from the perspective of transnational, multilingual users whose experiences challenge mononational and monocultural technology designs. This article aims to answer these questions:

  • How can we perform critical interface analyses in transnational and cross-cultural contexts?
  • How can the interface design of microblogging technology reporting features contribute to the continued oppression of marginalized transnational and transcultural users (such as women and feminists)?
  • How can we better design interfaces for social justice with transnational users in mind? In other words, how can designers account for different and possibly competing cultural values that may emerge as transnational and transcultural users engage with their technologies?

Digital rhetoric scholars have long examined technologies and platforms critically and rhetorically rather than as value-neutral tools (Selfe & Selfe, 1994; Arola, 2010; Brown, 2015; Holmes, 2016; Walton et al., 2019). As Edwards and Gelms (2018) argued, “Platforms grant access, but they also set the conditions for that access. Platforms promise to be catalysts for public participation, but they also mask their role in facilitating or occluding that participation. Platforms make decisions, but they often downplay, obfuscate, and/or black box those decisions” (para. 3). This blackboxing is especially problematic when it comes to processing violations and aggressions, particularly when platforms take little responsibility for addressing abusive and harassing behaviors (Brown & Hennis, 2020).

Rhetorical, critical, and ethical approaches to interface analysis and design require that we see the interface not only as a place of technological, social, and cultural interaction (Carnegie, 2009) but also as a meaning-making site itself (Sano-Franchini, 2018). Interfaces thus materialize this blackboxing in moments of user interaction (Sun & Hart-Davidson, 2014). Accordingly, digital rhetoric and platform studies scholars have argued for solutions that speak to content moderation policy development that raises the accountability of platforms in order to protect users against online aggression (Gillespie, 2018; Reyman & Sparby, 2020; Trice et al., 2020).

Refuting the neutrality of platforms allows technical communication researchers to pay critical attention to how values are embedded in technological features and functions and to question the interactions afforded by interfaces, which reflect a platform’s ethos and identity and thus shape user experiences, in terms of both appearance and technological function (Sparby, 2017; Sano-Franchini, 2018; Gallagher, 2020). Sano-Franchini’s (2018) critical interface analysis focused on users’ embodied experiences when interacting with an interface over time. She then provided a list of questions for UX practitioners to consider so that they can be more mindful of how UX design can “potentially uphold and/or undermine citizen voices, public deliberation, and equal access and opportunity, especially considering the political use of platforms such as Facebook” (p. 402). We are interested in what this might look like in cross-cultural and transnational contexts.

Technical and Professional Communication (TPC) scholars have drawn on interdisciplinary approaches to examine cross-cultural technology design (Sun, 2012; Sano-Franchini, 2017; Wang & Gu, 2015, 2022; Gu & Yu, 2016), work that has developed tools for examining how values are imparted through technological artifacts and how cultural and political relationships or social structures can emerge in technology use (Orlikowski, 2000). To account for power differences in this design work, extending work from critical design in human-computer interaction (HCI) studies (Bardzell et al., 2012), Sun (2020) argued for a critical and relational examination of how micro interactions in users’ everyday lives are influenced by macro institutional structures that may be designed with the more dominant western/Global North epistemologies. Sun’s (2020) “practice-theoretic approach” focuses on “engaging with and transforming the world through embodied activity, mediated by artifacts, based on shared understanding” (p. 39), treating “practice as the unit of analysis and design intervention” and fostering “a more engaged interaction between human and technology through an integrative perspective” (p. 43). This approach aligns with the values of human-centered design (HCD) with social justice aims (Dombrowski et al., 2016; Jones, 2016; Costanza-Chock, 2020), which consider how communication design can enact injustices and how we can amplify and center the experiences and practices of those who are economically, culturally, socially, and politically disadvantaged (Walton et al., 2019).

Like Sun, we posit that we should not only examine and practice localized design of technologies but also learn how global users engage with technologies, drawing not only on their home cultures but also on their adopted cultures. For example, given increasing tensions between China and the US, transnational social media users like us often need to navigate online spaces where anti-feminist values manifest in tandem with competing nationalist values in different ways. In this article, we develop a social justice-oriented comparative critical framework for interface analysis that accounts for cross-cultural and transnational experiences and the broader impacts of our digital experiences for a technomoral future (Vallor, 2018). We then apply this framework to the analysis of the reporting features on Weibo and Twitter and conclude with a discussion of the implications of this research.

A Social Justice-Oriented Comparative Critical Framework for Interface Analysis

Comparative rhetoric studies focus on how we can represent the “other” so that the other does not lose its own otherness and so that such representation does not turn out to be “useful” only to the Euro-American West, thus inviting global, transnational perspectives that challenge the Euro-America-centric paradigm (Mao et al., 2015). Comparative methodology is therefore ontological and epistemic (Mao et al., 2015), requiring researchers to engage in the “art of recontextualization,” which is “informed by an outright rejection of any external principle or overarching context to determine the context of the other, and it further relies on terms of interdependence and interconnectivity to constitute and regulate representation of all discursive practices” (Mao, 2013, p. 218). For interface analyses, this means recognizing that transnational users’ interpretation of and engagement with an interface necessarily enacts a recontextualization in which the cultural context of the interface encounters the cultural context of users.

Sano-Franchini (2018) defined critical interface analysis as “the meaning-making function of the site that blends theory, critique, and reflection on embodied experience in a recursive fashion, understanding that the relationship across the three can lead to an intentionally reflexive critical approach” (p. 391). In particular, this method pays attention to users’ affective, cultural experiences when interacting with an interface. To answer her question, “What kind of society do we want to live in?” (p. 403), we bring a virtue ethics approach to further understand how the affordances of an interface may shape user behaviors and dispositions (Gallagher, 2020).

A rhetorical virtue ethics approach to interface analysis asks that we pay attention to how interfaces encourage different kinds of ethical commitments from users. Drawing from three virtue ethics traditions, Vallor (2018) developed a technomoral framework that pays attention to the “social context of concrete roles, relationships, and responsibilities to others” surrounding technologies, with a focus on moral self-cultivation (p. 33). To add a rhetorical dimension to Vallor’s conception of virtue ethics, Colton and Holmes (2018) argued for contextualizing virtue ethics through habitual development (hexis), defined as “the disposition, state, or bodily comportment of a person brought about by the development of habits” (p. 32). In other words, how might a user’s interaction with an interface cultivate certain ethical commitments? Answering this question naturally requires considering cultural contexts.

Finally, we argue that this framework must be social justice-oriented, because cultivating virtues in cross-cultural contexts requires us to consider how transnational users may be marginalized. Walton et al. (2019) used Young’s theory of the “faces of oppression” to interrogate ways that TPC practices can enact oppression, including marginalization, cultural imperialism, powerlessness, exploitation, and violence. Marginalization excludes particular groups from meaningful participation in society. For comparative interface analysis, we ask: How are the interfaces marginalizing transnational, multilingual users? Cultural imperialism occludes the perspectives of oppressed groups and sets up a dominant culture as a norm by which other cultures are judged and found lacking. For comparative interface analysis, we ask: How does the interface reproduce colonial, imperial values while erasing cultural values from historically marginalized groups? Further, people experience powerlessness when they lack autonomy and authority. When an interface actively marginalizes certain users, it can render them powerless when interacting with it.

Marginalization, cultural imperialism, and powerlessness can lead to violence, including physical and psychological attacks and/or threats, as well as the reliving and witnessing of violence and violent acts that harm people’s bodies, minds, and possessions. For comparative interface analysis, we ask: How does an interface facilitate violence toward transnational, multilingual users? Exploitation can be understood as the material and labor exploitation of people of color. In digital contexts, we can also see exploitation in how platforms profit from user engagement and data (Jarrett, 2018). For interface analysis, we ask: How does an interface impart a platform’s assumptions about global users’ labor and data in transnational spaces? Finally, we must also pay attention to how these faces of oppression can manifest intersectionally (Crenshaw, 1991) in cross-cultural interface interactions.

Bringing these theoretical and methodological approaches together, we adopt a comparative critical interface analysis framework with a social justice orientation and an attention to how interface design can cultivate different ethical dispositions that may contribute to or redress faces of oppression, particularly for transnational, multilingual users. Our framework operates with the base assumptions of Sano-Franchini’s (2018) critical interface analysis methodology, paying attention not only to the features and functions of the interface but also to “the ideological and cultural values and assumptions imparted through the interface” (p. 391). When considering the affordances and limitations of the interface and its use, she argued that we should also consider the user environment and embodied experiences and emotions so we can unpack the “memories, literacies, and histories” the interface relies on, which can benefit some users but exclude others (pp. 391–392). We further extend this heuristic by drawing attention to transnational users’ complex positionalities and by making more explicit a social justice orientation to identify how transnational users may be excluded by an interface design. Our rhetorical use of virtue ethics also extends a relational understanding of how the interface cultivates virtues or vices in transnational contexts (Colton & Holmes, 2018; Vallor, 2018). Thus, we employ the following heuristic questions in addition to Sano-Franchini’s (2018) method:

  • How do the ideological and cultural values imparted by the interface interact with the cultural and ideological values of transnational users?
  • How do transnational users navigate these competing values? What kinds of affective experiences may they have? What dispositions and deliberations are facilitated by the interface and how do they impact transnational users?
  • How does the interface empower or fail users through cultivating virtues or vices (hexis) in transnational contexts?
  • How does the interface reproduce colonial, imperial values, while erasing cultural values from historically marginalized groups through its design? How does the interface facilitate online aggression and violence toward transnational, multilingual users?
  • How does the interface impart a platform’s assumptions about these users’ labor and data in transnational spaces?

To apply these heuristic questions, our framework requires us to traverse cultural contexts by first analyzing the cultural values embedded in the interface design from culture A (see Figure 1) and then interpreting those values from the perspective of transnational users who also embody the values of culture B. This process of recontextualization may reveal tensions between the values of the interface, which are more monocultural and nationalistic, and the values of transnational users, which are perhaps more plural. It is also important to note that our use of cultures A and B in the framework does not imply a monocultural view of any nation; the labels simply indicate the cross-cultural traversal. A transcultural user from any given country can easily identify additional cultural values they embody that may clash with the dominant national and cultural values of the interface. While this is presented as a linear process, the analysis is iterative.

Figure 1: Social justice-oriented comparative critical interface analysis framework

Findings

Our analysis focuses primarily on how Twitter’s and Weibo’s* reporting interfaces handle two types of aggression, direct (personal) aggression and value-driven misinformation, which we have experienced and observed as users of both platforms. In each case, we present an issue related to feminists and women’s rights that has traversed from one country to the other, reflecting cultural and ideological tensions. Given how fast social media technologies change their features, the interfaces we examine here were collected from April 2021 to January 2023. In applying our framework, we start by describing the features on the interface, including the violation options that can be reported and how we navigated those options given our positionality as transnational feminist users, focusing on the first four bulleted questions of the framework. We identify how the cultural and ideological values imparted by one interface clashed with our cross-cultural and ideological values as transnational feminist users, limiting the deliberative acts afforded by the reporting options and shaping our affective experiences. For our cases, we contextualize this interpretation in the feminist discourses of the two national contexts, noting how users interacted with one platform and then the other around the same case. In this process, we identify how neither platform fully accounts for the values that transnational feminist users like us embody. Ultimately, we illustrate how these interfaces may cultivate certain ethical commitments, such as expediency, and political ideologies that lead to the oppression of transnational feminist users.

Case 1: Reporting Direct Aggression

In 2021, Chinese feminist activist Xiao Meili posted a viral video on Weibo showing a smoking man attacking her and her friends and yelling gender-based slurs when they asked him to put out his cigarette. The initial wave of online support for Xiao quickly turned when some Weibo users pointed out that Xiao is a “feminist” and then wrongly accused her of being a supporter of Hong Kong independence and “a nation’s traitor” influenced by “foreign forces,” ultimately leading to Weibo deleting Xiao’s account (Xiao, 2021). While we don’t have the space to delve deeper into the complex and controversial discourse around feminisms on Weibo (Chen & Wang, 2022; Huang, 2022), many Chinese Internet users see “feminism” and “feminists” as terms and ideas imported from the West. Combined with the increasingly nationalistic and anti-“foreign forces” discourse in Chinese digital spaces, this perception leads many users to equate feminists with “foreign forces” that must be condemned.

After joining an online campaign to support Xiao, Chen received multiple comments attacking her as a feminist and a national traitor influenced by “foreign forces.” In this case, we share how Chen navigated the reporting interfaces when “complaining” about a comment made to her. (The English version of Weibo uses “complain” rather than “report”; the original Chinese term 举报 has a more punitive connotation than the more neutral tone of “reporting.”)

On Weibo, while the general “complain” interfaces look fairly similar for different types of content (posts, comments, users), the violation options available for each type of content may not be the same. When complaining about a comment on her post (“no wonder you are located in the U.S.”), Chen had to choose both the type of violation and “the specific reasons” for the complaint on the same interface. Chen considered this accusation a personal attack, given that her post was only supporting Xiao Meili, not “colluding with foreign forces.” However, what are labeled “specific reasons” under “personal attack” are means of attack, not “reasons” (see Figure 2).

While further explanations are provided for some violation types, the user is not given time or space within the interface to ponder the meanings or implications of these options or to question these categories through direct engagement with Weibo, emphasizing an ethic of expediency (Katz, 1992). A user is thus encouraged to make a quick decision, submit the complaint, and wait for a response from Weibo, which is often simply a confirmation message with no follow-up, or a suggestion to “blacklist” a user so that their posts or comments are no longer visible.

Figure 2: Weibo’s complaint interface for a comment, highlighting the “specific reasons” under the option “personal attack”

More problematic is the way violations are defined, categorized, and framed, particularly for transnational users. This is perhaps why Chen’s complaint may never contribute to improving the toxic, nationalistic, patriarchal affective economy on Weibo (Huang, 2022), regardless of which violation type she chooses. For example, it is hard to decide whether it is better to choose “personal attacks” or “online violence” to label the aggression she received. Instrumentally, the option “personal attacks” allows users to report anyone who has attacked them personally. But socially, what counts as an “attack” reflects Chinese cultural and political values. Then, under “personal attack,” it can be hard to choose a reason, especially when a nationalist comment can embody all three options (unkind discourse; insult and hurl abuse; promote hate and discrimination), yet only one option is allowed.

From a virtue ethics perspective, we may see such an interface as failing to facilitate relational understanding. In defining relational understanding, Vallor (2018) drew from Mengzi, who argued that “blind, unthinking conformity to the conventional social order” (p. 81) is not beneficial because it fails to contextualize human relationships. Although the comment itself does not contain explicitly insulting or hateful language, contextualizing it in Weibo’s anti-feminist affective economy reveals the nationalistic logic behind it, which is insulting and hateful. Ultimately, Chen chose “unkind discourse” due to the mild language of the comment, yet she could also see it as “promoting hate.”

“Promote hate and discrimination” also does not recognize that some groups, such as women and feminists, are unjustly targeted. When Xiao’s case broke out in April 2021, Weibo’s interface did have a violation type of “gender-based discrimination” under the category of “promoting hate” (see Figure 3). Soon after Xiao’s account was deleted, however, that option disappeared; instead, the interface includes “promoting hate and discrimination” as a reason for personal attacks, thus reducing a social problem to the individual level. This change further marginalizes transnational feminist users like Chen because it no longer recognizes the anti-feminist and nationalistic nature underpinning such attacks, rendering users powerless.

Figure 3: An image posted by the Weibo CEO shortly after Xiao Meili’s incident in April 2021, showing “gender discrimination” as a reason under the violation type “promoting hate”

Like many Chinese sociopolitical events, Xiao’s case also traveled to Twitter, where overseas Chinese nationalists continued to attack her and the Chinese feminist movement. On Twitter, one can report an account if its profile information contains abusive content or its tweets enact or threaten violence. While Weibo users only have the option to label the type of attack, Twitter’s interface asks users first to identify who the report is for and what happened, then to identify the means through which it happened, separating the nature of the violence from the acts of violence, and finally to validate the report by identifying the specific type of violation (see Figures 4 and 5). When the attack is identity based, users are further asked to verify the specifics of the attack. Because users have to traverse across interfaces rather than staying in one as on Weibo, Twitter arguably does a better job at cultivating relational understanding and prudential judgment, building more friction into the reporting process to help users pause and deliberate over their decisions before submitting a report.

Figure 4: Screenshots of Twitter’s reporting interfaces when choosing the option “attacked because of their identity” for “someone else or a specific group of people”

Figure 5: Further steps of the Twitter reporting process, continued from Figure 4

Yet the types of direct aggression still reflect limited cross-cultural understandings that can prove inadequate when the reported content involves issues from non-English countries and cultures. In this case, Chen tried to report a Chinese tweet that directly attacked Xiao Meili for “到处操作拉帮结派 搞男女对立 暗中支持西方MeToo反动组织” (“operationalizing collusion everywhere to promote man vs. woman opposition, secretly supporting western reactionary organization MeToo”). The tweet could be seen as a direct attack on Xiao based on her identity as a feminist or as spreading misleading information, since MeToo is neither an organization nor necessarily reactionary against the Chinese political system. However, because the tweet directly accused Xiao of colluding with so-called “Western forces,” Chen chose to report it as an attack on Xiao’s identity, one that wrongfully purports she is a traitor to her country.

It is more difficult to label the “how” of this act, as the options on Twitter do not account for the complexity of China’s socio-political situation. We might select any of the options, including “harmful tropes,” “wishing them harm,” “spreading fear about them because of their identity,” and “encouraging others to harass them because of their identity,” even though the language of the tweet may not explicitly reflect these. But for transnational feminist users like Chen, none of these accurately captures why this post can be seen as an attack on Xiao’s identity given the Chinese nationalist ideological construct of feminism.

It is clear that both platforms recognize identity-based direct aggression or attacks on individuals or groups of people. But for transnational users who report such attacks in a cross-cultural context, the cultural or social values imparted by the interface may not acknowledge the complexity of their experiences. For example, Weibo’s discrimination and hate options no longer account for specific marginalized groups such as feminists, while Twitter’s “identity attacks” option does not explicitly account for how an attack on someone’s identity can be based on political values rooted in the values, ideologies, and norms of a different country.

Case 2: Reporting Misleading Information

Case 2 presents Xiaobo’s user experience on Twitter and Weibo while reporting comments or users opposing women’s reproductive rights. On June 24, 2022, the U.S. Supreme Court officially reversed Roe v. Wade, ending the constitutional right to abortion that had been upheld for nearly a half century (Totenberg & McCammon, 2022). This case reflects, in particular, the failures of reporting value-driven misinformation on both Twitter and Weibo. By “value-driven,” we mean information that is driven by particular religious or political views (such as conservative vs. liberal views, especially on women’s reproductive rights) and that is not scientifically sound yet difficult to argue against with scientific evidence alone.

On Twitter, Xiaobo tried to report a post reflecting conservative Christian values (see Figure 6). When reporting, she found the interface too vague and general, not accounting for social and political controversies (see Figure 7).

Figure 6: A tweet using visual and textual arguments against the feminist slogan “My Body, My Choice” by equating abortion with “killing babies”

As shown in Figure 7, the reporting categories “Attacked because of their identity,” “Harassed or intimidated with violence,” “Shown sensitive or disturbing content,” and “Shown misleading info” all seem relevant to the case. The tweet can certainly make users who support abortion rights feel attacked because of their identities, as the post equates choosing or supporting abortion with the crime of “baby killing.” “Shown sensitive or disturbing content” is also relevant because the images and texts describe baby killing and babies’ bodies within the womb. Yet the real problem of the tweet is the value behind it, enacting intersectional oppression on those who have no power to change the new law, who lack the money to travel out of state for the procedure, or who must relive violent experiences such as rape or incest.

Figure 7: Reporting interface on Twitter for misleading information

Because the tweet presents a misleading argument, Xiaobo clicked “misleading info” to see the next step, but she found only two sub-categories available: misleading information related to the COVID-19 pandemic or to the election process (see Figure 8), neither of which is applicable. Users would then have to go all the way back to the beginning of the reporting interface to try other options. This implies that Twitter condones anti-feminist or anti-women discourses despite its policy condemning gender-based violations.

Figure 8: Reporting interface on Twitter for misleading information (cont’d)

Ostensibly, the misleading information category is Twitter’s response to rampant fake news in the post-truth world. However, the root problem of the post-truth world and its divisive rhetorical landscape is not just a lack of facts or wrong information but how information is presented with values and ideologies. For example, while feminist users see the tweet in Figure 6 as scientifically wrong, “pro-life” believers may see it as fact. Twitter’s reporting options under “misleading info,” however, offer no space for users to explain how such tweets can be misleading in the abortion rights debates. In the end, Xiaobo’s only options were to mute or block the user in question, leaving her feeling powerless and frustrated.

The limited construct of “misleading info” as a violation category does not cultivate the virtues of honesty, justice, care, and civility, as it fails to recognize the nuances of this politicized discourse. Without an adequate reporting category, the opportunity is missed to educate users who have no idea how their posts could have impacted others. By not recognizing that other political issues can also be breeding grounds for misleading information, these limited interface choices fail to enact the virtues of justice and care toward marginalized groups and to foster civic discussion and deliberation.

The debates and discussions on U.S. abortion law also happened on Weibo. In a video post, a Key Opinion Leader located in the US, “李三金Alex,” tells the story of a Louisiana woman whose OB told her that her fetus had acrania, a rare congenital disorder in which a fetus’ skull does not form inside the womb; the diagnosis is fatal, meaning that even if she were to give birth, the baby would die. However, according to Louisiana laws, she was denied an abortion, a cruel reality for many women in the US after the overturn of Roe v. Wade (Sanchez & Alonso, 2022). Among the many comments on this video, Xiaobo found one particularly disturbing: “The U.S. population is decreasing and there has to be a way to solve this issue” (see Figure 9).

Figure 9: 李三金Alex’s video with comments

When trying to complain about this comment, Xiaobo chose “inappropriate value orientations,” which seems to indicate a value-oriented concern (an option not available on Twitter). However, the interface does not provide further categories on what is considered an “inappropriate value orientation” (see Figure 10). This design does not cultivate the virtues of care and perspective, as it forces users to quickly label the action without deliberating over what counts as an inappropriate value orientation, leaving users to judge based on their own opinions about what is appropriate. In fact, Weibo has very detailed definitions of what is considered inappropriate (see Appendix I), of which users may not be aware.

Figure 10: Weibo’s inappropriate value orientation reporting category

The inappropriate information options in Weibo’s Community Guidelines seem to protect a wide range of rights, from racial and gender rights to disability rights, as well as civility norms such as not using aggressive words and phrases. On the interface, however, “inappropriate value orientations” is treated as a category separate from other forms of violations that may fall under categories such as “personal attacks” or “promoting hate.” Using the word “value” explicitly in the interface can be seen as prompting users to treat dominant social and political values as the standard. For example, ways to “干扰公共秩序” (disrupt public order) can be seen as upholding the “harmonious society” of the contemporary Chinese socio-political environment. In this case, the comment Xiaobo tried to report might not be categorized as violating any policy, yet it arguably enacts discursive violence toward all women living in the US, regardless of race.

As a transnational feminist, Xiaobo wanted to promote feminist rights and transnational understanding on Weibo and thus saw this post as problematic. Yet the categories considered “inappropriate values” do not reflect a global orientation. As in Case 1, Weibo’s nationalist affective economy and antagonism toward Western countries, especially the US, embolden such discourse, which is supported by platform policies as well. In a separate category on “harmful information about current affairs and politics,” the community guidelines even explicitly state that such content can include anything that threatens the country’s (China’s) unity, sovereignty, and national borders, a rule explicitly driven by nationalist values (see Appendix II).

Discussion

The framing of the reporting interfaces on the two platforms reflects their different values. On Weibo, reporting aggression and violence is labeled “complain,” a term with a more punitive and antagonistic tone. On Twitter, “report” is a more neutral term, and the reporting interfaces can be separate from, albeit linked to, other functions such as muting or blocking a user. Reporting interfaces are procedural, as the goal is to allow users to report or complain to the platform about issues they deem problematic. On Weibo, the original Chinese term for “complain,” 举报, has a cultural connotation originating in the era of the Cultural Revolution, when it referred to telling on people to an authority for violating dominant social and political values. This meaning downplays the procedural aspect and encourages people to treat the function as a method of telling on people, reinforcing an ideological echo chamber. Twitter, on the other hand, promotes slower deliberation by making users click through several interfaces and deliberate over their reporting actions, which can cultivate virtues of care and perhaps self-control that may reduce rash reporting.

Ultimately, however, both platforms have designed their reporting interfaces with the aim of efficiency, which reduces the complexities of how one might interpret the violation categories on the interfaces. While Weibo does have a detailed policy that explains different kinds of violations, the interface presents simplified versions in language that is consistently vague and aligned with the dominant political ideologies of its policies, rendering powerless the transnational users who might critique such values. On Twitter, we see more explicit acknowledgement of aggression or violence against marginalized communities, such as those based on race and gender, but this nod to diversity is not integrated into its categories of misinformation, which do not account for how people with different values may understand “facts” differently. Misinformation is also limited to issues related to COVID-19 and election processes, reflecting a vagueness akin to Weibo’s language. Even within the category of political processes, for example, we only see options related to elections. How, then, can we report on Weibo content that directly attacks and dehumanizes American people when “Western forces” are seen as a threat to national security? And how can we report on Twitter content that misrepresents reproductive justice and the transnational feminist #MeToo movement?

As we discussed in the introduction, digital rhetoric scholars and platform researchers have argued for a value-driven approach to content moderation policies on social media platforms. The design of reporting interfaces is intricately connected with policies related to content moderation and online community wellbeing, which tend to be based primarily on the cultural and legal parameters of the country of their uptake. As such, these policies, when “traveling” with transnational users, may not work as well, incurring problems not unlike what Rebecca Dingo (2012) pointed out when studying the networked arguments of public policy development in the context of transnational feminism. Specifically, she used Inderpal Grewal’s term “transcoding” to show how “as rhetorics move they do not always carry the same ideological assumptions” (p. 31). In the context of technological design, we might examine how social media use policies/community guidelines as a genre have been transcoded across different genre uptakes. For example, as a localized uptake of microblogging technology, Weibo modeled much of its initial design and development after Twitter, but in its community policy language it transcoded policy related to community wellbeing into a very value-driven set of policies based on dominant political and nationalistic ideologies. Certainly, when mainstream U.S. media and politicians tend to discuss China in a monolithic and othering way, it is also hard to imagine Twitter adopting sufficiently ethical policies for assessing the nuances of Chinese content.

For reporting online aggression and violence, what counts as aggression (such as promoting hate or sharing misleading information) is not universally defined. Both platforms recognize these as inappropriate acts, but their policies interpret, or transcode, this universal disdain toward these acts differently. On Twitter, the hateful conduct policy does recognize that certain groups are disproportionately targeted online and even acknowledges the role of intersectionality (“Hateful conduct policy,” 2022). However, these “protected categories” do not explicitly account for people who are abused for their activism or for ideologies that oppose an authoritarian state. On Weibo, the policy on promoting hate clearly condemns the act of labeling people based on biological, psychological, geographical, or cultural markers in order to discriminate against or attack them (see Appendix I). But a parallel category providing rules on discussing current and political affairs and social information clearly indicates what is considered harmful information, prioritizing the interests of national borders and security (see Appendix II). For a feminist labeled a “nation’s traitor,” this rule can override concerns about whether the labeling itself promotes hate. Similarly, on Twitter, the category of misleading information prioritizes only issues related to elections and COVID-19 in the US, without acknowledging other contexts of civic participation.

Conclusion

To conclude, we return to the questions that Sano-Franchini (2018) asked at the end of her article to provoke more equitable, accessible design interactions for civic deliberation, especially when dealing with political issues. Our study also works toward answering Sun’s (2020) question of “how [we should] design a social media technology that is efficient, effective, engaging, and empowering for culturally diverse users in this increasingly globalized world” (p. 191). How should technologies be designed to empower transnational users and cultivate more equitable interactions? Specifically, in the context of reporting online aggression, how might a reporting UI design mediate a transnational user’s experience in a way that accounts for the different and possibly competing cultural values the user may bring? Further, how can interface design empower users to cultivate more positive hexis in addressing toxic online culture and online aggression?

Of course, the cultivation of hexis is dependent on context. Sun’s (2020) practice-theoretic critical approach to design provides helpful insights, paying attention both to the messy daily “micro social interactions” of users and to the “macro view” and the resulting global interconnectedness (p. 203). It thus pushes technology designers to ultimately promote both cultural and epistemic diversity. Nevertheless, as our cases show, addressing online aggression makes this challenging. Some may even question how one might design with values that should guide transnational user behavior on a platform often bounded by nation-based policies and regulations: how might we decide what is appropriate and what is not in this complex context? How can designers account for what Sun (2020) called the “liminal” experiences of transnational users?

To address this seemingly impossible dilemma, we turn to critical design and to Black technical and professional communication scholars who have inspired us to think and design transformatively. According to Bardzell et al. (2012), critical design is meant to be provocative and transgressive, sometimes challenging the existing ideologies and implicit biases of users. Black TPC scholars have long done this work by centering Black experiences in TPC genres and practices. In their call for the “just use of imagination,” Jones and Williams (2020) urged that we recognize that systems of oppression were designed “in support of white supremacist and racist ideas and ideals” and that we must therefore actively use “imagination” to “dismantle, [and to replace] oppressive practices with systems that are founded on equality, access, and opportunity.” We recognize that this work emerged from the Black/African American context, and we are not suggesting that all transnational users embody similar kinds of experiences. However, Jones and Williams (2020) helped us recognize how systems are oppressive and limiting by design. For transnational feminists, the oppressive experiences we studied here likewise require us to recognize that the reporting interfaces were intentionally designed to uphold an ethic of expediency and efficiency with a punitive approach, and that technologies are intentionally designed with monocultural values and managed within nation-state borders. Thus, rather than seeing these issues as a failure of the system, as Jones and Williams (2020) warned us against, we seek to imagine transformative reporting interface design.

To account especially for transnational users negotiating multiple cultures, why not design reporting interfaces as spaces of reflection and negotiation that cultivate users’ relational understanding? To do so, we recommend that social media platforms consider transnational user-centered and accessible design in the following ways (a hypothetical sketch of what such a design might look like follows the list):

  • Include different languages in their reporting process so users can better understand what such procedures are like, and how they should proceed.
  • Offer ways to make the reporting mechanisms more apparent, such as buttons that lead to pop-up screens where users can read reporting policies, definitions of each reporting or complaint category, and next steps in reporting, so that they can pause and consider cultural values or value systems, facilitating more careful judgment instead of pushing quick decisions.
  • Provide opportunities for users to reflect on what value systems drive their reporting decisions, giving users some control over how they might define violations or issues they have encountered but have no way of reporting.
  • Give users options within the reporting interface to directly challenge or give feedback on the options/policies provided on the interface, thus giving them more agency to challenge the inadequacy of the existing violation options and reporting features.
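
To make these recommendations concrete, below is a minimal, hypothetical sketch in TypeScript (ours, not any platform’s actual schema; all names are illustrative) of how reporting categories and submissions might be modeled so that multilingual labels, full policy definitions, and user feedback travel together:

```typescript
// Localized strings keyed by language tag, e.g. { "en": "...", "zh-Hans": "..." }.
interface LocalizedText {
  [languageTag: string]: string;
}

// A violation category that carries its own multilingual label and the
// full policy definition, readable before the user submits a report.
interface ViolationCategory {
  id: string;
  label: LocalizedText;            // shown in the user's preferred language
  policyDefinition: LocalizedText; // expandable in a pop-up during reporting
  subcategories?: ViolationCategory[];
}

// A report that preserves the user's own framing alongside the category choice.
interface ReportSubmission {
  targetContentId: string;
  categoryId: string | null;       // null when no existing category fits
  // Free-text reflection: lets users articulate the value systems behind
  // their report instead of being forced into a single checkbox choice.
  userRationale?: string;
  // Direct feedback on the categories/policies themselves, giving users
  // agency to contest inadequate or missing options.
  categoryFeedback?: string;
  preferredLanguage: string;       // e.g. "zh-Hans"
}
```

The key design choice in this sketch is that userRationale and categoryFeedback are first-class fields rather than afterthoughts, so a report can still carry the user’s own framing even when no existing category fits their cross-cultural experience.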

Importantly, reporting interfaces should consider the multiple ways that content can be oppressive to marginalized groups, broaden the range of violation types, increase flexibility in how problematic content is categorized, and offer a more dynamic approach that challenges traditional or dominant ideological understandings of civility or equality. This would require designers and platform owners to research more deeply how cultural conflicts and global political tensions influence users’ everyday lives in order to develop more appropriate policies. Finally, the methodological framework we built here should be helpful to designers and researchers of any interface. This comparative rhetorical and virtue ethics approach invites researchers and practitioners to engage in more critical, social justice-oriented work across national borders and cultural boundaries.

References

Arola, K. L. (2010). The design of Web 2.0: The rise of the template, the fall of design. Computers and Composition, 27(1), 4–14. https://doi.org/10.1016/j.compcom.2009.11.004

Bardzell, S., Bardzell, J., Forlizzi, J., Zimmerman, J., & Antanitis, J. (2012). Critical design and critical theory: The challenge of designing for provocation. Proceedings of the Designing Interactive Systems Conference (DIS ‘12) (pp. 288–297). ACM.

Brown, J. (2015). Ethical programs: Hospitality and the rhetorics of software. University of Michigan Press.

Brown, J., & Hennis, G. (2020). Hateware and the outsourcing of responsibility. In J. Reyman & D. M. Sparby (Eds.), Digital ethics: Rhetoric and responsibility in online aggression (pp. 17–32). Routledge.

Carnegie, T. A. M. (2009). Interface as exordium: The rhetoric of interactivity. Computers and Composition, 26(3), 164–173.

Chen, C., & Wang, X. (2022). Contemporary Chinese feminist rhetorics: #MeToo in China. Enculturation. https://www.enculturation.net/contemporary_chinese_feminist

Collins, P. H. (2008). Black feminist thought: Knowledge, consciousness, and the politics of empowerment (2nd ed.). Routledge. https://doi.org/10.4324/9780203760635-22

Colton, J. S., & Holmes, S. (2018). Rhetoric, technology, and the virtues. Utah State University Press.

Costanza-Chock, S. (2020). Design justice: Community-led practices to build the worlds we need. The MIT Press.

Crenshaw, K. (1991). Mapping the margins: Intersectionality, identity politics, and violence against women of color. Stanford Law Review, 43(6), 1241–1299.

Dingo, R. (2012). Networking arguments: Rhetoric, transnational feminism, and public policy writing. University of Pittsburgh Press.

Dombrowski, L., Harmon, E., & Fox, S. (2016). Social justice-oriented interaction design: Outlining key design strategies and commitments. Proceedings of the 2016 ACM Conference on Designing Interactive Systems, 656–671.

Edwards, D. W. (2020). Deep circulation. In E. Beck & L. Hutchinson Campos (Eds.), Privacy matters: Conversations about surveillance within and beyond the classroom (pp. 75–92). Utah State University Press.

Edwards, D. W., & Gelms, B. (2018). Special Issue on the Rhetoric of Platforms. Present Tense, 6(3). https://www.presenttensejournal.org/editorial/vol-6-3-special-issue-on-the-rhetoric-of-platforms/

Gallagher, J. R. (2020). A pedagogy of ethical interface production based on virtue ethics. In J. Reyman & D. M. Sparby (Eds.), Digital ethics: Rhetoric and responsibility in online aggression (pp. 69–84). Routledge.

Gillespie, T. (2018). Custodians of the internet: Platforms, content moderation, and the hidden decisions that shape social media. Yale University Press.

Gu, B., & Yu, M. (2016). East meets West on flat design: Convergence and divergence in Chinese and American user interface design. Technical Communication, 63(3), 231–247.

Hateful conduct policy. (2022). Twitter. https://help.twitter.com/en/rules-and-policies/hateful-conduct-policy

Holmes, S. (2016). Can we name the tools? Ontologies of code, speculative techné and rhetorical concealment. Computational Culture, 5. http://computationalculture.net/can-we-name-the-tools-ontologies-of-codespeculative-techne-and-rhetorical-concealment/

Huang, Q. (2022). Anti-feminism: Four strategies for the demonisation and depoliticisation of feminism on Chinese social media. Feminist Media Studies, 1–16.

Jarrett, K. (2018). Exploitation, alienation, and liberation: Interpreting the political economy of digital writing. In J. Alexander & J. Rhodes (Eds.), The Routledge handbook of digital writing and rhetoric (pp. 423–432). Routledge.

Jones, N. N. (2016). The technical communicator as advocate: Integrating a social justice approach in technical communication. Journal of Technical Writing and Communication, 46(3), 342–361.

Jones, N. N., & Williams, M. F. (2020). The just use of imagination: A call to action. ATTW. https://attw.org/blog/the-just-use-of-imagination-a-call-to-action/

Katz, S. B. (1992). The ethic of expediency: Classical rhetoric, technology, and the Holocaust. College English, 54(3), 255–275.

Mao, L. M. (2006). Reading Chinese fortune cookie: The making of Chinese American rhetoric. Utah State University Press.

Mao, L. M. (2013). Beyond bias, binary, and border: Mapping out the future of comparative rhetoric. Rhetoric Society Quarterly, 43(3), 209–225. https://doi.org/10.1080/02773945.2013.792690

Mao, L. M., Wang, B., Lyon, A., Jarratt, S. C., Swearingen, C. J., Romano, S., Simonson, P., Mailloux, S., & Lu, X. (2015). Manifesting a future for comparative rhetoric. Rhetoric Review, 34(3), 239–274.

Orlikowski, W. J. (2000). Using technology and constituting structures: A practice lens for studying technology in organizations. Organization Science, 11(4), 404–428. http://www.jstor.org/stable/2640412

Reyman, J., & Sparby, D. M. (2020). Digital ethics: Rhetoric and responsibility in online aggression. Routledge. https://doi.org/10.4324/9780429266140

Sanchez, R., & Alonso, M. (2022, August 26). Louisiana woman who alleges she was denied abortion after fetus’ fatal diagnosis says ‘it should not happen to any other woman’. CNN. https://www.cnn.com/2022/08/26/us/louisiana-abortion-nancy-davis-fatal-condition/index.html

Sano-Franchini, J. (2017). What can Asian eyelids teach us about user experience design? A culturally reflexive framework for UX/I design. Rhetoric, Professional Communication and Globalization, 10(1), 27–53.

Sano-Franchini, J. (2018). Designing outrage, programming discord: A critical interface analysis of Facebook as a campaign technology. Technical Communication, 65(4), 387–410.

Selfe, C. L., & Selfe, R. J. (1994). The politics of the interface: Power and its exercise in electronic contact zones. College Composition and Communication, 45(4), 480–504.

Sparby, D. M. (2017). Digital social media and aggression: Memetic rhetoric in 4chan’s collective identity. Computers and Composition, 45, 85–97.

Sun, H. (2012). Cross-cultural technology design: Creating culture-sensitive technology for local users. Oxford University Press.

Sun, H. (2020). Global social media design. Oxford University Press. https://doi.org/10.1093/oso/9780190845582.001.0001

Sun, H., & Hart-Davidson, W. F. (2014). Binding the material and the discursive with a relational approach of affordances. Proceedings of the SIGCHI Conference on Human Factors in Computing Systems, 3533–3542. https://doi.org/10.1145/2556288.2557185

Totenberg, N., & McCammon, S. (2022, June 24). Supreme Court overturns Roe v. Wade, ending right to abortion upheld for decades. NPR. https://www.npr.org/2022/06/24/1102305878/supreme-court-abortion-roe-v-wade-decision-overturn

Trice, M., Potts, L., & Small, R. (2020). Values versus rules in social media communities: How platforms generate amorality on reddit and Facebook. In J. Reyman & D. M. Sparby (Eds.), Digital ethics: Rhetoric and responsibility in online aggression (pp. 33–50). Routledge.

Vallor, S. (2018). Technology and the virtues: A philosophical guide to a future worth wanting. Oxford University Press.

Walton, R., Moore, K., & Jones, N. N. (2019). Technical communication after the social justice turn. Routledge.

Wang, X., & Gu, B. (2015). The communication design of WeChat: Ideological as well as technical aspects of social media. Communication Design Quarterly, 4(1), 23–35.

Wang, X., & Gu, B. (2022). Ethical dimensions of app designs: A case study of photo- and video-editing apps. Journal of Business & Technical Communication, 36(3), 355–400. https://doi.org/10.1177/10506519221087973

Xiao, M. (2021).【404文库】肖美丽自述:反二手烟被网暴炸号,攻击我的人是谁?(Xiao Meili’s own account: Cyberbullied and had my account deleted for opposing secondhand smoking, who attacked me?) China Digital Times. https://chinadigitaltimes.net/chinese/665064.html

About the Authors

Chen Chen, Ph.D. is an assistant professor of technical communication and rhetoric at Utah State University. Her research focuses on advocacy and resistant rhetorical practices by marginalized communities as civic and tactical technical communication in transnational contexts. In particular, she has been working on disaster response communications and transnational feminist activism in Chinese and Chinese diasporic contexts. Her work has been published in Enculturation, Technical Communication, SIGDOC Proceedings, and several edited collections. She has also published on pedagogical research and has done work examining professionalization processes of graduate students and early career faculty in extra-institutional disciplinary spaces. She can be reached at chen.chen@usu.edu.

Xiaobo Wang, Ph.D. is an assistant professor of English at Sam Houston State University, where she teaches undergraduate and graduate courses in rhetoric and technical communication. Her research focuses on the intersectional areas of communication design, feminist rhetoric, comparative rhetoric, and intercultural technical and professional communication. She can be reached at xiaobo.belle.wang@shsu.edu.

APPENDIX I. Weibo’s Definitions of Inappropriate Value Orientation (excerpted from Weibo’s community guidelines, May 27, 2021; translation our own)

  1. Using exaggerated titles to attract internet traffic, especially when post titles and content don’t align.
    1. Publicity stunts on negative topics
    2. Trolling: fueling debates or quarrels, or distorting the original content/information of events/topics, to damage relationships between or among different communities/groups of users and incite fights on the Internet
    3. Other approaches to boost internet traffic that violate individual or organizational rights.
  2. Promoting hate (Promoting hate refers to the action of using specific physical, psychological, geographical, cultural, and other features to categorize groups of people and tagging them as opposites or enemies, and then using such categorizations to spread or communicate relevant information in order to normalize the marginalization, degradation, discrimination, attacks, or harm on certain groups/communities.)
    1. Organize, lead on, guide majority users to discriminate, defame, insult, or hate individuals or groups based on the following:
      1. Ethnicity, race, religious beliefs;
      2. Gender, age;
      3. Geographical areas, cultural practices of folklores;
      4. Severe illness, disabilities;
      5. Other physical and psychological features
    2. Organize, lead on, guide majority users to disturb public order on Weibo and beyond, including:
      1. Work order of governmental institutions
      2. Operations of corporations
      3. Releasing, performing, and broadcasting of literary, film, and movie works
      4. Releasing, performing, and broadcasting of video games, equipment, relevant products, and exhibitions
      5. Broadcasting of sports events and video games to be held as planned
      6. Legit media’s legit reports
      7. Other forms of interruption on public order
    3. Organize, lead on, or guide majority users to complain or report maliciously on Weibo or other platforms.
    4. Rude aggression: using uncivil words and phrases, including speech that curses or attacks the dead.
    5. Doxxing: exposing other people’s personal information and calling on others to conduct irrational doxxing.
    6. Maliciously exposing private information or visuals: excerpting other people’s screenshots or texts to deliberately intensify conflicts and instigate cyber violence.
  3. Other Inappropriate information
    1. Describing natural disasters and/or disastrous accidents inappropriately
    2. Content with sexual implications or of a seductive nature that leads to sexual fantasies
    3. Bloody, horrifying, or cruel content that makes the body and mind uncomfortable
    4. Promoting vulgar, kitsch content
    5. Other content that negatively impacts online ecology

The definition, demonstrations, resolutions, methods to solve the issue, and results of posting inappropriate information are stipulated in “Weibo Complaint Processing Detailed Policies.”

APPENDIX II. Weibo’s rule forbidding harmful information about current affairs and politics (excerpted from Weibo’s community guidelines, May 27, 2021; translation our own)

Users shall not post harmful information about current affairs and politics.

Harmful information about current affairs and politics includes posts that threaten national and social security according to current laws and regulations, mainly manifested as:

  1. Violating the basic principles of our constitution.
  2. Threatening the country’s unification, national sovereignty, and territorial integrity.
  3. Leaking national secrets, harming national security or national honor and interests.
  4. Promoting terrorism, extremism or inciting and/or engaging in terrorist and/or extremist activities.
  5. Inciting racial or ethnic hate, racial/ethnic discrimination, harming racial/ethnic unity or racial/ethnic rituals, traditions.
  6. Harming national religious policies, promoting cults, superstition.
  7. Spreading rumors, disturbing social order, and harming social stability.
  8. Distorting, vilifying, desecrating, and negating the deeds and spirit of national heroes and martyrs; insulting, slandering, or otherwise damaging the names, portraits, reputations, and honors of national heroes and martyrs.
  9. Promoting gambling, violence, murder, terror, or abetting crime.
  10. Inciting illegal assemblies, associations, processions, demonstrations, and gatherings that disrupt social order.
  11. Other content prohibited by laws, administrative regulations, and national regulations.

The definitions, presentations, principles, methods, and results of harmful information about current affairs and politics are stipulated in the “Weibo Complaint Processing Detailed Policies.”


*We are using the English version of Weibo in this article to see if and how Weibo translates its interface. Most features on the interfaces are not translated into English, as our examples show.