71.2 May 2024

From Interpersonal Privacy to Human-Technological Privacy: Communication Privacy Management Theory Revisited

By Xiaoxiao Meng

https://doi.org/10.55177/tc304825

Abstract

Purpose: Communication privacy management (CPM) theory is a major theory explaining the tensions between disclosing and concealing private information in interpersonal communication. By considering differences in interpersonal and human-technology information disclosure and drawing on existing work related to privacy and technology, this article presents CPM theory as a broad theoretical framework for human-technology privacy boundary management.

Method: This research employed a speculative theoretical approach by drawing on existing literature and synthesizing it to both apply and extend CPM theory’s propositions to human-technology privacy boundary management.

Results: CPM theory can be applied to understand the dynamics of human-technology information disclosure and should incorporate technological literacy as a key consideration in human-technology privacy boundary management. Legal ties, rather than social ties, characterize human-technology privacy boundary coordination. Additionally, in human-technology information disclosure contexts, CPM theory should provide guidance on managing third parties that may gain access to disclosed information.

Conclusion: CPM theory is the most comprehensive framework for how individuals manage privacy boundaries, be it in interpersonal or human-technology contexts. By considering technology as a property of technological actors instead of an actor itself, CPM theory in human-technology contexts becomes a flexible theoretical framework for understanding information disclosure and privacy boundary management, both for existing technologies (e.g., social media, online shopping platforms, artificial intelligence, Internet of Things) and future technologies.

Keywords: communication privacy management (CPM), human-technology, information disclosure, privacy boundary management

Practitioner’s Takeaway:

  • Practitioners developing communication technology-related privacy policies should account for the six foundational considerations in human-technology boundary rule development: cultural expectations, socio-demographic variables, motivations, situational context, risk-benefit analysis, and technological literacy.
  • Newer technologies enable private information flows among technological platforms. Practitioners need to understand the ease and scale with which private information can be transmitted across platforms to design appropriate privacy boundaries that balance business interests with user expectations of acceptable private information disclosure.

Introduction

The issue of privacy intrusion in the Digital Age has gained significant attention in technical communication (Frost, 2021; Green, 2021). Technical communication scholars, particularly human-computer interaction scholars, are increasingly paying attention to privacy issues (Luo et al., 2022). For example, in the context of extended reality, the importance of privacy and security issues in the design of technical instructions has rapidly increased due to the risks of devices continuously collecting data from users (Rantakokko, 2022a). There appears to be an emerging consensus in technical communication that, with the further development of artificial intelligence or data mining, privacy and security issues will become increasingly prominent (Rantakokko, 2022b). Therefore, finding ways to protect and manage privacy in technological contexts has become urgent (Acquisti et al., 2014; Guzman et al., 2019). However, to the best of my knowledge, there is currently little theoretical guidance for technical communication researchers and practitioners on privacy management in technological contexts. As such, this article extends one of the most widely used theories in privacy studies, CPM theory, to build an integrated theoretical framework to guide and assist researchers and practitioners in understanding and protecting user privacy in technological contexts.

CPM theory originated as communication boundary management (CBM) theory to explain how married couples manage private information disclosure with their spouses (Petronio, 1991). Later, Petronio (2002, p. 2) broadened CBM theory into CPM theory to emphasize the centrality of privacy in interpersonal contexts. As the theory’s creator, Petronio (2002, p. 86) argued that “CPM theory provides a systematic approach to understand disclosure about the self by focusing on the process of privacy management,” and other scholars have agreed with this view (Colaner et al., 2021, p. 3). Following its introduction, CPM theory emerged as a major theory examining the process of privacy boundary management in face-to-face interpersonal communication (Steuber & Solomon, 2011). As media technology evolved, interpersonal communication became increasingly mediated, and CPM researchers extended their study of privacy boundary management to private information disclosure through technologically mediated channels (Colaner et al., 2021; Zhang & Fu, 2020).

Although researchers have expanded CPM theory to technological contexts by studying social media, the key focus remains on how individuals manage privacy boundaries in interpersonal contexts. Petronio later acknowledged that new technologies have influenced the theory, but its scope remains limited to interpersonal communication mediated through social media (Petronio & Child, 2020). Scholars have nevertheless increasingly recognized the importance of privacy boundaries with technology itself because of the prevalence of technology in everyday life. The high-profile Cambridge Analytica scandal involving Facebook demonstrated how personal information disclosed to technological actors can be harvested and misused (Breuer et al., 2020). As algorithm-driven platforms (e.g., Google, Uber, Amazon, Alibaba, and Tencent) become increasingly ubiquitous and indispensable for everyday living, concerns over private information disclosed to such technological actors are likewise increasing.

A small but growing number of scholars agree that CPM theory is suitable for human-technology contexts and have applied it to examine how users manage their privacy with technology. Researchers have explored users’ privacy management strategies on e-commerce platforms (Metzger, 2007); their trust in and willingness to use cloud-based storage applications (Widjaja et al., 2019) and the Internet of Things (Pal et al., 2020); and users’ perceptions and behaviors regarding data ownership, privacy rules, and turbulence, each of which reflects a core tenet of CPM theory (Zimmer et al., 2020).

Although researchers have extended CPM theory to include information exchange between individuals and technology, there is little work examining how the differences between the nature of human-technology and interpersonal communication affect CPM’s applicability to human-technology communication. Private information in the context of human-technology communication is conceptually broader (Metzger, 2007) and includes social media profiles, online or app search history, time spent on websites or applications, and past purchases or subscriptions. Interpersonal information disclosure is governed by social norms and expectations, but such norms do not characterize human-technology information disclosure. Furthermore, the extent of the revelation or concealment of personal information in human-technology information disclosure is also likely related to technological literacy. Technologically literate individuals can better leverage technical affordances to conceal information. Hence, the extent to which CPM theory, which was developed based on the nature of interpersonal communication, can adequately account for human-technology information disclosure is unclear.

This article differs from previous studies on CPM in technological contexts in two respects. First, most existing studies compared privacy management in specific technological contexts with interpersonal communication but addressed only selected aspects of CPM theory rather than its full set of tenets. Second, this article establishes a macro-analytical framework rather than exploring micro-level privacy management strategies, privacy concerns, and information disclosure intentions for specific technological contexts; this macro-level approach is more useful for technical communication practitioners, who may not be able to control the specific technological contexts within which they work.

An added advantage of CPM theory as a macro-theoretical framework is that it can accommodate many existing theoretical perspectives and empirical work in the broader online and digital privacy literature. In doing so, existing work related to technology and privacy can be more coherently organized. Gaps and opportunities to expand theoretical understanding of human-technology privacy management can also be more easily identified when there is a coherent theoretical framework that relates existing work with one another. In the sections that follow, I detail how the main tenets of CPM theory are also adequate for theorizing human-technology privacy management as a broad framework and can unify many existing theoretical perspectives of online privacy management. I then conclude with a general discussion of CPM as a broad theory of individual privacy boundary management and outline the potential limitations of this work.

Methodology

This research seeks to determine the extent to which CPM theory, developed in interpersonal communication contexts, can be applied to technological contexts. I first conducted a systematic literature review (Zhou, 2023) to identify existing research and perspectives connecting CPM theory, technical communication, and interpersonal communication. The systematic literature review is an established and efficient method for categorizing and evaluating existing work.

I collected literature concerned with CPM and privacy issues in technological contexts to explore the potential for synthesis. The search was limited to publications from January 1991 to December 2021, and I used major academic search engines (i.e., EBSCO, ScienceDirect, JSTOR, Emerald Insight, and Google Scholar) to retrieve relevant literature. The search terms included “communication and privacy,” “platform privacy,” and “technology and privacy.” Several criteria guided the screening: articles had to be written in English, take privacy as their core theme, fall within the social sciences (where CPM originated), and be peer-reviewed. Collecting and filtering articles ultimately yielded 108 articles for further analysis.

Next, I employed a speculative approach to theorizing to analyze the literature. The speculative approach is a form of reasoning that is based on, but goes beyond, empirical observations (Bryant et al., 2012). It involves reflecting on the essence and value of things to make a qualitative judgment about them, that is, what they amount to and how they are positioned in the world (Ross, 2017). The speculative approach is not a rejection of the empirical approach but a recognition of its inherent limitations, such as its heavy reliance on textual data. Through logical reasoning, such as deduction, the typical characteristics of things are compared and identified, and their essential similarities and differences emerge.

Speculative theorizing as a methodology is less common than better-known methodologies such as grounded theory, and the two differ in key ways. Grounded theory is inductive: theories and explanations are derived from empirical data, and the approach emphasizes building theories grounded in data collected from research participants. The speculative approach, by contrast, is more deductive and hypothetical; it involves proposing speculative explanations, making conjectures, and exploring possibilities without relying solely on empirical evidence. Grounded theory aims to provide an in-depth understanding of a particular phenomenon or social process and seeks to generate theories that explain observed empirical data. The speculative approach may instead be used to explore possibilities, imagine alternative futures, or challenge existing assumptions or theories, and it can stimulate creativity and critical thinking or generate novel ideas. Given the lack of theoretical familiarity with privacy in technological contexts in many fields, including technical communication, the utility of the grounded theory approach is potentially limited. Thus, the speculative approach to reflecting on and extending CPM into fields like technical communication is more appropriate at this juncture, and future theorization can adopt a grounded theory approach for further development.

The speculative reasoning approach follows three steps: seeking common ground, counterfactual reasoning, and prefactual reasoning (Huang et al., 2021). Seeking common ground refers to reaching a consensus about the features of technological contexts and the assumptions of CPM. Counterfactual thinking refers to “imagining how events could have been different” (Huang et al., 2021, p. 1), such as, “If I hadn’t gotten caught in that traffic jam, I would have arrived at the train station on time.” Prefactual thinking concerns “how things will vary from the current reality” (Huang et al., 2021, p. 6); prefactual thoughts take the form, “If action X is taken, it will lead to outcome Y.”

Following the steps of speculative reasoning in the Socratic tradition (Verene, 2016), I first examined the consensus about the features of technological contexts and the assumptions of CPM. For example, scholars contend that CPM assumes a dialectical relationship between privacy exposure and privacy protection, which is reflected in the circular system of boundary coordination, rule development considerations, and boundary turbulence. Then, I engaged in counterfactual reasoning to see which assumptions of CPM would have changed had the theory not been proposed in interpersonal communication. For example, I explored the possibility of private information disclosure for the sake of enhancing emotional ties in human-technology interactions. Lastly, I engaged in prefactual reasoning to see which assumptions would hold if CPM theory were used in a technological context. I revised the original propositions of CPM for technological contexts using counterfactuals but kept the original propositions that remain consistent in technological contexts. Detailed processes of speculation and argumentation are shown in the results.

Speculative reasoning can be applied to the field of technical communication studies. First, speculative reasoning allows researchers to anticipate and understand user behavior in response to new technologies. By speculating about how users might engage with and adopt new technical communication tools, researchers can gain insights into user preferences, expectations, and challenges. This knowledge can inform the design and implementation of user-friendly and effective communication systems. Second, speculative reasoning can help researchers examine the broader societal impact of technical communication. By speculating about the potential consequences and implications of technological advancements, researchers can assess technology’s influence on social structures, cultural practices, and power dynamics; this enables a critical examination of the social, political, and economic implications of technical communication and supports the development of responsible and sustainable practices.

In summary, speculative reasoning offers valuable insights and perspectives to technical communication research. By engaging in speculative thinking, researchers can explore future possibilities, address ethical concerns, inspire design innovation, anticipate user behavior, and examine societal impact. This approach enriches the field of technical communication and enables researchers to remain proactive and responsive to the dynamic nature of technology and communication.

Six Foundational Considerations in Human-Technology Boundary Management

CPM proposes five foundational considerations for rule development to coordinate privacy boundaries among individuals: (a) cultural expectations, (b) gender differences, (c) motivation for revealing and concealing, (d) situational context, and (e) risk-benefit in revealing and concealing information (Petronio, 2002, p. 37). Gender differences in CPM theory, although not explicitly articulated, refer to cisgender differences. CPM theory, to the best of my knowledge, has not extended the conceptualization of gender differences to other gender identities, such as transgender or non-binary identities. As such, the discussion of gender differences in rule development and privacy boundary coordination in this research is biased toward cisgender individuals and may not be completely applicable to other gender identities. Theorizing beyond the cisgender category is important to ensure greater inclusivity, but it is beyond the scope of the present article.

Much of the existing literature examining online or digital information disclosure relates to the process of rule development and how individuals manage privacy boundaries with technology. For example, studies on online or digital information disclosure employing contextual integrity theory (Nissenbaum, 2010) are more concerned with how situational context influences disclosure, while those employing privacy calculus theory (Culnan & Bies, 2003) broadly examine the risks and benefits of personal information disclosure. Like CPM theory, contextual integrity theory and privacy calculus theory account for situational context and for the risks and benefits of revealing or concealing information, which shows that CPM theory has the potential to unify existing theories of online privacy.

I further propose adding a sixth foundational consideration, technological literacy, to CPM theory’s theorization of human-technology privacy boundary management. Information disclosure in human-technology contexts is likely to vary with differences in technical skills and knowledge because individuals differ in their capability to use technical means to control personal information effectively. In the following sections, I detail how the five foundational considerations for rule development proposed in the original CPM theory for the interpersonal context, together with my proposed sixth consideration of technological literacy, can serve as a broad framework of human-technology information disclosure.

Cultural expectations

Privacy is a culturally specific phenomenon. People are socialized into certain culture-specific privacy norms, and those norms are foundational to their ideas of privacy (DeCew, 1997). The importance of privacy hinges on cultural expectations (Altman, 1977) that inform ideas about appropriate privacy boundaries and their formation (Petronio, 2002). Empirical studies have examined the role of culture in human-technology information disclosure, providing evidence of the important role that culture plays.

For example, Liang et al. (2016) found that privacy settings were more effective in encouraging self-disclosure of geolocation information on Twitter in collectivist societies than in individualist societies, and that the influence of cultural values on self-disclosure of Twitter geolocation information was conditional on trust. Another study examining perceptions of privacy-convenience trade-offs for facial recognition technology in China, Germany, the United Kingdom, and the United States found the Chinese to be the most accepting of this technology, while Germans were the least accepting (Kostka et al., 2021). A systematic meta-analysis also found that culture moderates the relationship between privacy concerns and protection behavior (Baruh et al., 2017). These findings suggest that, similar to the interpersonal context, CPM theory as a broad framework of human-technology privacy management should also consider the role of culture in how individuals form privacy boundary rules with technology.

Socio-demographic variables

A foundational consideration for boundary rule development in CPM theory in interpersonal information disclosure contexts is gender. According to Petronio (2002), women and men develop different rules, from their own vantage points, to regulate interpersonal privacy boundaries. In human-technology contexts, evidence suggests similar gender differences in personal information disclosure. Women are more willing than men to reveal their favorite music, books, and religion on social network profiles, whereas men are more likely to disclose their phone numbers (Tufekci, 2008). Additionally, women are generally more concerned about privacy risks than perceived benefits in online disclosure, while men focus on utilitarian rather than hedonic benefits (Sun et al., 2015). A recent study found that gender moderates the relationship between privacy concerns and self-disclosure on social network sites (Zhang & Fu, 2020). It should be noted that most of the literature on gender differences and privacy implicitly considers gender as a binary construct, and these findings may not apply to non-binary genders.

However, in addition to gender, scholars have found that other demographic factors such as age, education, and income are often related to online privacy skills and literacy (Park, 2013), and these skills affect how people cope with privacy issues (Gerber et al., 2018). For example, studies have found age differences in privacy concerns and privacy protection when using Facebook (Van den Broeck et al., 2015). These findings suggest that socio-demographic factors besides gender can also influence human-technology privacy management. Considering socio-demographic differences expands the potential of CPM theory, as a broad framework for understanding how individuals draw privacy boundaries with technology, to explain boundary rule formation in human-technology contexts. Thus, this article refers to this element as “socio-demographic variables” rather than the “gender differences” originally proposed by Petronio (2002), because the concept here is broader than gender.

Motivations

Often, people make behavioral changes based on specific motivations when faced with risks (Rogers, 1975). Motivations such as “reciprocity, liking, and attraction” influence individuals’ privacy boundaries and disclosure in interpersonal contexts (Petronio, 2002, p. 54). The desire to express emotions (Jones & Archer, 1976), strong subjective norms (Heirman et al., 2013), enjoyment, self-presentation (Krasnova et al., 2010), and social support (Waters & Ackerman, 2011) are other motivations to share and disclose personal information. In the interpersonal context, motivations for information disclosure appear to be influenced by social and psychological considerations.

However, in human-technology contexts, especially in technologically driven societies, motivations for disclosing personal information can also include the desire to fulfill basic needs, such as using shopping platforms or mobile payment applications. Disclosing personal information in human-technology contexts can also fulfill self-esteem needs, such as using artificial-intelligence-driven face editing applications. Motivations therefore remain an important consideration in boundary rule formation under CPM theory as a broad framework of human-technology privacy management. However, for the framework to offer a more comprehensive account, the range of motivations must expand beyond social-psychological motivations to incorporate basic and self-esteem motivations. In the context of technical communication, practitioners need to understand users’ needs for new technologies beyond their psychosocial motivations. Maslow’s hierarchy of needs is a useful reference here: physiological needs, safety needs, love and belonging needs, esteem needs, and self-actualization needs. Technical communication practitioners should also consider users’ needs for security and privacy when developing technology or software.

Situational context

CPM theory recognizes that privacy rules in interpersonal privacy management are not static, general rules; rather, these rules are often influenced by specific contexts and situations. The three broad categories of contexts or situations that Petronio (2002) identified in interpersonal information disclosure include traumatic events, therapeutic situations, and life circumstances. The role of context in human-technology information disclosure has mostly been examined using contextual integrity (CI) theory (Nissenbaum, 2010). By linking personal privacy protection to information regulation protection in specific contexts, CI theory provides a conceptual and analytical framework for assessing the flow of personal information to explain why certain patterns of information flow are acceptable in some contexts but not others (Zimmer et al., 2020).

Empirical research related to human-technology privacy management using CI theory falls into three broad categories: investigating online media companies such as Facebook (Hull et al., 2011; Shvartzshnaider et al., 2018), examining how context influences sharing of personal information and privacy expectations (Hoyle et al., 2020), and understanding how new technologies such as biometric technologies (Norval & Prasopoulou, 2017), contact tracing platforms (Vitak & Zimmer, 2020), and the Internet of Things (Apthorpe et al., 2018) influence the flow of personal information. CPM theory, as a broad framework to understand how users form boundary rules in human-technology information disclosure, can leverage CI theory and its associated research to better account for the role of context in human-technology privacy boundary management.

Risk-benefit analysis

Privacy-disclosure benefits and risks are key considerations in CPM’s theorization of how individuals decide on boundary rules when managing interpersonal privacy. Benefits such as “self-clarification, social validation, relationship development, and social control” are weighed against the level of risks of real or hypothetical repercussions (Petronio, 2002, pp. 66–67). In the context of human-technology information disclosure, there is already a rich body of research based on privacy calculus theory examining risk-benefit considerations. Privacy calculus is a key theory to explain the privacy paradox phenomenon in which users’ concerns about online privacy risks do not correlate with their online behaviors (Chen, 2018). The theory posits that users calculate tradeoffs between the potential costs and benefits of self-disclosure before deciding to reveal personal information online (Culnan & Bies, 2003).

Scholars have applied privacy calculus theory to social platforms to explore the costs and benefits of online self-disclosure (Dienlin & Metzger, 2016; Krasnova et al., 2010; Lee & Yuan, 2020). Beyond social media, privacy calculus has been applied to understand how consumers balance privacy risk beliefs and personal interest in online transactions (Dinev & Hart, 2006), how consumers respond to smartphone applications that collect driving behavior data (Kehr et al., 2015), how crowd-funders weigh the need for medical crowdfunding support against perceived privacy risks (Gonzales et al., 2018), and how users balance the tensions between perceived privacy risks and better personalized service in the context of the Internet of Things (Kim et al., 2019). CPM theory, as a broad framework of human-technology privacy, can incorporate privacy calculus theory and its associated empirical research to understand how individuals assess risks and benefits in boundary rule formation within human-technology privacy management contexts. To decrease users’ perceptions of privacy risks, technology providers (such as cloud storage providers) should give users mechanisms to protect their privacy and security. At the same time, providers may want to offer more functions and better services to improve perceived benefits or utilities. Low perceived risk and high technical utility are the main rules that users apply when deciding whether to share information.
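To make this calculus concrete, the sketch below models the tradeoff as a simple weighted comparison of perceived benefit against perceived risk. It is a minimal illustration only; the variable names, weights, and threshold are my own assumptions, not constructs taken from privacy calculus theory or CPM theory.

```python
# Illustrative sketch only: a toy privacy-calculus tradeoff.
# All inputs and weights are hypothetical, not empirically derived.
from dataclasses import dataclass


@dataclass
class DisclosureContext:
    perceived_benefit: float    # e.g., value of personalization, scaled 0-1
    perceived_risk: float       # e.g., expected harm from misuse, scaled 0-1
    risk_aversion: float = 1.0  # how heavily this user weighs risk against benefit


def willing_to_disclose(ctx: DisclosureContext) -> bool:
    """Disclose only if weighted benefits outweigh weighted risks."""
    net_value = ctx.perceived_benefit - ctx.risk_aversion * ctx.perceived_risk
    return net_value > 0


# Example: a risk-averse user facing high benefit but moderate risk declines to disclose.
print(willing_to_disclose(DisclosureContext(0.7, 0.5, risk_aversion=1.6)))  # False
```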

Technological literacy

For CPM theory to be a viable framework for human-technology privacy management, the role of technological literacy in boundary rule formation is important. Spilka (2009) emphasized the importance of digital literacy for technical communication in the 21st century. Digital literacy is also important for technical communication practitioners working on privacy in technological contexts. Unlike interpersonal information disclosure, personal information disclosure between individuals and technology requires technological knowledge and skill, or technological literacy, meaning knowledge of and skills in computer- and technology-related functions (Bunz, 2003). Park (2013) examined the role of three types of digital literacy (internet familiarity, surveillance awareness, and policy understanding) in managing online privacy behaviors and determined that users with a high level of knowledge are more likely to exercise information control than those with a low level of knowledge.

Studies have also found that some users disrupt Facebook’s algorithm customization by providing inaccurate personal information (Bucher, 2017), while others manage privacy boundaries through strategies such as deleting browsing history (Young & Quan-Haase, 2013). Less technologically literate individuals may not understand how technology works and are likely less adept at using technological means to manage online privacy (Büchi et al., 2016), resulting in greater reluctance to disclose information through technological platforms or applications. Therefore, for CPM theory to be a viable framework for human-technology privacy management, it should incorporate technological literacy as a foundational consideration in boundary rule formation. Improving technological literacy is closely related to the practice of technical communication, especially in communicating the relationship between algorithms, design principles, and the user experience with technology and privacy to the public and industry professionals.

Three Steps in Boundary Coordination in Human-Technology Privacy Management

Boundary coordination is the second type of rule management process under CPM theory. There are two types of privacy boundaries: personal boundaries, where private information is wholly self-managed and has not been revealed, and collective boundaries, where private information has been disclosed to others and is co-regulated (Petronio, 2002). CPM theory focuses on collective boundaries in interpersonal information disclosure, which are coordinated using three types of management rules that coordinate boundary linkages, boundary permeability, and boundary ownership (Petronio, 2002). Boundary linkage concerns who becomes privy to personal information and how, which implicates the “breadth, depth, and amount of private information” (Petronio, 2002, p. 99) that can permeate these linkages. The nature of boundary ownership for such information depends on both linkages and permeability.

CPM theory’s original focus was interpersonal privacy management; thus, it considers only human actors in theorizing boundary coordination. In human-technology contexts, non-human actors can also receive an individual’s personal information. For example, when an individual uses a ride-hailing mobile application such as Uber, personal information such as name, mobile number, and geo-location is typically provided to the driver through the application. Not only are the individual and the driver privy to such information; the individual’s mobile phone, the remote server that receives the ride-hailing request for dispatch to a driver, and the driver’s mobile phone are all privy to it as well. The collective boundaries of such personal information are coordinated by both human and technological actors. In the sections that follow, I expand on CPM theory and boundary coordination in human-technology privacy management through the three types of management rules it theorizes, namely, boundary linkages, permeability, and ownership.

Boundary linkage coordination management

Boundary linkage coordination in human-technology privacy boundary management falls into two major categories: linkages involving the individual who discloses personal information to both human and technological actors, and linkages involving disclosure to only technological actors. An example of the former would be the disclosure of personal information on dating platforms, such as Tinder, where personal information is disclosed to technological actors, such as one’s mobile phone and Tinder’s backend servers, as well as other human users. An example of the latter would be the disclosure of personal information on online shopping platforms, such as Amazon, where personal information is disclosed only to technological actors except in occasional instances, such as seeking customer service support.

When boundary linkages are made, CPM theory posits that all who are linked have the potential to divulge the disclosed information, even though the information is generally considered the discloser’s property; as such, collective negotiation of rules or the introduction of new linkages into existing rules is necessary (Petronio, 2002). According to CPM theory, the nature of boundary linkages depends on both the proportion of personal information and the strength of social ties. The greater the proportion of personal information a discloser contributes and the weaker the strength of social ties in boundary linkages, the less power and control the discloser can exert over disclosed information.

However, in human-technology information disclosure, technological actors do not form social relations with human actors; social ties cannot shape the nature of boundary linkages between human and technological actors. Many of the rules in CPM theory influencing boundary linkages (such as confidant selection, linkage timing, topic selection, personality characteristics, and acquisition of private information [Petronio, 2002]) are inapplicable to boundary linkages with technological actors. Instead, I contend that, because of differences in interpersonal and human-technology information disclosure, the nature of boundary linkages with technological actors in CPM theory as a broad framework of human-technology privacy management is shaped not just by the amount of information human actors disclose but also by technology affordances and legal ties. By legal ties, I refer broadly to the legal obligations between social and technological actors in coordinating privacy boundaries.

Technology affordances constrain privacy boundary possibilities in human-technology privacy management because the extent to which individuals disclose personal information to technological actors also depends on what the technology allows to be disclosed. I conceive affordances broadly as the “multifaceted relational structure” (Faraj & Azad, 2012, p. 254) between technology and users that enables or constrains the range of possible behavioral outcomes in a particular context. For example, selective disclosure of location information from some mobile applications but not others is possible only if the technology affording the concealment of such personal information is available to the individual. This is a reminder for software developers to provide users with the capability to selectively disclose information, while technical communication practitioners should pay attention to explaining how to exercise this capability.

Furthermore, technology affordances also offer the potential for severing boundary linkages, a possibility that is unlikely to exist in interpersonal boundary linkages. For example, technology can afford users the ability to delete personal information. Through technological affordances, human-technology boundary linkages can be established and severed as information is disclosed and deleted. In contrast, interpersonal boundary linkages are nearly impossible to sever unless recipients of disclosed personal information can remove the disclosed information from memory. Technology affordances can also provide a temporal dimension to boundary linkages, such as allowing the automatic deletion of browsing or purchase histories disclosed to technological actors after a predetermined period. Technology affordances in CPM theory offer new ways of thinking about boundary linkages, such as linkage severance and linkage temporality. Technical communication practitioners should explore and communicate strategies for leveraging technology affordances to help users protect their privacy.
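As a concrete illustration of these two affordances (selective disclosure and linkage temporality), the sketch below shows how a platform might expose a per-application location toggle and an automatic retention rule that deletes disclosed records after a set period. The names, fields, and 90-day window are hypothetical assumptions for illustration, not an actual platform API.

```python
# Hypothetical sketch of two technology affordances discussed above:
# per-app selective disclosure and time-limited retention of disclosed data.
from datetime import datetime, timedelta

# Selective disclosure affordance: per-application location permission toggles.
location_permissions = {"ride_hailing_app": True, "weather_app": False}


def may_share_location(app_name: str) -> bool:
    """Location is disclosed only for applications the user has toggled on."""
    return location_permissions.get(app_name, False)


# Linkage temporality affordance: auto-delete disclosed records after a retention
# period, effectively severing the boundary linkage for older data.
RETENTION = timedelta(days=90)


def purge_expired(records: list[dict], now: datetime) -> list[dict]:
    """Keep only records disclosed within the retention window."""
    return [r for r in records if now - r["disclosed_at"] <= RETENTION]
```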

The nature of boundary linkage coordination in the human-technology context also differs from the interpersonal context because coordination is mostly shaped by legal ties instead of social ties. Petronio (2002) noted that weak social ties reduce the obligation to preserve private information. Because social ties do not exist between social and technological actors, boundary linkage coordination is mostly shaped by legal ties; weak legal ties reduce the obligations of technological actors to protect private information. The strength of legal ties can vary according to the strength of specific privacy policy agreements between users and technological actors, or overarching regulations enacted by legal institutions, such as the European Union’s General Data Protection Regulation (GDPR). If such agreements or regulations are absent or poorly considered, legal ties are likely to be weak, and technological actors have fewer obligations to protect private information.

Because boundary linkage coordination with technological actors is mostly shaped by legal ties, third parties who are not recipients of disclosed personal information do not form a boundary linkage under CPM theory’s definition of boundary linkages. However, these third parties may still influence boundary linkage coordination. Examples include legal officers, policymakers, and non-governmental organizations (NGOs) that advocate for digital privacy rights, such as the Electronic Frontier Foundation (EFF). Considering the roles of relevant third parties in boundary linkage coordination potentially enriches CPM theory as a broad framework of human-technology privacy management because they offer an additional pathway for coordinating boundary linkages. On a separate note, in the interpersonal context, considering the roles of relevant third parties such as counselors and mediators also opens new areas of inquiry in interpersonal boundary linkage coordination, because such parties offer an additional pathway to coordination based on professional rather than social ties.

Boundary permeability and ownership

In privacy boundary coordination, beyond linkage coordination, CPM also theorizes boundary permeability and ownership coordination. Permeability considers how personal information can cross boundaries, while ownership considers legitimate possession and control rights to personal information (Petronio, 2002). As discussed in the previous section, human-technology privacy management is mostly characterized by legal ties instead of social ties. Therefore, boundary permeability and ownership are also coordinated based on legal ties, such as privacy policy documents that users of technology generally must accept before they can exchange personal information with technological actors.

Privacy policy documents specify how technological actors can use information that users provide to them. Under CPM, as a broad framework of human-technology information disclosure, these documents constitute codified forms of rules on boundary permeability and ownership coordination. Users are unable to negotiate such rules individually with technological actors, but they can involve third parties such as the government or NGOs to mediate or negotiate on their behalf. The determination of access and protection rules in boundary permeability coordination and the definition of privacy borders in boundary ownership in human-technology information disclosure can also become a tripartite model involving human actors, technological actors, and mediating actors who do not form boundary linkages through processes of information disclosure, such as governments and NGOs. To theorize boundary permeability and ownership in CPM theory as a framework for human-technology privacy management, it may be necessary to draw on literature in legal studies.
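As an illustration of how such codified rules might look in structured form, the hypothetical sketch below expresses permeability and ownership rules for two categories of data; the field names, categories, and values are my own assumptions rather than any real policy standard.

```python
# Hypothetical sketch: boundary permeability and ownership rules expressed as a
# machine-readable structure. Field names and values are illustrative assumptions.
policy = {
    "data_categories": {
        "purchase_history": {
            "owner": "user",                    # ownership: who retains control rights
            "co_owners": ["platform"],          # actors gaining co-ownership on disclosure
            "share_with_third_parties": False,  # permeability across the platform boundary
            "retention_days": 365,              # temporal limit on the boundary linkage
        },
        "geolocation": {
            "owner": "user",
            "co_owners": ["platform", "delivery_partner"],
            "share_with_third_parties": True,
            "retention_days": 30,
        },
    },
    # Mediating third party that can be invoked when boundary turbulence occurs.
    "dispute_channel": "data_protection_authority",
}
```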

However, from a communication perspective, the influence of legal ties on boundary coordination implies that how users manage boundary permeability and ownership coordination with technological actors can also be shaped by the extent to which individuals understand their legal rights to personal information ownership and sharing. Information processing theories, such as the elaboration likelihood model (ELM) (Petty & Cacioppo, 1984), can help explain the influence of perceptions of the law on individual boundary permeability and ownership coordination with technological actors, especially perceptions of privacy policy agreements. Though the content of every privacy policy agreement is the same for every user, differences in how individuals process the information in such agreements can influence their understanding of the boundaries of permeability and ownership. For example, individuals who process privacy policy agreements using the central processing route, as opposed to the peripheral processing route as theorized by ELM, may understand the boundaries of ownership and permeability differently with technological actors. Incorporating information processing theories into CPM theory provides a more nuanced understanding of how the law influences boundary coordination in human-technology privacy management from a communication perspective.

Boundary Turbulence in Human-Technology Boundary Management

The third type of rule management process under CPM theory is boundary turbulence. In interpersonal information disclosure, individuals may engage in various considerations to form rules about privacy boundaries and then coordinate boundaries using these rules. However, boundary coordination is a complex process; coordination failures are inevitable at times because of the inability to develop, carry out, or enforce rules in coordinating linkages, permeability, and ownership (Petronio, 2002). CPM theory identifies six factors in interpersonal privacy management that can cause boundary turbulence: intentional rule violations, boundary rule mistakes, fuzzy boundaries, dissimilar boundary orientations, boundary definition predicaments, and privacy dilemmas (Petronio, 2002).

Many of these factors that can cause boundary turbulence in interpersonal privacy management often result from differing ideas and expectations of appropriate social, cultural, or ethical norms. In human-technology privacy management, if privacy policy documents explicitly codify boundary coordination rules, turbulence factors such as dissimilar boundary orientations and boundary definition predicaments are unlikely. When privacy policy documents are not sufficiently clear or detailed in articulating boundary coordination rules, the factors of fuzzy boundaries, boundary rule mistakes, and privacy dilemmas can cause boundary turbulence in boundary coordination between the user and technological actors.

Even if privacy policy documents comprehensively articulate boundary coordination rules, perceptions of fuzzy boundaries, boundary rule mistakes, and privacy dilemmas can still occur. Such perceptions exist for several possible reasons: failure to read the documents in detail, low levels of literacy or inability to comprehend the documents, excessive legalese, or lack of trust in these documents. These perceptions are more likely to occur in human-technology privacy management relative to interpersonal privacy management because it is much more difficult to clarify such perceptions with technological actors compared to human actors. Factors that potentially influence individual perceptions of appropriate boundaries and rules that result in boundary turbulence are also important for CPM theory to consider.

Finally, besides examining the factors or perceptions that can cause boundary turbulence, I contend that it is also important to theorize the effects of boundary turbulence in CPM theory as a broad framework of human-technology privacy management. There is much existing work on how individuals feel a loss of power and control over online privacy, and such feelings are manifestations of the effects of boundary turbulence from the CPM theory perspective. Scholars have variously termed such effects privacy fatigue (Choi et al., 2018), privacy apathy (Hargittai & Marwick, 2016), privacy cynicism (Lutz et al., 2020), or helplessness (Cho, 2021). Some scholars have proposed the theoretical framework of “digital resignation” to describe people’s feelings of a loss of control over how digital entities such as platforms use their personal information (Draper & Turow, 2019). In addition to theorizing the effects of boundary turbulence, it is also important to examine the different ways in which individuals cope when boundary turbulence occurs with technological actors. Scholars have proposed various models of coping that involve cognitive, emotional, and behavioral strategies (Beaudry & Pinsonneault, 2005; Cho et al., 2020).

Theorizing the effects of boundary turbulence and how individuals cope with the turbulence is important for CPM, because the experience of and coping with these effects can potentially influence how users form boundary rules and coordinate privacy boundaries in future communication with technology. The role of prior privacy experiences and their influence on how individuals manage privacy is especially salient for CPM as a broad framework of human-technology privacy boundary management. This is because, given the prevalence of mobile technology and applications for many daily activities, individuals are likely to interact continually with technological actors and may bring their experiences from managing privacy boundaries with previous technological actors into their decisions on privacy boundaries with new technological actors. Individuals with positive prior experiences are likely to form different boundary rules and coordinate privacy boundaries in future interactions with technological actors compared to those with negative prior experiences.

Conclusion and Discussion

This article demonstrated CPM theory’s utility as a broad framework of privacy boundary management in human-technology information disclosure. CPM originated as a theory of privacy boundary management between married couples and was then expanded to a broader theory of privacy boundary management in interpersonal communication contexts. As social media became increasingly popular, scholars used CPM theory to examine how individuals maintain privacy boundaries with other individuals; CPM theory in the social media context was mainly used to examine how privacy boundaries were managed using technology, not with technology. Although some scholars have applied CPM theory to examine privacy boundary management with technology, to the best of my knowledge, there has not been a detailed and systematic examination of the application of CPM theory to understand human-technology privacy management that considers differences in interpersonal and human-technology communication.

This article addresses this gap in the literature by providing a detailed account of how the main tenets of CPM theory can be applied to understand human-technology privacy boundary management as a broad theoretical framework. By using the main tenets of CPM theory instead of theorizing from the ground up, my article leverages the significant extant literature and theoretical perspectives on privacy in technological contexts, situating them under a single, coherent framework of privacy boundary management. This approach not only allows these perspectives to theoretically relate to one another, it also provides a more holistic understanding of privacy in technological contexts. Identification of theoretical and research gaps is also easier when the relationships among these perspectives are clearly outlined under CPM as a broad theoretical framework for human-technology privacy boundary management. Broadly, I summarize this article’s contributions into three main areas.

First, in expanding CPM theory to human-technology privacy management, I have demonstrated that CPM theory works well as a broad framework for theorizing the management of privacy boundaries with technology. When CPM theory expanded from theorizing individuals to institutions (Raynes-Goldie, 2010), boundaries remained the main theoretical object of concern. Similarly, boundaries remain the main theoretical object of concern in this extended framework. I also present how much of the extant literature on privacy in technological contexts can also be understood as explaining different aspects of human-technology boundary management, namely boundary rule development, coordination, and turbulence. Much of the CPM theory explaining the intricacies of interpersonal privacy management is difficult to apply directly to human-technology privacy management. However, by conceptualizing CPM theory as a broad framework that incorporates existing theoretical perspectives to understand boundary management, CPM theory can extend its theorization of boundary management to account for how individuals manage privacy boundaries with technology by leveraging extant literature in relevant areas.

Second, I identified a number of important differences between the nature of interpersonal information disclosure and that of human-technology information disclosure to enhance CPM’s theoretical applicability to boundary management with technology. In particular, I highlight the critical importance of technological literacy in boundary rule development because it is nearly impossible to disclose personal information to technological actors without sufficient technical knowledge and skills. Moreover, in boundary coordination with technological actors, social ties do not apply; rather, coordination is achieved through legal ties. I believe that considering these two key differences between interpersonal and human-technology information disclosure not only provides a more comprehensive account of human-technology boundary management but also offers new possibilities for further theoretical development of CPM theory as a broad framework of human-technology information disclosure.

Finally, because human-technology information disclosure is coordinated through legal ties, third parties (such as NGOs and governments) also influence boundary coordination, even though these parties may not form a boundary linkage through information disclosure. Scholars have argued that online personal data control (Lutz et al., 2020) and privacy protection (Hoffmann et al., 2016; Seubert & Helm, 2020) are difficult because legal ties disproportionately favor technological actors due to high litigation costs. Therefore, theorizing the role of third parties in human-technology boundary management is critical because individuals who lack the financial resources to litigate can only turn to these third parties to influence technological actors. As a side note, I suggest the roles of third parties, such as counselors and arbitrators, can also be considered in privacy boundary coordination in certain interpersonal settings.

The theorization of CPM in human-technology privacy management is not without limitations. First, I have focused on how the individual manages privacy boundaries with technology, but the field of privacy in technological contexts is much broader and includes legal, political, and computer science dimensions that my extension of CPM as a broad theoretical framework cannot account for. However, scholars such as Bräunlich et al. (2020) have proposed macro-level privacy models, and I see my work as complementary to them. Second, I deliberately adopted a broad view of technology in this article and did not provide a specific definition of technology. My intention is to develop a flexible framework that can accommodate understanding of information disclosure and boundary management for existing technologies (e.g., social media, online shopping platforms, artificial intelligence, Internet of Things) and future technologies. This model conceives technology as a property of technological actors, which allows the model to examine human-technology privacy management for a wide range of technologies.

I also note that this work extending CPM theory to account for human-technology privacy boundary management is mainly based on the work of Petronio (2002). To streamline the theory and increase its accessibility, the core ideas of CPM theory in Petronio’s (2002) work were later distilled into five principles (Petronio, 2010) and three elements with eight associated axioms (Petronio, 2013). I have not addressed these later developments because I adopted a more expansive approach using the main tenets of the original CPM theory to account for human-technology privacy boundary management. In doing so, I noted some significant differences between the original CPM theory and its extension as a broad framework of human-technology privacy management. Although I have refrained from engaging with these later developments, in part because of the word constraints of this article, I encourage future work to follow the developmental trajectory of the original CPM theory by providing a streamlined account of this framework for human-technology privacy management to increase its accessibility.

In closing, I note that there is increasing empirical research interest in CPM and privacy in technological contexts (Kang & Oh, 2021; Pal et al., 2020; Widjaja et al., 2019; Zimmer et al., 2020). I am hopeful that this study of how CPM theory can be extended as a broad theoretical framework for human-technology privacy boundary management will be useful to researchers seeking to understand privacy boundary management at the individual level. Although the word limit constrains how fully I can flesh out the details of CPM theory as such a framework, I nonetheless believe that I have provided a clear and succinct account of its potential for other researchers to build on, either theoretically or empirically, to further understanding of how individuals manage privacy boundaries with technology. I also believe that this article has adequately demonstrated that CPM theory is the most comprehensive account of how individuals manage privacy boundaries, be it in interpersonal or human-technology contexts.

Implications For Practitioners

This article is relevant to technical communication and of particular interest to practitioners working with new technologies. It can assist these professionals with the everyday privacy-related challenges of their current work in two main ways.

First, the six foundational considerations for rule development in human-technology boundary management (cultural expectations, socio-demographic variables, motivations, situational context, risk-benefit analysis, and technological literacy) should guide practitioners. Specifically, (1) practitioners need to consider users’ cross-cultural differences when developing privacy boundary management rules; (2) practitioners need to take demographic differences into account when designing technical products and applications, because, in addition to gender, socio-demographic factors such as age, education, and income also influence the development of privacy boundary rules in technological contexts; (3) practitioners should consider users’ motivations across Maslow’s hierarchy of needs; (4) because the complexity of technology makes the contexts in which information permeates extremely varied, practitioners need to consider multiple contexts and scenarios comprehensively; (5) practitioners need to influence the design of technology to reduce users’ perception of privacy violations; and (6) because improving technological literacy is closely related to the practice of technical communication, practitioners can communicate more valuable technical knowledge to the public and industry.

Second, based on the discussion of privacy boundary coordination in this article, cross-platform privacy flows especially deserve the attention of technical communication practitioners. In interpersonal communication, the spread of personal information is generally limited to a small group of people. Technology, however, has made the cross-platform flow of private information extremely complex, encompassing many people and technological actors. The boundaries of cross-platform privacy flows in the context of new technologies are neither clear-cut nor predictable. Practitioners therefore need to appreciate the complexities of privacy boundary management in technological contexts to better balance business and organizational interests with user expectations of private information disclosure across technological platforms.

Acknowledgements

This study was supported by the 2023 Shanghai Philosophy and Social Science Youth Foundation (No. 2023EXW004) and the Fundamental Research Funds for the Central Universities (No. 22120230373).

The author would like to thank the editor and the anonymous reviewers for their valuable comments on the manuscript.

References

Acquisti, A., Gross, R., & Stutzman, F. D. (2014). Face recognition and privacy in the age of augmented reality. Journal of Privacy and Confidentiality, 6(2), 1–20. https://doi.org/10.29012/jpc.v6i2.638

Altman, I. (1977). Privacy regulation: Culturally universal or culturally specific? Journal of Social Issues, 33(3), 66–84. https://doi.org/10.1111/j.1540-4560.1977.tb01883.x

Apthorpe, N., Shvartzshnaider, Y., Mathur, A., Reisman, D., & Feamster, N. (2018). Discovering smart home internet of things privacy norms using contextual integrity. Proceedings of the ACM on Interactive, Mobile, Wearable and Ubiquitous Technologies, 2(2), 1–23. https://doi.org/10.1145/3214262

Baruh, L., Secinti, E., & Cemalcilar, Z. (2017). Online privacy concerns and privacy management: A meta-analytical review. Journal of Communication, 67(1), 26–53. https://doi.org/10.1111/jcom.12276

Beaudry, A., & Pinsonneault, A. (2005). Understanding user responses to information technology: A coping model of user adaptation. MIS Quarterly, 29(3), 493–524. https://doi.org/10.2307/25148693

Bräunlich, K., Dienlin, T., Eichenhofer, J., Helm, P., Trepte, S., Grimm, R., Seubert, S., & Gusy, C. (2020). Linking loose ends: An interdisciplinary privacy and communication model. New Media and Society, 23(6), 1443–1464. https://doi.org/10.1177/14614448209050

Breuer, J., Bishop, L., & Kinder-Kurlanda, K. (2020). The practical and ethical challenges in acquiring and sharing digital trace data: negotiating public-private partnerships. New Media & Society, 22(11), 2058–2080. https://doi.org/10.1177/1461444820924622

Bryant, L., Srnicek, N., & Harman, G. (2012). Towards a speculative philosophy. In L. Bryant, N. Srnicek, & G. Harman (Eds.), The speculative turn: Continental materialism and realism (pp. 1–19). re.press.

Bucher, T. (2017). The algorithmic imaginary: exploring the ordinary affects of Facebook algorithms. Information Communication and Society, 20(1), 30–44. https://doi.org/10.1080/1369118X.2016.1154086

Bunz, U. (2003). Growing from computer literacy towards computer-mediated communication competence: Evolution of a field and evaluation of a new measurement instrument. Information Technology, Education, and Society, 4(2), 53–84.

Büchi, M., Just, N., & Latzer, M. (2016). Caring is not enough: The importance of Internet skills for online privacy protection. Information Communication and Society, 20(8), 1261–1278. https://doi.org/10.1080/1369118X.2016.1229001

Chen, H.-T. (2018). Revisiting the privacy paradox on social media with an extended privacy calculus model: The effect of privacy concerns, privacy self-efficacy, and social capital on privacy management. American Behavioral Scientist, 62(10), 1392–1412. https://doi.org/10.1177/0002764218792691

Cho, H. (2021). Privacy helplessness on social media: Its constituents, antecedents and consequences. Internet Research. Epub ahead of print 14 May 2021. https://doi.org/10.1108/INTR-05-2020-0269

Cho, H., Li, P., & Goh, Z. H. (2020). Privacy risks, emotions, and social media: A coping model of online privacy. ACM Transactions on Computer-Human Interaction, 27(6), 1–28.

Choi, H., Park, J., & Jung, Y. (2018). The role of privacy fatigue in online privacy behavior. Computers in Human Behavior, 81, 42–51.

Colaner, C. W., Bish, A. L., Butauski, M., Hays, A., Horstman, H. K., & Nelson, L. R. (2021). Communication privacy management in open adoption relationships: Negotiating co-ownership across in-person and mediated communication. Communication Research, 49(6). https://doi.org/10.1177/0093650221998474

Culnan, M. J., & Bies, R. J. (2003). Consumer privacy: Balancing economic and justice considerations. Journal of Social Issues, 59(2), 323–342. https://doi.org/10.1111/1540-4560.00067

DeCew, J. W. (1997). In pursuit of privacy: Law, ethics, and the rise of technology. Cornell University Press.

De Guzman, J. A., Thilakarathna, K., & Seneviratne, A. (2019). Security and privacy approaches in mixed reality: A literature survey. ACM Computing Surveys, 52(6), 1–37. https://doi.org/10.48550/arXiv.1802.05797

Dienlin, T., & Metzger, M. J. (2016). An extended privacy calculus model for SNSS: Analyzing self-disclosure and self-withdrawal in a representative U.S. sample. Journal of Computer-Mediated Communication, 21(5), 368–383. https://doi.org/10.1111/jcc4.12163

Dinev, T., & Hart, P. (2006). An extended privacy calculus model for e-commerce transactions. Information Systems Research, 17(1), 61–80. https://doi.org/10.1287/isre.1060.0080

Draper, N. A., & Turow, J. (2019). The corporate cultivation of digital resignation. New Media and Society, 21(8), 1824–1839. https://doi.org/10.1177/1461444819833331

Faraj, S., & Azad, B. (2012). The materiality of technology: An affordance perspective. In P. M. Leonardi, B. A. Nardi, and J. Kallinikos (Eds.), Materiality and organizing (pp. 237–258). Oxford University Press.

Frost, E. A. (2021). Ultrasound, gender, and consent: An apparent feminist analysis of medical imaging rhetorics. Technical Communication Quarterly, 30(1), 48–62. https://doi.org/10.1080/10572252.2020.1774658

Gerber, N., Gerber, P., & Volkamer, M. (2018). Explaining the privacy paradox: A systematic review of literature investigating privacy attitude and behavior. Computers and Security, 77, 226–261. https://doi.org/10.1016/j.cose.2018.04.002

Green, M. (2021). Resistance as participation: Queer theory’s applications for HIV health technology design. Technical Communication Quarterly, 30(4), 331–344. https://doi.org/10.1080/10572252.2020.1831615

Gonzales, A. L., Kwon, E. Y., Lynch, T., & Fritz, N. (2018). “Better everyone should know our business than we lose our house”: Costs and benefits of medical crowdfunding for support, privacy, and identity. New Media and Society, 20(2), 641–658. https://doi.org/10.2196/44530

Hargittai, E., & Marwick, A. (2016). “What can I really do?” Explaining the privacy paradox with online apathy. International Journal of Communication, 10, 3737–3757. https://ijoc.org/index.php/ijoc/article/viewFile/4655/1738

Heirman, W., Walrave, M., & Ponnet, K. (2013). Predicting adolescents’ disclosure of personal information in exchange for commercial incentives: An application of an extended theory of planned behavior. Cyberpsychology, Behavior, and Social Networking, 16(2), 81–87. https://doi.org/10.1089/cyber.2012.0041

Hoffmann, C. P., Lutz, C., & Ranzini, G. (2016). Privacy cynicism: A new approach to the privacy paradox. Cyberpsychology, 10(4). https://doi.org/10.5817/CP2016-4-7

Hoyle, R., Stark, L., Ismail, Q., Crandall, D., Kapadia, A., & Anthony, D. (2020). Privacy norms and preferences for photos posted online. ACM Transactions on Computer-Human Interaction, 27(4). https://doi.org/10.1145/3380960

Hull, G., Lipford, H. R., & Latulipe, C. (2011). Contextual gaps: Privacy issues on Facebook. Ethics and Information Technology, 13(4), 289–302. https://doi.org/10.1007/s10676-010-9224-8

Huang, L., Xie, Y., & Chen, X. (2021). A review of functions of speculative thinking. Frontiers in Psychology, 12, Article 728946. https://doi.org/10.3389/fpsyg.2021.728946

Jones, E. E., & Archer, R. L. (1976). Are there special effects of personalistic self-disclosure? Journal of Experimental Social Psychology, 12(2), 180–193. https://doi.org/10.1016/0022-1031(76)90069-X

Kang, H., & Oh, J. (2021). Communication privacy management for smart speaker use: Integrating the role of privacy self-efficacy and the multidimensional view. New Media & Society, 1–23. https://doi.org/10.1177/14614448211026

Kehr, F., Kowatsch, T., Wentzel, D., & Fleisch, E. (2015). Blissfully ignorant: The effects of general privacy concerns, general institutional trust, and affect in the privacy calculus. Information Systems Journal, 25(6), 607–635. https://doi.org/10.1111/isj.12062

Kim, D., Park, K., Park, Y., & Ahn, J. H. (2019). Willingness to provide personal information: Perspective of privacy calculus in IoT services. Computers in Human Behavior, 92, 273–281. https://doi.org/10.1016/j.chb.2018.11.022

Kostka, G., Steinacker, L., & Meckel, M. (2021). Between privacy and convenience: Facial recognition technology in the eyes of citizens in China, Germany, the UK and the US. SSRN Electronic Journal, 1–20. https://doi.org/10.2139/ssrn.3518857

Krasnova, H., Spiekermann, S., Koroleva, K., & Hildebrand, T. (2010). Online social networks: Why we disclose. Journal of Information Technology, 25(2), 109–125. https://doi.org/10.1057/jit.2010.6

Lee, Y. H., & Yuan, C. W. (2020). The privacy calculus of “friending” across multiple social media platforms. Social Media and Society, 6(2), 1–10. https://doi.org/10.1177/20563051211055439

Liang, H., Shen, F., & Fu, K. W. (2016). Privacy protection and self-disclosure across societies: A study of global Twitter users. New Media and Society, 19(9), 1476–1497. https://doi.org/10.1177/1461444816642

Luo, M., DeWitt, D., & Alias, N. (2022). Mapping the evolutionary characteristics of global research related to technical communication: A scientometric review. Technical Communication, 69(3), 73–87. https://doi.org/10.55177/tc995833

Lutz, C., Hoffmann, C. P., & Ranzini, G. (2020). Data capitalism and the user: An exploration of privacy cynicism in Germany. New Media and Society, 22(7), 1168–1187. https://doi.org/10.1177/14614448209125

Metzger, M. J. (2007). Communication privacy management in electronic commerce. Journal of Computer-Mediated Communication, 12(2), 335–361. https://doi.org/10.1111/j.1083-6101.2007.00328.x

Nissenbaum, H. (2010). Privacy in context: Technology, policy, and the integrity of social life. Stanford University Press.

Norval, A., & Prasopoulou, E. (2017). Public faces? A critical exploration of the diffusion of face recognition technologies in online social networks. New Media and Society, 19(4), 637–654. https://doi.org/10.1177/1461444816688

Petty, R. E., & Cacioppo, J. T. (1984). The effects of involvement on responses to argument quantity and quality: Central and peripheral routes to persuasion. Journal of Personality and Social Psychology, 46(1), 69–81. https://doi.org/10.1037/0022-3514.46.1.69

Pal, D., Funilkul, S., & Zhang, X. (2020). Should I disclose my personal data? Perspectives from internet of things services. IEEE Access, 9, 4141–4157. https://doi.org/10.1109/ACCESS.2020.3048163

Park, Y. J. (2013). Digital literacy and privacy behavior online. Communication Research, 40(2), 215–236. https://doi.org/10.1177/0093650211418338

Petronio, S. (1991). Communication boundary management: A theoretical model of managing disclosure of private information between marital couples. Communication Theory, 1(4), 311–335. https://doi.org/10.1111/j.1468-2885.1991.tb00023.x

Petronio, S. (2002). Boundaries of privacy: Dialectics of disclosure. State University of New York Press.

Petronio, S. (2010). Communication privacy management theory: What do we know about family privacy regulation? Journal of Family Theory & Review, 2(3), 175–196. https://doi.org/10.1111/j.1756-2589.2010.00052.x

Petronio, S. (2013). Brief status report on communication privacy management theory. Journal of Family Communication, 13(1), 6–14. https://doi.org/10.1080/15267431.2013.743426

Petronio, S., & Child, J. T. (2020). Conceptualization and operationalization: Utility of communication privacy management theory. Current Opinion in Psychology, 31, 76–82. https://doi.org/10.1016/j.copsyc.2019.08.009

Rantakokko, S. (2022a). Creating a model for developing and evaluating technical instructions that use extended reality. Technical Communication, 69(3), 24–39. https://doi.org/10.55177/tc001245

Rantakokko, S. (2022b). Data handling process in extended reality (XR) when delivering technical instructions. Technical Communication, 69(2), 75–96. https://doi.org/10.55177/tc734125

Raynes-Goldie, K. S. (2010). Aliases, creeping, and wall cleaning: Understanding privacy in the age of Facebook. First Monday, 15(1). https://doi.org/10.5210/fm.v15i1.2775

Rogers, R. W. (1975). A protection motivation theory of fear appeals and attitude change. The Journal of Psychology, 91(1), 93–114. https://doi.org/10.1080/00223980.1975.9915803

Ross, J. (2017). Speculative method in digital education research. Learning, Media and Technology, 42(2), 214–229. https://doi.org/10.1080/17439884.2016.1160927

Spilka, R. (2009). Digital literacy for technical communication: 21st century theory and practice. Routledge.

Steuber, K. R., & Solomon, D. H. (2011). Factors that predict married partners’ disclosures about infertility to social network members. Journal of Applied Communication Research, 39(3), 250–270. https://doi.org/10.1080/00909882.2011.585401

Seubert, S., & Helm, P. (2020). Normative paradoxes of privacy: Literacy and choice in platform societies. Surveillance and Society, 18(2), 185–198. https://doi.org/10.24908/ss.v18i2.13356

Shvartzshnaider, Y., Apthorpe, N., Feamster, N., & Nissenbaum, H. F. (2018). Analyzing privacy policies using contextual integrity annotations. SSRN Electronic Journal, 1–18. https://papers.ssrn.com/sol3/papers.cfm?abstract_id=3244876

Sun, Y., Wang, N., Shen, X. L., & Zhang, J. X. (2015). Location information disclosure in location-based social network services: Privacy calculus, benefit structure, and gender differences. Computers in Human Behavior, 52, 278–292. https://doi.org/10.1016/j.chb.2015.06.006

Tufekci, Z. (2008). Can you see me now? Audience and disclosure regulation in online social network sites. Bulletin of Science, Technology & Society, 28(1), 20–36. https://doi.org/10.1177/0270467607311484

Van den Broeck, E., Poels, K., & Walrave, M. (2015). Older and wiser? Facebook use, privacy concern, and privacy protection in the life stages of emerging, young, and middle adulthood. Social Media and Society, 1(2), 1–11. https://doi.org/10.1177/2056305115616149

Verene, D. P. (2016). Speculative philosophy and speculative style. CR: The New Centennial Review, 16(3), 33–58.

Vitak, J., & Zimmer, M. (2020). More than just privacy: Using contextual integrity to evaluate the long-term risks from COVID-19 surveillance technologies. Social Media and Society, 6(3), 1–4. https://doi.org/10.1177/2056305120948250

Waters, S., & Ackerman, J. (2011). Exploring privacy management on Facebook: Motivations and perceived consequences of voluntary disclosure. Journal of Computer-Mediated Communication, 17(1), 101–115. https://doi.org/10.1111/j.1083-6101.2011.01559.x

Widjaja, A. E., Chen, J. V., Sukoco, B. M., & Ha, Q. A. (2019). Understanding users’ willingness to put their personal information on the personal cloud-based storage applications: An empirical study. Computers in Human Behavior, 91, 167–185. https://doi.org/10.1016/j.chb.2018.09.034

Young, A. L., & Quan-Haase, A. (2013). Privacy protection strategies on Facebook: The Internet privacy paradox revisited. Information Communication and Society, 16(4), 479–500. https://doi.org/10.1080/1369118X.2013.777757

Zhang, R., & Fu, J. S. (2020). Privacy management and self-disclosure on social network sites: The moderating effects of stress and gender. Journal of Computer-Mediated Communication, 25(3), 236–251. https://doi.org/10.1093/jcmc/zmaa004

Zhou, Q. (2023). A framework for understanding cognitive biases in technical communication. Technical Communication, 70(1), 22–40. https://doi.org/10.55177/tc131231

Zimmer, M., Kumar, P., Vitak, J., Liao, Y., & Chamberlain Kritikos, K. (2020). ‘There’s nothing really they can do with this information’: Unpacking how users manage privacy boundaries for personal fitness information. Information, Communication & Society, 23(7), 1020–1037. https://doi.org/10.1080/1369118X.2018.1543442

About the Author

Dr. Xiaoxiao Meng is an assistant professor in the College of Arts and Media, Tongji University. She received her PhD in communication from Shanghai Jiao Tong University and was a visiting scholar at the National University of Singapore. Her research interests include privacy studies, digital governance, and media effects. Her current research focuses on theorizing privacy and information exchanges in human-technological contexts, with an emphasis on studying how Chinese users of technological platforms differ from others in their understanding and perceptions of privacy in their everyday interactions with these platforms. Xiaoxiao has presented at major conferences organized by scholarly associations, such as the International Communication Association (ICA) and the Association for Education in Journalism and Mass Communication (AEJMC). In 2022, she received the top student paper award from AEJMC’s Communication Theory and Methods division for her work on developing a measure of privacy boundary turbulence in technological contexts. Her research has been published in peer-reviewed journals such as Information, Communication & Society, Journalism Practice, and Asian Perspectives. She has also authored several journal articles in Chinese-language journals, such as The Chinese Journal of Journalism & Communication. She can be reached at mengxiaoxiao@tongji.edu.cn.