Features

Forging Research Partnerships Across Industry and Academia

By Michael J. Albers | STC Fellow

As a recent special issue of Technical Communication notes, research is something all members of the field see as central to success. What constitutes research, however, can vary depending on whether you are in industry or academia. This difference often results in industry practitioners and academics talking past one another rather than with each other. The key to bridging this divide is to better understand how each side views research. That knowledge can then serve as the foundation individuals use to collaborate more effectively on research projects of interest and benefit across the greater field (see St.Amant and Meloncon, 2016).

How Industry Practitioners View Research

One problem is that practitioners and academics often mean different things when they use the term research. Practitioners think of research as answering a question about their current project; they need an answer for a specific situation now. Academics take a more general, future-oriented view.

Consider the following example: we have a new Web interface and need to know the tradeoffs and best sizes for its icons. Practitioners might perform usability research to figure out the best icon size for the current project. They would run some tests, decide a particular size works, and move on to finishing the product. They have an acceptable answer, and time constraints mean they stop there. It also means that for a new project eight months later, they will repeat the research to determine the best icon size for that project.
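
To make the contrast concrete, here is a minimal sketch of what that practitioner-style analysis might boil down to. The icon sizes, timing numbers, and variable names are invented for illustration, not taken from any real study:

```python
# Hypothetical summary of a quick practitioner-style icon-size test.
# The sizes (pixels) and task-completion times (seconds) are invented.

from statistics import mean

trial_times = {
    24: [2.9, 3.1, 2.8, 3.3],   # seconds to locate and click a 24px icon
    32: [2.1, 2.4, 2.2, 2.0],
    48: [2.2, 2.3, 2.1, 2.4],
}

averages = {size: mean(times) for size, times in trial_times.items()}
best_size = min(averages, key=averages.get)

print(f"Average time per icon size: {averages}")
print(f"Shipping this project with {best_size}px icons.")
```

Note that nothing here generalizes beyond this interface; the numbers answer only the question at hand.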

Repeating the work eight months later is not necessarily bad, because the goal is to answer a specific question for a specific interface; the previous results gave a starting place, but no assurance that the same sizes would work again on the next project. The bigger problem is that most industry practitioners don’t have clear guidance to shape their initial expectations. That is, they lack the more general concepts about how the size of an icon affects usability.

How Academics View Research

In the previous example, the practitioner looks at this situation and wants to answer the question, “What do I need to do now to address the immediate situation at hand?” (Focus = specific answers to immediate situations.) The academic, however, looks at this same situation and wants to answer the question, “What are the universal principles I need to know to understand this overall situation and anticipate—and address—it when it occurs again?” (Focus = general laws or principles that can be applied widely.) In sum, the academic doesn’t strive to find the correct icon size for this interface, but wants to provide general guidance everyone can use going forward, across most situations and most interfaces. Different interfaces require different-sized icons. What academics, in turn, want to know is, “What factors drive selecting an icon size to maximize usability?”

This situation gets to the heart of the industry-academic split over research: A practitioner wants a specific answer to a particular problem. An academic thinks of the problem in terms of trying to figure out the general case. As such, academic research works to uncover and define the rules of the general case, and this objective is essentially the definition of an individual’s research agenda in most social science disciplines.

In the previous example, academics are doing the kind of generalized study that would lead to the formulation we call Fitts’ law (i.e., the relationship between the size of a target, the distance to it, and the time it takes to move to and click on it). We academics (I include myself) tend to think in terms of uncovering the fundamental issues that drive the answer, the factors that determine the best icon size. The academic question is viewed not as “What size of icon works best for this interface?” but as “What factors drive effective icon size across many interfaces?” That is useful to know, but it is a very different question from the one facing the practitioner, who needs to define icon sizes for this interface right now.
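
For readers unfamiliar with it, the formulation of Fitts’ law most commonly used in HCI work (the Shannon formulation) expresses movement time as a function of target distance and width; the constants a and b are fit empirically for a given device and user population:

```latex
% Fitts' law (Shannon formulation): movement time MT as a function of
% the distance D to a target and its width W; a and b are empirical constants.
MT = a + b \log_2\!\left(\frac{D}{W} + 1\right)
```

A larger target (relative to the distance traveled) lowers the logarithmic term, which is exactly the kind of general relationship academic research sets out to establish.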

The end result is that a practitioner can take the results of the academic research and make a good prediction about what icon size to use on their new project. Usability testing will be required to verify that choice, but the academic research gives practitioners confidence that their initial choices were close to what they needed.
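
As a hedged sketch of that workflow, assume a team has Fitts’-law-style constants from a published study (the constants, distance, candidate widths, and time budget below are all invented) and wants a defensible starting icon size before running its own test:

```python
# Hypothetical use of published Fitts'-law-style constants to pick a
# starting icon width before running the project's own usability test.
# The constants, distance, candidates, and time budget are all invented.

import math

A, B = 0.2, 0.15        # assumed fitted intercept and slope, in seconds
DISTANCE = 400          # typical pointer travel distance, in pixels
TIME_BUDGET = 0.8       # target average movement time, in seconds

def movement_time(width_px: float) -> float:
    """Shannon formulation: MT = a + b * log2(D / W + 1)."""
    return A + B * math.log2(DISTANCE / width_px + 1)

candidate_widths = [24, 32, 40, 48, 64]
choice = next(
    (w for w in candidate_widths if movement_time(w) <= TIME_BUDGET),
    candidate_widths[-1],  # fall back to the largest candidate
)
print(f"Start usability testing at {choice}px "
      f"(predicted movement time: {movement_time(choice):.2f}s)")
```

The specific numbers matter less than the workflow: published general results narrow the design space, and the project’s own usability test confirms the choice.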

What Academics Research

Before we can explore how to better share research results, let’s stop for a moment and think about what academic technical communication research looks at. In academia, there are two main groups of researchers: people who study human behavior (e.g., readability, usability, and human-information interaction) and people who study texts (e.g., discourse or rhetorical analysis). Clearly, it’s the people who study human behavior whose results will be most applicable to practitioners. However, a sizable percentage of rhetorical analyses look at why an existing text failed to work with a given audience. Thus, these analyses, too, can help us understand why well-intended texts fail so miserably when they meet a real audience.

(Poorly) Communicating Research Results

To everyone’s loss, much of the academic research that could be useful to practitioners is written in a style that is inaccessible to practicing professionals. For example, I watched a conference presentation by Ryan Boettger, Erin Friess, and Saul Carliner (2014). Their presentation laid out the claim that the research and theory presented in peer-reviewed journals—including the ones in our field—are written by academic researchers for other academic researchers; not a big surprise. However, they also pointed out that the end result is that academics poorly communicate their research to practitioners, the very people who could use those results for practical purposes. Thus, we have the interesting result that even academics think the research they do is poorly communicated to practitioners.

At the same time, practitioners fail to communicate their research needs to academics. This failing occurs for many reasons. Some industry practitioners try to reach out to an academic for help on a project and get rejected because:

  • The need isn’t in “my research area” (i.e., the area the academic researcher works in).
  • The academic wants a six-month study to answer a question that needs to be answered next week.

And those reasons don’t even consider the issues of getting the idea of interacting with academics past senior management, or the ever-present corporate nondisclosure agreements.

A way to address these problems is to get both groups to better understand each other. One method (discussed in the rest of this article) is engaging in industry-academic collaboration on research projects. Such collaborations can foster understanding of how research is viewed and used across the field. They also help each side better communicate about research—and the reporting of research results—with the other.

Benefits of Industry-Academic Research Partnerships

Industry-academic research partnerships are highly beneficial to both groups. Academics gain an understanding of practitioner research needs and can work to address them. Academics also gain knowledge of how and what to write for a practitioner audience. In return, practitioners get answers to their questions and gain insight into new ways of answering them. The basic need (and justification to management) may be for help with today’s project, but the long-term return can easily dwarf the initial one-off study results.

Even with the various obstacles, there are strong reasons why academic-practitioner relationships should improve usability and decrease product development time. The practitioner world is evolving at a rapid rate, and the ivory tower shields many academics from seeing that change. This situation leads to new graduates trained for the workplace of 10+ years ago rather than for the needs of the current technical communication or usability world. Ideally, industry-academic collaboration will funnel back into the classroom and result in more relevant educational experiences.

Let’s think about how technical communication academics can be brought in to help with industry research projects. At the basic level, this could mean integrating an academic as a part-time team member for a project. They can provide valuable insight into design decisions and assistance with conducting usability studies. The result is that the company gets an improved product to release, and the academic gets an article to publish. The article, in most instances, is a case study of the time spent working on the project. That is a good start, and it leads naturally into longer-term, academic-style research. By spending time with a development team, the academic comes to understand practitioners’ usability research needs.

Rethinking Research Collaborations

For a company to maximize its benefits, it needs to support academic research that goes beyond this beginning phase. In terms of the Fitts’ law example, we understand the need to know the relationship of target size to speed of clicking. However, there also needs to be a general study to determine this overall relationship; that is something the academic can do with corporate assistance, but outside of production schedule deadlines. In the end, the company gets access to new information, which it can use for future projects.
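
One hedged sketch of what the analysis behind that general study might look like: fitting the a and b constants of Fitts’ law from trials pooled across several interfaces. The trial data below is invented purely for illustration:

```python
# Hypothetical analysis step for the general study: fit the a and b
# constants of Fitts' law from trials pooled across interfaces.
# The (distance, width, time) observations are invented for illustration.

import math
import numpy as np

trials = [                       # (distance px, width px, movement time s)
    (200, 24, 0.72), (200, 48, 0.58), (400, 24, 0.85),
    (400, 48, 0.70), (600, 32, 0.88), (600, 64, 0.74),
]

index_of_difficulty = [math.log2(d / w + 1) for d, w, _ in trials]
times = [t for _, _, t in trials]

# Ordinary least squares fit of MT = a + b * ID
b, a = np.polyfit(index_of_difficulty, times, 1)
print(f"Fitted model: MT = {a:.2f} + {b:.2f} * log2(D/W + 1)")
```

With fitted constants in hand, future projects can start from a prediction rather than from scratch.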

As a second example, let’s assume the project was to produce a multimedia healthcare information module to replace a printed brochure. In the first round, an academic would be embedded with the development team, and the resulting interaction would improve the design and usability of the module. But this situation also leaves open the general questions that apply to future projects. Such questions include:

  • What part of the change caused the improved comprehension?
  • What parts of the change were detrimental?
  • What are the underlying human behavior factors to consider for future projects?
  • How does the target audience influence those answers?

Answering these questions involves fine-grained studies whose results can be used to ship future products both faster and with higher quality. The first study is practitioner research: it improves the current product, but its results may not be applicable to other products. The second sequence of studies is academic research that uncovers the underlying issues and generalizes the findings. Those results can then be fed back to improve the company’s products.
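
To make the idea of a fine-grained study concrete, here is a minimal, hypothetical example: comparing comprehension scores for the brochure and the multimedia module with a simple significance test. The scores are invented, and a real study would also break the comparison down by audience and by the specific design changes listed above:

```python
# Hypothetical fine-grained comparison: did the multimedia module improve
# comprehension over the printed brochure? The scores are invented;
# a real study would also examine which specific changes drove any effect.

from statistics import mean
from scipy import stats

brochure_scores   = [62, 70, 58, 65, 72, 60, 68, 64]   # percent correct
multimedia_scores = [74, 81, 69, 77, 85, 72, 79, 76]

t_stat, p_value = stats.ttest_ind(multimedia_scores, brochure_scores)
print(f"Brochure mean: {mean(brochure_scores):.1f}%, "
      f"module mean: {mean(multimedia_scores):.1f}%")
print(f"t = {t_stat:.2f}, p = {p_value:.3f}")
```

The practitioner study asks whether this module works; a follow-up like this asks why, so the answer can travel to the next product.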

However, there is a problem. Academics are going to want to publish the results of their research—preferably in an academic journal. (It’s what we do, and it’s an expected part of our job, central to everything from keeping that job to our eligibility for pay raises and promotions.) Publication means the results of research studies are available to everyone (well, primarily to people who subscribe to the journals in which the results are published). Corporate managers might see this as giving away intellectual property to their competitors. Why would they want to fund that? This is why you see articles that decline to give specific data, with generic statements like “the study was done in the MIS department of a large mid-western retailer.” Nondisclosure agreements prevent using specific data or the company name, but the important findings still get published for the world to see.

Working Together

Effective collaboration is all about asking the right questions from the start. Consider the following scenario: as a practitioner, you finally persuade your boss to consider letting an academic join your team. Now you have to decide whether the person you are talking with is appropriate for the project. When you do talk with an academic, here’s a list of questions (and tips on interpreting their answers) to consider when planning out effective research collaborations:

  • What is your research area? Appropriate answers include studying human behavior, such as how people use documentation or make decisions. Inappropriate answers include the rhetorical aspects of a situation, discourse analysis, or cultural rhetoric. There is nothing wrong with these research areas, but they don’t really address practical needs/problems, and the person may not have a background to contribute to team discussions.
  • What research methodology do you use? Appropriate answers should include qualitative or quantitative methods. Research on human communication behavior requires testing, interacting with an audience, and interpreting/integrating those results. Inappropriate answers would be rhetorical or discourse analysis. These closely examine the text, but miss the human behavior issues that drive many problems affecting communication, design, and usability.
  • What experience do you have in this topic area? Appropriate answers mirror the current expectations for hiring a senior-level usability person for the project. Inappropriate answers are explanations of why they don’t have any direct experience, paired with claims that their other research experience is equivalent. They need to understand usability research and how human behavior affects the use of materials.
  • What is your experience on team projects? Appropriate answers mirror the hiring expectations for a senior-level usability or technical communication person. Inappropriate answers talk about their collaborative writing projects. Yes, writing with other people is a core academic skill, but all of those people have similar backgrounds. Working with a team that has a range of skills is rare in academic circles.
  • Have you worked in a corporate environment? A “yes” means they understand the pace of corporate projects and understand that once a decision is made the project moves forward. A “no” may mean they will treat your project like another academic project. Those too often move slowly or get pushed aside for teaching or administrative duties, and disagreements keep coming up and being rehashed because academic projects are rarely deadline driven.

These questions represent a start in the overall process of engaging in effective industry-academia collaboration, and individuals can modify or build on them based on the project on which collaboration would occur.

Conclusion

Developing new ways to engage in effective research requires a blend of practitioner and academic contributions. Practitioners need to define their needs and communicate them to academics. Academic researchers need to learn how practitioners work, what they need, and how to communicate research in an accessible manner. Academics also need to understand the kinds of problems the practitioners consider important.

Ideally, collaborative research projects—when done effectively—allow academics to build on information they receive from practitioners, and practitioners can improve their products based on research published by academics. Realizing such benefits is a matter of understanding how each side views research and selecting research partners carefully and effectively. Truthfully, there is no single answer to how to improve the interactions about research between practitioners and academics, but it is a process we all need to work on improving.

References

Boettger, R. K., E. Friess, and S. Carliner. 2014. Who Says What to Whom? Assessing the Alignment of Content and Audience between Scholarly and Professional Publications in Technical Communication (1996–2013). Paper presented at the 2014 IEEE International Professional Communication Conference, Pittsburgh, PA.

St.Amant, K., and L. Meloncon. 2016. Addressing the Incommensurable: A Research-based Perspective for Considering Issues of Power and Legitimacy in the Field. Journal of Technical Writing and Communication 46.3: 267–283. https://doi.org/10.1177/0047281616639476

MICHAEL J. ALBERS (albersm@ecu.edu) teaches at East Carolina University. His research looks at quantitative analysis and at complex information.