Features | March/April 2024

From Technical Communicator to AI Whisperer: The Rise of Prompt Engineering

By Ryan K. Boettger | Member

Technical communicators are ideally positioned to guide AI systems in generating quality content through an emerging field called prompt engineering.

In Intercom’s December 2018 issue on the state of the tech comm industry, over 70% of survey respondents reported that technology had rarely or never assumed the tasks they performed in their jobs (Carliner and Chen 2018). The same percentage doubted their jobs could be automated by 2030.

A lot has changed in six years, with a global pandemic notably accelerating the development and adoption of artificial intelligence (AI) across various sectors. Experience working with AI has increasingly become a requirement in job postings and is likely to become a standard expectation soon. Researchers at LinkedIn observed that job postings mentioning AI saw 17% higher growth in applications over the past two years than postings without such mentions.

While technical communicators might wonder whether robots will take their jobs, AI has instead opened opportunities to reshape the profession and deepen organizational value. College students, likewise, should gain this knowledge before entering the workforce. The demand for content, for example, will only increase, creating a need for employees who are skilled in the art of clarity, concision, and storytelling.

Sound familiar? These skills remain the core of effective technical communication and will become critical in an age of content bloat and saturation. AI is not replacing technical communicators, but they should consider how they can apply their current skill set differently and whether upskilling would benefit their careers. In particular, prompt engineering has emerged as an essential skill and viable career for technical communicators.

Prompt Engineering Defined

Prompt engineering involves crafting clear, concise, and effective prompts for AI systems, guiding the AI to create accurate and contextually appropriate responses. Prompt engineers understand how to finesse the AI by using terminology or prompt sequences that generate the best results, facilitating seamless collaboration between the technology and its users. Technical communicators are ideally positioned to excel as prompt engineers, but they have to understand how to translate their skills into the required knowledge areas.

Key Knowledge Areas for Prompt Engineers

Prompting requires more than just being a good communicator. Prompt engineers need to understand language structures, the typical tropes of synthetic text, prompting methodologies, limitations of AI, and ethical considerations.

Understand Language Structures and Rhetorical Strategies

Mastering prompt engineering isn’t just about knowing the technology; it’s deeply rooted in understanding language structures and the art of persuasion. AI makes information more accessible, but it doesn’t inherently grasp the nuances of human language. That’s where your expertise comes in. A thorough understanding of language structures, from sentence diagramming to syntax, is key to guiding AI toward generating more effective and user-friendly responses.

Beyond grammar, knowing how to use rhetorical strategies—concepts like metaphor, repetition, and positive emphasis—makes ideas more convincing and memorable. This skill is vital in prompt engineering, where the goal is to produce AI-generated content that resonates with and motivates the audience. Take Kitty Locker’s research on goodwill in communication. Applying these principles can transform the tone of AI-generated customer service responses, making them more engaging and effective. Understanding language and persuasion, and then knowing how to communicate that to the AI, produces outputs that better meet your communication goals.

Learn Prompting Methodologies

Effective prompting requires familiarity with specific methodologies for interacting with AI systems. Users who express frustration with AI output can often produce better results once they understand the basics. As with all effective technical writing, an understanding of audience and purpose determines how to prompt the AI. There are many prompting methodologies, but the one I offer is Do-What-How: give a command to define the task, then detail the desired output, format, or style.

DO: Issue a direct command to the AI. Clarify the action you want the AI to take.

WHAT: Specify the task or output. Detail what you expect as the result of the AI’s action.

HOW: Specify the manner or style in which the task should be completed. This includes formatting guidelines, tone, or any stylistic elements you want the AI to incorporate.

As an illustration, consider, “Instruct a general user on how to create a style in MS Word in five steps or fewer, using clear and concise language suited for beginners.” If you want a more specialized prompt, try, “Instruct a general user on how to create a style in MS Word in five steps or fewer in the voice of Deadpool.” These adjustments make massive differences in the quality and tone of the synthetic text.
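
For readers who think in code, here is a minimal sketch in Python of how the sample prompt above breaks down into its Do, What, and How components; the variable names and assembly are simply illustrative, not part of the methodology itself.

    # A minimal sketch of the Do-What-How structure, using the MS Word example above.
    do = "Instruct a general user on how to create a style in MS Word"  # DO: the command
    what = "in five steps or fewer"                                     # WHAT: the expected output
    how = "using clear and concise language suited for beginners"       # HOW: format, tone, style

    prompt = f"{do} {what}, {how}."
    print(prompt)
    # Instruct a general user on how to create a style in MS Word in five steps or fewer,
    # using clear and concise language suited for beginners.

Separating the components this way makes it easy to swap out the How clause (the Deadpool voice, for instance) without rewriting the rest of the prompt.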

In addition, interviewing the AI is another way to learn more about how it was trained. Earlier, I referred to Locker’s goodwill principles as rhetorical strategies. I used this term because I interviewed the AI, first asking whether it knew Locker’s research and then asking how it classified that work; “rhetorical strategies” was the term it returned. Interviewing the AI helps in crafting prompts that align with the model’s training, rather than imposing external terminology.

Recognize the Tropes of Synthetic Text

If you generate enough text with AI, you’ll notice tropes that don’t reflect the best writing practices. Synthetic text is often marked by repetitive word choices, overwritten or flowery prose, and cringey summaries that I call Bow-itis because the AI always wants to wrap everything up neatly.

Prompt engineers elicit higher-quality output with mega prompts—extensive, multi-layered instructions—in addition to adjusting the AI’s parameters or creating fine-tuned models that better reflect an individual writer’s or organization’s style and voice. These advanced techniques should be attempted only after the basic prompting methodologies are mastered.

Learn Different AI Models and Understand Their Parameters

Each AI model has its own purposes, quirks, and prompting guidelines. Besides learning different AI models, prompt engineers must also be able to adjust a specific AI’s parameters. If you use the web-based version of Claude or ChatGPT, you won’t be able to do this; instead, you need to use the developer interface for their models, some of which are easier to access than others.

If you have a ChatGPT account, you can use those same credentials to log in to OpenAI’s Playground (https://platform.openai.com/playground). Here, you can choose a model and adjust parameters like temperature, Top P, frequency penalty, and presence penalty.

  • Temperature controls the randomness or creativity of the response. A lower temperature results in more predictable and conservative outputs, while a higher temperature generates more varied and creative responses.
  • Top P (or Top Probability) controls the diversity of the generated content. It’s a way of telling the model to consider only the top percentage (probability-wise) of potential next words. A lower Top P will make the model’s choices narrower or more focused, while a higher Top P allows for a wider array of possibilities.
  • Frequency penalty discourages the model from repeating the same words or phrases. A higher frequency penalty makes the text more diverse and helps it avoid redundancy.
  • Presence penalty discourages the model from repeating the same topics or themes. Increasing the presence penalty can help in generating a narrative that continuously moves forward, introducing new elements and ideas, rather than circling around the same concepts.

Prompt engineers understand how these parameters influence output when adjusted individually and in tandem. For example, a lowered temperature of 0.7 and Top P of 0.8 will generate more predictable outputs with more expected word choices, ideal for most business and technical communication. Creative writers, in contrast, might increase these parameters to generate less predictable and more varied vocabulary.
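
If you prefer to experiment outside the Playground, here is a minimal sketch using OpenAI’s Python library with the temperature and Top P values from the example above; the model name and the two penalty values are illustrative placeholders, not recommendations.

    # A minimal sketch of setting these parameters through OpenAI's Python library.
    # Requires an API key in the OPENAI_API_KEY environment variable.
    from openai import OpenAI

    client = OpenAI()

    response = client.chat.completions.create(
        model="gpt-4o",          # placeholder; choose whichever model your account offers
        messages=[{
            "role": "user",
            "content": "Instruct a general user on how to create a style in MS Word "
                       "in five steps or fewer, using clear and concise language.",
        }],
        temperature=0.7,         # lower = more predictable, conservative wording
        top_p=0.8,               # narrows word choice to the most probable options
        frequency_penalty=0.3,   # illustrative value; discourages repeated words and phrases
        presence_penalty=0.3,    # illustrative value; discourages circling back to the same topics
    )
    print(response.choices[0].message.content)

Rerunning the same prompt while nudging these values up or down is the quickest way to feel how each parameter shapes the output.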

If you’re only familiar with the web-based interfaces of most AI models, you are probably unfamiliar with these parameters and beholden to whatever defaults the developers have set. This can often yield frustrating output that doesn’t reflect your intentions.

Acknowledge Ethical Considerations

No discussion of AI should take place without acknowledging ethical considerations. Working with AI and synthetic text means prompt engineers must ensure the content is original, accurate, and human-edited.

Prompt engineers should also be mindful of the language they use in prompts. Many organizations deal with proprietary or client information that should not be fed into web-based AIs. Likewise, the AI should not be prompted to generate text in a specific author’s voice; rather, the prompt should describe the features that characterize that voice. This is another reason knowledge of language structures and rhetorical strategies is vital to this emerging profession.

Transparency about AI’s role in content creation is also essential to maintain trust and integrity, making it clear to audiences when content is machine-generated. If your company doesn’t currently have a policy on AI-assisted writing for both in-house and client work, now is the time to consider creating one.

Finally, AI can unknowingly perpetuate biases from its training data, and much of technical writing centers on assisting specialized audiences. It’s incumbent on prompt engineers to recognize and counteract these biases, ensuring inclusive and stereotype-free content. Ultimately, prompt engineers bear the responsibility for the AI’s output, upholding ethical standards and guiding AI toward producing respectful and accountable communication.

The AI-Assisted Technical Communicator

AI is here to stay. New models seemingly drop every day, and AI is enhancing the software and websites technical communicators use daily in their work. It’s easy to feel overwhelmed by the pace of change and unsure where to begin learning more, but I hope this article offers actionable steps toward becoming an effective prompt engineer:

  • Understand language structures: Study language patterns, terminology, and rhetorical strategies used in your field to tune AI outputs.
  • Adopt prompting best practices: Familiarize yourself with techniques like Do-What-How prompts. Know when to apply different approaches.
  • Recognize common AI tropes: Repetition, inappropriate tone/voice, inaccurate facts. Adjust prompts and parameters.
  • Explore AI models’ unique features: Each model has strengths, weaknesses, and favored prompting structures. Adjust temperature and presence penalty for desired output.
  • Uphold ethical AI use: Ensure transparency, accuracy, and oversight in AI content creation. Mitigate biases. Safeguard sensitive data.

Prompt engineering blends language expertise with AI fluency. By skillfully guiding AI systems, technical communicators can unlock efficiency and innovation in the work they do.

References

Carliner, Saul, and Yuan Chen. “What Technical Communicators Do.” Intercom 65, no. 8 (2018): 13–16.


Ryan K. Boettger is a professor and chair of the Department of Technical Communication at the University of North Texas. He specializes in curriculum development and assessment, content analysis, data-driven learning, and the intersection of AI and writing. His award-winning research has been funded by the National Science Foundation and published in major technical communication and writing studies journals. He is the former deputy editor in chief of IEEE Transactions on Professional Communication and editor for the Wiley/IEEE Press book series in Professional Engineering Communication.