
In a World Enabled by Artificial Intelligence, Technical Content Should Become Agentive, Not Remain Assistive: An Interview with Christopher Noessel

By Scott Abel | STC Senior Member

In the digital age, change happens quickly. This column features interviews with the movers and shakers—the folks behind new ideas, standards, methods, products, and amazing technologies that are changing the way we live and interact in our modern world. Got questions, suggestions, or feedback? Email them to scottabel@mac.com.

Interaction design expert Christopher Noessel envisions a world in which smart machines and content can perform tasks for our prospects and customers. In this installment of Meet the Change Agents, Scott Abel, The Content Wrangler, asks Noessel to share with us his thoughts on artificial intelligence (AI), assistive versus agentive solutions, and the future of technical communicators in a world of AI-powered automation. Read this interview for a peek at the potential impact of moving from our tradition of providing content solutions that help people accomplish work toward content solutions that perform work for people.

Scott Abel: Christopher, tell us a little about yourself and your work.

Christopher Noessel: I’m the Global Design Practice Manager for Travel and Transportation industries with IBM, bringing IBM Design goodness to products and clients. In that role, I teach and speak about—and evangelize—design internationally. My “spidey-sense” goes off semi-randomly, leading me to investigate and speak about a range of things from interactive narrative to ethnographic user research, interaction design to generative randomness, and designing for the future.

I am the co-author of Make It So: Interaction Design Lessons from Science Fiction (Rosenfeld Media, 2012), co-author of About Face: The Essentials of Interaction Design (4th ed., Wiley, 2015), keeper of the blog scifiinterfaces.com, and author of Designing Agentive Technology: AI That Works for People (Rosenfeld Media, 2017).

SA: Designing Agentive Technology: AI That Works for People is an important work that I believe is one of the most valuable books written in the information design and customer experience sector in a long time. Can you tell our readers what agentive technology is and how it relates to AI?

CN: Think of a hammer. Think of a steam shovel. Think of a computer. Each of these is a tool a person can use to get things done. For the better part of the last century, we’ve been getting good at making more and more complicated tools. We’ve learned a lot along the way, and accomplished many things, too.

Using a tool isn’t the only way to get things done. In the age of narrow artificial intelligence, we can hand off tasks to an agent and have it do the work how we’d like it done, whenever it should be done, until further notice. That lets you—and your customers—get on with other things. Designing a tool for you to use to do work is different than designing the AI that does the work for you. If you only know how to design hammers, or even just computers, well, you’re behind.

I should explain that narrow AI is a term of art describing the smart-but-not-human-smart artificial intelligence we have in the world today: chatbots and voice-enabled personal and home assistants such as Apple’s Siri and Amazon’s Echo belong in the narrow AI category.

The speculative AI that can generalize and think like a human is appropriately called “general AI,” and while I’m interested in that, there’s little to do about it today as a designer. We may have to wait another 25 to 50 years before we get there.

SA: How did you come to want to write about this subject? Did you have an a-ha moment that led you to this subject?

CN: I was working on the design of a robo-investor. That’s an agent to which you describe your financial goals, current holdings, and monthly contributions; it invests for you and continually watches your portfolio, rebalancing it and alerting you both when a stock is tanking and when something you’re not taking advantage of is taking off. It was such a rich problem, and felt so different, that midway through I thought, “I should look up some best practices around designing for this sort of thing.” I couldn’t find anything and thought, “Well, damn. Someone should write a book about this.” Then there was a pause. Then, “I guess I should write a book about this.”

So, I wrote a book about it.
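
For readers who want to picture what such an agent does, here is a toy sketch in Python of the watch-and-alert loop Christopher describes. The function name, thresholds, and portfolio structure are invented for illustration; they are not drawn from any real robo-investor or from the book.

    # A toy, hypothetical version of the robo-investor's monitoring loop:
    # watch each holding, alert on sharp moves, and suggest rebalancing
    # when a position drifts from its target weight.
    def review(holdings, targets, prices):
        # holdings: shares owned per symbol; targets: desired portfolio weights;
        # prices: {"then": earlier price, "now": current price} per symbol.
        total = sum(shares * prices[s]["now"] for s, shares in holdings.items())
        messages = []
        for symbol, shares in holdings.items():
            change = prices[symbol]["now"] / prices[symbol]["then"] - 1
            if change <= -0.10:
                messages.append(f"Alert: {symbol} is tanking ({change:.0%})")
            elif change >= 0.10 and shares == 0:
                messages.append(f"Alert: {symbol} is taking off and you hold none")
            weight = shares * prices[symbol]["now"] / total
            drift = targets.get(symbol, 0.0) - weight
            if abs(drift) > 0.05:
                messages.append(f"Rebalance {symbol} by {drift:+.0%} of the portfolio")
        return messages

    print(review(
        holdings={"VTI": 10, "BND": 0},
        targets={"VTI": 0.7, "BND": 0.3},
        prices={"VTI": {"then": 200, "now": 230}, "BND": {"then": 80, "now": 72}},
    ))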

SA: In general, is it fair to describe most technical communication products (user guides, FAQs, online help, and other types of product documentation) as assistive? And, if so, why? What might we do to make technical communication agentive?

CN: Yes. Assistive AI helps you perform a task while your attention is on the task: finding a signal in noise, getting an answer to a question or help understanding something, parsing through advice and predictions. To make technical communication products agentive, you’d want to think about how that content can be used when the user isn’t paying attention to the related product. We can use agents to monitor the situation on users’ behalf, waiting patiently for changes to the product that will affect the way they work, or to the content sources they typically rely on. Then we’d equip the agent to watch those data streams and act if and when appropriate. Next, we’d equip the user with the capability to modify the triggers and behaviors of the agent, allowing them to personalize their experience with it.
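
To make the monitoring idea concrete, here is a minimal sketch in Python of an agent that watches an event stream and fires user-editable triggers. The names (ContentAgent, Trigger, observe) and the event shape are hypothetical, invented for this illustration rather than taken from any product or from the book.

    # A minimal, hypothetical "agentive content" loop: the agent watches a
    # data stream (say, a product's release feed), fires on user-configurable
    # triggers, and acts while the user's attention is elsewhere.
    from dataclasses import dataclass, field
    from typing import Callable, Dict, List

    @dataclass
    class Trigger:
        name: str
        condition: Callable[[Dict], bool]   # e.g., "did this release change the API?"

    @dataclass
    class ContentAgent:
        triggers: List[Trigger] = field(default_factory=list)
        actions: List[Callable[[Dict], None]] = field(default_factory=list)

        def observe(self, event: Dict) -> None:
            # Called for every event in the monitored stream.
            for trigger in self.triggers:
                if trigger.condition(event):
                    for act in self.actions:
                        act(event)

    # Users personalize the agent by editing its triggers and actions.
    agent = ContentAgent(
        triggers=[Trigger("api-change", lambda e: e.get("type") == "api_change")],
        actions=[lambda e: print(f"Flag topics that document {e.get('component')}")],
    )
    agent.observe({"type": "api_change", "component": "the login endpoint"})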

SA: In your book, you identify four questions we should ask in order to determine whether a solution should be assistive, automated, or agentive. Can you help us understand these?

CN: The first bit is “Given a computable task that recurs,” and that’s important. If it’s not computable, it’s outside the domain of any AI, and needs to be a manual tool. That said, I rarely run across problems that aren’t in some way computable. Similarly, if the problems don’t recur, it’s not worth the effort to computize them. Not until we get to general AI that can handle sui generis problems.

OK, to the questions:

  1. Can it be delegated? For some problems, we might not want to hand the work off to a computer for ethical or legal reasons; liability might be another. In those cases, we should offer assistance to a human performing the task.
  2. Is there a measurable trigger? For an agent to do its thing, it has to be watching a data stream for a trigger. That trigger can be many things: a keyword, a time, a valence, a percentage of confidence. But if the trigger can’t be confidently detected by a computer, then an agent can’t know when to do its thing, in which case we should provide a connection to a human who can.
  3. Is the user the focus of the task? Does it require human input? There are some tasks which humans shouldn’t be bothered with because they’re too mundane, too technical, or too critical. Think of an automated door (too mundane) or a pacemaker (too critical). These technologies don’t need a human as part of their regular flow, and so can be automated.
  4. I should also note that there is a fourth mode of interest, and that’s manual. There are times when the user may want or need to work with a tool without any AI involved at all. Any given product or service needs to elegantly and smartly shift between these modes—Manual, Assistive, Agentive, and Automatic. (One way to encode this decision flow appears in the sketch just after this list.)
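
Read together, the questions amount to a small decision flow. The sketch below, in Python, is one illustrative way to encode it; the function name and inputs are shorthand for the judgments a design team would make, not something prescribed by the book.

    # A hypothetical encoding of the decision flow described above.
    def choose_mode(computable_and_recurring: bool,
                    can_be_delegated: bool,
                    has_measurable_trigger: bool,
                    needs_human_involvement: bool) -> str:
        if not computable_and_recurring:
            return "Manual"      # outside the domain of narrow AI
        if not can_be_delegated:
            return "Assistive"   # ethics, law, or liability keep a human on the task
        if not has_measurable_trigger:
            return "Assistive"   # no reliable signal for an agent to act on
        if needs_human_involvement:
            return "Agentive"    # the agent works in the background; the user steers it
        return "Automatic"       # too mundane or too critical to involve a human

    # Example: monitoring a product's release feed is delegable, has a clear
    # trigger (a new release), and the writer still decides what to publish.
    print(choose_mode(True, True, True, True))   # -> Agentive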

SA: My research shows that some knowledge workers fear that artificial intelligence and related technologies will cause widespread unemployment in the content creation, management, translation, and delivery arenas. What do you think about these emerging technologies? Will they present new job opportunities for professional communicators, or will they replace us?

CN: It’s one of the two questions I always get when speaking, and rightly so. Technology has always changed jobs and marketplaces. I’m not an expert in the content field, but yes, language is computizable, and I don’t see an easy solution to the market forces at play.

But how we deal with AI as organizations and as a society is up to us. Hopefully most organizations will follow IBM’s goal of human augmentation rather than human replacement. But that’s just a hope, and if it goes the other way, we’ll need to have smart ways to retrain workforces and to rethink our social safety nets. My work in this area has led me to be a big believer in Universal Basic Income.

Editor’s Note: For those unfamiliar with the term Universal Basic Income, it’s a form of social security in which all citizens or residents of a country regularly receive an unconditional sum of money, either from the government or another public institution, independent of any other income they may earn or receive. Universal Basic Income is discussed in relation to the fear that artificial intelligence capabilities will result in widespread replacement of human workers—a 2013 University of Oxford study estimates that 47% of U.S. jobs may be at risk within the next two decades because of advances in artificial intelligence and automation. Learn more: “Why Free Money for Everyone Is Silicon Valley’s Next Big Idea” (June 29, fortune.com).

SA: What’s the first step technical communicators should take to learn how to determine whether their customers could benefit from agentive content or not (aside from reading your book, which I am already planning to recommend)?

CN: Ha. Yes. Read the book, of course. Then ask yourself, “What are we asking our customers to do that we could be doing for them, in persistent and hyper-personalized ways?” Also, find out, “Where are they bored?” And do first-hand research to ask them yourself because that will point you to where agentive modes of interaction might be implemented for the better.

SA: Well, I’m afraid we’re out of space, Christopher. Anything else you’d like to share with our readers as a closing thought?

CN: I’ve opened channels of communication about this topic on Facebook, Twitter, and Slack, and I’m hoping to build a community of practice to refine these ideas as well as collect case studies. Join if you can. Thanks for allowing me to share my views with your readers. If your readers have questions for me, they can reach me at chrisnoessel@gmail.com.
