Features September/October 2023

Audio Describing Tables and Charts: A re-visualization process for sharing complex data sets with people who are blind or have low vision

Audio description can be integrated into the design process for any visualization project, including the description of tables and charts.

By Brett Oppegaard, Qiang Xu, and Thomas Hurtut

Design and production of everyday tables and charts can be conceptualized as a rhetorical communicative process, during which data gets filtered and then arranged in an orderly visual fashion. The underlying data first must be made sense of, processed, and mediated before it is expressed visually in these everyday forms. While tables and charts often appear as unassuming sidekicks, they should not be overlooked, and neither should their rampant inaccessibility. Tables and charts usually contain large amounts of information, cross-referenced with other data, creating complex relationships, multiple layers of meaning, and fertile paths for readers to tailor their inquiries based on personal interests. Technical communicators use them for good reasons. Yet what happens in this communication process if the person wanting to read the table can make sense of everything involved with the data except that they cannot see it? Can that visualization process still take place? If so, in what ways? Those were some of the questions we pursued in this study, which focused on tables, charts, and the accessibility of those design elements in national park brochures.

Tables and charts in such public-place contexts typically are presented as a purely visual experience with conventional and refined aesthetics. They are created by a learned graphic designer who shows expertise through adherence to conventions, such as when a table is formatted into a combination of rows and columns surrounded by visual cues, including title and subtitle texts, graphical elements, and a consistent, purposeful color palette. When such a visual object is audio described, so that it becomes accessible to people who cannot see it, the object becomes both a product of remediation and a prompt for re-visualization. The original designer visualized the data, but it is the audio describer who must remediate the visualization, not the original data, transforming visual media into audible media so that the listener can visualize it again.

To further isolate and examine the visuality of basic tables and charts, we approached them in this study from the perspective of people who do not have direct access to the original data or the original data visualization. People need tables and charts for the information they contain but also for the orderly way that information is presented. Like anyone else, people who are visually impaired can benefit from visualizing clear, concise summaries of larger documents, complicated systems, and complex data sets in formats that warrant representative structures, including tables and charts. While such “visualization” might bring to mind the act of literally “seeing” something, visualization can be both physical and intellectual, in the sense that people can conjure an image in their minds whether or not they have direct visual access to it, as when a general visualizes a global-scale battlefield or an astronomer ponders the billions of galaxies in our universe. To have vision, to visualize, and to interpret in visual terms are all mental processes that can happen without eyes explicitly doing any of that work. This article creates an exploratory map of this area of interest and asserts possibilities for how you can integrate these practices into the design process for any visualization project, with the description of tables and charts as seemingly extreme examples. This visualization process through audio description makes your work richer and better for everyone.

Tables and Charts as a Visual Construct

This article focuses on tables and charts, two primary, ubiquitous forms of data visualization. Tables have a number of already identified design issues that researchers are actively studying, but those scholarly inquiries mostly focus on the visualities of the genre rather than on the visualization processes needed to use them (Schwabish, 2020). A chart, from this perspective, is a visual representation of data that uses graphical symbols and algorithmic drawing rules that vary some of the visual characteristics of those symbols based on the data (Bertin, 1983; Elzer et al., 2007).

In terms of modalities, almost all tables and charts are silent, purely visual objects, making them inaccessible to people who cannot see them, and few guidelines exist today about how to address that inaccessibility. Beyond alternative text, which might read something like “Table about campgrounds,” Audio Description aims to create an equivalent information-exchanging experience for people who listen to the data rather than see it, meaning a well-described table or chart would give listeners the same agency and information, regardless of the modality needed or preferred. A major gap exists between those ambitions for equity and accessibility and the audible products commonly available for hearing and visualizing tables and charts, when or if such audio even exists (Jung et al., 2022). Recent research also points to a gap between current practices and what listeners want and expect. For example, in the information visualization scientific community, several attempts have been made to apply machine learning methods that extract information from chart images and then generate textual descriptions from it, with only limited results to show for it (Choi et al., 2019; Kim et al., 2021).

With chart design, for example, a conceptual model for Audio Description could be grounded in universal semantic categories, such as those proposed by Lundgard and Satyanarayan (2022), prompted by questions such as: What descriptive statistical concepts and relations can be inferred, including extrema and correlations? But we were unable to find any research that validated those questions, directly applied those ideas, or used any other detailed best practices in real contexts, so we decided to find out what was happening in this wilderness of public discourse. In that vein, we located as many examples as we could of such descriptions in public use so that we could examine them for common features, build some foundational understanding of the current state of description for tables and charts, and identify significant gaps between practice and theory. To do so, we started by examining a corpus of descriptions that we had generated as researchers in public places during the past decade.
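As a rough illustration of the statistical content those semantic categories point to, the following minimal sketch shows how extrema and a correlation could be computed from the data behind a chart before a describer writes a single sentence. The sketch is not drawn from the study; the month labels, rainfall, and temperature values are invented placeholders.

```python
from statistics import mean

# Invented sample data standing in for the kind of climate chart a park brochure might carry.
months = ["Jan", "Feb", "Mar", "Apr", "May", "Jun"]
rainfall_mm = [80, 65, 70, 40, 25, 15]
temperature_c = [5, 7, 11, 16, 21, 25]

def pearson(xs, ys):
    # Plain Pearson correlation, written out so the sketch needs no outside libraries.
    mx, my = mean(xs), mean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

# Extrema: the wettest and driest months, the kind of level-two statistical concept a description could report.
wettest = max(zip(rainfall_mm, months))
driest = min(zip(rainfall_mm, months))
r = pearson(rainfall_mm, temperature_c)

print(f"Rainfall peaks in {wettest[1]} at {wettest[0]} mm and is lowest in {driest[1]} at {driest[0]} mm.")
print(f"Correlation between rainfall and temperature: r = {r:.2f}")
```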

Descriptions from Brochures in Public Places

Our collection of descriptions for this study came from a grant-funded research initiative called The UniDescription Project (UniD), www.unidescription.org. With the U.S. National Park Service as a primary funder and partner in the project, the UniD initiative has brought together teams of describers from throughout North America (mostly in the U.S. and Canada) as well as the United Kingdom to hold nine hackathon-like workshops focused on Audio Description since 2017 (Oppegaard, 2020).

These workshops have included hundreds of people and dozens of teams from more than 170 U.S. national parks. The teams have created audio-described visitor center brochures, which generally include 15 to 35 descriptions of the visual media on those brochures, including photographs, illustrations, maps, tables, and charts, done at different times by different people and reflecting a diversity of approaches, styles, and skill levels. That meant our database held thousands of descriptions to consider for inclusion in our study. We searched those descriptions to identify as many about tables and charts as we could find. We wanted to use those examples to examine how such descriptions were being handled by these teams, mostly composed of a mixture of park staff members, external volunteers, and participants who were visually impaired. We also knew that during these workshops we had not provided any significant instructions or best practices related to tables and charts, meaning the participants were creating rapid, ad-hoc responses to this particular accessibility problem based on whatever skills and experiences they had at the time. We considered all 271 active public projects in our database and imported all of their descriptions into a separate database.

To find descriptions of just tables and charts in that collection, we searched these texts for keywords that would identify appropriate artifacts for analysis, including “chart,” “graph,” “plot,” “visualization,” “diagram,” “table,” “list,” “figure,” and “infographic.” We initially determined that 198 of the 271 projects contained at least one description with at least one of those keywords. But that initial pool included many false positives. Because these terms are loosely applied to a variety of data visualization artifacts, and sometimes used colloquially, the process required a significant amount of filtering, and our final sample had to be screened entirely by hand, with the authors reading every description and deciding whether it fit under the label “table” or “chart.” In the end, we had full agreement on what was in and what was out, but relatively few examples made the cut. After the final round of debate among us in this exploratory study, we were left with only 22 potential table descriptions and 12 chart possibilities. We then separated tables, with multiple points of data, from lists, with only single points of data, and from historic photos of tables, leaving us with 9 artifacts for analysis. With charts, we separated the classic chart styles (one bubble chart, one bar chart, and one complex chart that illustrated multiple data points, such as elevation, rainfall, and temperature) from historic images of charts and maps, leaving us with just those three examples to analyze.
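For readers who want to picture the keyword screening step, here is a minimal sketch of that first pass. It is an assumption about tooling, not the project’s actual code; the record structure and field names are placeholders, and the hand filtering that followed is precisely the part that cannot be automated.

```python
import re

# Keyword screen used to surface candidate descriptions; the "project" and
# "text" field names below are placeholders, not the actual UniD schema.
KEYWORDS = ["chart", "graph", "plot", "visualization", "diagram",
            "table", "list", "figure", "infographic"]
pattern = re.compile(r"\b(" + "|".join(KEYWORDS) + r")\b", re.IGNORECASE)

descriptions = [
    {"project": "Example Park", "text": "A table of campground fees with four columns."},
    {"project": "Example Park", "text": "A black-and-white photograph of a ranger on horseback."},
]

# Anything that matches still has to be read and judged by hand afterward.
candidates = [d for d in descriptions if pattern.search(d["text"])]
projects_with_hits = {d["project"] for d in candidates}
print(f"{len(candidates)} candidate descriptions across {len(projects_with_hits)} projects")
```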

Findings and Discussion

With the table descriptions we were able to locate, we found that most did include an introductory paragraph that explained, to some degree, the purpose of the data visualization and its structure, in terms of columns and rows or however it was organized. Some just provided raw data dumps, which likely would be unusable in an audible format because of the cognitive load required to build meaning from them. Most of these descriptions visualized the data in sentence form, as if they were reporting findings from reading the table. They generally sidestepped describing conventional table aesthetics, and they did not attempt to provide broad and equivalent access to the table’s data. In that respect, the more complex the table, the less access the listener was getting to its complexity, because the describer was mostly making interpretive choices on the listener’s behalf. This finding opens opportunities for innovation in Audio Description and for further research into ways to re-visualize the data in a table, rather than filter it further through a narrative or reporting style.
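To make that re-visualization idea concrete, here is a hypothetical sketch of a description script that first tells listeners the table’s purpose and structure and then reads every cell against its column header, so that no data is filtered out along the way. The campground table and its values are invented for illustration, not taken from any park brochure.

```python
# A hypothetical campground table, invented for illustration only.
headers = ["Campground", "Sites", "Drinking water", "Open season"]
rows = [
    ["North Rim", "40", "Yes", "May to October"],
    ["Desert View", "25", "No", "April to November"],
]

# Structure first: tell listeners what the table is and how it is organized.
script = [
    f"A table with {len(headers)} columns and {len(rows)} rows. "
    f"The columns are {', '.join(headers)}. Each row is read left to right."
]

# Then the data itself, each cell paired with its column header,
# so nothing is filtered out on the listener's behalf.
for row in rows:
    script.append(", ".join(f"{h}: {c}" for h, c in zip(headers, row)) + ".")

print("\n".join(script))
```

The structure-first opening mirrors what the stronger descriptions in our sample already did; pairing each cell with its header is one way to restore the cross-referencing that a raw dump or a narrative summary takes away.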

With charts (a sample even smaller and less generalizable), we found inconsistency in labeling and in the longer associated descriptions that could also potentially plague other samples. For example, one chart was labeled a “rectangular diagram” rather than being more formally categorized as a “line chart.” Another was described as an “infographic,” but it essentially just used circles instead of bars. Those unusual shapes could have been confusing to the describers, but their effect was to show larger circles as larger numbers, just as a bar chart communicates larger bars as larger numbers. In the third case, the chart was described as “horizontal,” but the bars were presented in a vertical stack along the y-axis, which again could have led to confusion in the description.

These findings confirm a general inaccessibility of tables and charts in public discourse, but they also point to larger systemic obstacles in this area of technical communication, including vocabulary inconsistencies and scant evidence of adherence to any models or best practices. They also raise a more philosophical concern about whether Audio Description of tables and charts should be focused solely on describing the original representation of any particular data visualization. A chart, in other words, is a support system, or medium, for making interpretations about specific data. That is what the audience is after: making sense of the data. Those interpretations are guided by the visuals and are primarily information-gathering activities. In that respect, the visual design enables and supports the activity of accessing the data but does not constitute the activity itself. So when Audio Description is applied in this communication context, with the listener unable to access the original piece of visual media, describers could break from describing norms, take one step upstream in the design process, find their viewpoint in the data rather than in the visual media, and describe the data they see underlying the visuals. That would be one way to prompt a visualization process in an equitable manner, rather than relying on a re-visualization.

References

Bertin, Jacques. Semiology of Graphics. University of Wisconsin Press, 1983.

Choi, Jinho, Sanghun Jung, Deok Gun Park, Jaegul Choo, and Niklas Elmqvist. “Visualizing for the Non-Visual: Enabling the Visually Impaired to Use Visualization.” Computer Graphics Forum 38, no. 3 (2019): 249–260. https://doi.org/10.1111/cgf.13686.

Elzer, Stephanie, Edward Schwartz, Sandra Carberry, Daniel Chester, Seniz Demir, and Peng Wu. “A Browser Extension for Providing Visually Impaired Users Access to the Content of Bar Charts on the Web.” In Proceedings of the Third International Conference on Web Information Systems and Technologies (WEBIST), 2007. https://doi.org/10.5220/0001274600590066.

Jung, Crescentia, Shubham Mehta, Atharva Kulkarni, Yuhang Zhao, and Yea-Seul Kim. “Communicating Visualizations without Visuals: Investigation of Visualization Alternative Text for People with Visual Impairments.” IEEE Transactions on Visualization and Computer Graphics 28, no. 1 (2022): 1095–1105. https://doi.org/10.1109/TVCG.2021.3114846.

Kim, Nam Wook, Sharon C. Joyner, Andrew Riegelhuth, and Yeeun Kim. “Accessible Visualization: Design Space, Opportunities, and Challenges.” Computer Graphics Forum 40, no. 3 (2021): 173–188. https://doi.org/10.1111/cgf.14298.

Lundgard, Alan, and Arvind Satyanarayan. “Accessible Visualization via Natural Language Descriptions: A Four-Level Model of Semantic Content.” IEEE Transactions on Visualization and Computer Graphics 28, no. 1 (2022): 1073–1083. https://doi.org/10.1109/TVCG.2021.3114770.

Oppegaard, Brett. “Pushing Forward Together: From Failures to Feats Through Increasingly Inclusive Design.” In Inclusive Digital Interactives: Best Practices, Innovative Experiments, and Questions for Research, edited by Jennifer Majewski, Rachel Marquis, Nancy Proctor, and Brad Ziebarth, 219–242. Washington, D.C.: Access Smithsonian, The Institute for Human Centered Design, & Museweb, 2020. https://access.si.edu/sites/default/files/inclusive-digital-interactives-best-practices-research.pdf.

Schwabish, Jonathan A. “Ten Guidelines for Better Tables.” Journal of Benefit-Cost Analysis 11, no. 2 (2020): 151–178. https://doi.org/10.1017/bca.2020.11.


Oppegaard Headshot

Brett Oppegaard (brett.oppegaard@hawaii.edu) is a professor in the School of Communication and Information at the University of Hawai’i at Mānoa in Honolulu. He researches media-production processes and products at intersections of Technical Communication, Rhetoric, Human-Computer Interaction, Disability Studies, Digital Inequalities, and Journalism.

Xu Headshot

Qiang Xu (qiang.xu@polymtl.ca) is an M.Sc.A. student in the Computer Engineering Department of Polytechnique Montréal, Canada. Her research interests include data visualization design and human-computer interaction.

Hurtut Headshot

Thomas Hurtut (thomas.hurtut@polymtl.ca) is an associate professor in the Computer Engineering Department of Polytechnique Montréal, Canada. His research focuses on data visualization design processes and issues.