Sonification is the use of non-speech sound in an intentional, systematic way to represent information (Walker & Nees, 2011).
The fascinating Twenty Thousand Hertz podcast episode Video(less) Games explores options for games composed mostly or entirely of sound. Gamers and developers discuss their motivations for contributing and the experience of play. At about 15:09, Steve Saylor, a blind video gamer and game accessibility consultant, describes a rich set of audio cues that can be enabled. These cues tell players about environmental features and action in the game. Listen to hear what play is like with the audio layer on and off.
Games composed mostly or entirely of sound are not new; the Twenty Thousand Hertz episode describes a text adventure game called Zork II that used a text-to-speech engine in the early 1980s. But the idea of developing a convention for audio cues within a game, or even across multiple games, reminded me of the sonification of math equations I first saw in the Complex Images for All Learners accessibility guide from Portland Community College. The DIAGRAM Center has a wonderful article on sonification with audio examples that can be played back at different speeds. Sonification is also not new, but the provision of multimodal data representations does not seem to be widespread in higher education, at least not that I have seen.
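To make the core idea concrete: sonification typically maps a data dimension (such as a function's values) onto an auditory dimension (such as pitch). Below is a minimal sketch in Python, using only the standard library; the function name, parameter names, and the particular linear pitch mapping are my own illustrative choices, not taken from any sonification tool mentioned above.

```python
import math
import struct
import wave

def sonify(values, filename="sonification.wav", sample_rate=44100,
           note_seconds=0.25, f_min=220.0, f_max=880.0):
    """Map each data value to a pitch and render the sequence as a WAV file.

    Higher values become higher pitches, a common sonification convention.
    All names here are hypothetical, chosen for this illustration.
    """
    lo, hi = min(values), max(values)
    span = (hi - lo) or 1.0  # avoid division by zero for flat data
    frames = bytearray()
    for v in values:
        # Linear mapping from the data range onto the frequency range.
        freq = f_min + (v - lo) / span * (f_max - f_min)
        for n in range(int(sample_rate * note_seconds)):
            t = n / sample_rate
            # Brief fade in/out so consecutive notes do not click.
            env = min(1.0, 10 * t, 10 * (note_seconds - t))
            sample = int(32767 * 0.5 * env * math.sin(2 * math.pi * freq * t))
            frames += struct.pack("<h", sample)  # 16-bit little-endian PCM
    with wave.open(filename, "wb") as w:
        w.setnchannels(1)       # mono
        w.setsampwidth(2)       # 16-bit samples
        w.setframerate(sample_rate)
        w.writeframes(bytes(frames))
    return filename

# Rising-then-falling data becomes a rising-then-falling melody,
# letting a listener "hear" the shape of the curve.
sonify([1, 3, 5, 7, 5, 3, 1], "parabola.wav")
```

Playing the resulting file back at different speeds, as the DIAGRAM Center examples allow, is simply a matter of changing the effective frame rate or note duration.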
What would educators need to know to become proficient in the use, evaluation, and creation of multimodal data representations? In the case of sonification, it might mean knowing where to find high-quality sonifications that have already been created. It might require training in how to design and produce sonifications. And in terms of design, how can our existing base of research and theory guide our decisions? These are fascinating questions that I would like to explore more thoroughly and bring back to the courses I teach.