5. Visual Representation
How can you design computer displays that are as meaningful as possible to human viewers? Answering this question requires an understanding of visual representation - the principles by which markings on a surface are made and interpreted. The analysis in this article addresses the most important principles of visual representation for screen design, introduced with examples from the early history of graphical user interfaces. In most cases, these principles have been developed and elaborated within whole fields of study and professional skill - typography, cartography, engineering and architectural draughting, art criticism and semiotics. Improving on the current conventions requires serious skill and understanding. Nevertheless, interaction designers should be able, when necessary, to invent new visual representations.
5.1 Typography and text
For many years, computer displays resembled paper documents. This does not mean that they were simplistic or unreasonably constrained. On the contrary, most aspects of modern industrial society have been successfully achieved using the representational conventions of paper, so those conventions seem to be powerful ones. Information on paper can be structured using tabulated columns, alignment, indentation and emphasis, borders and shading. All of those were incorporated into computer text displays. Interaction conventions, however, were restricted to operations of the typewriter rather than the pencil. Each character typed would appear at a specific location. Locations could be constrained, like filling boxes on a paper form. And shortcut command keys could be defined using onscreen labels or paper overlays. It is not text itself, but keyboard interaction with text that is limited and frustrating compared to what we can do with paper (Sellen and Harper 2001).
But despite the constraints on keyboard interaction, most information on computer screens is still represented as text. Conventions of typography and graphic design help us to interpret that text as if it were on a page, and human readers benefit from many centuries of refinement in text document design. Text itself, including many writing systems as well as specialised notations such as algebra, is a visual representation that has its own research and educational literature. Documents that contain a mix of bordered or coloured regions containing pictures, text and diagrammatic elements can be interpreted according to the conventions of magazine design, poster advertising, form design, textbooks and encyclopaedias. Designers of screen representations should take care to properly apply the specialist knowledge of those graphic and typographic professions. Position on the page, use of typographic grids, and genre-specific illustrative conventions should all be taken into account.
Most screen-based information is interpreted according to textual and typographic conventions, in which graphical elements are arranged within a visual grid, occasionally divided or contained with ruled and coloured borders.
5.2 Maps and graphs
The computer has, however, also acquired a specialised visual vocabulary and conventions. Before the text-based computer terminal (or 'glass teletype') became ubiquitous, cathode ray tube displays were already used to display oscilloscope waves and radar echoes. Both could be easily interpreted because of their correspondence to existing paper conventions. An oscilloscope uses a horizontal time axis to trace variation of a quantity over time, as pioneered by William Playfair in his 1786 charts of the British economy. A radar screen shows direction and distance of objects from a central reference point, just as the Hereford Mappa Mundi of 1300 organised places according to their approximate direction and distance from Jerusalem. Many visual displays on computers continue to use these ancient but powerful inventions - the map and the graph. In particular, the first truly large software project, the SAGE air defense system, set out to present data in the form of an augmented radar screen - an abstract map, on which symbols and text could be overlaid. The first graphics computer, the Lincoln Laboratory Whirlwind, was created to show maps, not text.
Basic diagrammatic conventions rely on quantitative correspondence between a direction on the surface and a continuous quantity such as time or distance. These should follow established conventions of maps and graphs.
5.3 Schematic drawings
Ivan Sutherland's groundbreaking PhD research with Whirlwind's successor TX-2 introduced several more sophisticated alternatives (Sutherland 1963). The use of a light pen allowed users to draw arbitrary lines, rather than relying on control keys to select predefined options. An obvious application, in the engineering context of Massachusetts Institute of Technology (MIT) where Sutherland worked, was to make engineering drawings such as the girder bridge in Figure 13. Lines on the screen are scaled versions of the actual girders, and text information can be overlaid to give details of force calculations. Plans of this kind, as a visual representation, are closely related to maps. However, where the plane of a map corresponds to a continuous surface, engineering drawings need not be continuous. Each set of connected components must share the same scale, but white space indicates an interpretive break, so that independent representations can potentially share the same divided surface - a convention introduced in Diderot's encyclopedia of 1772, which showed pictures of multiple objects on a page, but cut them loose from any shared pictorial context.
Engineering drawing conventions allow schematic views of connected components to be shown in relative scale, and with text annotations labelling the parts. White space in the representation plane can be used to help the reader distinguish elements from each other rather than directly representing physical space.
5.4 Pictures
The examples so far may seem rather abstract. Isn't the most 'natural' visual representation simply a picture of the thing you are trying to represent? In that case, what is so hard about design? Just point a camera, and take the picture. It seems like pictures are natural and intuitive, and anyone should be able to understand what they mean. Of course, you might want the picture to be more or less artistic, but that isn't a technical concern, is it? Well, Ivan Sutherland also suggested the potential value that computer screens might offer as artistic tools. His Sketchpad system was used to create a simple animated cartoon of a winking girl. We can use this example to ask whether pictures are necessarily 'natural', and what design factors are relevant to the selection or creation of pictures in an interaction design context.
We would not describe Sutherland's girl as 'realistic', but it is an effective representation of a girl. In fact, it is an unusually good representation of a winking girl, because all the other elements of the picture are completely abstract and generic. It uses a conventional graphic vocabulary of lines and shapes that are understood in our culture to represent eyes, mouths and so on - these elements do not draw attention to themselves, and therefore highlight the winking eye. If a realistic picture of an actual person was used instead, other aspects of the image (the particular person) might distract the viewer from this message.
It is important, when considering the design options for pictures, to avoid the 'resemblance fallacy' - the belief that drawings are able to depict real objects or scenes because the viewer's perception of the flat image simulates the visual perception of a real scene. In practice, all pictures rely on conventions of visual representation, and are relatively poor simulations of natural engagement with physical objects, scenes and people. We are in the habit of speaking approvingly of some pictures as more 'realistic' than others (photographs, photorealistic ray-traced renderings, 'old master' oil paintings), but this simply means that they follow more rigorously a particular set of conventions. The informed designer is aware of a wide range of pictorial conventions and options.
As an example of different pictorial conventions, consider the ways that scenes can be rendered using different forms of artistic perspective. The invention of linear perspective introduced a particular convention in which the viewer is encouraged to think of the scene as perceived through a lens or frame while holding his head still, so that nearby objects occupy a disproportionate amount of the visual field. Previously, pictorial representations more often varied the relative size of objects according to their importance - a kind of 'semantic' perspective. Modern viewers tend to think of the perspective of a camera lens as being most natural, due to the ubiquity of photography, but we still understand and respect alternative perspectives, such as the isometric perspective of the pixel art group eBoy, which has been highly influential on video game style.
As with most conventions of pictorial representation, new perspective rendering conventions are invented and esteemed for their accuracy by critical consensus, and only more slowly adopted by untrained readers. The consensus on preferred perspective shifts across cultures and historical periods. It would be naïve to assume that the conventions of today are the final and perfect product of technical evolution. As with text, we become so accustomed to interpreting these representations that we are blind to the artifice. But professional artists are fully aware of the conventions they use, even where they might have mechanical elements - the way that a photograph is framed changes its meaning, and a skilled pencil drawing is completely unlike visual edge-detection thresholds. A good pictorial representation need not simulate visual experience any more than a good painting of a unicorn need resemble an actual unicorn. When designing user interfaces, all of these techniques are available for use, and new styles of pictorial rendering are constantly being introduced.
Pictorial representations, including line drawings, paintings, perspective renderings and photographs rely on shared interpretive conventions for their meaning. It is naïve to treat screen representations as though they were simulations of experience in the physical world.
5.5 Node-and-link diagrams
The first impulse of a computer scientist, when given a pencil, seems to be to draw boxes and connect them with lines. These node-and-link diagrams can be analysed in terms of the graph structures that are fundamental to the study of algorithms (but unrelated to the visual representations known as graphs or charts). A predecessor of these connectivity diagrams can be found in electrical circuit schematics, where the exact location of components, and the lengths of the wires, can be arranged anywhere, because they are irrelevant to the circuit function. Another early program created for the TX-2, this time by Ivan Sutherland's brother Bert, allowed users to create circuit diagrams of this kind. The distinctive feature of a node-and-link connectivity diagram is that, since the position of each node is irrelevant to the operation of the circuit, it can be used to carry other information. Marian Petre's research into the work of electronics engineers (Petre 1995) catalogued the ways in which they positioned components in ways that were meaningful to human readers, but not to the computer - like the blank space between Diderot's objects, this is a form of 'secondary notation' - use of the plane to assist the reader in ways not related to the technical content.
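The separation described above - connectivity that matters to the machine, position that matters only to the reader - can be sketched in a few lines of code (all names here are hypothetical, invented for illustration):

```python
# A node-and-link diagram stored as two independent structures:
# the links carry the technical content, the layout carries only
# secondary notation for human readers.

connections = {            # which nodes are linked (the circuit's function)
    "battery": ["switch"],
    "switch": ["lamp"],
    "lamp": ["battery"],
}

layout = {                 # where each node is drawn (meaningful to people only)
    "battery": (0, 0),
    "switch": (100, 0),
    "lamp": (200, 0),
}

def is_connected(a, b):
    """Connectivity is unaffected by where the nodes happen to be drawn."""
    return b in connections.get(a, []) or a in connections.get(b, [])

# Repositioning a node changes the diagram's readability, not its meaning.
layout["lamp"] = (100, 80)
assert is_connected("switch", "lamp")   # still connected after the move
```

Because `is_connected` never consults `layout`, the reader is free to rearrange nodes to group related components or align signal flow left to right - exactly the kind of secondary notation Petre observed.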
Circuit connectivity diagrams have been most widely popularised through the London Underground diagram, an invention of electrical engineer Henry Beck. The diagram clarified earlier maps by exploiting the fact that most underground travellers are only interested in order and connectivity, not location, of the stations on the line. (Sadly, the widespread belief that a 'diagram' will be technical and hard to understand means that most people describe this as the London Underground 'map', despite Beck's insistence on his original term).
Node-and-link diagrams are still widely perceived as being too technical for broad acceptance. Nevertheless, they can present information about ordering and relationships clearly, especially if consideration is given to the value of allowing human users to specify positions.
5.6 Icons and symbols
Maps frequently use symbols to indicate specific kinds of landmark. Sometimes these are recognisably pictorial (the standard symbols for tree and church), but others are fairly arbitrary conventions (the symbol for a railway station). As the resolution of computer displays increased in the 1970s, a greater variety of symbols could be differentiated, by making them more detailed, as in the MIT SDMS (Spatial Data Management System) that mapped a naval battle scenario with symbols for different kinds of ship. However, the dividing line between pictures and symbols is ambiguous. Children's drawings of houses often use conventional symbols (door, four windows, triangle roof and chimney) whether or not their own house has two storeys, or a fireplace. Letters of the Latin alphabet are shapes with completely arbitrary relationship to their phonetic meaning, but the Korean phonetic alphabet is easier to learn because the forms mimic the shape of the mouth when pronouncing those sounds. The field of semiotics offers sophisticated ways of analysing the basis on which marks correspond to meanings. In most cases, the best approach for an interaction designer is simply to adopt familiar conventions. When these do not exist, the design task is more challenging.
It is unclear which of the designers working on the Xerox Star coined the term 'icon' for the small pictures symbolising different kinds of system object. David Canfield Smith winningly described them as being like religious icons, which he said were pictures standing for (abstract) spiritual concepts. But 'icon' is also used as a technical term in semiotics. Unfortunately, few of the Xerox team had a sophisticated understanding of semiotics. It was fine art PhD Susan Kare's design work on the Apple Macintosh that established a visual vocabulary which has informed the genre ever since. Some general advice is offered by authors such as Horton (1994), but the successful design of icons is still sporadic. Many software publishers simply opt for a memorable brand logo, while others seriously misjudge the kinds of correspondence that are appropriate (my favourite blooper was a software engineering tool in which a pile of coins was used to access the 'change' command).
It has been suggested that icons, being pictorial, are easier to understand than text, and that pre-literate children, or speakers of different languages, might thereby be able to use computers without being able to read. In practice, most icons simply add decoration to text labels, and those that are intended to be self-explanatory must be supported with textual tooltips. The early Macintosh icons, despite their elegance, were surprisingly open to misinterpretation. One PhD graduate of my acquaintance believed that the Macintosh folder symbol was a briefcase (the folder tag looked like a handle), which would allow her to carry her files from place to place once they were placed inside it. Although mistaken, this belief never caused her any trouble - any correspondence can work, so long as it is applied consistently.
The design of simple and memorable visual symbols is a sophisticated graphic design skill. Following established conventions is the easiest option, but new symbols must be designed with an awareness of what sort of correspondence is intended - pictorial, symbolic, metonymic (e.g. a key to represent locking), bizarrely mnemonic, but probably not monolingual puns.
5.7 Visual metaphor
The ambitious graphic designs of the Xerox Star/Alto and Apple Lisa/Macintosh were the first mass-market visual interfaces. They were marketed to office professionals, making the 'cover story' that they resembled an office desktop a convenient explanatory device. Of course, as was frequently noted at the time, these interfaces behaved nothing like a real desktop. The mnemonic symbol for file deletion (a wastebasket) was ridiculous if interpreted as an object placed on a desk. And nobody could explain why the desk had windows in it (the name was derived from the 'clipping window' of the graphics architecture used to implement them - it was at some later point that they began to be explained as resembling sheets of paper on a desk). There were immediate complaints from luminaries such as Alan Kay and Ted Nelson that strict analogical correspondence to physical objects would become obstructive rather than instructive. Nevertheless, for many years the marketing story behind the desktop metaphor was taken seriously, despite the fact that all attempts to improve the Macintosh design with more elaborate visual analogies, as in General Magic and Microsoft Bob, subsequently failed.
The 'desktop' can be far more profitably analysed (and extended) by understanding the representational conventions that it uses. The size and position of icons and windows on the desktop have no meaning, they are not connected, and there is no visual perspective, so it is neither a map, graph nor picture. The real value is the extent to which it allows secondary notation, with the user creating her own meaning by arranging items as she wishes. Window borders separate areas of the screen into different pictorial, text or symbolic contexts as in the typographic page design of a textbook or magazine. Icons use a large variety of conventions to indicate symbolic correspondence to software operations and/or company brands, but they are only occasionally or incidentally organised into more complex semiotic structures.
Theories of visual representation, rather than theories of visual metaphor, are the best approach to explaining the conventional Macintosh/Windows 'desktop'. There is huge room for improvement.
5.8 Unified theories of visual representation
The analysis in this article has addressed the most important principles of visual representation for screen design, introduced with examples from the early history of graphical user interfaces. In most cases, these principles have been developed and elaborated within whole fields of study and professional skill - typography, cartography, engineering and architectural draughting, art criticism and semiotics. Improving on the current conventions requires serious skill and understanding. Nevertheless, interaction designers should be able, when necessary, to invent new visual representations.
One approach is to take a holistic perspective on visual language, information design, notations, or diagrams. Specialist research communities in these fields address many relevant factors from low-level visual perception to critique of visual culture. Across all of them, it can be necessary to ignore (or not be distracted by) technical and marketing claims, and to remember that all visual representations simply comprise marks on a surface that are intended to correspond to things understood by the reader. The two dimensions of the surface can be made to correspond to physical space (in a map), to dimensions of an object, to a pictorial perspective, or to continuous abstract scales (time or quantity). The surface can also be partitioned into regions that should be interpreted differently. Within any region, elements can be aligned, grouped, connected or contained in order to express their relationships. In each case, the correspondence between that arrangement, and the intended interpretation, must be understood by convention, explained, or derived from the structural and perceptual properties of marks on the plane. Finally, any individual element might be assigned meaning according to many different semiotic principles of correspondence.
The following table summarises holistic views, as introduced above, drawing principally on the work of Bertin, Richards, MacEachren, Blackwell & Engelhardt and Engelhardt.
[Table not reproduced here. Its recoverable fragments summarise modes of correspondence: literal (visual imitation of physical features); mark position and category identification (shape, texture, colour); texts and symbolic calculi; and identifying shared membership.]
As an example of how one might analyse (or working backwards, design) a complex visual representation, consider the case of musical scores. These consist of marks on a paper surface, bound into a multi-page book, that is placed on a stand at arm's length in front of a performer. Each page is vertically divided into a number of regions, visually separated by white space and grid alignment cues. The regions are ordered, with that at the top of the page coming first. Each region contains two quantitative axes, with the horizontal axis representing time duration, and the vertical axis pitch. The vertical axis is segmented by lines to categorise pitch class. Symbols placed at a given x-y location indicate a specific pitched sound to be initiated at a specific time. A conventional symbol set indicates the duration of the sound. None of the elements use any variation in colour, saturation or texture. A wide variety of text labels and annotation symbols are used to elaborate these basic elements. Music can be, and is, also expressed using many other visual representations (see e.g. Duignan for a survey of representations used in digital music processing).
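The core mapping described above - horizontal position for time, vertical position for pitch - can be expressed as a small sketch. The constants and names below are invented for illustration, not drawn from any real notation software:

```python
# Quantitative axes of a musical score region: time runs rightward,
# pitch runs upward. A note symbol's position encodes both values.

PAGE_LEFT = 50        # x origin of the region (arbitrary units)
STAFF_BASE_Y = 200    # y of the lowest reference pitch
X_PER_BEAT = 40       # horizontal distance per unit of time
Y_PER_SEMITONE = 5    # vertical distance per pitch step

def mark_position(onset_beats, semitones_above_base):
    """Place a note symbol: time -> x, pitch -> y (up the page)."""
    x = PAGE_LEFT + onset_beats * X_PER_BEAT
    y = STAFF_BASE_Y - semitones_above_base * Y_PER_SEMITONE
    return (x, y)

# A later note sits further right; a higher note sits higher up the page.
assert mark_position(0, 0) == (50, 200)
assert mark_position(2, 4) == (130, 180)
```

Duration, by contrast, is carried by the conventional symbol set (note heads, stems, flags) rather than by position - a reminder that a single representation can mix positional and symbolic correspondence.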
5.9 Where to learn more
The historical examples of early computer representations used in this article are mainly drawn from Sutherland (Ed. Blackwell and Rodden 2003), Garland (1994), and Blackwell (2006). Historical reviews of visual representation in other fields include Ferguson (1992), Pérez-Gómez and Pelletier (1997), McCloud (1993), Tufte (1983). Reviews of human perceptual principles can be found in Gregory (1970), Ittelson (1996), Ware (2004), Blackwell (2002). Advice on principles of interaction with visual representation is distributed throughout the HCI literature, but classics include Norman (1988), Horton (1994), Shneiderman (Shneiderman and Plaisant 2009; Card et al. 1999; Bederson and Shneiderman 2003) and Spence (2001). Green's Cognitive Dimensions of Notations framework has for many years provided a systematic classification of the design parameters in interactive visual representations. A brief introduction is provided in Blackwell and Green (2003).
Research on visual representation topics is regularly presented at the Diagrams conference series (which has a particular emphasis on cognitive science), the InfoDesign and Vision Plus conferences (which emphasise graphic and typographic information design), the Visual Languages and Human-Centric Computing symposia (emphasising software tools and development), and the InfoVis and Information Visualisation conferences (emphasising quantitative and scientific data visualisation).
IV - International Conference on Information Visualization: 2002-2008
DIAGRAMS - International Conference on the Theory and Application of Diagrams: 2000, 2002, 2004, 2006, 2008
VL-HCC - Symposium on Visual Languages and Human Centric Computing: 2003-2008
InfoVis - IEEE Symposium on Information Visualization: 1999-2005
5.10 Commentary by Ben Shneiderman
Since computer displays are such powerful visual appliances, careful designers devote extensive effort to getting the visual representation right. They have to balance the demands of many tasks, diverse users, and challenging requirements, such as short learning time, rapid performance, low error rates, and good retention over time. Designing esthetic interfaces that please and even delight users is a further expectation that designers must meet to be successful. For playful and discretionary tasks esthetic concerns may dominate, but for life-critical tasks, rapid performance with low error rates is essential.
Alan Blackwell's competent description of many visual representation issues is a great start for newcomers with helpful reminders even for experienced designers. The videos make for a pleasant personal accompaniment that bridges visual representation for interface design with thoughtful analyses of representational art. Blackwell's approach might be enriched by more discussion of visual representations in functional product design tied to meaningful tasks. Learning from paintings of Paris is fine, but aren't there other lessons to learn from visual representations in airport kiosks, automobile dashboards, or intensive care units?
These devices as well as most graphical user interfaces and mobile devices raise additional questions of changing state visualization and interaction dynamics. Modern designers need to do more than show the right phone icon; they need to show ringing, busy, inactive, no network, conference mode, etc., which may include color changes (highlighted, grayed out), animations, and accompanying sounds. These designers also need to deal with interactive visual representations that happen with a click, double-click, right-click, drag, drag-and-drop, hover, multi-select, region-select, brushing-linking, and more.
The world of mobile devices such as phones, cameras, music players, or medical sensors is the new frontier for design, where visual representations are dynamic and tightly integrated with sound, haptics, and novel actions such as shaking, twisting, or body movements. Even more challenging is the expectation that goes beyond the solitary viewer to the collaboration in which multiple users embedded in a changing physical environment produce new visual representations.
These changing and interactive demands on designers invite creative expressions that are very different from designs for static signs, printed diagrams, or interpretive art. The adventure for visual representation designers is to create a new language of interaction that engages users, accelerates learning, provides comprehensible feedback, and offers appropriate warnings when dangers emerge. Blackwell touches on some of these issues in the closing Gapminder example, but I was thirsty for more.
5.11 Commentary by Clive Richards
If I may be permitted a graphically inspired metaphor, Alan Blackwell provides us with a neat pen sketch of that extensive scene called 'visual representation' (Blackwell 2011).
"Visualisation has a lot more to offer than most people are aware of today" we are told by Robert Kosara at the end of his commentary (Kosara 2010) on Stephen Few's related article on 'Data visualisation for human perception' (Few 2010). Kosara is right, and Blackwell maps out the broad territory in which many of these visualisation offerings may be located. In this commentary I offer a few observations on some prominent features in that landscape: dynamics, picturing, semiotics and metaphor.
Ben Shneiderman's critique of Blackwell's piece points to a lack of attention to "... additional questions of changing state visualisations and interaction dynamics" (Shneiderman 2010). Indeed the possibilities offered by these additional questions present some exciting challenges for interaction designers - opportunities to create novel and effective combinations of visual with other sensory and motor experiences in dynamic operational contexts. Shneiderman suggests that: "These changing and interactive demands on designers invite creative expressions that are very different from designs for static signs, printed diagrams, or interpretive art". This may be so up to a point, but here Shneiderman and I part company a little. The focus of Blackwell's essay is properly on the visual representation side of facilities available to interaction designers, and in that context he is quite right to give prominence to highly successful but static visual representation precedents, and also to point out the various specialist fields of endeavour in which they have been developed. Some of these representational approaches have histories reaching back thousands of years and are deeply embedded within our culture. It would be foolhardy to disregard conventions established in, say, the print domain, and to try to re-invent everything afresh for the screen, even if this were a practical proposition. Others have made arguments to support looking to historical precedents. For example Michael Twyman has pointed out that when considering typographic cueing and "... the problems of the electronic age ... we have much to learn from the manuscript age" (Twyman 1987, p5). He proposes that studying the early scribes' use of colour, spacing and other graphical devices can usefully inform the design of today's screen-based texts. And as Blackwell points out in his opening section on 'Typography and text', "most information on computer screens is still represented as text".
It is also sometimes assumed that the pictorial representation of a dynamic process is best presented dynamically. However it can be argued that the comic book convention of using a sequence of static frames is sometimes superior for focusing the viewer's attention on the critical events in a process, rather than using an animated sequence in which key moments may be missed. This is of course not to deny the immense value of the moving and interactive visual image in the right context. The Gapminder charts are a case in point (http://www.gapminder.org). Blackwell usefully includes one of these, but as a static presentation. These diagrams come to life and really tell their story through the clustering of balloons that inflate or deflate as they move about the screen when driven through simulated periods of time.
While designing a tool for engineers to learn about the operation and maintenance of an oil system for an aircraft jet engine, Detlev Fischer devised a series of interactive animations, called 'Cinegrams', to display in diagrammatic form various operating procedures (Fischer and Richards 1995). He used the cinematic techniques of time compression and expansion in one animated sequence to show how the slow accumulation of debris in an oil filter, over an extended period of time, would eventually create a blockage to the oil flow and trigger the opening of a by-pass device within a split second. Notwithstanding my earlier comment about the potential superiority of the comic strip genre for displaying some time-dependent processes, this particular Cinegram proved very instructive for the targeted users. There are many other examples one could cite where dynamic picturing of this sort has been deployed to similarly good effect in interactive environments.
Shneiderman also comments that: "Blackwell's approach might be enriched by more discussion of visual representations in functional product design tied to meaningful tasks". An area I have worked in is the pictorial representation of engineering assemblies to show that which is normally hidden from view. Techniques to do this on the printed page include 'ghosting' (making occluding parts appear as if transparent), 'exploding' (showing components separately, set out in dis-assembly order along an axis) and cutting away (taking a slice out of an outer shell to reveal mechanisms beneath). All these three-dimensional picturing techniques were used by, if not actually invented by, Leonardo Da Vinci (Richards 2006). All could be enhanced by interactive viewer control - an area of further fruitful exploration for picturing purposes in technical documentation contexts.
Blackwell's section on 'Pictures' warns us, when considering picturing options, to avoid the "resemblance fallacy", pointing out the role that convention plays even in so-called photo-realistic images. He also points out that viewers can be distracted from the message by incidental information in 'realistic' pictures. From my own work in the field I know that technical illustrators' synoptic black and white outline depictions are regarded as best for drawing the viewer's attention to the key features of a pictorial representation. Research in this area has shown that when using linear perspective type drawings the appropriate deployment of lines of varying 'weight', rather than of a single thickness, can have a significant effect on viewers' levels of understanding about what is depicted (Richards, Bussard and Newman 2007). This work was done specifically to determine an 'easy to read' visual representational style when manipulating on the screen images of CAD objects. The most effective convention was shown to be: thin lines for edges where both planes forming the edge are visible and thicker lines for edges where only one plane is visible - that is where an outline edge forms a kind of horizon to the object.
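The line-weight convention reported in that research can be sketched as a simple classification rule. This is a hypothetical illustration, not the published implementation: an edge is drawn thin when both of its adjacent faces are visible, and thick when it forms a silhouette, i.e. only one face is visible.

```python
# Classify an edge of a 3D object by the visibility of its two adjacent
# faces, to choose a line weight in the style of the CAD-rendering
# convention described above.

def facing_viewer(normal, view_dir=(0, 0, -1)):
    """A face is visible when its outward normal points against the view direction."""
    dot = sum(n * v for n, v in zip(normal, view_dir))
    return dot < 0

def line_weight(face_a_normal, face_b_normal):
    visible = [facing_viewer(face_a_normal), facing_viewer(face_b_normal)]
    if all(visible):
        return "thin"    # interior crease: both planes visible
    if any(visible):
        return "thick"   # silhouette ('horizon') edge: only one plane visible
    return "hidden"      # neither face visible: edge not drawn at all

# A front/top edge of a cube seen from the front is a silhouette (thick);
# a crease between two front-facing planes is thin.
assert line_weight((0, 0, 1), (0, 1, 0)) == "thick"
assert line_weight((0, 0, 1), (0.7, 0, 0.7)) == "thin"
```

The classification depends only on the viewing direction, so an interactive viewer would re-run it as the object rotates - the "horizon" edges migrate around the object, just as a silhouette does.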
These line-thickness conventions appear, on the face of it, to have little to do with how we normally perceive the world, and Blackwell tells us that: "A good pictorial representation need not simulate visual experience any more than a good painting of a unicorn need resemble an actual unicorn". Yet some particular representations of unicorns can aid our understanding of how semiotic theory may be used to work out how pictures are interpreted and, importantly, sometimes misunderstood - as I shall now describe.
Blackwell mentions semiotics almost in passing, yet it can help unravel some of the complexities of visual representation. Evelyn Goldsmith uses a Charles Addams cartoon to explain the relevance of the 'syntactic', 'semantic' and 'pragmatic' levels of semiotic analysis when applied to pictures (Goldsmith 1978). The cartoon in question, like many of those by Charles Addams, has no caption. It shows two unicorns standing on a small island in the pouring rain, forlornly watching the Ark sailing away into the distance. Goldsmith suggests that most viewers will have little trouble interpreting the overlapping elements in the scene - for example, that one unicorn is standing behind the other - or understanding that the texture gradient of the sea stands for a receding horizontal plane. These represent the syntactic level of interpretation. Most adults will correctly identify the various components of the picture at the semantic level, though Goldsmith proposes that a young child might mistake the unicorns for horses and be happy with 'boat' for the Ark. But at the pragmatic level of interpretation, unless a viewer of the picture is aware of the story of Noah's Ark, the joke will be lost - the connection will not be made between the scene depicted in the drawing and the scarcity of unicorns. This reinforces the point that one should not assume that the understanding of pictures is straightforward. There is much more to it than a simple matter of recognition. This is especially the case when metaphor is involved in visual representation.
Blackwell's section on 'Visual metaphor' is essentially a critique of the use of "theories of visual metaphor" as an "approach to explaining the conventional Macintosh/Windows 'desktop' ". His is a convincing argument, but there is much more that may be said about the use of visual metaphor - especially to show that which otherwise cannot be pictured. In fact, most diagrams employ a kind of spatial metaphor when not depicting physical arrangements, for example when using the branches of a tree to represent relations within a family (Richards 2002). This capability to represent the invisible is the great strength of the visual metaphor, but there are dangers, and here I refer back to semiotics, and particularly to the pragmatic level of analysis: one needs to know the story to get the picture.
In our parental home, one of the many books much loved by my two brothers and me was The Practical Encyclopaedia for Children (Odhams circa 1948). In it, a double-page-spread illustration shows the possible evolutionary phases of the elephant. These are depicted as a procession of animals in a primordial swamp-cum-jungle setting. Starting with a tiny fish, and passing to a small aquatic creature climbing out of the water onto the bank, the procession progresses through eight phases of transformation, including the Moeritherium and the Palaeomastodon, finishing with the land-based giant of today's African elephant. Recently one of my brothers confessed to me that, through studying this graphical diorama as a child, he had believed that the elephant had a life cycle akin to that of a frog. He had correctly understood that the procession was a metaphor for time; he had just got the duration wrong - by several orders of magnitude. He had not understood that each separate depiction was of a different animal, and had instead applied the arguably more sophisticated concept that it was the same animal at different times and stages in its individual development.
Please forgive the cliché if I say that this anecdote clearly illustrates that there can be more to looking at a picture than meets the eye. Blackwell's essay provides some useful pointers for exploring this fascinating territory of picturing, and of visual representation in general.
- Blackwell A 2011 'Visual representation' Interaction-Design.org
- Few S 2010 'Data visualisation for human perception' Interaction-Design.org
- Fischer D and Richards CJ 1995 'The presentation of time in interactive animated systems diagrams' In: Earnshaw RA and Vince JA (eds) Multimedia Systems and Applications London: Academic Press Ltd (pp 141-159) ISBN 0-12-227740-6
- Goldsmith E 1978 An analysis of the elements affecting comprehensibility of illustrations intended as supportive of text PhD thesis (CNAA) Brighton Polytechnic
- Korsa R 2010 'Commentary on Stephen Few's article: Data visualisation for human perception' Interaction-Design.org
- Odhams c. 1949 The practical encyclopaedia for children (pp 194-195)
- Richards CJ 2002 'The fundamental design variables of diagramming' In: Olivier P, Anderson M and Meyer B (eds) Diagrammatic representation and reasoning London: Springer-Verlag (pp 85-102) ISBN 1-85233-242-5
- Richards CJ 2006 'Drawing out information - lines of communication in technical illustration' Information Design Journal 14 (2) 93-107
- Richards CJ, Bussard N and Newman R 2007 'Weighing up line weights: the value of differing line thicknesses in technical illustrations' Information Design Journal 15 (2) 171-181
- Shneiderman B 2011 'Commentary on Alan Blackwell's article: Visual representation' Interaction-Design.org
- Twyman M 1982 'The graphic representation of language' Information Design Journal 3 (1) 2-22
5.12 Commentary by Peter C-H. Cheng
Alan Blackwell has provided us with a fine introduction to the design of visual representations. The article does a great job of motivating the novice designer of visual representations to explore some of the fundamental issues that lurk just beneath the surface of creating effective representations. Furthermore, he gives us all quite a challenge.
Alan, quite rightly, claims that we must consider the fundamental principles of symbolic correspondence if we are to design new genres of visual representations beyond the common forms of displays and interfaces. The article begins to equip the novice visual representation designer with an understanding of the nature of symbolic correspondence between the components of visual representations and the things they represent, whether objects, actions or ideas. In particular, it gives a useful survey of how correspondence works in a range of representations and provides a systematic framework for how systems of correspondence can be applied to design. The interactive screenshot is an exemplary visual representation that vividly reveals the correspondence techniques used in each part of the example diagram.
However, suppose you really wished to rise to the challenge of creating novel visual representations: how far will a knowledge of the fundamentals of symbolic correspondence take you? Drawing on my studies of the role of diagrams in the history of science, my experience of inventing novel visual representations, and research on problem solving and learning with diagrams, my view from the perspective of cognitive science is that such knowledge will be necessary but not sufficient for your endeavours. So what else should the budding visual representation designer consider? There are at least three aspects that we may profitably target.
First, there is knowledge of how humans process information; specifically, the nature of the human cognitive architecture. By this I mean more than visual perception: an understanding of how we mentally receive, store, retrieve, transform and transmit information. The way the mind deals with each of these basic types of information processing provides relevant constraints for the design of visual representations. For instance, humans often, perhaps even typically, encode concepts in the form of hierarchies of schemas - information structures that coordinate the attributes that describe and differentiate classes of concepts. These hierarchies of schemas underpin our ability to efficiently generalize or specialize concepts. Hence, we can use this knowledge to consider whether particular forms of symbolic correspondence will assist or hinder the inferences that we hope the user of the representation will make. For example, are the main symbolic correspondences in a visual representation consistent with the key attributes of the schemas for the concepts being considered?
Second, it may be useful for the designer to consider the broader nature of the tasks that the user may wish to perform with the designed representation. Resource allocation, optimization, calculating quantities, inference about possible outcomes, classification, reasoning about extreme or special cases, and debugging: these are just a few of the many possibilities. These tasks are more generic than the information-oriented options considered in the 'design uses' column of Figure 27 in the article. They are worth addressing because they constrain the initial stages of representation design, narrowing the search for the correspondences that are likely to be effective. For example, if taxonomic classification is important, then separation and layering will be important correspondences; whereas magnitude calculations may demand scale mapping and Euclidean or metrical correspondences.
The third aspect concerns situations in which the visual representation must support not just a single task but many diverse tasks. For example, a visual representation to help students learn about electricity will be used to explain the topology of circuits, make computations with electrical quantities, provide explanations of circuit behaviour (in terms both of formal algebraic models and of qualitative causal models), and facilitate fault finding or troubleshooting, among other activities. The creation of novel representations in such circumstances is perhaps one of the most challenging for designers. So what knowledge can help? In this case, I advocate attempting to design representations on the basis of an analysis of the underlying conceptual structure of the knowledge of the target domain. Why? Because the nature of the knowledge is invariant across different classes of task. For example, for problem solving and learning about electricity, all the tasks depend upon the common fundamental conceptual structures of the domain, which knit together the laws governing the physical properties of electricity and circuit topology. Hence, a representation that makes these concepts readily available through effective design will probably be effective for a wide range of tasks.
In summary, it is desirable for the aspiring visual representation designer to consider symbolic correspondence, but I recommend they cast their net more widely for inspiration: learning about the human cognitive architecture, focusing on the nature of the task for which they are designing, and, most critically, thinking about the underlying conceptual structure of the knowledge of the target domain.
5.13 Commentary by Brad A. Myers
I have been teaching human-computer interaction to students with a wide range of backgrounds for many years. One of the most difficult areas for them to learn seems to be visual design. Students quickly pick up rules like Nielsen's heuristics for interaction (Nielsen & Molich, 1990), whereas the guidelines for visual design are much more subtle. Alan Blackwell's article presents many useful points, but a designer needs to know so much more! Whereas students can achieve competence with Nielsen's "consistency and standards", for example, they struggle with selecting an appropriate representation for their information. And only a trained graphic designer is likely to be able to create an attractive and effective icon. Some people have a much better aesthetic sense, and can create much more beautiful and appropriate representations. A key goal of my introductory course, therefore, is to impart to the students how difficult visual design is, and how wide the set of choices is. Studying the examples that Blackwell provides will give the reader a small start towards effective visual representations, but the path requires talent, study, and then iterative design and testing to evaluate and improve a design's success.
- Sutherland, Ivan E. (1963). Sketchpad, A Man-Machine Graphical Communication System. PhD Thesis at Massachusetts Insti. Cambridge University Computer Laboratory
- Spence, Robert (2001): Information Visualization, Addison Wesley
- Richards, Clive (1984). Diagrammatics: an investigation aimed at providing a theoretical framework for studying di. Royal College of Art, London, UK
- McCloud, Scott (1994): Understanding Comics: The Invisible Art, Harper Paperbacks
- MacEachren, Alan M. (2004): How Maps Work: Representation, Visualization, and Design, The Guilford Press
- Goodman, Nelson (1976): Languages of Art, Hackett Publishing Company
- Engelhardt, Yuri (2002). The Language of Graphics. A framework for the analysis of syntax and meaning in maps, char. University of Amsterdam
- Blackwell, Alan (2002): Psychological perspectives on diagrams and their users. In: "Diagrammatic Representation and Reasoning"
- Blackwell, Alan and Engelhardt, Yuri (2002): A Meta-Taxonomy for Diagram Research. In: "Diagrammatic Representation and Reasoning"
- Anderson, Michael, Meyer, Bernd and Olivier, Patrick (2002): Diagrammatic Representation and Reasoning, Springer-Verlag
- Bertin, Jacques (1967): Semiology of Graphics: Diagrams, Networks, Maps (Sémiologie graphique: Les diagrammes - Les réseaux - Les cartes). English translation by W. J. Berg, University of Wisconsin Press
- Blackwell, Alan (2006): The reification of metaphor as a design tool. In ACM Transactions on Computer-Human Interaction, 13 (4) pp. 490-530. http://doi.acm.org/10.1145/1188816.1188820
- Blackwell, Alan and Green, T. R. G. (2003): Notational Systems - The Cognitive Dimensions of Notations Framework. In: Carroll, John M. (ed.) "HCI Models, Theories, and Frameworks", Morgan Kaufmann
- Tufte, Edward R. (1983): The Visual Display of Quantitative Information, Graphics Press
- Bederson, Benjamin B. and Shneiderman, Ben (2003): The Craft of Information Visualization: Readings and Reflections, Morgan Kaufmann
- Ferguson, Eugene S. (1994): Engineering and the Mind's Eye, MIT Press
- Garland, Ken (1994): Mr. Beck's Underground Map, Capital Transport Publishing
- Gregory, Richard L. (1970): The Intelligent Eye, Weidenfeld and Nicolson
- Ittelson, W. H. (1996): Visual perception of markings. In Psychonomic Bulletin & Review, 3 (2) pp. 171-187
- Norman, Donald A. (1988): The Design of Everyday Things, Doubleday
- Petre, Marian (1995): Why Looking Isn't Always Seeing: Readership Skills and Graphical Programming. In Communications of the ACM, 38 (6) pp. 33-44
- Pérez-Gómez, Alberto and Pelletier, Louise (1997): Architectural Representation and the Perspective Hinge, MIT Press
- Sellen, Abigail and Harper, Richard H. R. (2001): The Myth of the Paperless Office, MIT Press
- Shneiderman, Ben and Plaisant, Catherine (2009): Designing the User Interface: Strategies for Effective Human-Computer Interaction (5th ed.), Addison-Wesley
- Ware, Colin (2004): Information Visualization: Perception for Design (2nd ed.), Morgan Kaufmann
- Horton, William (1994): The Icon Book: Visual Symbols for Computer Systems and Documentation, John Wiley and Sons
- Resnick, Elizabeth (2003): Design for Communication: Conceptual Graphic Design Basics, Wiley
- Napoles, Veronica (1987): Corporate Identity Design
- Micklewright, Keith (2005): Drawing: Mastering the Language of Visual Expression, Harry N. Abrams
- Lowe, Ric (1992): Successful Instructional Diagrams
- Carroll, John M. and Mazur, Sandra A. (1986): LisaLearning. In Computer, 19 (11) pp. 35-49