Publication statistics

Pub. period: 1987-2011
Pub. count: 23
Number of co-authors: 20



Co-authors

Number of publications with 3 favourite co-authors:

Roy A. Ruddle: 9
Stephen J. Payne: 3
Philip Tucker: 3

 

 

Productive colleagues

Dylan M. Jones's 3 most productive colleagues in number of publications:

Stephen J. Payne: 47
Roy A. Ruddle: 27
Andrew Howes: 26
 
 
 

Dylan M. Jones

Has also published under the name of:
"D. M. Jones"

Personal Homepage:
psych.cf.ac.uk/contactsandpeople/academics/jonesdylan.html



Publications by Dylan M. Jones (bibliography)

2011
 

Vachon, François, Tremblay, Sebastien, Nicholls, Alastair P. and Jones, Dylan M. (2011): Exploiting the Auditory Modality in Decision Support: Beneficial "Warning" Effects and Unavoidable Costs. In: Proceedings of the Human Factors and Ergonomics Society 55th Annual Meeting 2011. pp. 1402-1406.

The rate at which technology continues to develop and permeate our lives is such that it has become increasingly easy, and thus more likely, for information to be presented to us via different modalities simultaneously. But to what extent does this confluence of information affect our subsequent judgment and performance? Furthermore, what are the implications for system design when this information is critical to saving our lives and those of others? This study uses a visual 'microworld' simulation of naval anti-air warfare to investigate whether the content and priority of audio messages that accompany changes in the visual modality assist or hinder performance in the task (identification of change and evaluation of threats). Results indicate that although an auditory critical warning helps critical change detection, it is not as efficient as its visual counterpart. Moreover, audio messages tended to bias threat evaluation towards perceiving objects as more hostile than they were in reality. Such findings have clear implications regarding the costs and benefits of further exploiting the auditory modality in dynamic visual environments.

© All rights reserved Vachon et al. and/or HFES

2007
 

Hodgetts, Helen M. and Jones, Dylan M. (2007): Reminders, Alerts and Pop-ups: The Cost of Computer-Initiated Interruptions. In: Jacko, Julie A. (ed.) HCI International 2007 - 12th International Conference - Part I July 22-27, 2007, Beijing, China. pp. 818-826.

2003
 

Ruddle, Roy A., Savage, Justin C. D. and Jones, Dylan M. (2003): Levels of Control during a Collaborative Carrying Task. In Presence: Teleoperators and Virtual Environments, 12 (2) pp. 149-155.

2002
 

Ruddle, Roy A., Savage, Justin C.D. and Jones, Dylan M. (2002): Symmetric and asymmetric action integration during cooperative object manipulation in virtual environments. In Interactions, 9 (6) pp. 9-10.

 

Ruddle, Roy A., Savage, Justin C.D. and Jones, Dylan M. (2002): Symmetric and asymmetric action integration during cooperative object manipulation in virtual environments. In ACM Transactions on Computer-Human Interaction, 9 (4) pp. 285-308.

Cooperation between multiple users in a virtual environment (VE) can take place at one of three levels. These are defined as where users can perceive each other (Level 1), individually change the scene (Level 2), or simultaneously act on and manipulate the same object (Level 3). Despite representing the highest level of cooperation, multiuser object manipulation has rarely been studied. This paper describes a behavioral experiment in which the piano movers' problem (maneuvering a large object through a restricted space) was used to investigate object manipulation by pairs of participants in a VE. Participants' interactions with the object were integrated together either symmetrically or asymmetrically. The former only allowed the common component of participants' actions to take place, but the latter used the mean. Symmetric action integration was superior for sections of the task when both participants had to perform similar actions, but if participants had to move in different ways (e.g., one maneuvering him/herself through a narrow opening while the other traveled down a wide corridor) then asymmetric integration was superior. With both forms of integration, the extent to which participants coordinated their actions was poor and this led to a substantial cooperation overhead (the reduction in performance caused by having to cooperate with another person).

© All rights reserved Ruddle et al. and/or ACM Press
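The symmetric and asymmetric integration rules are described only verbally in the abstract (symmetric applies "only the common component" of the two users' actions; asymmetric "used the mean"). As a rough illustration, here is one plausible axis-wise reading of those rules — a hypothetical sketch, not the paper's implementation:

```python
# Hypothetical sketch of the two action-integration rules described above.
# Each user's action is a per-axis displacement applied to the shared object.
# The exact formulae are not given in the abstract; this is one plausible reading.

def symmetric_integration(a, b):
    """Keep only the common component: per axis, the object moves only when
    both users push in the same direction, by the smaller of the two magnitudes."""
    out = []
    for x, y in zip(a, b):
        if x * y > 0:  # both push the same way on this axis
            out.append(min(abs(x), abs(y)) * (1.0 if x > 0 else -1.0))
        else:
            out.append(0.0)  # conflicting or absent input: no movement
    return out

def asymmetric_integration(a, b):
    """Use the mean of the two actions, so differing inputs still move the object."""
    return [(x + y) / 2.0 for x, y in zip(a, b)]
```

Under the mean (asymmetric) rule, one user's distinct action still moves the object at half strength, which fits the abstract's finding that asymmetric integration suits phases where the two participants must act differently.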

 

Ruddle, Roy A., Savage, Justin C. D. and Jones, Dylan M. (2002): Evaluating Rules of Interaction for Object Manipulation in Cluttered Virtual Environments. In Presence: Teleoperators and Virtual Environments, 11 (6) pp. 591-609.

 

Ruddle, Roy A., Savage, Justin C. D. and Jones, Dylan M. (2002): Implementing flexible rules of interaction for object manipulation in cluttered virtual environments. In: Shi, Jiaoying, Hodges, Larry F., Sun, Hanqiu and Peng, Qunsheng (eds.) VRST 2002- Proceedings of the ACM Symposium on Virtual Reality Software and Technology November 11-13, 2002, Hong Kong, China. pp. 89-96.

2001
 

Ruddle, Roy A. and Jones, Dylan M. (2001): Movement in Cluttered Virtual Environments. In Presence: Teleoperators and Virtual Environments, 10 (5) pp. 511-524.

2000
 

Ruddle, Roy A., Howes, Andrew, Payne, Stephen J. and Jones, Dylan M. (2000): The Effects of Hyperlinks on Navigation in Virtual Environments. In International Journal of Human-Computer Studies, 53 (4) pp. 551-581.

Hyperlinks introduce discontinuities of movement to 3-D virtual environments (VEs). Nine independent attributes of hyperlinks are defined and their likely effects on navigation in VEs are discussed. Four experiments are described in which participants repeatedly navigated VEs that were either conventional (i.e. obeyed the laws of Euclidean space), or contained hyperlinks. Participants learned spatial knowledge slowly in both types of environment, echoing the findings of previous studies that used conventional VEs. The detrimental effects on participants' spatial knowledge of using hyperlinks for movement were reduced when a time-delay was introduced, but participants still developed less accurate knowledge than they did in the conventional VEs. Visual continuity had a greater influence on participants' rate of learning than continuity of movement, and participants were able to exploit hyperlinks that connected together disparate regions of a VE to reduce travel time.

© All rights reserved Ruddle et al. and/or Academic Press

1999
 

Ruddle, Roy A., Payne, Stephen J. and Jones, Dylan M. (1999): Navigating Large-Scale Virtual Environments: What Differences Occur Between Helmet-Mounted and Desk-Top Displays?. In Presence: Teleoperators and Virtual Environments, 8 (2) pp. 157-168.

1998
 

Ruddle, Roy A., Payne, Stephen J. and Jones, Dylan M. (1998): Navigating Large-Scale 'Desk-Top' Virtual Buildings: Effects of Orientation Aids and Familiarity. In Presence: Teleoperators and Virtual Environments, 7 (2) pp. 179-192.

1996
 

Murray, A. C., Jones, Dylan M. and Frankish, C. R. (1996): Dialogue Design in Speech-Mediated Data-Entry: The Role of Syntactic Constraints and Feedback. In International Journal of Human-Computer Studies, 45 (3) pp. 263-286.

Human-computer interfaces which use speech as the medium for interaction present unique problems for human factors research, due to the fact that automatic speech recognition (ASR) technology is still error prone. The experiments described here address the design of ASR interfaces for data-entry tasks. Particular emphasis was placed on human factors, and users' data-entry performance was compared using not only quantitative measures of speed and accuracy but also more qualitative analyses of user-errors. Experiment 1 investigated the merits of using closed word-sets (syntax) to enhance recognition accuracy. Participants used a purely auditory interface (i.e. one with no visual component to it) programmed to exercise Full Syntactic constraints (FS), Partial Syntactic constraints (PS) or No Syntactic constraints (NS) on the set of words available for recognition at any given time in the data-entry dialogue. Comparisons of data-entry performance showed an advantage of syntax in terms of ASR performance, and when errors and their consequences were taken into account PS was shown to accommodate users' attempts at error-correction more readily than FS. Experiment 2 compared design options for visual prompts and feedback: a limited area of the visual display was dedicated to the provision of prompts and feedback supporting the spoken data-entry dialogue. Two styles of visual prompt were contrasted: Options Prompts (OP) which displayed the full set of current options for input, and Fieldname Prompts (FP) which displayed only the current Fieldname but could be expanded on command to include the relevant options. The results showed that overall OP led to more efficient performance than FP. The errors made by users in the absence of visual feedback were compared with those occurring when the visual component was included in the interface. Recommendations for design of ASR systems for data-entry tasks are made based on the experimental results.

© All rights reserved Murray et al. and/or Academic Press
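The three constraint conditions in Experiment 1 (FS, PS, NS) amount to different active vocabularies at each point in the data-entry dialogue. A minimal sketch of that idea follows; the field names, word sets, and the command words in the PS condition are illustrative assumptions, not taken from the paper:

```python
# Hypothetical data-entry fields; the actual task vocabularies are not in the abstract.
FIELDS = {
    "quantity": {"one", "two", "three", "four"},
    "colour": {"red", "green", "blue"},
}
FULL_VOCAB = set().union(*FIELDS.values())

def active_vocabulary(current_field, constraint):
    """Return the word set the recogniser matches against at this dialogue step.
    FS: only the current field's words; NS: the whole vocabulary;
    PS: assumed here to be the current field's words plus command words."""
    if constraint == "FS":
        return set(FIELDS[current_field])
    if constraint == "PS":
        return FIELDS[current_field] | {"correction", "repeat"}  # assumed commands
    return set(FULL_VOCAB)  # NS: no syntactic constraint
```

Shrinking the active word set raises recognition accuracy (fewer confusable candidates), but — as the abstract notes — a fully constrained syntax can reject the very utterances users produce when trying to correct errors.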

1993
 

Cowley, Christopher K. and Jones, Dylan M. (1993): Talking to Machines. In: Ashlund, Stacey, Mullet, Kevin, Henderson, Austin, Hollnagel, Erik and White, Ted (eds.) Proceedings of the ACM CHI 93 Human Factors in Computing Systems Conference April 24-29, 1993, Amsterdam, The Netherlands. p. 522.

The film shows how dialogue design and error correction strategies, informed by human factors research, can lead to the development of usable and profitable systems. It starts with a simulation of a truly conversational machine to show the level of performance necessary to compete with human recognition. Template matching recognition is clearly explained so that viewers can see how most devices actually work. The film then shows the Digital Equipment Corporation's DECvoice in a number of voice input and output scenarios which highlight typical design problems and solutions. It concludes with a set of guidelines which will help designers make reasoned decisions about when and how to use speech recognition and avoid the typical problems experienced by users. The film ends with an example of a system which, having been designed with the guidelines in mind, is usable, efficient, and practical within the constraints of contemporary technology.

© All rights reserved Cowley and Jones and/or ACM Press

 

Tucker, Philip and Jones, Dylan M. (1993): Document Annotation: To Write, Type or Speak?. In International Journal of Man-Machine Studies, 39 (6) pp. 885-900.

Although the visual display unit (VDU) is becoming an increasingly popular means of displaying documents, users often show a strong preference for the "hard-copy" medium of document presentation when it comes to reading activities such as those that involve proof-reading or refereeing the document. This is partly attributed to the difficulties of annotating documents presented in the electronic medium. Voice recording may be a more acceptable medium for annotating documents that are presented on VDUs, as it overcomes many of the problems associated with the typed annotation of electronic documents. Experiment 1 compared two computer-based annotation media (typed and spoken input) with the method of writing annotations on the document. Findings suggested that writing was a superior method of annotation to the other media in terms of number of annotations elicited, speed of recording and user preference. Experiment 2 differed from the first experiment in the way in which written annotations were recorded and in the amount of pre-trial practice given to subjects. In the second experiment voice resulted in shorter task completion times than either writing or typing. This is taken as limited support for a theory that a small amount of pre-trial practice is of greater benefit to the utility of a voice annotation facility than it is to a facility for typing annotations. The majority of differences between writing and the other conditions observed in Experiment 1 were not found in Experiment 2. The contrast between the two sets of results is discussed in terms of the subjects' familiarity with the methods of annotation involved and the advantages of a facility for annotating on the document. The discussion concludes with a set of guide-lines for the implementation of a voice annotation facility.

© All rights reserved Tucker and Jones and/or Academic Press

1992
 

Jones, Dylan M., Frankish, Clive and Hapeshi, Kevin (1992): Automatic Speech Recognition in Practice. In Behaviour and Information Technology, 11 (2) pp. 109-122.

There is a growing interest in the commercial possibilities offered by automatic speech recognition (ASR) technology. Unfortunately the prospective user has little independent guidance with respect to the potential success of any proposed implementation. There do exist a few general human factors guidelines on the use of ASR, but most of the corpus of knowledge that forms part of the lore within the ASR community is based on the unpublished experiences of system developers and users themselves. The present paper attempts to redress this balance; it is a summary of the experiences of users and system designers at 30 research and commercial sites in the UK and USA where ASR has been extensively used or tested. The applications represented were classified as vehicle, office, industrial, and aids for disabled people. A number of important human factors issues were identified, and the relative success of the various applications is discussed.

© All rights reserved Jones et al. and/or Taylor and Francis

1991
 

Tucker, Philip and Jones, Dylan M. (1991): Voice as Interface: An Overview. In International Journal of Human-Computer Interaction, 3 (2) pp. 145-170.

Speech, as a medium for communicating with computers, has several potential advantages, and its introduction may have far-reaching implications for the evolution of information technology. Yet, despite the availability of speech I/O devices over the past few years, their introduction into day-to-day use has been relatively slow. A human factors perspective helps to explain the predominance of the visual/manual interface, and the reasons for its preference over speech. This view reflects fundamental differences between the two modes of communication. However, human factors also suggest that much can be done to improve the usability of speech systems. Broad guidelines for the use of speech-output devices are detailed, and include criteria for the assignment of output either to the visual or auditory channel. The human factors of synthetic speech are discussed, in terms of enhancing both its intelligibility and its acceptability to users. Performance measures of automatic speech recognition (ASR) technology must account for the sensitivity of such devices to extraneous factors. A description of contemporary ASR technology serves as a basis for the explanation of this sensitivity, and of the limited performance of such devices. Guidelines for the implementation of ASR technology are described that address both user behavior and the interaction of the device with its environment. Modality effects upon user behavior in a man-machine dialogue are described.

© All rights reserved Tucker and Jones and/or Lawrence Erlbaum Associates

 

Jones, Dylan M. and Tucker, Philip (1991): Voice as Interface: A Critique. In: Proceedings of the Fourth International Conference on Human-Computer Interaction 1991. pp. 463-467.

The revolution in information technology has made relatively little use of the auditory channel of communication. This is perhaps not surprising given that the tasks for which the computer has been used hitherto are typically ones for which there was already a corresponding manual/visual version. However, the prevalence of the manual/visual interface may soon be diminished. As computers become more powerful and more functionally diverse, devices are being developed that can generate and perceive speech. Thus research into the effects of modality on communication behaviour is becoming increasingly salient to the human factors of interface design. Using voice in the human-computer interface has several potential advantages, but the realisation of these advantages will depend upon the sophistication of the technology and the appropriateness of its implementation. Speech liberates the user by allowing movement away from the terminal while input and output continue. Moreover, it allows interaction when the hands and/or eyes are busy. The auditory channel is well suited to bringing urgent information to the attention of an operator, regardless of the direction of the current visual focus, and its effectiveness is not reduced even if the user is relatively remote from a terminal. Many physically handicapped users, hitherto disenfranchised by the need for sight and finger dexterity and by the paucity of specially adapted interfaces, come within the catchment area of speech-based computer use. In its most highly developed form the speech-based computer could take the form of a telephone handset, perhaps coupled with an ordinary domestic television set. Some of the computer's intelligence may be local, but the greater burden of processing could be centrally located at a remote site. Yet despite the availability of speech I/O devices for some years, their introduction into day-to-day use has been relatively slow.
While technological sophistication has clearly played a role in determining the development of the visual and auditory channels in the interface, a human factors perspective is also required to further explain the emphasis on the visual/manual medium.

© All rights reserved Jones and Tucker and/or Elsevier Science

1989
 

Jones, Dylan M. and Hapeshi, Kevin (1989): Monitoring Speech Recognizer Feedback During Data Entry from Short-Term Memory: A Working Memory Analysis. In International Journal of Human-Computer Interaction, 1 (2) pp. 187-209.

Subjects recalled items from short-term memory by speaking into a speech recognizer. Two experiments examined effects of the type of feedback provided by the device during this data entry task. Three types of feedback were compared, varying in: modality, either auditory or visual; timing, either concurrent or terminal, and specificity, either verbal or nonverbal. Recognizer performance was better with concurrent feedback than with terminal feedback and better with nonverbal feedback than with verbal feedback. In terms of the efficiency of memory (the number of errors and the rate of data throughput), performance was more impaired by concurrent verbal feedback than by nonverbal feedback. Two main functional features of feedback in automatic speech recognition were identified: (1) degree of similarity between the feedback and the phonologically-coded information held in short-term memory, which pointed to the dangers of spoken feedback and to a lesser extent the use of verbal visual feedback, and (2) the extent to which prompting is required to establish the timeliness of data input, a feature which is especially important with isolated-word speech recognition.

© All rights reserved Jones and Hapeshi and/or Lawrence Erlbaum Associates
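The feedback designs compared above vary along three binary dimensions named in the abstract (modality, timing, specificity). Enumerating the full design space makes the comparison concrete; the enumeration itself is illustrative, not from the paper:

```python
from dataclasses import dataclass
from itertools import product

# The three feedback dimensions named in the abstract.
MODALITIES = ("auditory", "visual")
TIMINGS = ("concurrent", "terminal")
SPECIFICITIES = ("verbal", "nonverbal")

@dataclass(frozen=True)
class Feedback:
    modality: str
    timing: str
    specificity: str

# Full 2 x 2 x 2 design space; the study compared three points within it.
design_space = [Feedback(m, t, s)
                for m, t, s in product(MODALITIES, TIMINGS, SPECIFICITIES)]
```

Framing feedback this way makes the abstract's result easy to state: designs that are concurrent and verbal compete most directly with the phonologically coded material the user is holding in short-term memory.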

 

Hapeshi, K. and Jones, Dylan M. (1989): Concurrent Manual Tracking and Speaking: Implications for Automatic Speech Recognition. In: Proceedings of the Third International Conference on Human-Computer Interaction 1989. pp. 412-418.

The study examined interference between manual tracking and a concurrent speech input task. Tracking did not significantly increase recognition error rates, but it did reduce the quality of the speech utterance as measured by recognition distance scores. There was also some evidence that tracking error increased at the moment of speech data entry. This appeared to be due to sensory registration rather than to programming of the utterance. The results also have implications for methodology, in particular for the measurement of speech recognition accuracy and for models of dual-task performance.

© All rights reserved Hapeshi and Jones and/or Lawrence Erlbaum Associates

1988
 

Jones, Dylan M. and Winder, R. (eds.) (1988): Proceedings of the Fourth Conference of the British Computer Society Human Computer Interaction Specialist Group - People and Computers IV August 5-9, 1988, University of Manchester, UK.

1987
 

Jones, Dylan M., Hapeshi, Kevin and Frankish, Clive (1987): Human Factors and the Problems of Evaluation in the Design of Speech Systems Interfaces. In: Carroll, John M. and Tanner, Peter P. (eds.) Proceedings of the ACM CHI 87 Human Factors in Computing Systems Conference April 5-9, 1987, Toronto, Canada. pp. 41-49.

As part of a project in the Alvey programme, during the next two years we will be carrying out experimental studies aimed at generating human factors guidelines for the design of automatic speech recognition systems for avionics and office applications. The planning of the programme of experimental work has thrown up some unique and interesting methodological issues. Firstly, there are a range of factors which can affect performance in the speech recognition system. Secondly, the choice of dependent variable is not easy, since there are a number of possible 'yardsticks' by which recognition performance can be assessed, none of which are, on their own, entirely satisfactory. In this paper we describe these methodological problems and suggest some possible solutions.

© All rights reserved Jones et al. and/or ACM Press

 

Frankish, Clive, Jones, Dylan M., Madden, C., Waight, K. and Stoddart, J. (1987): Parcel Sorting by Speech Recognition: Human Factors Issues. In: Carroll, John M. and Tanner, Peter P. (eds.) Proceedings of the ACM CHI 87 Human Factors in Computing Systems Conference April 5-9, 1987, Toronto, Canada. pp. 295-303.

Two types of vocabulary were compared in a simulated parcel sorting task using automatic speech recognition. One type used place names drawn from operational use (e.g. 'Belfast Delivery'); in the other, alpha-numeric codes based on the International Civil Aviation Organisation (ICAO) alphabet were substituted for place names (e.g. 'Bravo One'). A Kurzweil speech recogniser was used in isolated mode. The overall rate of work was lower with codes based on the ICAO alphabet than with place names. However, code recognition rates were similar for both vocabularies; the ICAO alphabet gave an average rate of

© All rights reserved Frankish et al. and/or ACM Press

 

Frankish, C. R., Jones, Dylan M., Madden, C., Waight, K. and Stoddart, J. (1987): Parcel Sorting by Speech Recognition: Human Factors Issues. In: Diaper, Dan and Winder, Russel (eds.) Proceedings of the Third Conference of the British Computer Society Human Computer Interaction Specialist Group - People and Computers III August 7-11, 1987, University of Exeter, UK. pp. 295-303.

Two types of vocabulary were compared in a simulated parcel sorting task using automatic speech recognition. One type used place names drawn from operational use (e.g. 'Belfast Delivery'); in the other, alpha-numeric codes based on the International Civil Aviation Organisation (ICAO) alphabet were substituted for place names (e.g. 'Bravo One'). A Kurzweil speech recogniser was used in isolated mode. The overall rate of work was lower with codes based on the ICAO alphabet than with place names. However, code recognition rates were similar for both vocabularies; the ICAO alphabet gave an average rate of

© All rights reserved Frankish et al. and/or Cambridge University Press

 
 



Page Information

Page maintainer: The Editorial Team
URL: http://www.interaction-design.org/references/authors/dylan_m__jones.html
