Publication statistics

Pub. period: 2001-2010
Pub. count: 23
Number of co-authors: 27



Co-authors

Number of publications with Jinjuan Feng's 3 most frequent co-authors:

Andrew Sears: 12
Jonathan Lazar: 8
Clare-Marie Karat: 7

 

 

Productive colleagues

Jinjuan Feng's 3 most productive colleagues, by number of publications:

Andrew Sears: 90
Julie A. Jacko: 84
John Karat: 47
 
 
 
Jinjuan Feng


Publications by Jinjuan Feng (bibliography)

2010
 

Feng, Jinjuan, Lazar, Jonathan, Kumin, Libby and Ozok, Ant (2010): Computer Usage by Children with Down Syndrome: Challenges and Future Research. In ACM Transactions on Accessible Computing, 2 (3) p. 13.

Children with Down syndrome, like neurotypical children, are growing up with extensive exposure to computer technology. Computers and computer-related devices have the potential to help these children in education, career development, and independent living. Our understanding of computer usage by this population is quite limited. Most of the software, games, and Web sites that children with Down syndrome interact with are designed without consideration of their special needs, making the applications less effective or completely inaccessible. We conducted a large-scale survey that collected computer usage information from the parents of approximately six hundred children with Down syndrome. This article reports the text responses collected in the survey and is intended as a step towards understanding the difficulties children with Down syndrome experience while using computers. The relationship between the age and the specific type of difficulties, as well as related design challenges are also reported. A number of potential research directions and hypotheses are identified for future studies. Due to limitations in survey methodology, the findings need to be further validated through hypothesis-driven, empirical studies.

© All rights reserved Feng et al. and/or ACM Press

 

Sauer, Graig, Lazar, Jonathan, Hochheiser, Harry and Feng, Jinjuan (2010): Towards A Universally Usable Human Interaction Proof: Evaluation of Task Completion Strategies. In ACM Transactions on Accessible Computing, 2 (4) p. 15.

The need for security features to stop spam and bots has prompted research aimed at developing human interaction proofs (HIPs) that are both secure and easy to use. The primarily visual techniques used in these HIP tools present difficulties for users with visual impairments. This article reports on the development of Human-Interaction Proof, Universally Usable (HIPUU), a new approach to human-interaction proofs based on identification of a series of sound/image pairs. Simultaneous presentation of a single, unified task in two alternative modalities provides multiple paths to successful task completion. We present two alternative task completion strategies, based on differing input strategies (menu-based vs. free text entry). Empirical results from studies involving both blind and sighted users validate both the usability and accessibility of these differing strategies, with blind users achieving

© All rights reserved Sauer et al. and/or ACM Press
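The abstract describes HIPUU's essential structure: a single challenge presented in two output modalities (a matching image and sound) and solvable through either of two input strategies (menu selection or free-text entry). The sketch below only illustrates that structure under assumed names and matching rules; it is not the authors' implementation.

```python
# Illustrative sketch of a dual-modality, dual-input HIP challenge.
# Not the authors' HIPUU code; names and matching rules are hypothetical.
from dataclasses import dataclass

@dataclass
class Challenge:
    image_file: str          # shown to sighted users, e.g. "dog.png"
    audio_file: str          # played for blind users, e.g. "dog_bark.wav"
    accepted_answers: set    # normalised strings counted as correct
    menu_options: list       # labels offered by the menu-based strategy

def normalise(text: str) -> str:
    """Trim and lower-case an answer so 'Dog ' matches 'dog'."""
    return text.strip().lower()

def check_menu_answer(challenge: Challenge, chosen_label: str) -> bool:
    """Menu-based strategy: the selected label must be an accepted answer."""
    return normalise(chosen_label) in challenge.accepted_answers

def check_free_text_answer(challenge: Challenge, typed_text: str) -> bool:
    """Free-text strategy: any accepted synonym passes."""
    return normalise(typed_text) in challenge.accepted_answers

# The same task is solvable from the image or the sound, by menu or by typing.
dog = Challenge("dog.png", "dog_bark.wav",
                accepted_answers={"dog", "puppy"},
                menu_options=["cat", "dog", "car", "phone"])
assert check_menu_answer(dog, "dog")
assert check_free_text_answer(dog, "  Puppy ")
```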

 

Feng, Jinjuan and Sears, Andrew (2010): Beyond errors: measuring reliability for error-prone interaction devices. In Behaviour and Information Technology, 29 (2) pp. 149-163.

The development of assistive technologies and ubiquitous computing highlights the need to better understand errors associated with both the limitations of the devices being used and difficulties introduced by the environment in which interactions occur. At the same time, we need to better understand the relationship between the user experience and the consequences users encounter when errors occur. Although error rates are the most common measure of reliability reported in the human-computer interaction literature, this simple metric fails to address the different consequences users may experience. We propose a new metric, leveraging the concepts of entropy and desirability, to quantify the concept of reliability. An empirical study provides a preliminary validation of this new metric, focusing on its ability to describe several aspects of user satisfaction as well as task completion time. Results confirm that our new metric is more effective than error rates when describing user satisfaction and that the metric can also be used to describe task completion times when error rates are high.

© All rights reserved Feng and Sears and/or Taylor and Francis
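The abstract names the building blocks of the metric (entropy and desirability) without reproducing the formula, so only the standard Shannon entropy it leans on is shown below; how the authors combine it with a desirability weighting is defined in the article itself. For error outcomes i occurring with probability p_i,

    H = -\sum_{i} p_i \log_2 p_i

which is low when the consequences of errors are predictable and high when they are spread across many different outcomes.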

 

Sauer, Graig, Holman, Jonathan, Lazar, Jonathan, Hochheiser, Harry and Feng, Jinjuan (2010): Accessible privacy and security: a universally usable human-interaction proof tool. In Universal Access in the Information Society, 9 (3) pp. 239-248.

Despite growing interest in designing usable systems for managing privacy and security, recent efforts have generally failed to address the needs of users with disabilities. As security and privacy tools often rely upon subtle visual cues or other potentially inaccessible indicators, users with perceptual limitations might find such tools particularly challenging. To understand the needs of an important group of users with disabilities, a focus group was conducted with blind users to determine their perceptions of security-related challenges. Human-interaction proof (HIP) tools, commonly known as CAPTCHAs, are used by web pages to defeat robots and were identified in the focus group as a major concern. Therefore, a usability test was conducted to see how well blind users were able to use audio equivalents of these graphical tools. Finally, an accessible HIP tool was developed which combines audio and matching images, supporting both visual and audio output. Encouraging results from a small usability evaluation of the prototype with five sighted users and five blind users show that this new form of HIP is preferred by both blind and visual users to previous forms of text-based HIPs. Future directions for research are also discussed.

© All rights reserved Sauer et al. and/or Springer Verlag

2009
 

Zenebe, Azene, Turner, Claude, Feng, Jinjuan, Lazar, Jonathan and O'Leary, Mike (2009): Integrating usability and accessibility in information assurance education. In: Proceedings of the 2009 Symposium on Usable Privacy and Security 2009. p. 24.

 

Zhu, Shaojian, Ma, Yao, Feng, Jinjuan and Sears, Andrew (2009): Don't listen! I am dictating my password!. In: Eleventh Annual ACM SIGACCESS Conference on Assistive Technologies 2009. pp. 229-230.

Speech recognition is a promising alternative input technology for individuals with upper-body motor impairments that hinder the use of the standard keyboard and mouse. A recent long-term field study found that the users employed speech techniques for a variety of tasks beyond generating text documents [1]. One challenge with hands-free speech-based interactions is user authentication, which requires the users to speak their user IDs and passwords character by character. Unfortunately, speaking a password presents both security and privacy threats as well as usability problems. To address this challenge, we propose a new speech-based authentication model. An initial proof-of-concept prototype has been implemented and a pilot study was conducted. Preliminary results suggest several problems for further examination.

© All rights reserved Zhu et al. and/or their publisher

 

Feng, Jinjuan and Sears, Andrew (2009): Speech Input to Support Universal Access. In: Stephanidis, Constantine (ed.). "The Universal Access Handbook". Boca Raton, USA: Taylor and Francis. pp. 30-1-30-16.

 Cited in the following chapter:

Design 4 All: [/encyclopedia/design_4_all.html]


 
2008
 

Feng, Jinjuan, Zhu, Shaojian, Hu, Ruimin and Sears, Andrew (2008): Speech technology in real world environment: early results from a long term study. In: Tenth Annual ACM SIGACCESS Conference on Assistive Technologies 2008. pp. 233-234.

Existing knowledge on how people use speech-based technologies in realistic settings is limited. We are conducting a longitudinal field study, spanning six months, to investigate how users with no physical impairments and users with upper body physical impairments use speech technologies when interacting with computers in their home environment. Digital data logs, time diaries, and interviews are being used to record the types of applications used, frequency of use of each application, and difficulties experienced as well as subjective data regarding the usage experience. While confirming many expectations, initial results have provided several unexpected insights including a preference to use speech for navigation instead of dictation tasks, and the use of speech technology for programming and games.

© All rights reserved Feng et al. and/or ACM Press

2007
 

Lazar, Jonathan, Meiselwitz, Gabriele and Feng, Jinjuan (2007): Understanding Web Credibility: A Synthesis of the Research Literature. In Foundations and Trends in Human-Computer Interaction, 1 (2) pp. 139-202.

As more of our communication, commerce, and personal data goes online, credibility becomes an increasingly important issue. How do we determine if our e-commerce sites, our healthcare sites, or our online communication partners are credible? This paper examines the research literature in the area of web credibility. This review starts by examining the cognitive foundations of credibility. Other sections of the paper examine not only the general credibility of web sites, but also online communication, such as e-mail, instant messaging, and online communities. Training and education, as well as future issues (such as CAPTCHAs and phishing), will be addressed. The implications for multiple populations (users, web developers, browser designers, and librarians) will be discussed.

© All rights reserved Lazar et al. and/or Now Publishers

2006
 

Karat, Clare-Marie, Karat, John, Brodie, Carolyn and Feng, Jinjuan (2006): Evaluating interfaces for privacy policy rule authoring. In: Proceedings of ACM CHI 2006 Conference on Human Factors in Computing Systems 2006. pp. 83-92.

Privacy policy rules are often written in organizations by a team of people in different roles. Currently, people in these roles have no technological tools to guide the creation of clear and implementable high-quality privacy policy rules. High-quality privacy rules can be the basis for verifiable automated privacy access decisions. An empirical study was conducted with 36 users who were novices in privacy policy authoring to evaluate the quality of rules created and user satisfaction with two experimental privacy authoring tools and a control condition. Results show that users presented with scenarios were able to author significantly higher quality rules using either the natural language with a privacy rule guide tool or a structured list tool as compared to an unguided natural language control condition. The significant differences in quality were found in both user self-ratings of rule quality and objective quality scores. Users ranked the two experimental tools significantly higher than the control condition. Implications of the research and future research directions are discussed.

© All rights reserved Karat et al. and/or ACM Press
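The "implementable" privacy rules discussed here generally follow a who/what/purpose/condition pattern. The sketch below is a hypothetical structured representation in that spirit, not the schema of the experimental tools; the field names and the rendered sentence format are assumptions.

```python
# Sketch of a structured privacy rule of the general form
# "[user category] can [action] [data category] for [purpose] if [condition]".
# Element names are illustrative, not the exact schema used in the study.
from dataclasses import dataclass

@dataclass
class PrivacyRule:
    user_category: str    # who may act, e.g. "billing staff"
    action: str           # e.g. "access"
    data_category: str    # e.g. "customer mailing addresses"
    purpose: str          # e.g. "sending invoices"
    condition: str = ""   # optional guard, e.g. "an order is open"

    def to_sentence(self) -> str:
        """Render the rule as the kind of sentence a rule guide might produce."""
        text = (f"{self.user_category} can {self.action} {self.data_category} "
                f"for the purpose of {self.purpose}")
        return text + (f" if {self.condition}." if self.condition else ".")

rule = PrivacyRule("billing staff", "access", "customer mailing addresses",
                   "sending invoices", "an order is open")
print(rule.to_sentence())
```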

 

Lazar, Jonathan, Feng, Jinjuan and Allen, Aaron (2006): Determining the impact of computer frustration on the mood of blind users browsing the web. In: Eighth Annual ACM Conference on Assistive Technologies 2006. pp. 149-156.

While previous studies have investigated the impact of frustration on computer users' mood as well as the causes of frustration, no research has ever been conducted to examine the relationship between computer frustrations and mood change for users with visual impairment. In this paper, we report on a study that examined the frustrating experiences and mood change of 100 participants, all with visual impairments, when they were browsing the web. The result shows that frustration does cause the participants' mood to deteriorate. However, the amount of time lost due to frustrating situations does not have a significant impact on users' mood, which is very different from the previous research on users without visual impairment. The impact on work seems to have the greatest impact on user mood.

© All rights reserved Lazar et al. and/or ACM Press

 

Feng, Jinjuan, Sears, Andrew and Karat, Clare-Marie (2006): A longitudinal evaluation of hands-free speech-based navigation during dictation. In International Journal of Human-Computer Studies, 64 (6) pp. 553-569.

Despite a reported recognition accuracy rate of 98%, speech recognition technologies have yet to be widely adopted by computer users. When considering hands-free use of speech-based solutions, as is the case for individuals with physical impairments that interfere with the use of traditional solutions such as a mouse, the considerable time required to complete basic navigation tasks presents a significant barrier to adoption. Several solutions were proposed to improve navigation efficiency based on the results of a previous study. In the current study, a longitudinal experiment was conducted to investigate the process by which users learn to use hands-free speech-based navigation in the context of large vocabulary, continuous dictation tasks as well as the efficacy of the proposed solutions. Due to the influence initial interactions have on the adoption of speech-based solutions, the current study focused on these critical, initial, interactions of individuals with no prior experience using speech-based dictation solutions. Our results confirm the efficacy of the solutions proposed earlier while providing valuable insights into the strategies users employ when using speech-based navigation commands as well as design decisions that can influence these patterns.

© All rights reserved Feng et al. and/or Academic Press

 

Price, Kathleen J., Lin, Min, Feng, Jinjuan, Goldman, Rich, Sears, Andrew and Jacko, Julie A. (2006): Motion does matter: an examination of speech-based text entry on the move. In Universal Access in the Information Society, 4 (3) pp. 246-257.

Desktop interaction solutions are often inappropriate for mobile devices due to small screen size and portability needs. Speech recognition can improve interactions by providing a relatively hands-free solution that can be used in various situations. While mobile systems are designed to be transportable, few have examined the effects of motion on mobile interactions. This paper investigates the effect of motion on automatic speech recognition (ASR) input for mobile devices. Speech recognition error rates (RER) have been examined with subjects walking or seated, while performing text input tasks and the effect of ASR enrollment conditions on RER. The obtained results suggest changes in user training of ASR systems for mobile and seated usage.

© All rights reserved Price et al. and/or Springer Verlag
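The recognition error rate (RER) used in this line of work is, at heart, a word-level error rate. As a reminder of how such a rate is conventionally derived, the sketch below computes a standard word error rate by edit-distance alignment; the study's exact RER definition may differ.

```python
# Standard word error rate (WER) via Levenshtein alignment of word sequences.
# Illustrative only; the study's exact RER definition may differ.
def word_error_rate(reference: str, hypothesis: str) -> float:
    ref, hyp = reference.split(), hypothesis.split()
    # d[i][j] = minimum substitutions + insertions + deletions needed to turn
    # the first i reference words into the first j hypothesis words.
    d = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        d[i][0] = i
    for j in range(len(hyp) + 1):
        d[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,        # deletion
                          d[i][j - 1] + 1,        # insertion
                          d[i - 1][j - 1] + cost) # substitution / match
    return d[len(ref)][len(hyp)] / max(len(ref), 1)

# One substitution and one deletion over four reference words -> 0.5
print(word_error_rate("please open the file", "please close file"))
```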

2005
 

Feng, Jinjuan, Karat, Clare-Marie and Sears, Andrew (2005): How productivity improves in hands-free continuous dictation tasks: lessons learned from a longitudinal study. In Interacting with Computers, 17 (3) pp. 265-289.

Speech recognition technology continues to improve, but users still experience significant difficulty using the software to create and edit documents. The reported composition speed using speech software is only between 8 and 15 words per minute [Proc CHI 99 (1999) 568; Universal Access Inform Soc 1 (2001) 4], much lower than people's normal speaking speed of 125-150 words per minute. What causes the huge gap between natural speaking and composing using speech recognition? Is it possible to narrow the gap and make speech recognition more promising to users? In this paper we discuss users' learning processes and the difficulties they experience as related to continuous dictation tasks using state of the art Automatic Speech Recognition (ASR) software. Detailed data was collected for the first time on various aspects of the three activities involved in document composition tasks: dictation, navigation, and correction. The results indicate that navigation and error correction accounted for a big chunk of the dictation task during the early stages of interaction. As users gained more experience, they became more efficient at dictation, navigation and error correction. However, the major improvements in productivity were due to dictation quality and the usage of navigation commands. These results provide insights regarding the factors that cause the gap between user expectation with speech recognition software and the reality of use, and how those factors changed with experience. Specific advice is given to researchers as to the most critical issues that must be addressed.

© All rights reserved Feng et al. and/or Elsevier Science

 

Karat, John, Karat, Clare-Marie, Brodie, Carolyn and Feng, Jinjuan (2005): Privacy in information technology: Designing to enable privacy policy management in organizations. In International Journal of Human-Computer Studies, 63 (1) pp. 153-174.

As information technology continues to spread, we believe that there will be an increasing awareness of a fundamental need to address privacy concerns, and that doing so will require an understanding of policies that govern information use accompanied by development of technologies that can implement such policies. The research reported here describes our efforts to design a system which facilitates privacy policy authoring, implementation, and compliance monitoring. We employed a variety of user-centered design methods with 109 target users across the four steps of the research reported here. This case study highlights the work of identifying organizational privacy requirements, iteratively designing and validating a prototype with target users, and conducting laboratory tests to guide specific design decisions to meet the needs of providing flexible privacy enabling technologies. Each of the four steps in our work is identified and described, and directions for future work in privacy are suggested.

© All rights reserved Karat et al. and/or Academic Press

 

Brodie, Carolyn, Karat, Clare-Marie, Karat, John and Feng, Jinjuan (2005): Usable security and privacy: a case study of developing privacy management tools. In: Proceedings of the 2005 Symposium on Usable Privacy and Security 2005. pp. 35-43.

Privacy is a concept which received relatively little attention during the rapid growth and spread of information technology through the 1980's and 1990's. Design to make information easily accessible, without particular attention to issues such as whether an individual had a desire or right to control access to and use of particular information was seen as the more pressing goal. We believe that there will be an increasing awareness of a fundamental need to address privacy concerns in information technology, and that doing so will require an understanding of policies that govern information use as well as the development of technologies that can implement such policies. The research reported here describes our efforts to design a privacy management workbench which facilitates privacy policy authoring, implementation, and compliance monitoring. This case study highlights the work of identifying organizational privacy requirements, analyzing existing technology, on-going research to identify approaches that address these requirements, and iteratively designing and validating a prototype with target users for flexible privacy technologies.

© All rights reserved Brodie et al. and/or ACM Press

 

Zhou, Lina, Feng, Jinjuan, Sears, Andrew and Shi, Yongmei (2005): Applying the Naïve Bayes Classifier to Assist Users in Detecting Speech Recognition Errors. In: HICSS 2005 - 38th Hawaii International Conference on System Sciences 3-6 January, 2005, Big Island, HI, USA.

2004
 

Price, Kathleen J., Lin, Min, Feng, Jinjuan, Goldman, Rich, Sears, Andrew and Jacko, Julie A. (2004): Data Entry on the Move: An Examination of Nomadic Speech-Based Text Entry. In: Proceedings of the 8th ERCIM Workshop on User Interfaces for All 2004. p. 460.

Desktop interaction solutions are often inappropriate for mobile devices due to small screen size and portability needs. Speech recognition can improve interactions by providing a relatively hands-free solution that can be used in various situations. While mobile systems are designed to be transportable, few have examined the effects of motion on mobile interactions. We investigated the effect of motion on automatic speech recognition (ASR) input for mobile devices. We examined speech recognition error rates (RER) with subjects walking or seated, while performing text input tasks and the effect of ASR enrollment conditions on RER. RER were significantly lower for seated conditions. There was a significant interaction between enrollment and task conditions. When users enrolled while seated, but completed walking tasks, RER increased. In contrast, when users enrolled while walking, but completed seated tasks, RER decreased. These results suggest changes in user training of ASR systems for mobile and seated usage.

© All rights reserved Price et al. and/or Springer Verlag

 

Feng, Jinjuan, Lazar, Jonathan and Preece, Jennifer J. (2004): Empathy and online interpersonal trust: A fragile relationship. In Behaviour and Information Technology, 23 (2) pp. 97-106.

The rapid growth of personal email communication, instant messaging and online communities has brought attention to the important role of interpersonal trust in online communication. An empirical study was conducted focusing on the effect of empathy on online interpersonal trust in textual IM. To be more specific, the relationship between empathic accuracy, response type and online interpersonal trust was investigated. The result suggests both empathic accuracy and response type have significant influence on online interpersonal trust. The interaction between empathic accuracy and response type also significantly influences online trust. Interestingly, the results imply a relationship between daily trust attitude and online interpersonal trust. People who are more trusting in their daily life may experience more difficulty in developing trust online. There is also some evidence to suggest that different communication scenarios may have an influence on online trust.

© All rights reserved Feng et al. and/or Taylor and Francis

 

Feng, Jinjuan and Sears, Andrew (2004): Using confidence scores to improve hands-free speech based navigation in continuous dictation systems. In ACM Transactions on Computer-Human Interaction, 11 (4) pp. 329-356.

Speech recognition systems have improved dramatically, but recent studies confirm that error correction activities still account for 66-75% of the users' time, and 50% of that time is spent just getting to the errors that need to be corrected. While researchers have suggested that confidence scores could prove useful during the error correction process, the focus is typically on error detection. More importantly, empirical studies have failed to confirm any measurable benefits when confidence scores are used in this way within dictation-oriented applications. In this article, we provide data that explains why confidence scores are unlikely to be useful for error detection. We propose a new navigation technique for use when speech-only interactions are strongly preferred and common, desktop-sized displays are available. The results of an empirical study that highlights the potential of this new technique are reported. An informal comparison between the current study and previous research suggests the new technique reduces time spent on navigation by 18%. Future research should include additional studies that compare the proposed technique to previous non-speech and speech-based navigation solutions.

© All rights reserved Feng and Sears and/or ACM Press
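A back-of-the-envelope reading of the figures quoted in this abstract (a rough estimate, not a result reported in the paper): if error correction takes 66-75% of users' time and half of that is spent reaching the errors, navigation alone consumes roughly

    0.5 \times (0.66\text{--}0.75) \approx 0.33\text{--}0.38

of total interaction time, so an 18% reduction in navigation time would translate into roughly a 6-7% reduction in overall task time, assuming those proportions carry over to the new technique.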

2001
 

Sears, Andrew, Karat, Clare-Marie, Oseitutu, Kwesi, Karimullah, Azfar and Feng, Jinjuan (2001): Productivity, satisfaction, and interaction strategies of individuals with spinal cord injuries and traditional users interacting with speech recognition software. In Universal Access in the Information Society, 1 (1) pp. 4-15.

Speech recognition is an important technology that is becoming increasingly effective for dictation-oriented activities. While recognition accuracy has increased dramatically in recent years, recent studies confirm that traditional computer users are still faster using a keyboard and mouse and spend more time correcting errors than dictating. Further, as these users become more experienced they frequently adopt multimodal strategies that require the keyboard and mouse when correcting errors. While speech recognition can be a convenient alternative for traditional computer users, it can be a powerful tool for individuals with physical disabilities that limit their ability to use a keyboard and mouse. However, research into the performance, satisfaction, and usage patterns of individuals with physical disabilities has not been reported. In this article, we report on a study that provides initial insights into the efficacy of existing speech recognition systems with respect to individuals with physical disabilities. Our results confirm that productivity does not differ between traditional users and those with physical disabilities. In contrast, numerous differences were observed when users rated their satisfaction with the system and when usage patterns were analyzed.

© All rights reserved Sears et al. and/or Springer Verlag

 

Oseitutu, Kwesi, Feng, Jinjuan, Sears, Andrew and Karat, Clare-Marie (2001): Speech recognition for data entry by individuals with spinal cord injuries. In: Stephanidis, Constantine (ed.) HCI International 2001 - Proceedings of the 9th International Conference on Human-Computer Interaction August 5-10, 2001, New Orleans, USA. pp. 402-406.

 

 