Publication statistics

Pub. period: 2007-2012
Pub. count: 30
Number of co-authors: 33



Co-authors

Number of publications with his 3 most frequent co-authors:

Scott E. Hudson: 15
Brian Amento: 5
Ivan Poupyrev: 4

 

 

Productive colleagues

Chris Harrison's 3 most productive colleagues, by number of publications:

Scott E. Hudson: 113
Jodi Forlizzi: 90
Anind K. Dey: 71
 
 
 

Chris Harrison

Ph.D.

Personal Homepage:
http://www.chrisharrison.net

Current place of employment:
Carnegie Mellon University, Human-Computer Interaction Institute

Chris is a third-year Ph.D. student in the Human-Computer Interaction Institute at Carnegie Mellon University. He works primarily on novel input devices and display technologies. Scott Hudson is his advisor.


Publications by Chris Harrison (bibliography)

2012
 

Harrison, Chris, Ramamurthy, Shilpa and Hudson, Scott E. (2012): On-body interaction: armed and dangerous. In: Proceedings of the 6th International Conference on Tangible and Embedded Interaction 2012. pp. 69-76.

Recent technological advances in input sensing, as well as ultra-small projectors, have opened up new opportunities for interaction -- the use of the body itself as both an input and output platform. Such on-body interfaces offer new interactive possibilities, and the promise of access to computation, communication and information literally in the palm of our hands. The unique context of on-body interaction allows us to take advantage of extra dimensions of input our bodies naturally afford us. In this paper, we consider how the arms and hands can be used to enhance on-body interactions, which are typically finger-input centric. To explore this opportunity, we developed Armura, a novel interactive on-body system, supporting both input and graphical output. Using this platform as a vehicle for exploration, we prototyped many applications and interactions. This helped to confirm chief use modalities, identify fruitful interaction approaches, and in general, better understand how interfaces operate on the body. We highlight the most compelling techniques we uncovered. Further, this paper is the first to consider and prototype how conventional interaction issues, such as cursor control and clutching, apply to the on-body domain. Finally, we bring to light several new and unique interaction techniques.

© All rights reserved Harrison et al. and/or ACM Press

 

Poupyrev, Ivan, Harrison, Chris and Sato, Munehiko (2012): Touché: touch and gesture sensing for the real world. In: Proceedings of the 2012 International Conference on Ubiquitous Computing 2012. p. 536.

Touché proposes a novel Swept Frequency Capacitive Sensing technique that can not only detect a touch event, but also recognize complex configurations of the human hands and body. Such contextual information significantly enhances touch interaction in a broad range of applications, from conventional touchscreens to unique contexts and materials. For example, in our explorations we add touch and gesture sensitivity to the human body and liquids. We demonstrate the rich capabilities of Touché with five example setups from different application domains and conduct experimental studies that show gesture classification accuracies of 99% are achievable with our technology.

© All rights reserved Poupyrev et al. and/or ACM Press
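
The core sensing loop is easy to caricature in code: excite the electrode at each of many frequencies, record the response amplitude, and classify the resulting profile. A minimal sketch, assuming a hypothetical read_amplitude() standing in for the sensing hardware, with nearest-neighbor matching in place of the classifier actually used in the paper:

    import numpy as np

    FREQS = np.linspace(1e3, 3.5e6, 200)   # swept excitation frequencies (Hz)

    def read_amplitude(freq_hz):
        # Hypothetical hardware call: amplitude of the return signal while
        # the electrode is driven at freq_hz.
        raise NotImplementedError("replace with real sensor I/O")

    def capture_profile():
        # One capacitive profile = response amplitude at every swept frequency.
        return np.array([read_amplitude(f) for f in FREQS])

    def classify(profile, templates):
        # templates: {gesture_label: template_profile}; nearest neighbor wins.
        return min(templates, key=lambda g: np.linalg.norm(profile - templates[g]))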

 

Harrison, Chris, Sato, Munehiko and Poupyrev, Ivan (2012): Capacitive fingerprinting: exploring user differentiation by sensing electrical properties of the human body. In: Proceedings of the 2012 ACM Symposium on User Interface Software and Technology 2012. pp. 537-544.

At present, touchscreens can differentiate multiple points of contact, but not who is touching the device. In this work, we consider how the electrical properties of humans and their attire can be used to support user differentiation on touchscreens. We propose a novel sensing approach based on Swept Frequency Capacitive Sensing, which measures the impedance of a user to the environment (i.e., ground) across a range of AC frequencies. Different people have different bone densities and muscle mass, wear different footwear, and so on. This, in turn, yields different impedance profiles, which allows for touch events, including multitouch gestures, to be attributed to a particular user. This has many interesting implications for interactive design. We describe and evaluate our sensing approach, demonstrating that the technique has considerable promise. We also discuss limitations, how these might be overcome, and next steps.

© All rights reserved Harrison et al. and/or ACM Press
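
User differentiation then becomes profile matching: enroll each user by averaging a few swept-frequency impedance profiles, and attribute each touch to the closest enrolled profile, rejecting touches that match nobody well. A toy sketch (the distance threshold and two-point "profiles" are illustrative):

    import numpy as np

    def enroll(samples):
        # Average several profiles captured while a known user touches.
        return np.mean(np.stack(samples), axis=0)

    def attribute_touch(profile, enrolled, max_dist=0.5):
        # Closest enrolled user wins; None means an unknown toucher.
        user = min(enrolled, key=lambda u: np.linalg.norm(profile - enrolled[u]))
        if np.linalg.norm(profile - enrolled[user]) > max_dist:
            return None
        return user

    enrolled = {"alice": np.array([1.0, 0.2]), "bob": np.array([0.3, 0.9])}
    print(attribute_touch(np.array([0.95, 0.25]), enrolled))   # -> alice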

 

Harrison, Chris, Xiao, Robert and Hudson, Scott E. (2012): Acoustic barcodes: passive, durable and inexpensive notched identification tags. In: Proceedings of the 2012 ACM Symposium on User Interface Software and Technology 2012. pp. 563-568.

We present acoustic barcodes, structured patterns of physical notches that, when swiped with e.g., a fingernail, produce a complex sound that can be resolved to a binary ID. A single, inexpensive contact microphone attached to a surface or object is used to capture the waveform. We present our method for decoding sounds into IDs, which handles variations in swipe velocity and other factors. Acoustic barcodes could be used for information retrieval or to trigger interactive functions. They are passive, durable and inexpensive to produce. Further, they can be applied to a wide range of materials and objects, including plastic, wood, glass and stone. We conclude with several example applications that highlight the utility of our approach, and a user study that explores its feasibility.

© All rights reserved Harrison et al. and/or ACM Press
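
Why a decoder can tolerate variable swipe velocity: the absolute times between notch impulses scale with speed, but their ratios do not. A toy decoder built on that idea, assuming, purely for illustration, that bits are encoded as short vs. long gaps between notches (the paper's actual encoding differs):

    import numpy as np

    def impulse_times(signal, rate_hz, thresh=0.5):
        # Times (s) where the rectified waveform first crosses the threshold.
        hot = np.abs(signal) > thresh
        onsets = np.flatnonzero(hot[1:] & ~hot[:-1]) + 1
        return onsets / rate_hz

    def decode(times):
        gaps = np.diff(times)
        gaps = gaps / np.median(gaps)               # normalize out swipe velocity
        return [1 if g > 1.5 else 0 for g in gaps]  # long gap = 1, short = 0

This normalization assumes a roughly constant speed within one swipe; coping with acceleration mid-swipe is part of what the full method addresses.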

2011
 

Harrison, Chris, Hsieh, Gary, Willis, Karl D. D., Forlizzi, Jodi and Hudson, Scott E. (2011): Kineticons: using iconographic motion in graphical user interface design. In: Proceedings of ACM CHI 2011 Conference on Human Factors in Computing Systems 2011. pp. 1999-2008.

Icons in graphical user interfaces convey information in a mostly universal fashion that allows users to immediately interact with new applications, systems and devices. In this paper, we define Kineticons -- an iconographic scheme based on motion. By motion, we mean geometric manipulations applied to a graphical element over time (e.g., scale, rotation, deformation). In contrast to static graphical icons and icons with animated graphics, kineticons do not alter the visual content or "pixel-space" of an element. Although kineticons are not new -- indeed, they are seen in several popular systems -- we formalize their scope and utility. One powerful quality is their ability to be applied to GUI elements of varying size and shape, from something as small as a close button to something as large as a dialog box, or even the entire desktop. This allows a suite of system-wide kinetic behaviors to be reused for a variety of purposes. Part of our contribution is an initial kineticon vocabulary, which we evaluated in a 200 participant study. We conclude with discussion of our results and design recommendations.

© All rights reserved Harrison et al. and/or their publisher

 

Xu, Cheng, Israr, Ali, Poupyrev, Ivan, Bau, Olivier and Harrison, Chris (2011): Tactile display for the visually impaired using TeslaTouch. In: Proceedings of ACM CHI 2011 Conference on Human Factors in Computing Systems 2011. pp. 317-322.

TeslaTouch is a technology that provides tactile sensation to moving fingers on touch screens. Based on TeslaTouch, we have developed applications for the visually impaired to interpret and create 2D tactile information. In this paper, we demonstrate these applications, present observations from the interaction, and discuss TeslaTouch's potential in supporting communication among visually impaired individuals.

© All rights reserved Xu et al. and/or their publisher

 

Saponas, T. Scott, Harrison, Chris and Benko, Hrvoje (2011): PocketTouch: through-fabric capacitive touch input. In: Proceedings of the 2011 ACM Symposium on User Interface Software and Technology 2011. pp. 303-308.

PocketTouch is a capacitive sensing prototype that enables eyes-free multitouch input on a handheld device without having to remove the device from the pocket of one's pants, shirt, bag, or purse. PocketTouch enables a rich set of gesture interactions, ranging from simple touch strokes to full alphanumeric text entry. Our prototype device consists of a custom multitouch capacitive sensor mounted on the back of a smartphone. Similar capabilities could be enabled on most existing capacitive touchscreens through low-level access to the capacitive sensor. We demonstrate how touch strokes can be used to initialize the device for interaction and how strokes can be processed to enable text recognition of characters written over the same physical area. We also contribute a comparative study that empirically measures how different fabrics attenuate touch inputs, providing insight for future investigations. Our results suggest that PocketTouch will work reliably with a wide variety of fabrics used in today's garments, and is a viable input method for quick eyes-free operation of devices in pockets.

© All rights reserved Saponas et al. and/or ACM Press

 

Harrison, Chris, Benko, Hrvoje and Wilson, Andrew D. (2011): OmniTouch: wearable multitouch interaction everywhere. In: Proceedings of the 2011 ACM Symposium on User Interface Software and Technology 2011. pp. 441-450.

OmniTouch is a wearable depth-sensing and projection system that enables interactive multitouch applications on everyday surfaces. Beyond the shoulder-worn system, there is no instrumentation of the user or environment. Foremost, the system allows the wearer to use their hands, arms and legs as graphical, interactive surfaces. Users can also transiently appropriate surfaces from the environment to expand the interactive area (e.g., books, walls, tables). On such surfaces -- without any calibration -- OmniTouch provides capabilities similar to those of a mouse or touchscreen: X and Y location in 2D interfaces and whether fingers are "clicked" or hovering, enabling a wide variety of interactions. Reliable operation on the hands, for example, requires buttons to be 2.3cm in diameter. Thus, it is now conceivable that anything one can do on today's mobile devices, they could do in the palm of their hand.

© All rights reserved Harrison et al. and/or ACM Press
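
The "clicked or hovering" distinction reduces to comparing the fingertip's depth against the depth of the surface directly beneath it. A toy version with an illustrative threshold (the paper's pipeline, including finger segmentation, is far more involved):

    def finger_state(finger_depth_mm, surface_depth_mm, click_gap_mm=10.0):
        # A finger "clicks" once it comes within click_gap_mm of the surface;
        # any larger gap counts as hovering.
        gap = surface_depth_mm - finger_depth_mm   # finger is nearer the camera
        return "clicked" if gap < click_gap_mm else "hovering"

    print(finger_state(995.0, 1000.0))   # 5 mm above the surface -> clicked
    print(finger_state(960.0, 1000.0))   # 40 mm above -> hovering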

 

Harrison, Chris and Hudson, Scott E. (2011): A new angle on cheap LCDs: making positive use of optical distortion. In: Proceedings of the 2011 ACM Symposium on User Interface Software and Technology 2011. pp. 537-540.

Most LCD screens exhibit color distortions when viewed at oblique angles. Engineers have invested significant time and resources to alleviate this effect. However, the massive manufacturing base, as well as millions of in-the-wild monitors, means this effect will be common for many years to come. We take an opposite stance, embracing these optical peculiarities, and consider how they can be used in productive ways. This paper discusses how a special palette of colors can yield visual elements that are invisible when viewed straight-on, but visible at oblique angles. In essence, this allows conventional, unmodified LCD screens to output two images simultaneously -- a feature normally only available in far more complex setups. We enumerate several applications that could take advantage of this ability.

© All rights reserved Harrison and Hudson and/or ACM Press

2010
 

Harrison, Chris, Tan, Desney and Morris, Dan (2010): Skinput: appropriating the body as an input surface. In: Proceedings of ACM CHI 2010 Conference on Human Factors in Computing Systems 2010. pp. 453-462.

We present Skinput, a technology that appropriates the human body for acoustic transmission, allowing the skin to be used as an input surface. In particular, we resolve the location of finger taps on the arm and hand by analyzing mechanical vibrations that propagate through the body. We collect these signals using a novel array of sensors worn as an armband. This approach provides an always available, naturally portable, and on-body finger input system. We assess the capabilities, accuracy and limitations of our technique through a two-part, twenty-participant user study. To further illustrate the utility of our approach, we conclude with several proof-of-concept applications we developed.

© All rights reserved Harrison et al. and/or their publisher
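
In spirit, the classification featurizes each tap's vibration window per sensor channel and matches against labeled training taps. A compressed sketch using band energies and a nearest-centroid rule; the real system uses an armband of mechanically tuned sensors and a proper machine-learning classifier:

    import numpy as np

    def tap_features(window, rate_hz, bands=((25, 100), (100, 250), (250, 500))):
        # Energy in a few low bands: transverse waves on skin live well below 1 kHz.
        spectrum = np.abs(np.fft.rfft(window)) ** 2
        freqs = np.fft.rfftfreq(len(window), 1.0 / rate_hz)
        return np.array([spectrum[(freqs >= lo) & (freqs < hi)].sum()
                         for lo, hi in bands])

    def classify_tap(channels, rate_hz, centroids):
        # Concatenate per-channel features; nearest labeled centroid wins.
        feats = np.concatenate([tap_features(ch, rate_hz) for ch in channels])
        return min(centroids, key=lambda loc: np.linalg.norm(feats - centroids[loc]))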

 

Harrison, Chris, Yeo, Zhiquan and Hudson, Scott E. (2010): Faster progress bars: manipulating perceived duration with visual augmentations. In: Proceedings of ACM CHI 2010 Conference on Human Factors in Computing Systems 2010. pp. 1545-1548.

Human perception of time is fluid, and can be manipulated in purposeful and productive ways. In this note, we propose and evaluate variations on two visual designs for progress bars that alter users' perception of time passing, and "appear" faster when in fact they are not. As a baseline, we use standard, solid-color progress bars, prevalent in many user interfaces. In a series of direct comparison tests, we are able to rank how these augmentations compare to one another. We then show that these designs yield statistically significantly shorter perceived durations than progress bars seen in many modern interfaces, including Mac OSX. Progress bars with animated ribbing that move backwards in a decelerating manner proved to have the strongest effect. In a final experiment, we measured the effect of this particular progress bar design and showed that it reduces the perceived duration among our participants by 11%.

© All rights reserved Harrison et al. and/or their publisher
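
The winning design, ribbing that animates backward while decelerating, boils down to a pattern offset whose speed decays over an animation cycle. A sketch of one such easing function (all constants illustrative):

    import math

    def ribbing_offset(t, v0=80.0, tau=1.5):
        # Horizontal offset (px) of the ribbing at time t (s) into a cycle:
        # moves backward (negative) with exponentially decaying speed.
        return -v0 * tau * (1.0 - math.exp(-t / tau))

    for t in (0.0, 0.5, 1.0, 2.0):
        print(f"t={t:.1f}s  offset={ribbing_offset(t):7.1f}px")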

 

Harrison, Chris, Dey, Anind K. and Hudson, Scott E. (2010): Evaluation of progressive image loading schemes. In: Proceedings of ACM CHI 2010 Conference on Human Factors in Computing Systems 2010. pp. 1549-1552.

Although network bandwidth has increased dramatically, high-resolution images often take several seconds to load, and considerably longer on mobile devices over wireless connections. Progressive image loading techniques allow for some visual content to be displayed prior to the whole file being downloaded. In this note, we present an empirical evaluation of popular progressive image loading methods, and derive one novel technique from our findings. Results suggest a spiral variation of bilinear interlacing can yield an improvement in content recognition time.

© All rights reserved Harrison et al. and/or their publisher
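
A spiral variation of interlacing only changes the order in which image blocks arrive: rather than row by row, blocks stream outward from the center, where recognizable content usually sits. A sketch of that ordering (the bilinear interpolation between loaded blocks is omitted):

    def spiral_order(cols, rows):
        # Yield (x, y) block coordinates spiraling outward from the grid center.
        x, y = cols // 2, rows // 2
        yield (x, y)
        step, dx, dy = 1, 1, 0
        while step < 2 * max(cols, rows):
            for _ in range(2):                 # two legs per ring size
                for _ in range(step):
                    x, y = x + dx, y + dy
                    if 0 <= x < cols and 0 <= y < rows:
                        yield (x, y)
                dx, dy = -dy, dx               # turn 90 degrees
            step += 1

    print(list(spiral_order(4, 4))[:8])        # the first blocks to load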

 

Schwarz, Julia, Harrison, Chris, Hudson, Scott E. and Mankoff, Jennifer (2010): Cord input: an intuitive, high-accuracy, multi-degree-of-freedom input method for mobile devices. In: Proceedings of ACM CHI 2010 Conference on Human Factors in Computing Systems 2010. pp. 1657-1660.

A cord, although simple in form, has many interesting physical affordances that make it powerful as an input device. Not only can a length of cord be grasped in different locations, but also pulled, twisted and bent -- four distinct and expressive dimensions that could potentially act in concert. Such an input mechanism could be readily integrated into headphones, backpacks, and clothing. Once grasped in the hand, a cord can be used in an eyes-free manner to control mobile devices, which often feature small screens and cramped buttons. In this note, we describe a proof-of-concept cord-based sensor, which senses three of the four input dimensions we propose. In addition to a discussion of potential uses, we also present results from our preliminary user study. The latter sought to compare the targeting performance and selection accuracy of different cord-based input modalities. We conclude with a brief set of design recommendations drawn from the results of our study.

© All rights reserved Schwarz et al. and/or their publisher

 

Harrison, Chris and Hudson, Scott E. (2010): Minput: enabling interaction on small mobile devices with high-precision, low-cost, multipoint optical tracking. In: Proceedings of ACM CHI 2010 Conference on Human Factors in Computing Systems 2010. pp. 1661-1664.

We present Minput, a sensing and input method that enables intuitive and accurate interaction on very small devices -- ones too small for practical touch screen use and with limited space to accommodate physical buttons. We achieve this by incorporating two inexpensive, high-precision optical sensors (like those found in optical mice) into the underside of the device. This allows the entire device to be used as an input mechanism, instead of the screen, avoiding occlusion by fingers. In addition to x/y translation, our system also captures twisting motion, enabling many interesting interaction opportunities typically found in larger and far more complex systems.

© All rights reserved Harrison and Hudson and/or their publisher
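
With two optical flow sensors a fixed distance apart, the common component of their readings is translation and the differential component is twist. A sketch, assuming the sensors sit baseline_mm apart along the device's x-axis:

    import math

    def device_motion(d1, d2, baseline_mm=40.0):
        # d1, d2: (dx, dy) displacements reported by the two sensors.
        tx = (d1[0] + d2[0]) / 2.0             # shared motion = translation
        ty = (d1[1] + d2[1]) / 2.0
        # For small rotations, the y-readings differ by baseline * angle.
        twist = math.atan2(d2[1] - d1[1], baseline_mm)
        return tx, ty, twist

    print(device_motion((0.0, -1.0), (0.0, 1.0)))  # pure twist, no translation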

 

Bau, Olivier, Poupyrev, Ivan, Israr, Ali and Harrison, Chris (2010): TeslaTouch: electrovibration for touch surfaces. In: Proceedings of the 2010 ACM Symposium on User Interface Software and Technology 2010. pp. 283-292.

We present a new technology for enhancing touch interfaces with tactile feedback. The proposed technology is based on the electrovibration principle, does not use any moving parts and provides a wide range of tactile feedback sensations to fingers moving across a touch surface. When combined with an interactive display and touch input, it enables the design of a wide variety of interfaces that allow the user to feel virtual elements through touch. We present the principles of operation and an implementation of the technology. We also report the results of three controlled psychophysical experiments and a subjective user evaluation that describe and characterize users' perception of this technology. We conclude with an exploration of the design space of tactile touch screens using two comparable setups, one based on electrovibration and another on mechanical vibrotactile actuation.

© All rights reserved Bau et al. and/or their publisher

2009
 

Harrison, Chris and Hudson, Scott E. (2009): Providing dynamically changeable physical buttons on a visual display. In: Proceedings of ACM CHI 2009 Conference on Human Factors in Computing Systems 2009. pp. 299-308.

Physical buttons have the unique ability to provide low-attention and vision-free interactions through their intuitive tactile cues. Unfortunately, the physicality of these interfaces makes them static, limiting the number and types of user interfaces they can support. On the other hand, touch screen technologies provide the ultimate interface flexibility, but offer no inherent tactile qualities. In this paper, we describe a technique that seeks to occupy the space between these two extremes -- offering some of the flexibility of touch screens, while retaining the beneficial tactile properties of physical interfaces. The outcome of our investigations is a visual display that contains deformable areas, able to produce physical buttons and other interface elements. These tactile features can be dynamically brought into and out of the interface, and otherwise manipulated under program control. The surfaces we describe provide the full dynamics of a visual display (through rear projection) as well as allowing for multitouch input (through an infrared lighting and camera setup behind the display). To illustrate the tactile capabilities of the surfaces, we describe a number of variations we uncovered in our exploration and prototyping. These go beyond simple on/off actuation and can be combined to provide a range of different possible tactile expressions. A preliminary user study indicates that our dynamic buttons perform much like physical buttons in tactile search tasks.

© All rights reserved Harrison and Hudson and/or ACM Press

 

Harrison, Chris, Lim, Brian Y., Shick, Aubrey and Hudson, Scott E. (2009): Where to locate wearable displays?: reaction time performance of visual alerts from tip to toe. In: Proceedings of ACM CHI 2009 Conference on Human Factors in Computing Systems 2009. pp. 941-944.

Advances in electronics have brought the promise of wearable computers to near reality. Such systems can offer a highly personal and mobile information and communication infrastructure. Previous research has investigated where wearable computers can be located on the human body -- critical for successful development and acceptance. However, for a location to be truly useful, it needs to not only be accessible for interaction, socially acceptable, comfortable and sufficiently stable for electronics, but also effective at conveying information. In this paper, we describe the results from a study that evaluated reaction time performance to visual stimuli at seven different body locations. Results indicate that there are numerous and statistically significant differences in the reaction time performance characteristics of these locations. We believe our findings can be used to inform the design and placement of future wearable computing applications and systems.

© All rights reserved Harrison et al. and/or ACM Press

 

Harrison, Chris and Hudson, Scott E. (2009): Texture displays: a passive approach to tactile presentation. In: Proceedings of ACM CHI 2009 Conference on Human Factors in Computing Systems 2009. pp. 2261-2264.

In this paper, we consider a passive approach to tactile presentation based on changing the surface textures of objects that might naturally be handled by a user. This may allow devices and other objects to convey small amounts of information in very unobtrusive ways and with little attention demand. This paper considers several possible uses for this style of display and explores implementation issues. We conclude with results from our user study, which indicate that users can detect upwards of four textural states accurately with even simple materials.

© All rights reserved Harrison and Hudson and/or ACM Press

 

Amento, Brian, Harrison, Chris, Nathan, Mukesh and Terveen, Loren (2009): Chapter XII - Asynchronous Communication: Fostering Social Interaction with CollaboraTV. In: Geerts, David (ed.). "Social Interactive Television: Immersive Shared Experiences and Perspectives". Hershey, PA, USA: pp. 204-224

 

 

Bartindale, Tom and Harrison, Chris (2009): Stacks on the surface: resolving physical order using fiducial markers with structured transparency. In: Proceedings of the 2009 ACM International Conference on Interactive Tabletops and Surfaces 2009. pp. 57-60.

We present a method for identifying the order of stacked items on interactive surfaces. This is achieved using conventional, passive fiducial markers, which in addition to reflective regions, also incorporate structured areas of transparency. This allows particular orderings to appear as unique marker patterns. We discuss how such markers are encoded and fabricated, and include relevant mathematics. To motivate our approach, we comment on various scenarios where stacking could be especially useful. We conclude with details from our proof-of-concept implementation, built on Microsoft Surface.

© All rights reserved Bartindale and Harrison and/or their publisher
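
The key property is that each marker's transparent windows let parts of the markers beneath show through, so every stacking order yields a distinct composite pattern at the camera. A toy model with markers as tuples of opaque-black (0), opaque-white (1), and transparent (None) cells:

    def composite(stack):
        # Top-to-bottom stack -> what the camera sees: per cell, the topmost
        # non-transparent value.
        return tuple(next((v for v in cell if v is not None), None)
                     for cell in zip(*stack))

    A = (0, 0, None, None)     # toy 4-cell markers; None = transparent window
    B = (1, None, 1, None)
    orders = {("A", "B"): (A, B), ("B", "A"): (B, A)}
    lookup = {composite(stack): order for order, stack in orders.items()}
    print(lookup[composite((A, B))])   # -> ('A', 'B'): A is on top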

 

Hudson, Scott E., Harrison, Chris, Harrison, Beverly L. and LaMarca, Anthony (2009): Whack gestures: inexact and inattentive interaction with mobile devices. In: Proceedings of the 4th International Conference on Tangible and Embedded Interaction 2009. pp. 109-112.

We introduce Whack Gestures, an inexact and inattentive interaction technique. This approach seeks to provide a simple means to interact with devices with minimal attention from the user -- in particular, without the use of fine motor skills or detailed visual attention (requirements found in nearly all conventional interaction techniques). For mobile devices, this could enable interaction without "getting it out," grasping, or even glancing at the device. This class of techniques is suitable for a small number of simple but common interactions that could be carried out in an extremely lightweight fashion without disrupting other activities. With Whack Gestures, users can interact by striking a device with the open palm or heel of the hand. We briefly discuss the development and use of a preliminary version of this technique and show that implementations with high accuracy and a low false positive rate are feasible.

© All rights reserved Hudson et al. and/or their publisher

 

Harrison, Chris and Hudson, Scott E. (2009): Abracadabra: wireless, high-precision, and unpowered finger input for very small mobile devices. In: Proceedings of the ACM Symposium on User Interface Software and Technology 2009. pp. 121-124.

We present Abracadabra, a magnetically driven input technique that offers users wireless, unpowered, high fidelity finger input for mobile devices with very small screens. By extending the input area to many times the size of the device's screen, our approach is able to offer a high C-D gain, enabling fine motor control. Additionally, screen occlusion can be reduced by moving interaction off of the display and into unused space around the device. We discuss several example applications as a proof of concept. Finally, results from our user study indicate radial targets as small as 16 degrees can achieve greater than 92% selection accuracy, outperforming comparable radial, touch-based finger input.

© All rights reserved Harrison and Hudson and/or their publisher
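
Selecting a radial target amounts to quantizing the finger's bearing, read from the magnetometer, into angular sectors. A sketch using the 16-degree sectors from the study (sensor calibration and axis alignment glossed over):

    import math

    def radial_target(mag_x, mag_y, sector_deg=16.0):
        # Bearing of the magnet-tipped finger around the device, quantized
        # into sectors of sector_deg; returns the selected sector index.
        angle = math.degrees(math.atan2(mag_y, mag_x)) % 360.0
        return int(angle // sector_deg)

    print(radial_target(1.0, 0.0))   # 0 degrees  -> sector 0
    print(radial_target(0.0, 1.0))   # 90 degrees -> sector 5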

2008
 

Harrison, Chris and Dey, Anind K. (2008): Lean and zoom: proximity-aware user interface and content magnification. In: Proceedings of ACM CHI 2008 Conference on Human Factors in Computing Systems April 5-10, 2008. pp. 507-510.

The size and resolution of computer displays has increased dramatically, allowing more information than ever to be rendered on-screen. However, items can now be so small or screens so cluttered that users need to lean forward to properly examine them. This behavior may be detrimental to a user's posture and eyesight. Our Lean and Zoom system detects a user's proximity to the display using a camera and magnifies the on-screen content proportionally. This alleviates dramatic leaning and makes items more readable. Results from a user study indicate people find the technique natural and intuitive. Most participants found on-screen content easier to read, and believed the technique would improve both their performance and comfort.

© All rights reserved Harrison and Dey and/or ACM Press
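
Magnification proportional to proximity can be driven by the apparent size of the user's face, which grows as they lean toward the camera. A sketch of the mapping alone, with face detection left to any off-the-shelf detector and the constants chosen for illustration:

    def zoom_factor(face_width_px, baseline_width_px=120.0, gain=1.5, max_zoom=3.0):
        # Zoom is 1.0 at the user's normal posture (baseline face width) and
        # rises in proportion to how far they have leaned in.
        lean = max(0.0, face_width_px / baseline_width_px - 1.0)
        return min(max_zoom, 1.0 + gain * lean)

    print(zoom_factor(120))   # sitting normally -> 1.0
    print(zoom_factor(180))   # leaning well in  -> 1.75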

 

Harrison, Chris and Hudson, Scott E. (2008): Scratch input: creating large, inexpensive, unpowered and mobile finger input surfaces. In: Cousins, Steve B. and Beaudouin-Lafon, Michel (eds.) Proceedings of the 21st Annual ACM Symposium on User Interface Software and Technology October 19-22, 2008, Monterey, CA, USA. pp. 205-208.

 

Harrison, Chris and Hudson, Scott E. (2008): Lightweight material detection for placement-aware mobile computing. In: Cousins, Steve B. and Beaudouin-Lafon, Michel (eds.) Proceedings of the 21st Annual ACM Symposium on User Interface Software and Technology October 19-22, 2008, Monterey, CA, USA. pp. 279-282.

 

Harrison, Chris and Hudson, Scott E. (2008): Pseudo-3D Video Conferencing with a Generic Webcam. In: IEEE International Symposium on Multimedia ISM December 15-17, 2008, Berkeley, CA, USA. pp. 236-241.

When conversing with someone via video conference, you are provided with a virtual window into their space. However, this currently remains both flat and fixed, limiting its immersiveness. Previous research efforts have explored the use of 3D in telecommunication, and show that the additional realism can enrich the video conference experience. However, existing systems require complex sensor and camera setups that make them infeasible for widespread adoption. We present a method for producing a pseudo-3D experience using only a single generic webcam at each end. This means nearly any computer currently able to video conference can use our technique, making it readily adoptable. Although it uses comparatively simple techniques, the 3D result is convincing.

© All rights reserved Harrison and Hudson and/or IEEE

 

Harrison, Chris, Amento, Brian and Stead, Larry (2008): iEPG: an ego-centric electronic program guide and recommendation interface. In: Darnell, Michael J., Masthoff, Judith, Panabaker, Sheri, Sullivan, Marc and Lugmayr, Artur (eds.) UXTV 2008 - Proceeding of the 1st International Conference on Designing Interactive User Experiences for TV and Video October 22-24, 2008, Silicon Valley, California, USA. pp. 23-26.

 

Nathan, Mukesh, Harrison, Chris, Yarosh, Svetlana, Terveen, Loren, Stead, Larry and Amento, Brian (2008): CollaboraTV: making television viewing social again. In: Darnell, Michael J., Masthoff, Judith, Panabaker, Sheri, Sullivan, Marc and Lugmayr, Artur (eds.) UXTV 2008 - Proceeding of the 1st International Conference on Designing Interactive User Experiences for TV and Video October 22-24, 2008, Silicon Valley, California, USA. pp. 85-94.

2007
 

Harrison, Chris, Amento, Brian, Kuznetsov, Stacey and Bell, Robert (2007): Rethinking the progress bar. In: Proceedings of the ACM Symposium on User Interface Software and Technology October 7-10, 2007, Newport, Rhode Island, USA. pp. 115-118.

Progress bars are prevalent in modern user interfaces. Typically, a linear function is employed such that the progress of the bar is directly proportional to how much work has been completed. However, numerous factors cause progress bars to proceed at non-linear rates. Additionally, humans perceive time in a non-linear way. This paper explores the impact of various progress bar behaviors on user perception of process duration. The results are used to suggest several design considerations that can make progress bars appear faster and ultimately improve users' computing experience.

© All rights reserved Harrison et al. and/or ACM Press
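
One way to apply such findings: report true completion internally, but draw the bar through a pacing function that accelerates toward the end, a pacing this line of work suggests can appear faster. The power curve below is an illustrative choice, not the paper's exact function:

    def displayed_progress(x, power=1.5):
        # Map true completion x in [0, 1] to the fraction of the bar drawn;
        # power > 1 holds progress back early and accelerates near the end.
        return x ** power

    for x in (0.25, 0.5, 0.75, 1.0):
        print(f"actual {x:4.0%} -> shown {displayed_progress(x):4.0%}")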

 
 


 
 
 
 


Page Information

Page maintainer: The Editorial Team
URL: http://www.interaction-design.org/references/authors/chris_harrison.html
