Publication statistics

Publication period: 2000-2011
Publication count: 9
Number of co-authors: 23


Number of publications with 3 favourite co-authors:

Mary Czerwinski:
Kori Inkpen:
Cha Zhang:



Productive colleagues

Zhengyou Zhang's 3 most productive colleagues, by number of publications:

Mary Czerwinski: 80
Kori Inkpen: 70
Anoop Gupta: 30


Zhengyou Zhang


Publications by Zhengyou Zhang (bibliography)


Junuzovic, Sasa, Inkpen, Kori, Hegde, Rajesh, Zhang, Zhengyou, Tang, John and Brooks, Christopher (2011): What did I miss?: in-meeting review using multimodal accelerated instant replay (AIR) conferencing. In: Proceedings of ACM CHI 2011 Conference on Human Factors in Computing Systems 2011. pp. 513-522.

People sometimes miss small parts of meetings and need to quickly catch up without disrupting the rest of the meeting. We developed an Accelerated Instant Replay (AIR) Conferencing system for videoconferencing that enables users to catch up on missed content while the meeting is ongoing. AIR can replay parts of the conference using four different modalities: audio, video, conversation transcript, and shared workspace. We performed two studies to evaluate the system. The first study explored the benefit of AIR catch-up during a live meeting. The results showed that when the full videoconference was reviewed (i.e., all four modalities) at an accelerated rate, users were able to correctly recall a similar amount of information as when listening live. To better understand the benefit of full review, a follow-up study more closely examined the benefits of each of the individual modalities. The results show that users (a) preferred using audio along with any other modality to using audio alone, (b) were most confident and performed best when audio was reviewed with all other modalities, (c) compared to audio-only, had better recall of facts and explanations when reviewing audio together with the shared workspace and transcript modalities, respectively, and (d) performed similarly with audio-only and audio with video review.

© All rights reserved Junuzovic et al. and/or their publisher


Junuzovic, Sasa, Inkpen, Kori, Hegde, Rajesh and Zhang, Zhengyou (2011): Towards ideal window layouts for multi-party, gaze-aware desktop videoconferencing. In: Proceedings of the 2011 Conference on Graphics Interface 2011. pp. 119-126.

In high-end desktop videoconferencing systems, several windows compete for screen space, particularly when users also share an application. Ideally, the layout of these windows should satisfy both (a) layout guidelines for establishing a rich communication channel and (b) user preferences for window layouts. This paper presents an exploration of user preferences and their interplay with previously established window layout guidelines. Based on results from two user studies, we have created five recommendations for user-preferred window layouts in high-end desktop videoconferencing systems. Both designers and end-users can use these recommendations to set up "ideal" layouts, that is, layouts that satisfy both user preferences and existing layout guidelines. For instance, we have developed an application that uses the recommendations to guide users towards ideal layouts during a videoconference.

© All rights reserved Junuzovic et al. and/or their publisher


Inkpen, Kori, Hegde, Rajesh, Czerwinski, Mary and Zhang, Zhengyou (2010): Exploring spatialized audio & video for distributed conversations. In: Proceedings of ACM CSCW10 Conference on Computer-Supported Cooperative Work 2010. pp. 95-98.

Previous work has demonstrated the benefits of spatial audio conferencing over monophonic when listening to a group conversation. In this paper we examined three-way distributed conversations while varying the presence of spatial video and audio. Our results demonstrate significant benefits to adding spatialized video to an audio conference. Specifically, users perceived that the conversations were of higher quality, they were more engaged, and they were better able to keep track of the conversation. In contrast, no significant benefits were found when mono audio was replaced by spatialized audio. The results of this work are important in that they provide strong evidence for continued exploration of spatialized video, and also suggest that the benefits of spatialized audio may have less of an impact when video is also spatialized.

© All rights reserved Inkpen et al. and/or their publisher


Zhao, Ji, Liu, Hujia, Zhang, Chunhui and Zhang, Zhengyou (2010): MobileSurface: interaction in the air for mobile computing. In: Proceedings of the 2010 ACM Symposium on User Interface Software and Technology 2010. pp. 459-460.

We describe a virtual interactive surface technology based on a projector-camera system connected to a mobile device. This system, named MobileSurface, can project images onto any free surface and enables interaction in the air within the projection area. The projector scans a laser beam rapidly across the projection area to produce a stable image at 60 fps. Camera-projector synchronization is used to capture the image of a designated scanning line, so the system can project what is perceived as a stable image onto the display surface while simultaneously working as a structured-light 3D scanning system.

© All rights reserved Zhao et al. and/or their publisher


Junuzovic, Sasa, Hegde, Rajesh, Zhang, Zhengyou, Chou, Philip A., Liu, Zicheng and Zhang, Cha (2008): Requirements and recommendations for an enhanced meeting viewing experience. In: El-Saddik, Abdulmotaleb, Vuong, Son, Griwodz, Carsten, Bimbo, Alberto Del, Candan, K. Selcuk and Jaimes, Alejandro (eds.) Proceedings of the 16th International Conference on Multimedia 2008 October 26-31, 2008, Vancouver, British Columbia, Canada. pp. 539-548.


Liu, Zicheng, Zhang, Zhengyou and Shan, Ying (2004): Image-Based Surface Detail Transfer. In IEEE Computer Graphics and Applications, 24 (3) pp. 30-35.


Cutler, Ross, Rui, Yong, Gupta, Anoop, Cadiz, Jonathan J., Tashev, Ivan, He, Li-wei, Colburn, Alex, Zhang, Zhengyou, Liu, Zicheng and Silverberg, Steve (2002): Distributed meetings: a meeting capture and broadcasting system. In: ACM Multimedia 2002 2002. pp. 503-512.


Liu, Zicheng, Zhang, Zhengyou, Jacobs, Chuck and Cohen, Michael F. (2001): Rapid modeling of animated faces from video. In Journal of Visualization and Computer Animation, 12 (4) pp. 227-240.


Liu, Zicheng, Zhang, Zhengyou, Jacobs, Chuck and Cohen, Michael F. (2000): Rapid modeling of animated faces from video images. In: ACM Multimedia 2000 2000. pp. 475-476.


Page Information

Page maintainer: The Editorial Team