Publication statistics

Publication period: 2011-2012
Publication count: 4
Number of co-authors: 10


Number of publications with 3 favourite co-authors:

Daniel Kluver:
Tien T. Nguyen:
Shilad Sen:



Productive colleagues

Michael Ekstrand's 3 most productive colleagues in number of publications:

John Riedl: 61
Tovi Grossman: 44
Wei Li: 23





Michael Ekstrand


Publications by Michael Ekstrand (bibliography)


Kluver, Daniel, Nguyen, Tien T., Ekstrand, Michael, Sen, Shilad and Riedl, John (2012): How many bits per rating? In: Proceedings of the 2012 ACM Conference on Recommender Systems 2012. pp. 99-106.

Most recommender systems assume user ratings accurately represent user preferences. However, prior research shows that user ratings are imperfect and noisy. Moreover, this noise limits the measurable predictive power of any recommender system. We propose an information theoretic framework for quantifying the preference information contained in ratings and predictions. We computationally explore the properties of our model and apply our framework to estimate the efficiency of different rating scales for real world datasets. We then estimate how the amount of information predictions give to users is related to the scale ratings are collected on. Our findings suggest a tradeoff in rating scale granularity: while previous research indicates that coarse scales (such as thumbs up / thumbs down) take less time, we find that ratings with these scales provide less predictive value to users. We introduce a new measure, preference bits per second, to quantitatively reconcile this tradeoff.

© All rights reserved Kluver et al. and/or ACM Press
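The core quantity behind the abstract above — the information content of a rating scale — can be illustrated with Shannon entropy. This is a minimal sketch, not the paper's actual model or datasets; the rating distributions below are made-up examples, and the paper's "preference bits per second" further divides such a measure by the time a rating takes to enter.

```python
import math

def entropy_bits(probabilities):
    """Shannon entropy in bits of a discrete distribution."""
    return -sum(p * math.log2(p) for p in probabilities if p > 0)

# Hypothetical rating distributions (illustrative, not from the paper):
# a binary thumbs-up/down scale vs. a five-star scale.
binary = [0.7, 0.3]
five_star = [0.05, 0.10, 0.25, 0.35, 0.25]

binary_bits = entropy_bits(binary)
five_star_bits = entropy_bits(five_star)

# The coarse scale carries fewer bits per rating; the finer scale more,
# at the cost of taking longer to enter.
print(f"binary scale:    {binary_bits:.3f} bits/rating")
print(f"five-star scale: {five_star_bits:.3f} bits/rating")
```

The tradeoff the abstract describes falls out directly: a two-option scale can never exceed 1 bit per rating, while a five-option scale can carry up to log2(5) ≈ 2.32 bits.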


Ekstrand, Michael and Riedl, John (2012): When recommenders fail: predicting recommender failure for algorithm selection and combination. In: Proceedings of the 2012 ACM Conference on Recommender Systems 2012. pp. 233-236.

Hybrid recommender systems -- systems using multiple algorithms together to improve recommendation quality -- have been well-known for many years and have shown good performance in recent demonstrations such as the Netflix Prize. Modern hybridization techniques, such as feature-weighted linear stacking, take advantage of the hypothesis that the relative performance of recommenders varies by circumstance and attempt to optimize each item score to maximize the strengths of the component recommenders. Less attention, however, has been paid to understanding what these strengths and failure modes are. Understanding what causes particular recommenders to fail will facilitate better selection of the component recommenders for future hybrid systems and a better understanding of how individual recommender personalities can be harnessed to improve the recommender user experience. We present an analysis of the predictions made by several well-known recommender algorithms on the MovieLens 10M data set, showing that for many cases in which one algorithm fails, there is another that will correctly predict the rating.

© All rights reserved Ekstrand and Riedl and/or ACM Press
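The key observation of the abstract above — that when one algorithm fails, another often succeeds — can be sketched as a per-rating error comparison. This is a toy illustration under assumed data, not the paper's analysis; the predictor names, ratings, and failure threshold below are all hypothetical.

```python
def abs_errors(predictions, actuals):
    """Per-rating absolute prediction error."""
    return [abs(p - a) for p, a in zip(predictions, actuals)]

# Hypothetical held-out ratings and the outputs of two recommenders.
actual      = [4.0, 2.0, 5.0, 3.0, 1.0]
item_item   = [3.5, 4.0, 4.5, 3.0, 2.5]   # e.g. item-item CF (made-up values)
matrix_fact = [4.5, 2.5, 3.0, 3.5, 1.0]   # e.g. matrix factorization (made-up)

FAIL = 1.0  # treat an absolute error of 1 star or more as a failure
e1 = abs_errors(item_item, actual)
e2 = abs_errors(matrix_fact, actual)

# Count the cases where the first recommender fails but the second succeeds:
rescued = sum(1 for a, b in zip(e1, e2) if a >= FAIL and b < FAIL)
print(f"{rescued} of {len(actual)} item-item failures rescued by matrix factorization")
```

A hybrid that could predict which component will fail on a given rating could route those cases to the other algorithm, which is the motivation for the failure-prediction analysis the paper presents.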


Willemsen, Martijn, Bollen, Dirk and Ekstrand, Michael (2011): UCERSTI 2: second workshop on user-centric evaluation of recommender systems and their interfaces. In: Proceedings of the 2011 ACM Conference on Recommender Systems 2011. pp. 395-396.


Ekstrand, Michael, Li, Wei, Grossman, Tovi, Matejka, Justin and Fitzmaurice, George (2011): Searching for software learning resources using application context. In: Proceedings of the 2011 ACM Symposium on User Interface Software and Technology 2011. pp. 195-204.

Users of complex software applications frequently need to consult documentation, tutorials, and support resources to learn how to use the software and further their understanding of its capabilities. Existing online help systems provide limited context awareness through "what's this?" and similar techniques. We examine the possibility of making more use of the user's current context in a particular application to provide useful help resources. We provide an analysis and taxonomy of various aspects of application context and how they may be used in retrieving software help artifacts with web browsers, present the design of a context-aware augmented web search system, and describe a prototype implementation and initial user study of this system. We conclude with a discussion of open issues and an agenda for further research.

© All rights reserved Ekstrand et al. and/or ACM Press
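The general idea behind the context-aware search system described above can be sketched as query augmentation: terms drawn from the current application state are appended to the user's help query before it is sent to a web search engine. This is a minimal sketch of the concept only — the function, context fields, and example values below are hypothetical, not the paper's system or taxonomy.

```python
def augment_query(query, context):
    """Append application-context terms (app name, active tool) to a help query."""
    terms = [context.get("application", ""), context.get("active_tool", "")]
    extra = " ".join(t for t in terms if t)
    return f"{query} {extra}".strip()

# Hypothetical context captured from a running application:
ctx = {"application": "AutoCAD", "active_tool": "extrude"}
print(augment_query("how do I make a 3d shape", ctx))
```

A real system would weigh which context facets to include and how to rank results, which is what the paper's taxonomy of application context addresses.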


Page Information

Page maintainer: The Editorial Team