Publication statistics

Publication period: 2008-2010
Publication count: 4
Number of co-authors: 15



Co-authors

Number of publications with Ian Oberst's 3 most frequent co-authors:

Simone Stumpf: 3
Weng-Keen Wong: 3
Margaret M. Burnett: 3
Productive colleagues

Ian Oberst's 3 most productive colleagues, by number of publications:

Margaret M. Burnett: 103
Andrew J. Ko: 29
Yann Riche: 12



Ian Oberst

 

Publications by Ian Oberst (bibliography)

2010
 

Kulesza, Todd, Stumpf, Simone, Burnett, Margaret M., Wong, Weng-Keen, Riche, Yann, Moore, Travis, Oberst, Ian, Shinsel, Amber and McIntosh, Kevin (2010): Explanatory Debugging: Supporting End-User Debugging of Machine-Learned Programs. In: Hundhausen, Christopher D., Pietriga, Emmanuel, Diaz, Paloma and Rosson, Mary Beth (eds.) IEEE Symposium on Visual Languages and Human-Centric Computing, VL/HCC 2010, 21-25 September 2010, Leganés-Madrid, Spain. pp. 41-48.

2009
 

Kulesza, Todd, Wong, Weng-Keen, Stumpf, Simone, Perona, Stephen, White, Rachel, Burnett, Margaret M., Oberst, Ian and Ko, Andrew J. (2009): Fixing the program my computer learned: barriers for end users, challenges for the machine. In: Proceedings of the 2009 International Conference on Intelligent User Interfaces. pp. 187-196.

The results of machine learning from user behavior can be thought of as a program, and like all programs, it may need to be debugged. Providing ways for the user to debug it matters, because without the ability to fix errors, users may find that the learned program's errors are too damaging for them to trust such programs. We present a new approach to enable end users to debug a learned program. We then use an early prototype of our new approach to conduct a formative study to determine where and when debugging issues arise, both in general and separately for males and females. The results suggest opportunities to make machine-learned programs more effective tools.

© All rights reserved Kulesza et al. and/or their publisher
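
The core idea in this abstract, treating a learned classifier as a program that the end user can debug by correcting its output, can be pictured with a small sketch. The following toy example is illustrative only and is not the prototype studied in the paper; the classifier choice (a naive Bayes bag-of-words model via scikit-learn) and all messages and labels are invented for the example.

from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB

# Hypothetical starting point: a tiny message classifier learned from user data.
texts = ["meeting at noon", "cheap pills now", "project schedule", "win money fast"]
labels = ["work", "spam", "work", "spam"]

vectorizer = CountVectorizer()
model = MultinomialNB()
model.fit(vectorizer.fit_transform(texts), labels)

message = "schedule your cheap meeting"
print(model.predict(vectorizer.transform([message]))[0])  # the learned program's current output

# The user disagrees and supplies the correct label; folding the corrected
# example back into the training data and refitting is one simple way to
# let an end user "fix" the learned program.
texts.append(message)
labels.append("work")
model.fit(vectorizer.fit_transform(texts), labels)
print(model.predict(vectorizer.transform([message]))[0])  # reflects the user's fix

Retraining on corrected examples is only one possible strategy; the paper's interest is in where such debugging breaks down for end users and what the machine needs to communicate for the fix to succeed.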

 

Chow, Richard, Oberst, Ian and Staddon, Jessica (2009): Sanitization's slippery slope: the design and study of a text revision assistant. In: Proceedings of the 2009 Symposium on Usable Privacy and Security. p. 13.

For privacy reasons, sensitive content may be revised before it is released. The revision often consists of redaction, that is, the "blacking out" of sensitive words and phrases. Redaction has the side effect of reducing the utility of the content, often so much that the content is no longer useful. Consequently, government agencies and others are increasingly exploring the revision of sensitive content as an alternative to redaction that preserves more content utility. We call this practice sanitization. In a sanitized document, names might be replaced with pseudonyms and sensitive attributes might be replaced with hypernyms. Sanitization adds to redaction the challenge of determining what words and phrases reduce the sensitivity of content. We have designed and developed a tool to assist users in sanitizing sensitive content. Our tool leverages the Web to automatically identify sensitive words and phrases and quickly evaluates revisions for sensitivity. The tool, however, does not identify all sensitive terms and mistakenly marks some innocuous terms as sensitive. This is unavoidable because of the difficulty of the underlying inference problem and is the main reason we have designed a sanitization assistant as opposed to a fully automated tool. We have conducted a small study of our tool in which users sanitize biographies of celebrities to hide the celebrity's identity, both with and without our tool. The user study suggests that while the tool is very valuable in encouraging users to preserve content utility and can preserve privacy, its usefulness and apparent authoritativeness may lead to a "slippery slope" in which users neglect their own judgment in favor of the tool's.

© All rights reserved Chow et al. and/or ACM Press
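
The replacement step the abstract describes, names to pseudonyms and sensitive attributes to hypernyms, is easy to picture with a small sketch. This is illustrative only: the term-to-substitute dictionary below is invented, and the paper's assistant identified sensitive terms automatically by leveraging the Web, which this toy version does not attempt.

import re

# Hypothetical sanitization dictionary: sensitive term -> safer substitute.
REPLACEMENTS = {
    "Marie Curie": "Person A",    # name -> pseudonym
    "physicist": "scientist",     # sensitive attribute -> hypernym
    "Warsaw": "a European city",  # specific detail -> generalization
}

def sanitize(text: str) -> str:
    """Replace each known sensitive term with its designated substitute."""
    for term, substitute in REPLACEMENTS.items():
        text = re.sub(re.escape(term), substitute, text, flags=re.IGNORECASE)
    return text

print(sanitize("Marie Curie was a physicist born in Warsaw."))
# -> "Person A was a scientist born in a European city."

As the abstract notes, the hard problem is not applying substitutions but deciding which terms are sensitive and whether the revised text remains sensitive, which is why the authors built an assistant rather than a fully automated tool.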

2008
 

Stumpf, Simone, Sullivan, Erin, Fitzhenry, Erin, Oberst, Ian, Wong, Weng-Keen and Burnett, Margaret M. (2008): Integrating rich user feedback into intelligent user interfaces. In: Proceedings of the 2008 International Conference on Intelligent User Interfaces. pp. 50-59.

The potential for machine learning systems to improve via a mutually beneficial exchange of information with users has yet to be explored in much detail. Previously, we found that users were willing to provide a generous amount of rich feedback to machine learning systems, and that some types of this rich feedback seem promising for assimilation by machine learning algorithms. Following up on those findings, we ran an experiment to assess the viability of incorporating real-time keyword-based feedback in initial training phases when data is limited. We found that rich feedback improved accuracy, but an initial unstable period often caused large fluctuations in classifier behavior. Participants were able to give feedback by relying heavily on system communication in order to respond to changes. The results show that in order to benefit from the user's knowledge, machine learning systems must be able to absorb keyword-based rich feedback in a graceful manner and provide clear explanations of their predictions.

© All rights reserved Stumpf et al. and/or ACM Press
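
One way to picture keyword-based rich feedback is a classifier whose per-keyword weights the user can nudge directly. The sketch below is illustrative only, not the algorithm from the paper: the linear keyword scorer, the boost value, and all data are invented for the example.

from collections import defaultdict

# Hypothetical learned keyword weights per class (e.g., from a small initial training set).
weights = {
    "work": defaultdict(float, {"meeting": 1.0, "schedule": 0.8}),
    "personal": defaultdict(float, {"dinner": 1.0, "weekend": 0.7}),
}

def classify(text: str) -> str:
    """Score each class by summing the weights of the words the text contains."""
    words = text.lower().split()
    return max(weights, key=lambda c: sum(weights[c][w] for w in words))

def keyword_feedback(word: str, label: str, boost: float = 1.5) -> None:
    """Assimilate rich feedback: the user asserts that `word` is evidence for `label`."""
    weights[label][word] += boost

print(classify("dinner meeting schedule"))  # -> "work"
keyword_feedback("dinner", "personal")      # user: "dinner" signals personal mail
keyword_feedback("dinner", "personal")
print(classify("dinner meeting schedule"))  # -> "personal" after the boosts

Note how two boosts flip the prediction outright: with little training data, a single keyword's weight can dominate the score, which echoes the unstable early period and large fluctuations the study observed.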

 


Page Information

Page maintainer: The Editorial Team
URL: http://www.interaction-design.org/references/authors/ian_oberst.html