Publication statistics

Pub. period: 1992-2011
Pub. count: 29
Number of co-authors: 46



Co-authors

Number of publications with his 3 most frequent co-authors:

Robert C. Miller: 9
David Huynh: 7
Dennis Quan: 5

 

 

Productive colleagues

David R. Karger's 3 most productive colleagues, by number of publications:

Mark S. Ackerman: 67
Robert C. Miller: 42
Jaime Teevan: 30
 
 
 

David R. Karger

Has also published under the name of:
"David Karger"

Personal Homepage:
http://people.csail.mit.edu/karger

Current place of employment:
MIT

Professor of Electrical Engineering and Computer Science and a member of the Computer Science and Artificial Intelligence Laboratory (CSAIL).


Publications by David R. Karger (bibliography)

2011

Marcus, Adam, Bernstein, Michael S., Badar, Osama, Karger, David R., Madden, Samuel and Miller, Robert C. (2011): Twitinfo: aggregating and visualizing microblogs for event exploration. In: Proceedings of ACM CHI 2011 Conference on Human Factors in Computing Systems 2011. pp. 227-236.

Microblogs are a tremendous repository of user-generated content about world events. However, for people trying to understand events by querying services like Twitter, a chronological log of posts makes it very difficult to get a detailed understanding of an event. In this paper, we present TwitInfo, a system for visualizing and summarizing events on Twitter. TwitInfo allows users to browse a large collection of tweets using a timeline-based display that highlights peaks of high tweet activity. A novel streaming algorithm automatically discovers these peaks and labels them meaningfully using text from the tweets. Users can drill down to subevents, and explore further via geolocation, sentiment, and popular URLs. We contribute a recall-normalized aggregate sentiment visualization to produce more honest sentiment overviews. An evaluation of the system revealed that users were able to reconstruct meaningful summaries of events in a small amount of time. An interview with a Pulitzer Prize-winning journalist suggested that the system would be especially useful for understanding a long-running event and for identifying eyewitnesses. Quantitatively, our system can identify 80-100% of manually labeled peaks, facilitating a relatively complete view of each event studied.

© All rights reserved Marcus et al. and/or their publisher
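
The streaming peak detection the abstract mentions can be illustrated with a minimal Python sketch, assuming a detector in the spirit of the paper's description: an exponentially weighted moving mean of per-bin tweet counts plus a mean-deviation band, adapted from TCP's timeout estimator. The parameter values and names here are illustrative, not TwitInfo's actual implementation.

```python
# Sketch of streaming peak detection over per-minute tweet counts: an EWMA
# of the counts plus a deviation band; a bin counts as a peak when it rises
# above mean + tau * meandev. Parameters are illustrative.

def detect_peaks(bin_counts, alpha=0.125, tau=2.0):
    """Yield indices of bins whose count spikes above the running mean band."""
    mean = float(bin_counts[0])   # running estimate of typical bin volume
    meandev = 0.0                 # running estimate of typical deviation
    for i, count in enumerate(bin_counts[1:], start=1):
        if meandev and count > mean + tau * meandev:
            yield i               # bin i is part of a peak
        diff = abs(count - mean)
        meandev = alpha * diff + (1 - alpha) * meandev  # update deviation
        mean = alpha * count + (1 - alpha) * mean       # update mean

print(list(detect_peaks([10, 11, 10, 90, 85, 12, 9])))  # -> [3, 4]
```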

 

Bernstein, Michael S., Brandt, Joel, Miller, Robert C. and Karger, David R. (2011): Crowds in two seconds: enabling realtime crowd-powered interfaces. In: Proceedings of the 2011 ACM Symposium on User Interface Software and Technology 2011. pp. 33-42.

Interactive systems must respond to user input within seconds. Therefore, to create realtime crowd-powered interfaces, we need to dramatically lower crowd latency. In this paper, we introduce the use of synchronous crowds for on-demand, realtime crowdsourcing. With synchronous crowds, systems can dynamically adapt tasks by leveraging the fact that workers are present at the same time. We develop techniques that recruit synchronous crowds in two seconds and use them to execute complex search tasks in ten seconds. The first technique, the retainer model, pays workers a small wage to wait and respond quickly when asked. We offer empirically derived guidelines for a retainer system that is low-cost and produces on-demand crowds in two seconds. Our second technique, rapid refinement, observes early signs of agreement in synchronous crowds and dynamically narrows the search space to focus on promising directions. This approach produces results that, on average, are of more reliable quality and arrive faster than the fastest crowd member working alone. To explore benefits and limitations of these techniques for interaction, we present three applications: Adrenaline, a crowd-powered camera where workers quickly filter a short video down to the best single moment for a photo; and Puppeteer and A|B, which examine creative generation tasks, communication with workers, and low-latency voting.

© All rights reserved Bernstein et al. and/or ACM Press
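
A rough sketch of the "rapid refinement" idea, under the assumption that the host application can poll worker progress and re-scope the task on the fly; `get_worker_regions` and `narrow_to` are hypothetical callbacks, not the paper's API.

```python
# Toy sketch of rapid refinement: poll the regions that in-progress workers
# are currently exploring, and once enough of them agree on one region,
# narrow the whole task to that region.
from collections import Counter

def rapid_refine(get_worker_regions, narrow_to, threshold=0.5, rounds=5):
    for _ in range(rounds):
        regions = get_worker_regions()  # e.g. coarse bins of a video timeline
        if not regions:
            continue
        region, votes = Counter(regions).most_common(1)[0]
        if votes / len(regions) >= threshold:  # early agreement detected
            narrow_to(region)                  # refocus all workers on it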

2010
 

Bernstein, Michael S., Marcus, Adam, Karger, David R. and Miller, Robert C. (2010): Enhancing directed content sharing on the web. In: Proceedings of ACM CHI 2010 Conference on Human Factors in Computing Systems 2010. pp. 971-980.

To find interesting, personally relevant web content, people rely on friends and colleagues to pass links along as they encounter them. In this paper, we study and augment link-sharing via e-mail, the most popular means of sharing web content today. Armed with survey data indicating that active sharers of novel web content are often those who actively seek it out, we developed FeedMe, a plug-in for Google Reader that makes directed sharing of content a more salient part of the user experience. FeedMe recommends friends who may be interested in seeing content that the user is viewing, provides information on what the recipient has seen and how many emails they have received recently, and gives recipients the opportunity to provide lightweight feedback when they appreciate shared content. FeedMe introduces a novel design space within mixed-initiative social recommenders: friends who know the user voluntarily vet the material on the user's behalf. We performed a two-week field experiment (N=60) and found that FeedMe made it easier and more enjoyable to share content that recipients appreciated and would not have found otherwise.

© All rights reserved Bernstein et al. and/or their publisher
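
A hedged sketch of the recommendation step the abstract describes: score each potential recipient by similarity between the content being read and content previously shared with them. The use of TF-IDF and the data layout are assumptions for illustration, not FeedMe's actual model.

```python
# Hypothetical recipient recommendation: rank friends by textual similarity
# between the post being read and posts previously shared with them.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

def recommend_recipients(post_text, history, top_n=3):
    """history: dict mapping friend name -> list of texts shared with them."""
    friends = list(history)
    profiles = [" ".join(texts) for texts in history.values()]
    tfidf = TfidfVectorizer().fit(profiles + [post_text])
    sims = cosine_similarity(tfidf.transform([post_text]),
                             tfidf.transform(profiles))[0]
    return sorted(zip(friends, sims), key=lambda p: -p[1])[:top_n]
```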

 

Kleek, Max Van, Moore, Brennan, Xu, Christina and Karger, David R. (2010): Eyebrowse: real-time web activity sharing and visualization. In: Proceedings of ACM CHI 2010 Conference on Human Factors in Computing Systems 2010. pp. 3643-3648.

In this paper, we explore the potential for letting users automatically track and selectively publish their web browsing activities in real time on the Web. We developed a system, Eyebrowse, with three goals: first, to provide a means for individuals to better understand how they spend time on the web through visualizations and statistics; second, to foster social discovery and awareness through real-time web activity sharing; and finally, to build a large public corpus of web browsing trails using this method. We gathered user impressions of Eyebrowse, including perceived usefulness, feelings of self-exposure, and privacy concerns, to identify ways to improve the system.

© All rights reserved Kleek et al. and/or their publisher

 

Kleek, Max Van, Moore, Brennan, Karger, David R., André, Paul and schraefel, m.c. (2010): Atomate it! end-user context-sensitive automation using heterogeneous information sources on the web. In: Proceedings of the 2010 International Conference on the World Wide Web 2010. pp. 951-960.

The transition of personal information management (PIM) tools off the desktop to the Web presents an opportunity to augment these tools with capabilities provided by the wealth of real-time information readily available. In this paper, we describe a next-generation personal information assistance engine that lets end-users delegate to it various simple context- and activity-reactive tasks and reminders. Our system, Atomate, treats RSS/ATOM feeds from social networking and life-tracking sites as sensor streams, integrating information from such feeds into a simple unified RDF world model representing people, places and things and their time-varying states and activities. Combined with other information sources on the web, including the user's online calendar, web-based e-mail client, news feeds and messaging services, Atomate can be made to automatically carry out a variety of simple tasks for the user, ranging from context-aware filtering and messaging, to sharing and social coordination actions. Atomate's open architecture and world model easily accommodate new information sources and actions via the addition of feeds and web services. To make routine use of the system easy for non-programmers, Atomate provides a constrained-input natural language interface (CNLI) for behavior specification, and a direct-manipulation interface for inspecting and updating its world model.

© All rights reserved Kleek et al. and/or their publisher
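
A toy sketch of the reactive pattern the abstract describes: feed entries update a simple world model (here a plain dict standing in for Atomate's RDF model), and user-defined condition/action rules fire when their conditions hold. The feed contents and rule are invented for illustration.

```python
# Feed entries as "sensor readings" driving condition/action rules.
import feedparser  # a common RSS/Atom parsing library

world = {}  # entity -> latest observed state

rules = [
    # "When Alice is at the office, remind me to return her book."
    (lambda w: w.get("alice", {}).get("location") == "the office",
     lambda w: print("Reminder: return Alice's book")),
]

def poll(feed_url):
    for entry in feedparser.parse(feed_url).entries:
        # Toy "sensing": interpret titles like "Alice is at the office".
        person, _, place = entry.title.partition(" is at ")
        if place:
            world[person.lower()] = {"location": place.strip()}
    for condition, action in rules:
        if condition(world):
            action(world)
```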

 

Bernstein, Michael S., Little, Greg, Miller, Robert C., Hartmann, Björn, Ackerman, Mark S., Karger, David R., Crowell, David and Panovich, Katrina (2010): Soylent: a word processor with a crowd inside. In: Proceedings of the 2010 ACM Symposium on User Interface Software and Technology 2010. pp. 313-322.

This paper introduces architectural and interaction patterns for integrating crowdsourced human contributions directly into user interfaces. We focus on writing and editing, complex endeavors that span many levels of conceptual and pragmatic activity. Authoring tools offer help with pragmatics, but for higher-level help, writers commonly turn to other people. We thus present Soylent, a word processing interface that enables writers to call on Mechanical Turk workers to shorten, proofread, and otherwise edit parts of their documents on demand. To improve worker quality, we introduce the Find-Fix-Verify crowd programming pattern, which splits tasks into a series of generation and review stages. Evaluation studies demonstrate the feasibility of crowdsourced editing and investigate questions of reliability, cost, wait time, and work time for edits.

© All rights reserved Bernstein et al. and/or their publisher
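
The Find-Fix-Verify pattern splits a crowd task into independent find, fix, and verify stages; a minimal sketch follows. The `ask_crowd` helper and the 20% agreement cutoff are illustrative assumptions, not Soylent's exact interface or parameters.

```python
# Minimal Find-Fix-Verify: independent workers find problem spans, a second
# group proposes fixes, a third group votes on the fixes.
from collections import Counter

def find_fix_verify(paragraph, ask_crowd, n_find=10, n_fix=5, n_verify=5):
    # Find: keep only spans independently flagged by >= 20% of workers.
    flags = [span for _ in range(n_find)
             for span in ask_crowd("find", paragraph)]
    spans = [s for s, c in Counter(flags).items() if c >= 0.2 * n_find]
    patches = {}
    for span in spans:
        # Fix: a separate set of workers rewrites each flagged span.
        fixes = [ask_crowd("fix", span) for _ in range(n_fix)]
        # Verify: another set votes; keep the most-approved rewrite.
        votes = Counter(ask_crowd("verify", span, fixes)
                        for _ in range(n_verify))
        patches[span] = votes.most_common(1)[0][0]
    return patches  # span -> winning rewrite to splice into the paragraph
```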

2009
 

Kleek, Max G. Van, Bernstein, Michael, Panovich, Katrina, Vargas, Gregory G., Karger, David R. and Schraefel, M. C. (2009): Note to self: examining personal information keeping in a lightweight note-taking tool. In: Proceedings of ACM CHI 2009 Conference on Human Factors in Computing Systems 2009. pp. 1477-1480.

This paper describes a longitudinal field experiment in personal note-taking that examines how people capture and use information in short textual notes. Study participants used our tool, a simple browser-based textual note-taking utility, to capture personal information over the course of ten days. We examined the information they kept in notes using the tool, how this information was expressed, and aspects of note creation, editing, deletion, and search. We found that notes were recorded extremely quickly and tersely, combined information of multiple types, and were rarely revised or deleted. The results of the study demonstrate the need for a tool such as ours to support the rapid capture and retrieval of short notes-to-self, and afford insights into how users' actual note-keeping tendencies could be used to better support their needs in future PIM tools.

© All rights reserved Kleek et al. and/or ACM Press

 

Karger, David R., Ostler, Scott and Lee, Ryan (2009): The web page as a WYSIWYG end-user customizable database-backed information management application. In: Proceedings of the ACM Symposium on User Interface Software and Technology 2009. pp. 257-260.

Dido is an application (and application development environment) in a web page. It is a single web page containing rich structured data, an AJAXy interactive visualizer/editor for that data, and a "metaeditor" for WYSIWYG editing of the visualizer/editor. Historically, users have been limited to the data schemas, visualizations, and interactions offered by a small number of heavyweight applications. In contrast, Dido encourages and enables the end user to edit (not code) in his or her web browser a distinct ephemeral interaction "wrapper" for each data collection that is specifically suited to its intended use. Dido's active document metaphor has been explored before but we show how, given today's web infrastructure, it can be deployed in a small self-contained HTML document without touching a web client or server.

© All rights reserved Karger et al. and/or their publisher

2008
 

Miller, Robert C., Chou, Victoria H., Bernstein, Michael S., Little, Greg, Kleek, Max Van, Karger, David R. and Schraefel, M. C. (2008): Inky: a sloppy command line for the web with rich visual feedback. In: Cousins, Steve B. and Beaudouin-Lafon, Michel (eds.) Proceedings of the 21st Annual ACM Symposium on User Interface Software and Technology October 19-22, 2008, Monterey, CA, USA. pp. 131-140.

2007
 

Huynh, David, Karger, David R. and Miller, Robert C. (2007): Exhibit: lightweight structured data publishing. In: Proceedings of the 2007 International Conference on the World Wide Web 2007. pp. 737-746.

The early Web was hailed for giving individuals the same publishing power as large content providers. But over time, large content providers learned to exploit the structure in their data, leveraging databases and server-side technologies to provide rich browsing and visualization. Individual authors fall behind once more: neither old-fashioned static pages nor domain-specific publishing frameworks supporting limited customization can match custom database-backed web applications. In this paper, we propose Exhibit, a lightweight framework for publishing structured data on standard web servers that requires no installation, database administration, or programming. Exhibit lets authors with relatively limited skills (those same enthusiasts who could write HTML pages for the early Web) publish richly interactive pages that exploit the structure of their data for better browsing and visualization. Such structured publishing in turn makes that data more useful to all of its consumers: individual readers get more powerful interfaces, mashup creators can more easily repurpose the data, and Semantic Web enthusiasts can feed the data to the nascent Semantic Web.

© All rights reserved Huynh et al. and/or International World Wide Web Conference Committee

 

Shen, Yuan Kui and Karger, David R. (2007): U-REST: an unsupervised record extraction system. In: Proceedings of the 2007 International Conference on the World Wide Web 2007. pp. 1347-1348.

In this paper, we describe a system that can extract record structures from web pages with no direct human supervision. Records are commonly occurring HTML-embedded data tuples that describe people, offered courses, products, company profiles, etc. We present a simplified framework for studying the problem of unsupervised record extraction, one which separates the algorithms from the feature engineering. Our system, U-REST, formalizes an approach to the problem of unsupervised record extraction using a simple two-stage machine learning framework. The first stage involves clustering, where structurally similar regions are discovered, and the second stage involves classification, where discovered groupings (clusters of regions) are ranked by their likelihood of being records. In our work, we describe and summarize the results of an extensive survey of features for both stages. We conclude by comparing U-REST to related systems. The results of our empirical evaluation show encouraging improvements in extraction accuracy.

© All rights reserved Shen and Karger and/or International World Wide Web Conference Committee
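
A simplified sketch of the two-stage framework the abstract outlines: cluster structurally similar page regions, then rank clusters by record-likeness. The `featurize` and `record_score` functions are caller-supplied placeholders, not U-REST's engineered features or learned model.

```python
# Stage 1: cluster region feature vectors; stage 2: rank clusters as records.
import numpy as np
from sklearn.cluster import AgglomerativeClustering

def extract_records(regions, featurize, record_score, distance_threshold=0.5):
    X = np.array([featurize(r) for r in regions])  # e.g. tag-path histograms
    labels = AgglomerativeClustering(
        n_clusters=None, distance_threshold=distance_threshold).fit_predict(X)
    clusters = {}
    for region, label in zip(regions, labels):
        clusters.setdefault(label, []).append(region)
    # Rank candidate clusters by their likelihood of being records.
    return sorted(clusters.values(), key=record_score, reverse=True)
```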

 

Kleek, Max Van, Bernstein, Michael, Karger, David R. and Schraefel, M. C. (2007): Gui -- phooey!: the case for text input. In: Proceedings of the ACM Symposium on User Interface Software and Technology October 7-10, 2007, Newport, Rhode Island, USA. pp. 193-202.

Information cannot be found if it is not recorded. Existing rich graphical application approaches interfere with user input in many ways, forcing complex interactions to enter simple information, requiring complex cognition to decide where the data should be stored, and limiting the kind of information that can be entered to what can fit into specific applications' data models. Freeform text entry suffers from none of these limitations but produces data that is hard to retrieve or visualize. We describe the design and implementation of Jourknow, a system that aims to bridge these two modalities, supporting lightweight text entry and weightless context capture that produces enough structure to support rich interactive presentation and retrieval of the arbitrary information entered.

© All rights reserved Kleek et al. and/or ACM Press

2006
 

Chen, Harr and Karger, David R. (2006): Less is more: probabilistic models for retrieving fewer relevant documents. In: Proceedings of the 29th Annual International ACM SIGIR Conference on Research and Development in Information Retrieval 2006. pp. 429-436.

Traditionally, information retrieval systems aim to maximize the number of relevant documents returned to a user within some window of the top. For that goal, the probability ranking principle, which ranks documents in decreasing order of probability of relevance, is provably optimal. However, there are many scenarios in which that ranking does not optimize for the user's information need. One example is when the user would be satisfied with some limited number of relevant documents, rather than needing all relevant documents. We show that in such a scenario, an attempt to return many relevant documents can actually reduce the chances of finding any relevant documents.

© All rights reserved Chen and Karger and/or ACM Press
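
A worked toy model of the argument: when the user needs only one relevant document, rank to maximize the probability that at least one of the top k is relevant. With a latent query intent, hedging across intents can beat ranking purely by probability of relevance. The numbers below are invented to show the effect; this is not the paper's model.

```python
# Greedy ranking that maximizes P(at least one of the selected docs is
# relevant) under a mixture over latent query intents z.

def prod(xs):
    out = 1.0
    for x in xs:
        out *= x
    return out

def p_at_least_one(selected, intents):
    """intents: list of (P(z), {doc: P(relevant | z)}) pairs."""
    return sum(pz * (1 - prod(1 - prel.get(d, 0.0) for d in selected))
               for pz, prel in intents)

def greedy_rank(docs, intents, k):
    chosen = []
    for _ in range(k):  # add whichever doc most improves the objective
        best = max((d for d in docs if d not in chosen),
                   key=lambda d: p_at_least_one(chosen + [d], intents))
        chosen.append(best)
    return chosen

# Intent A (60%): docs a1, a2 are relevant. Intent B (40%): doc b1 is.
intents = [(0.6, {"a1": 0.9, "a2": 0.9}), (0.4, {"b1": 0.9})]
print(greedy_rank(["a1", "a2", "b1"], intents, k=2))  # ['a1', 'b1'], not ['a1', 'a2']
```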

 

Huynh, David, Miller, Robert C. and Karger, David R. (2006): Enabling web browsers to augment web sites' filtering and sorting functionalities. In: Proceedings of the ACM Symposium on User Interface Software and Technology 2006. pp. 125-134.

Existing augmentations of web pages are mostly small cosmetic changes (e.g., removing ads) and minor addition of third-party content (e.g., product prices from competing sites). None leverages the structured data presented in web pages. This paper describes Sifter, a web browser extension that can augment a well-structured web site with advanced filtering and sorting functionality. These added features work inside the site's own pages, preserving the site's presentational style and the user's context. Sifter contains an algorithm that scrapes structured data out of well-structured web pages while usually requiring no user intervention. We tested Sifter on real web sites and real users and found that people could use Sifter to perform sophisticated queries and high-level analyses on sizable data collections on the Web. We propose that web sites can be similarly augmented with other sophisticated data-centric functionality, giving users new benefits over the existing Web.

© All rights reserved Huynh et al. and/or ACM Press
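
One way the scraping step could plausibly work (a toy sketch under simple assumptions, not Sifter's actual algorithm): find the container whose direct children share the most common tag/class signature and treat those children as the page's data items, which can then be sorted and filtered in place.

```python
# Detect the dominant repeated structure in a page with BeautifulSoup.
from collections import Counter
from bs4 import BeautifulSoup

def find_item_list(html):
    soup = BeautifulSoup(html, "html.parser")
    best, best_count = None, 0
    for parent in soup.find_all(True):
        children = parent.find_all(True, recursive=False)
        sigs = Counter((c.name, tuple(c.get("class", []))) for c in children)
        if not sigs:
            continue
        sig, count = sigs.most_common(1)[0]
        if count > best_count:
            best_count = count
            best = [c for c in children
                    if (c.name, tuple(c.get("class", []))) == sig]
    return best  # repeated elements, one per data item, ready to sort/filter
```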

 

 

Sinha, Vineet, Karger, David R. and Miller, Rob (2006): Relo: Helping Users Manage Context during Interactive Exploratory Visualization of Large Codebases. In: VL-HCC 2006 - IEEE Symposium on Visual Languages and Human-Centric Computing 4-8 September, 2006, Brighton, UK. pp. 187-194.

 

Karger, David R. and Jones, William (2006): Data unification in personal information management. In Communications of the ACM, 49 (1) pp. 77-82.

2004
 

Teevan, Jaime, Alvarado, Christine, Ackerman, Mark S. and Karger, David R. (2004): The perfect search engine is not enough: a study of orienteering behavior in directed search. In: Dykstra-Erickson, Elizabeth and Tscheligi, Manfred (eds.) Proceedings of ACM CHI 2004 Conference on Human Factors in Computing Systems April 24-29, 2004, Vienna, Austria. pp. 415-422.

This paper presents a modified diary study that investigated how people performed personally motivated searches in their email, in their files, and on the Web. Although earlier studies of directed search focused on keyword search, most of the search behavior we observed did not involve keyword search. Instead of jumping directly to their information target using keywords, our participants navigated to their target with small, local steps using their contextual knowledge as a guide, even when they knew exactly what they were looking for in advance. This stepping behavior was especially common for participants with unstructured information organization. The observed advantages of searching by taking small steps include that it allowed users to specify less of their information need and provided a context in which to understand their results. We discuss the implications of such advantages for the design of personal information management tools.

© All rights reserved Teevan et al. and/or ACM Press

2003
 

Quan, Dennis, Huynh, David, Karger, David R. and Miller, Robert C. (2003): User interface continuations. In: Proceedings of the 16th Annual ACM Symposium on User Interface Software and Technology November 2-5, 2003, Vancouver, Canada. pp. 145-148.

Dialog boxes that collect parameters for commands often create ephemeral, unnatural interruptions of a program's normal execution flow, encouraging the user to complete the dialog box as quickly as possible in order for the program to process that command. In this paper we examine the idea of turning the act of collecting parameters from a user into a first class object called a user interface continuation. Programs can create user interface continuations by specifying what information is to be collected from the user and supplying a callback (i.e., a continuation) to be notified with the collected information. A partially completed user interface continuation can be saved as a new command, much as currying and partially evaluating a function with a set of parameters produces a new function. Furthermore, user interface continuations, like other continuation-passing paradigms, can be used to allow program execution to continue uninterrupted while the user determines a command's parameters at his or her leisure.

© All rights reserved Quan et al. and/or ACM Press
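
The currying analogy in the abstract is directly executable; here is a minimal Python sketch of a user interface continuation object, with all names invented for illustration.

```python
# Parameter collection as a first-class object holding a callback (the
# continuation); partially supplying values yields a new, curried command.

class UICont:
    def __init__(self, params, callback, filled=None):
        self.params, self.callback = params, callback
        self.filled = dict(filled or {})

    def supply(self, **values):
        """Fill in some parameters, returning a more specialized command."""
        return UICont(self.params, self.callback, {**self.filled, **values})

    def ready(self):
        return all(p in self.filled for p in self.params)

    def run(self):
        assert self.ready(), "still waiting on user input"
        return self.callback(**self.filled)  # resume the suspended program

send = UICont(["to", "subject", "body"],
              lambda to, subject, body: print(f"mail {to}: {subject}"))
send_to_boss = send.supply(to="boss@example.com")  # a saved, curried command
send_to_boss.supply(subject="Status", body="All good.").run()
```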

 

Huynh, David, Karger, David R., Quan, Dennis and Sinha, Vineet (2003): Haystack: a platform for creating, organizing and visualizing semistructured information. In: Johnson, Lewis and Andre, Elisabeth (eds.) International Conference on Intelligent User Interfaces 2003 January 12-15, 2003, Miami, Florida, USA. p. 323.

 

Karger, David R., Katz, Boris, Lin, Jimmy and Quan, Dennis (2003): Sticky notes for the semantic web. In: Johnson, Lewis and Andre, Elisabeth (eds.) International Conference on Intelligent User Interfaces 2003 January 12-15, 2003, Miami, Florida, USA. pp. 254-256.

Computer-based annotation is increasing in popularity as a mechanism for revising documents and sharing comments over the Internet. One reason behind this surge is that viewpoints, summaries, and notes written by others are often helpful to readers. In particular, these types of annotations can help users locate or recall relevant documents. We believe that this model can be applied to the problem of retrieval on the Semantic Web. In this paper, we propose a generalized annotation environment that supports richer forms of description such as natural language. We discuss how RDF can be used to model annotations and the connections between annotations and the documents they describe. Furthermore, we explore the idea of a question answering interface that allows retrieval based both on the text of the annotations and the annotations' associated metadata. Finally, we speculate on how these features could be pervasively integrated into an information management environment, making Semantic Web annotation a first-class player in terms of document management and retrieval.

© All rights reserved Karger et al. and/or ACM Press
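
A hedged sketch of the RDF modeling the abstract suggests, using the rdflib library; the annotation vocabulary below is hypothetical, not the paper's schema.

```python
# Model an annotation as RDF triples, then retrieve by text plus metadata.
from rdflib import Graph, Literal, Namespace, URIRef
from rdflib.namespace import RDF

ANN = Namespace("http://example.org/annotation#")
g = Graph()

note = URIRef("http://example.org/notes/42")
g.add((note, RDF.type, ANN.Annotation))
g.add((note, ANN.annotates, URIRef("http://example.org/docs/report.html")))
g.add((note, ANN.body, Literal("Summarizes Q3 results; see section 2.")))

# Retrieval over annotation text plus metadata, per the abstract's QA idea:
for subj in g.subjects(RDF.type, ANN.Annotation):
    if "Q3" in str(g.value(subj, ANN.body)):
        print(g.value(subj, ANN.annotates))
```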

 

Teevan, Jaime and Karger, David R. (2003): Empirical development of an exponential probabilistic model for text retrieval: using textual analysis to build a better model. In: Proceedings of the 26th Annual International ACM SIGIR Conference on Research and Development in Information Retrieval 2003. pp. 18-25.

Much work in information retrieval focuses on using a model of documents and queries to derive retrieval algorithms. Model based development is a useful alternative to heuristic development because in a model the assumptions are explicit and can be examined and refined independent of the particular retrieval algorithm. We explore the explicit assumptions underlying the naive framework by performing computational analysis of actual corpora and queries to devise a generative document model that closely matches text. Our thesis is that a model so developed will be more accurate than existing models, and thus more useful in retrieval, as well as other applications. We test this by learning from a corpus the best document model. We find the learned model better predicts the existence of text data and has improved performance on certain IR tasks.

© All rights reserved Teevan and Karger and/or ACM Press

 

Lin, Jimmy, Quan, Dennis, Sinha, Vineet, Bakshi, Karun, Huynh, David, Katz, Boris and Karger, David R. (2003): What Makes a Good Answer? The Role of Context in Question Answering. In: Proceedings of IFIP INTERACT03: Human-Computer Interaction 2003, Zurich, Switzerland. p. 25.

 

Quan, Dennis, Bakshi, Karun, Huynh, David and Karger, David R. (2003): User Interfaces for Supporting Multiple Categorization. In: Proceedings of IFIP INTERACT03: Human-Computer Interaction 2003, Zurich, Switzerland. p. 228.

 

Balakrishnan, Hari, Kaashoek, M. Frans, Karger, David R., Morris, Robert and Stoica, Ion (2003): Looking up data in P2P systems. In Communications of the ACM, 46 (2) pp. 43-48.

2002
 

Balazinska, Magdalena, Balakrishnan, Hari and Karger, David R. (2002): INS/Twine: A Scalable Peer-to-Peer Architecture for Intentional Resource Discovery. In: Mattern, Friedemann and Naghshineh, Mahmoud (eds.) Pervasive 2002 - Pervasive Computing, First International Conference August 26-28, 2002, Zürich, Switzerland. pp. 195-210.

1999
 

Adar, Eytan, Karger, David R. and Stein, Lynn Andrea (1999): Haystack: Per-User Information Environments. In: Proceedings of the 1999 ACM CIKM International Conference on Information and Knowledge Management November 2-6, 1999, Kansas City, Missouri, USA. pp. 413-422.

1993
 

Cutting, Douglass R., Karger, David R. and Pedersen, Jan O. (1993): Constant Interaction-Time Scatter/Gather Browsing of Very Large Document Collections. In: Proceedings of the Sixteenth Annual International ACM SIGIR Conference on Research and Development in Information Retrieval 1993. pp. 126-134.

The Scatter/Gather document browsing method uses fast document clustering to produce table-of-contents-like outlines of large document collections. Previous work [1] developed linear-time document clustering algorithms to establish the feasibility of this method over moderately large collections. However, even linear-time algorithms are too slow to support interactive browsing of very large collections such as Tipster, the DARPA standard text retrieval evaluation collection. We present a scheme that supports constant interaction-time Scatter/Gather of arbitrarily large collections after near-linear time preprocessing. This involves the construction of a cluster hierarchy. A modification of Scatter/Gather employing this scheme, and an example of its use over the Tipster collection are presented.

© All rights reserved Cutting et al. and/or ACM Press

1992
 

Cutting, Douglass R., Karger, David R., Pedersen, Jan O. and Tukey, John W. (1992): Scatter/Gather: A Cluster-Based Approach to Browsing Large Document Collections. In: Proceedings of the Fifteenth Annual International ACM SIGIR Conference on Research and Development in Information Retrieval 1992. pp. 318-329.

Document clustering has not been well received as an information retrieval tool. Objections to its use fall into two main categories: first, that clustering is too slow for large corpora (with running time often quadratic in the number of documents); and second, that clustering does not appreciably improve retrieval. We argue that these problems arise only when clustering is used in an attempt to improve conventional search techniques. However, looking at clustering as an information access tool in its own right obviates these objections, and provides a powerful new access paradigm. We present a document browsing technique that employs document clustering as its primary operation. We also present fast (linear time) clustering algorithms which support this interactive browsing paradigm.

© All rights reserved Cutting et al. and/or ACM Press
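
A minimal sketch of the Scatter/Gather interaction loop. K-means over TF-IDF vectors stands in for the papers' purpose-built linear-time clustering algorithms; only the browsing paradigm is illustrated here.

```python
# Scatter: cluster the working set. Gather: merge user-selected clusters
# into a new, smaller working set, then scatter again.
from sklearn.cluster import KMeans
from sklearn.feature_extraction.text import TfidfVectorizer

def scatter(docs, k=5):
    """Cluster the current working set into k topical groups."""
    X = TfidfVectorizer(stop_words="english").fit_transform(docs)
    labels = KMeans(n_clusters=k, n_init=10).fit_predict(X)
    groups = [[] for _ in range(k)]
    for doc, label in zip(docs, labels):
        groups[label].append(doc)
    return groups

def gather(groups, selected):
    """Merge the clusters the user selected into the next working set."""
    return [doc for i in selected for doc in groups[i]]

# One browsing iteration: scatter the corpus, the user picks clusters 0 and
# 2, gather them, then scatter again over the smaller set:
# groups = scatter(corpus); working = gather(groups, [0, 2]); scatter(working)
```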

 



Page Information

Page maintainer: The Editorial Team
URL: http://www.interaction-design.org/references/authors/david_r__karger.html
