Publication statistics

Pub. period: 2007-2012
Pub. count: 27
Number of co-authors: 31



Co-authors

Number of publications with 3 most frequent co-authors:

Robert E. Kraut: 11
Ed H. Chi: 5
Bongwon Suh: 5


Productive colleagues

Aniket Kittur's 3 most productive colleagues in number of publications:

Robert E. Kraut: 98
John Riedl: 61
Sara Kiesler: 59
 
 
 




Aniket Kittur

Personal Homepage: http://kittur.org/

I am currently an assistant professor in HCI at Carnegie Mellon. My research focuses on understanding and improving distributed and individual cognition. My research on distributed cognition examines how groups of people can collaborate to process information on a scale that exceeds individual cognitive capabilities. For example, I am investigating how large social collaborative knowledge systems such as Wikipedia and del.icio.us function and how they can be made more effective. I am also a collaborator on the Cognitive Atlas project, whose goal is to build a knowledge base of mental processes, tasks, and brain systems using principles of distributed cognition. My research in cognitive psychology seeks to understand the individual cognitive processes underlying categorization and memory, combining empirical studies with computational and statistical modeling. I am also involved in developing interfaces that improve the efficiency of storing and retrieving knowledge and that support visualization of, and interaction with, large knowledge spaces.

Before entering graduate school I spent four years in internet consulting, first as a senior programmer and then as a senior information architect. I continue to consult for Fortune 500 companies on difficult information and interaction problems. In addition, I do freelance graphic design, with a particular focus on logo design. I am also an avid photographer.

 

Publications by Aniket Kittur (bibliography)

2012
 

Zhu, Haiyi, Kraut, Robert E. and Kittur, Aniket (2012): Effectiveness of shared leadership in online communities. In: Proceedings of ACM CSCW12 Conference on Computer-Supported Cooperative Work 2012. pp. 407-416. Available online

Traditional research on leadership in online communities has consistently focused on the small set of people occupying leadership roles. In this paper, we use a model of shared leadership, which posits that leadership behaviors come from members at all levels, not simply from people in high-level leadership positions. Although every member can exhibit some leadership behavior, different types of leadership behavior performed by different types of leaders may not be equally effective. This paper investigates how distinct types of leadership behaviors (transactional, aversive, directive and person-focused) and the legitimacy of the people who deliver them (people in formal leadership positions or not) influence the contributions that other participants make in the context of Wikipedia. After using propensity score matching to control for potential pre-existing differences among those who were and were not targets of leadership behaviors, we found that 1) leadership behaviors performed by members at all levels significantly influenced other members' motivation; 2) transactional leadership and person-focused leadership were effective in motivating others to contribute more, whereas aversive leadership decreased other contributors' motivations; and 3) legitimate leaders were in general more influential than regular peer leaders. We discuss the theoretical and practical implications of our work.

© All rights reserved Zhu et al. and/or ACM Press

 

Rzeszotarski, Jeffrey and Kittur, Aniket (2012): Learning from history: predicting reverted work at the word level in wikipedia. In: Proceedings of ACM CSCW12 Conference on Computer-Supported Cooperative Work 2012. pp. 437-440. Available online

Wikipedia's remarkable success in aggregating millions of contributions can pose a challenge for current editors, whose hard work may be reverted unless they understand and follow established norms, policies, and decisions and avoid contentious or proscribed terms. We present a machine learning model for predicting whether a contribution will be reverted based on word level features. Unlike previous models relying on editor-level characteristics, our model can make accurate predictions based only on the words a contribution changes. A key advantage of the model is that it can provide feedback on not only whether a contribution is likely to be rejected, but also the particular words that are likely to be controversial, enabling new forms of intelligent interfaces and visualizations. We examine the performance of the model across a variety of Wikipedia articles.

© All rights reserved Rzeszotarski and Kittur and/or ACM Press

 

Zhu, Haiyi, Kraut, Robert E. and Kittur, Aniket (2012): Organizing without formal organization: group identification, goal setting and social modeling in directing online production. In: Proceedings of ACM CSCW12 Conference on Computer-Supported Cooperative Work 2012. pp. 935-944. Available online

A challenge for many online production communities is to direct their members to accomplish tasks that are important to the group, even when these tasks may not match individual members' interests. Here we investigate how combining group identification and direction setting can motivate volunteers in online communities to accomplish tasks important to the success of the group as a whole. We hypothesize that group identity, the perception of belonging to a group, triggers in-group favoritism; and direction setting (including explicit direction from group goals and implicit direction from role models) focuses people's group-oriented motivation towards the group's important tasks. We tested our hypotheses in the context of Wikipedia's Collaborations of the Week (COTW), a group goal setting mechanism and a social event within Wikiprojects. Results demonstrate that 1) publicizing important group goals via COTW can have a strong motivating influence on editors who have voluntarily identified themselves as group members compared to those who have not self-identified; 2) the effects of goals spill over to non-goal related tasks; and 3) editors exposed to group role models in COTW are more likely to perform similarly to the models on group-relevant citizenship behaviors. Finally, we discuss design and managerial implications based on our findings.

© All rights reserved Zhu et al. and/or ACM Press

 

Kittur, Aniket, Khamkar, Susheel, André, Paul and Kraut, Robert E. (2012): CrowdWeaver: visually managing complex crowd work. In: Proceedings of ACM CSCW12 Conference on Computer-Supported Cooperative Work 2012. pp. 1033-1036. Available online

Though toolkits exist to create complex crowdsourced workflows, there is limited support for management of those workflows. Managing crowd workers and tasks requires significant iteration and experimentation on task instructions, rewards, and flows. We present CrowdWeaver, a system to visually manage complex crowd work. The system supports the creation and reuse of crowdsourcing and computational tasks into integrated task flows, manages the flow of data between tasks, and allows tracking and notification of task progress, with support for real-time modification. We describe the system and demonstrate its utility through case studies and user feedback.

© All rights reserved Kittur et al. and/or ACM Press

 

Rzeszotarski, Jeffrey and Kittur, Aniket (2012): CrowdScape: interactively visualizing user behavior and output. In: Proceedings of the 2012 ACM Symposium on User Interface Software and Technology 2012. pp. 55-62. Available online

Crowdsourcing has become a powerful paradigm for accomplishing work quickly and at scale, but involves significant challenges in quality control. Researchers have developed algorithmic quality control approaches based on either worker outputs (such as gold standards or worker agreement) or worker behavior (such as task fingerprinting), but each approach has serious limitations, especially for complex or creative work. Human evaluation addresses these limitations but does not scale well with increasing numbers of workers. We present CrowdScape, a system that supports the human evaluation of complex crowd work through interactive visualization and mixed initiative machine learning. The system combines information about worker behavior with worker outputs, helping users to better understand and harness the crowd. We describe the system and discuss its utility through grounded case studies. We explore other contexts where CrowdScape's visualizations might be useful, such as in user studies.

© All rights reserved Rzeszotarski and Kittur and/or ACM Press

2011
 

Chau, Duen Horng, Kittur, Aniket, Hong, Jason I. and Faloutsos, Christos (2011): Apolo: making sense of large network data by combining rich user interaction and machine learning. In: Proceedings of ACM CHI 2011 Conference on Human Factors in Computing Systems 2011. pp. 167-176. Available online

Extracting useful knowledge from large network datasets has become a fundamental challenge in many domains, from scientific literature to social networks and the web. We introduce Apolo, a system that uses a mixed-initiative approach -- combining visualization, rich user interaction and machine learning -- to guide the user to incrementally and interactively explore large network data and make sense of it. Apolo engages the user in bottom-up sensemaking to gradually build up an understanding over time by starting small, rather than starting big and drilling down. Apolo also helps users find relevant information by specifying exemplars, and then using a machine learning method called Belief Propagation to infer which other nodes may be of interest. We evaluated Apolo with twelve participants in a between-subjects study, with the task being to find relevant new papers to update an existing survey paper. Using expert judges, participants using Apolo found significantly more relevant papers. Subjective feedback of Apolo was also very positive.

© All rights reserved Chau et al. and/or their publisher

 

Cranshaw, Justin and Kittur, Aniket (2011): The polymath project: lessons from a successful online collaboration in mathematics. In: Proceedings of ACM CHI 2011 Conference on Human Factors in Computing Systems 2011. pp. 1865-1874. Available online

Although science is becoming increasingly collaborative, there are remarkably few success stories of online collaborations between professional scientists that actually result in real discoveries. A notable exception is the Polymath Project, a group of mathematicians who collaborate online to solve open mathematics problems. We provide an in-depth descriptive history of Polymath, using data analysis and visualization to elucidate the principles that led to its success, and the difficulties that must be addressed before the project can be scaled up. We find that although a small percentage of users created most of the content, almost all users nevertheless contributed some content that was highly influential to the task at hand. We also find that leadership played an important role in the success of the project. Based on our analysis, we present a set of design suggestions for how future collaborative mathematics sites can encourage and foster newcomer participation.

© All rights reserved Cranshaw and Kittur and/or their publisher

 

Zhu, Haiyi, Kraut, Robert E., Wang, Yi-Chia and Kittur, Aniket (2011): Identifying shared leadership in Wikipedia. In: Proceedings of ACM CHI 2011 Conference on Human Factors in Computing Systems 2011. pp. 3431-3434. Available online

In this paper, we introduce a method to measure shared leadership in Wikipedia as a step in developing a new model of online leadership. We show that editors with varying degrees of engagement and from peripheral as well as central roles all act like leaders, but that core and peripheral editors show different profiles of leadership behavior. Specifically, we developed machine learning models to automatically identify four types of leadership behaviors from 4 million messages sent between Wikipedia editors. We found strong evidence of shared leadership in Wikipedia, with editors in peripheral roles producing a large proportion of leadership behaviors.

© All rights reserved Zhu et al. and/or their publisher

 

Bernstein, Michael, Chilton, Lydia, Hartmann, Bjorn, Kittur, Aniket and Miller, Robert C. (2011): Crowdsourcing and human computation: systems, studies and platforms. In: Proceedings of ACM CHI 2011 Conference on Human Factors in Computing Systems 2011. pp. 53-56. Available online

Crowdsourcing and human computation are transforming human-computer interaction, and CHI has led the way. The seminal publication in human computation was initially published in CHI in 2004 [1], and the first paper investigating Mechanical Turk as a user study platform has amassed over one hundred citations in two years [5]. However, we are just beginning to stake out a coherent research agenda for the field. This workshop will bring together researchers in the young field of crowdsourcing and human computation and produce three artifacts: a research agenda for the field, a vision for ideal crowdsourcing platforms, and a group-edited bibliography. These resources will be publicly disseminated on the web and evolved and maintained by the community.

© All rights reserved Bernstein et al. and/or their publisher

 

Kittur, Aniket, Smus, Boris and Kraut, Robert E. (2011): CrowdForge: crowdsourcing complex work. In: Proceedings of ACM CHI 2011 Conference on Human Factors in Computing Systems 2011. pp. 1801-1806. Available online

Micro-task markets such as Amazon's Mechanical Turk represent a new paradigm for accomplishing work, in which employers can tap into a large population of workers around the globe to accomplish tasks in a fraction of the time and money of more traditional methods. However, such markets typically support only simple, independent tasks, such as labeling an image or judging the relevance of a search result. Here we present a general purpose framework for micro-task markets that provides a scaffolding for more complex human computation tasks which require coordination among many individuals, such as writing an article.

© All rights reserved Kittur et al. and/or their publisher

 

Halfaker, Aaron, Song, Bryan, Stuart, D. Alex, Kittur, Aniket and Riedl, John (2011): NICE: social translucence through UI intervention. In: Proceedings of the 2011 International Symposium on Wikis and Open Collaboration 2011. pp. 101-104. Available online

Social production systems such as Wikipedia rely on attracting and motivating volunteer contributions to be successful. One strong demotivating factor can be when an editor's work is discarded, or "reverted", by others. In this paper we demonstrate evidence of this effect and design a novel interface aimed at improving communication between the reverting and reverted editors. We deployed the interface in a controlled experiment on the live Wikipedia site, and report on changes in the behavior of 487 contributors who were reverted by editors using our interface. Our results suggest that simple interface modifications (such as informing Wikipedians that the editor they are reverting is a newcomer) can have substantial positive effects in protecting against contribution loss in newcomers and improving the quality of work done by more experienced contributors.

© All rights reserved Halfaker et al. and/or ACM Press

 

Halfaker, Aaron, Kittur, Aniket and Riedl, John (2011): Don't bite the newbies: how reverts affect the quantity and quality of Wikipedia work. In: Proceedings of the 2011 International Symposium on Wikis and Open Collaboration 2011. pp. 163-172. Available online

Reverts are important to maintaining the quality of Wikipedia. They fix mistakes, repair vandalism, and help enforce policy. However, reverts can also be damaging, especially to the aspiring editor whose work they destroy. In this research we analyze 400,000 Wikipedia revisions to understand the effect that reverts had on editors. We seek to understand the extent to which they demotivate users, reducing the workforce of contributors, versus the extent to which they help users improve as encyclopedia editors. Overall we find that reverts are powerfully demotivating, but that their net influence is that more quality work is done in Wikipedia as a result of reverts than is lost by chasing editors away. However, we identify key conditions -- most specifically new editors being reverted by much more experienced editors -- under which reverts are particularly damaging. We propose that reducing the damage from reverts might be one effective path for Wikipedia to solve the newcomer retention problem.

© All rights reserved Halfaker et al. and/or ACM Press

 

Rzeszotarski, Jeffrey M. and Kittur, Aniket (2011): Instrumenting the crowd: using implicit behavioral measures to predict task performance. In: Proceedings of the 2011 ACM Symposium on User Interface Software and Technology 2011. pp. 13-22. Available online

Detecting and correcting low quality submissions in crowdsourcing tasks is an important challenge. Prior work has primarily focused on worker outcomes or reputation, using approaches such as agreement across workers or with a gold standard to evaluate quality. We propose an alternative and complementary technique that focuses on the way workers work rather than the products they produce. Our technique captures behavioral traces from online crowd workers and uses them to predict outcome measures such as quality, errors, and the likelihood of cheating. We evaluate the effectiveness of the approach across three contexts including classification, generation, and comprehension tasks. The results indicate that we can build predictive models of task performance based on behavioral traces alone, and that these models generalize to related tasks. Finally, we discuss limitations and extensions of the approach.

© All rights reserved Rzeszotarski and Kittur and/or ACM Press

 

Kittur, Aniket, Smus, Boris, Khamkar, Susheel and Kraut, Robert E. (2011): CrowdForge: crowdsourcing complex work. In: Proceedings of the 2011 ACM Symposium on User Interface Software and Technology 2011. pp. 43-52. Available online

Micro-task markets such as Amazon's Mechanical Turk represent a new paradigm for accomplishing work, in which employers can tap into a large population of workers around the globe to accomplish tasks in a fraction of the time and money of more traditional methods. However, such markets have been primarily used for simple, independent tasks, such as labeling an image or judging the relevance of a search result. Here we present a general purpose framework for accomplishing complex and interdependent tasks using micro-task markets. We describe our framework, a web-based prototype, and case studies on article writing, decision making, and science journalism that demonstrate the benefits and limitations of the approach.

© All rights reserved Kittur et al. and/or ACM Press

2010
 

Cranshaw, Justin, Toch, Eran, Hong, Jason, Kittur, Aniket and Sadeh, Norman (2010): Bridging the gap between physical location and online social networks. In: Proceedings of the 2010 International Conference on Ubiquitous Computing 2010. pp. 119-128. Available online

This paper examines the location traces of 489 users of a location sharing social network for relationships between the users' mobility patterns and structural properties of their underlying social network. We introduce a novel set of location-based features for analyzing the social context of a geographic region, including location entropy, which measures the diversity of unique visitors of a location. Using these features, we provide a model for predicting friendship between two users by analyzing their location trails. Our model achieves significant gains over simpler models based only on direct properties of the co-location histories, such as the number of co-locations. We also show a positive relationship between the entropy of the locations the user visits and the number of social ties that user has in the network. We discuss how the offline mobility of users can have implications for both researchers and designers of online social networks.

© All rights reserved Cranshaw et al. and/or their publisher
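The location-entropy feature described in the abstract above can be illustrated with a short sketch. The paper's exact formulation is not reproduced here; this assumes the standard definition of Shannon entropy applied to the distribution of a location's check-ins across users, and the function and variable names are illustrative, not taken from the paper.

```python
from collections import Counter
from math import log

def location_entropy(visitor_ids):
    """Shannon entropy of one location's check-ins across users.

    visitor_ids: one entry per check-in at this location, naming the
    user who made it. Higher entropy means a more diverse crowd.
    """
    counts = Counter(visitor_ids)              # user -> check-ins here
    total = sum(counts.values())
    return -sum((c / total) * log(c / total) for c in counts.values())

# A spot visited once each by four different users is more diverse
# than one dominated by a single regular visitor.
diverse = location_entropy(["u1", "u2", "u3", "u4"])
regular = location_entropy(["u1", "u1", "u1", "u2"])
```

On this definition a place visited by many distinct people (a cafe, a plaza) scores higher than one visited repeatedly by the same person (a private home), which matches the intuition of entropy as "diversity of unique visitors" that the abstract appeals to.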

 

Kittur, Aniket and Kraut, Robert E. (2010): Beyond Wikipedia: coordination and conflict in online production groups. In: Proceedings of ACM CSCW10 Conference on Computer-Supported Cooperative Work 2010. pp. 215-224. Available online

Online production groups have the potential to transform the way that knowledge is produced and disseminated. One of the most widely used forms of online production is the wiki, which has been used in domains ranging from science to education to enterprise. We examined the development of and interactions between coordination and conflict in a sample of 6811 wiki production groups. We investigated the influence of four coordination mechanisms: intra-article communication, inter-user communication, concentration of workgroup structure, and policy and procedures. We also examined the growth of conflict, finding the density of users in an information space to be a significant predictor. Finally, we analyzed the effectiveness of the four coordination mechanisms on managing conflict, finding differences in how each scaled to large numbers of contributors. Our results suggest that coordination mechanisms effective for managing conflict are not always the same as those effective for managing task quality, and that designers must take into account the social benefits of coordination mechanisms in addition to their production benefits.

© All rights reserved Kittur and Kraut and/or their publisher

 

Balakrishnan, Aruna D., Fussell, Susan R., Kiesler, Sara and Kittur, Aniket (2010): Pitfalls of information access with visualizations in remote collaborative analysis. In: Proceedings of ACM CSCW10 Conference on Computer-Supported Cooperative Work 2010. pp. 411-420. Available online

In a world of widespread information access, information can overwhelm collaborators, even with visualizations to help. We extend prior work to study the effect of shared information on collaboration. We analyzed the success and discussion process of remote pairs trying to identify a serial killer in multiple crime cases. Each partner had half of the evidence, or each partner had all the available evidence. Pairs also used one of three tools: spreadsheet only (control condition), unshared visualizations, or shared visualization. Visualizations improved analysis over the control condition but this improvement depended on how much evidence each partner had. When each partner possessed all the evidence with visualizations, discussion flagged and pairs showed evidence of more confirmation bias. They discussed fewer hypotheses and persisted on the wrong hypothesis. We discuss the possible reasons for this phenomenon and implications for design of remote collaboration systems to incorporate awareness of intermediate processes important to collaborative success.

© All rights reserved Balakrishnan et al. and/or their publisher

2009
 

Kittur, Aniket, Lee, Bryant and Kraut, Robert E. (2009): Coordination in collective intelligence: the role of team structure and task interdependence. In: Proceedings of ACM CHI 2009 Conference on Human Factors in Computing Systems 2009. pp. 1495-1504. Available online

The success of Wikipedia has demonstrated the power of peer production in knowledge building. However, unlike many other examples of collective intelligence, tasks in Wikipedia can be deeply interdependent and may incur high coordination costs among editors. Increasing the number of editors increases the resources available to the system, but it also raises the costs of coordination. This suggests that the dependencies of tasks in Wikipedia may determine whether they benefit from increasing the number of editors involved. Specifically, we hypothesize that adding editors may benefit low-coordination tasks but have negative consequences for tasks requiring a high degree of coordination. Furthermore, concentrating the work to reduce coordination dependencies should enable more efficient work by many editors. Analyses of both article ratings and article review comments provide support for both hypotheses. These results suggest ways to better harness the efforts of many editors in social collaborative systems involving high coordination tasks.

© All rights reserved Kittur et al. and/or ACM Press

 

Kittur, Aniket, Chi, Ed H. and Suh, Bongwon (2009): What's in Wikipedia?: mapping topics and conflict using socially annotated category structure. In: Proceedings of ACM CHI 2009 Conference on Human Factors in Computing Systems 2009. pp. 1509-1512. Available online

Wikipedia is an online encyclopedia which has undergone tremendous growth. However, this same growth has made it difficult to characterize its content and coverage. In this paper we develop measures to map Wikipedia using its socially annotated, hierarchical category structure. We introduce a mapping technique that takes advantage of socially-annotated hierarchical categories while dealing with the inconsistencies and noise inherent in the distributed way that they are generated. The technique is demonstrated through two applications: mapping the distribution of topics in Wikipedia and how they have changed over time; and mapping the degree of conflict found in each topic area. We also discuss the utility of the approach for other applications and datasets involving collaboratively annotated category hierarchies.

© All rights reserved Kittur et al. and/or ACM Press

 

Chau, Duen Horng, Kittur, Aniket, Faloutsos, Christos and Hong, Jason I. (2009): SHIFTR: a user-directed, link-based system for ad hoc sensemaking of large heterogeneous data collections. In: Proceedings of ACM CHI 2009 Conference on Human Factors in Computing Systems 2009. pp. 3535-3536. Available online

We present a novel method and prototype system to help users make sense of and reorganize large amounts of heterogeneous information. Our work is grounded in theories of categorization from cognitive psychology and is designed for ad hoc sensemaking; that is, supporting people's shifting goals and flexible mental representations of concepts. Shiftr adapts a carefully chosen Belief Propagation algorithm from large-scale graph mining to efficiently assist users in interactively clustering information of arbitrary types. The system functions effectively with few human-labeled examples, and supports the use of both positive and negative examples. We demonstrate Shiftr's utility through sensemaking scenarios, one of which uses the DBLP bibliography dataset, which contains more than 1.7 million author-paper relationships.

© All rights reserved Chau et al. and/or ACM Press

 

Kittur, Aniket, Pendleton, Bryan and Kraut, Robert E. (2009): Herding the cats: the influence of groups in coordinating peer production. In: Proceedings of the 2009 International Symposium on Wikis 2009. p. 7. Available online

Peer production systems rely on users to self-select appropriate tasks and "scratch their personal itch". However, many such systems require significant maintenance work, which also implies the need for collective action, that is, individuals following goals set by the group and performing good citizenship behaviors. How can this paradox be resolved? Here we examine one potential answer: the influence of social identification with the larger group on contributors' behavior. We examine Wikipedia, a highly successful peer production system, and find a significant and growing influence of group structure, with a prevalent example being the WikiProject. Comparison of editors who join projects with those who do not and comparisons of the joiners' behavior before and after they join a project suggest their identification with the group plays an important role in directing them towards group goals and good citizenship behaviors. Upon joining, Wikipedians are more likely to work on project-related content, to shift their contributions towards coordination rather than production work, and to perform maintenance work such as reverting vandalism. These results suggest that group influence can play an important role in maintaining the health of online communities, even when such communities are putatively self-directed peer production systems.

© All rights reserved Kittur et al. and/or their publisher

 

Halfaker, Aaron, Kittur, Aniket, Kraut, Robert E. and Riedl, John (2009): A jury of your peers: quality, experience and ownership in Wikipedia. In: Proceedings of the 2009 International Symposium on Wikis 2009. p. 15. Available online

Wikipedia is a highly successful example of what mass collaboration in an informal peer review system can accomplish. In this paper, we examine the role that the quality of the contributions, the experience of the contributors and the ownership of the content play in the decisions over which contributions become part of Wikipedia and which ones are rejected by the community. We introduce and justify a versatile metric for automatically measuring the quality of a contribution. We find little evidence that experience helps contributors avoid rejection. In fact, as they gain experience, contributors are even more likely to have their work rejected. We also find strong evidence of ownership behaviors in practice despite the fact that ownership of content is discouraged within Wikipedia.

© All rights reserved Halfaker et al. and/or their publisher

2008
 

Kittur, Aniket, Chi, Ed H. and Suh, Bongwon (2008): Crowdsourcing user studies with Mechanical Turk. In: Proceedings of ACM CHI 2008 Conference on Human Factors in Computing Systems April 5-10, 2008. pp. 453-456. Available online

User studies are important for many aspects of the design process and involve techniques ranging from informal surveys to rigorous laboratory studies. However, the costs involved in engaging users often require practitioners to trade off between sample size, time requirements, and monetary costs. Micro-task markets, such as Amazon's Mechanical Turk, offer a potential paradigm for engaging a large number of users for low time and monetary costs. Here we investigate the utility of a micro-task market for collecting user measurements, and discuss design considerations for developing remote micro user evaluation tasks. Although micro-task markets have great potential for rapidly collecting user measurements at low costs, we found that special care is needed in formulating tasks in order to harness the capabilities of the approach.

© All rights reserved Kittur et al. and/or ACM Press

 

Suh, Bongwon, Chi, Ed H., Kittur, Aniket and Pendleton, Bryan A. (2008): Lifting the veil: improving accountability and social transparency in Wikipedia with wikidashboard. In: Proceedings of ACM CHI 2008 Conference on Human Factors in Computing Systems April 5-10, 2008. pp. 1037-1040. Available online

Wikis are collaborative systems in which virtually anyone can edit anything. Although wikis have become highly popular in many domains, their mutable nature often leads them to be distrusted as a reliable source of information. Here we describe a social dynamic analysis tool called WikiDashboard which aims to improve social transparency and accountability on Wikipedia articles. Early reactions from users suggest that the increased transparency afforded by the tool can improve the interpretation, communication, and trustworthiness of Wikipedia articles.

© All rights reserved Suh et al. and/or ACM Press

 

Kittur, Aniket and Kraut, Robert E. (2008): Harnessing the wisdom of crowds in wikipedia: quality through coordination. In: Proceedings of ACM CSCW08 Conference on Computer-Supported Cooperative Work 2008. pp. 37-46. Available online

Wikipedia's success is often attributed to the large numbers of contributors who improve the accuracy, completeness and clarity of articles while reducing bias. However, because of the coordination needed to write an article collaboratively, adding contributors is costly. We examined how the number of editors in Wikipedia and the coordination methods they use affect article quality. We distinguish between explicit coordination, in which editors plan the article through communication, and implicit coordination, in which a subset of editors structure the work by doing the majority of it. Adding more editors to an article improved article quality only when they used appropriate coordination techniques and was harmful when they did not. Implicit coordination through concentrating the work was more helpful when many editors contributed, but explicit coordination through communication was not. Both types of coordination improved quality more when an article was in a formative stage. These results demonstrate the critical importance of coordination in effectively harnessing the "wisdom of the crowd" in online production environments.
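Implicit coordination, as described above, means a small subset of editors do most of the work. One simple way to sketch such a concentration measure (a rough proxy; the paper's operationalization is more detailed, and the names here are hypothetical) is the share of edits made by the k most active editors:

```python
from collections import Counter

def top_k_concentration(edit_authors, k=1):
    """Share of all edits made by the k most active editors --
    a rough proxy for implicit coordination via concentrated work."""
    if not edit_authors:
        return 0.0
    counts = Counter(edit_authors)
    top = sum(n for _, n in counts.most_common(k))
    return top / len(edit_authors)

history = ["alice", "alice", "alice", "bob", "carol"]  # hypothetical edit log
share = top_k_concentration(history)  # 3 of 5 edits by the top editor
```

A value near 1.0 indicates work concentrated in a few hands; a value near k/n indicates work spread evenly across n editors.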

© All rights reserved Kittur and Kraut and/or ACM Press

 Cited in the following chapters:

Social Computing: [/encyclopedia/social_computing.html]

Contribute to the encyclopedia: [/encyclopedia/contribute.html]


 

Kittur, Aniket, Suh, Bongwon and Chi, Ed H. (2008): Can you ever trust a wiki?: impacting perceived trustworthiness in wikipedia. In: Proceedings of ACM CSCW08 Conference on Computer-Supported Cooperative Work 2008. pp. 477-480. Available online

Wikipedia has become one of the most important information resources on the Web by promoting peer collaboration and enabling virtually anyone to edit anything. However, this mutability also leads many to distrust it as a reliable source of information. Although there have been many attempts at developing metrics to help users judge the trustworthiness of content, it is unknown how much impact such measures can have on a system that is perceived as inherently unstable. Here we examine whether a visualization that exposes hidden article information can impact readers' perceptions of trustworthiness in a wiki environment. Our results suggest that surfacing information relevant to the stability of the article and the patterns of editor behavior can have a significant impact on users' trust across a variety of page types.

© All rights reserved Kittur et al. and/or ACM Press

2007
 

Kittur, Aniket, Suh, Bongwon, Pendleton, Bryan A. and Chi, Ed H. (2007): He says, she says: conflict and coordination in Wikipedia. In: Proceedings of ACM CHI 2007 Conference on Human Factors in Computing Systems 2007. pp. 453-462. Available online

Wikipedia, a wiki-based encyclopedia, has become one of the most successful experiments in collaborative knowledge building on the Internet. As Wikipedia continues to grow, the potential for conflict and the need for coordination increase as well. This article examines the growth of such non-direct work and describes the development of tools to characterize conflict and coordination costs in Wikipedia. The results may inform the design of new collaborative knowledge systems.
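Conflict in wiki research of this kind is commonly quantified via reverts, detected when a revision's content exactly matches some earlier revision. A minimal hash-based sketch of that heuristic (a simplification for illustration; not the paper's actual tooling):

```python
import hashlib

def count_reverts(revision_texts):
    """Count revisions whose content is identical to some earlier
    revision -- the standard identity-revert heuristic."""
    seen = set()
    reverts = 0
    for text in revision_texts:
        digest = hashlib.sha1(text.encode("utf-8")).hexdigest()
        if digest in seen:
            reverts += 1
        seen.add(digest)
    return reverts

# Hypothetical history: revision 3 restores revision 1's text.
history = ["stable text", "vandalized text", "stable text"]
n = count_reverts(history)  # 1 revert
```

Hashing keeps memory bounded even for articles with thousands of revisions, since only digests are retained rather than full texts.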

© All rights reserved Kittur et al. and/or ACM Press

 Cited in the following chapter:

Social Computing: [/encyclopedia/social_computing.html]


 
 
 


Page Information

Page maintainer: The Editorial Team
URL: http://www.interaction-design.org/references/authors/aniket_kittur.html