Number of co-authors: 10
Number of publications with 3 favourite co-authors: Sonya Nikolova: 3, Kenneth R. Fleischmann: 2, Perry R. Cook: 2
Jordan Boyd-Graber's 3 most productive colleagues (by number of publications): Perry R. Cook: 13, Kenneth R. Fleischmann: 10, Xiaojuan Ma: 6
Publications by Jordan Boyd-Graber (bibliography)
Zhai, Ke, Boyd-Graber, Jordan, Asadi, Nima and Alkhouja, Mohamad L. (2012): Mr. LDA: a flexible large scale topic modeling package using variational inference in MapReduce. In: Proceedings of the 2012 International Conference on the World Wide Web. pp. 879-888.
Latent Dirichlet Allocation (LDA) is a popular topic modeling technique for exploring document collections. Because of the increasing prevalence of large datasets, there is a need to improve the scalability of inference for LDA. In this paper, we introduce a novel and flexible large scale topic modeling package in MapReduce (Mr. LDA). As opposed to other techniques which use Gibbs sampling, our proposed framework uses variational inference, which easily fits into a distributed environment. More importantly, this variational implementation, unlike highly tuned and specialized implementations based on Gibbs sampling, is easily extensible. We demonstrate two extensions of the models possible with this scalable framework: informed priors to guide topic discovery and extracting topics from a multilingual corpus. We compare the scalability of Mr. LDA against Mahout, an existing large scale topic modeling package. Mr. LDA outperforms Mahout both in execution speed and held-out likelihood.
© All rights reserved Zhai et al. and/or ACM Press
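Mr. LDA itself runs inside MapReduce, but the variational-inference approach the abstract describes can be sketched in miniature with scikit-learn's batch variational LDA. This is a stand-in for illustration, not the authors' implementation, and the toy corpus below is invented:

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation

# Toy corpus, invented for illustration.
docs = [
    "topic models find latent themes in document collections",
    "variational inference scales topic models to large corpora",
    "images and icons support assistive communication aids",
    "assistive technology helps people with aphasia communicate",
]

counts = CountVectorizer().fit_transform(docs)   # document-term count matrix
lda = LatentDirichletAllocation(n_components=2,  # two topics
                                random_state=0)
doc_topics = lda.fit_transform(counts)
# Each row of doc_topics is that document's distribution over the two topics.
```

Scaling this same objective across a cluster, rather than the inference math itself, is the contribution the paper describes.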
Fleischmann, Kenneth R., Templeton, Thomas Clay and Boyd-Graber, Jordan (2011): Modeling diverse standpoints in text classification: learning to be human by modeling human values. In: Proceedings of the 2011 iConference. pp. 672-673.
An annotator's classification of a text not only tells us something about the intent of the text's author, it also tells us something about the annotator's standpoint. To understand authorial intent, we can consider all of these diverse standpoints, as well as the extent to which the annotators' standpoints affect their perceptions of authorial intent. To model human behavior, it is important to model humans' unique standpoints. Human values play an especially important role in determining human behavior and how people perceive the world around them, so any effort to model human behavior and perception can benefit from an effort to understand and model human values. Instead of training humans to obscure their standpoints and act like computers, we should teach computers to have standpoints of their own.
© All rights reserved Fleischmann et al. and/or ACM Press
Templeton, Thomas Clay, Fleischmann, Kenneth R. and Boyd-Graber, Jordan (2011): Comparing values and sentiment using Mechanical Turk. In: Proceedings of the 2011 iConference. pp. 783-784.
Human values can help to explain people's sentiment toward current events. In this experiment, we compare people's values with their agreement or disagreement with paragraphs that were classified as either supporting or opposing a specific topic. We found that five value types have statistically significant agreement (p<0.001) for both the supporting and opposing paragraphs, in opposite directions. We hope to use these paragraph ratings to train an automatic text classifier to agree or disagree with paragraphs based on a specific value profile.
© All rights reserved Templeton et al. and/or ACM Press
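The style of agreement analysis the abstract describes, value scores correlating with paragraph ratings in opposite directions for supporting and opposing texts, can be illustrated with a rank correlation. The helper and all ratings below are hypothetical, not the study's data:

```python
def rank(xs):
    """1-based ranks for tie-free data."""
    order = sorted(range(len(xs)), key=lambda i: xs[i])
    ranks = [0] * len(xs)
    for r, i in enumerate(order, start=1):
        ranks[i] = r
    return ranks

def spearman(x, y):
    """Spearman rank correlation (tie-free shortcut formula)."""
    n = len(x)
    d2 = sum((a - b) ** 2 for a, b in zip(rank(x), rank(y)))
    return 1 - 6 * d2 / (n * (n * n - 1))

# Hypothetical ratings: one value-type score per participant, plus each
# participant's agreement with a supporting and an opposing paragraph.
value_score = [1, 2, 3, 4, 5, 6, 7, 8]
supporting  = [2, 1, 4, 3, 6, 5, 8, 7]
opposing    = [7, 8, 5, 6, 3, 4, 1, 2]

rho_support = spearman(value_score, supporting)  # strong positive
rho_oppose = spearman(value_score, opposing)     # strong negative
```

With real data one would also compute a p-value, as the study does; the toy sample here is too small to reach p<0.001.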
Nikolova, Sonya, Boyd-Graber, Jordan and Cook, Perry R. (2009): The design of ViVA: a mixed-initiative visual vocabulary for aphasia. In: Proceedings of ACM CHI 2009 Conference on Human Factors in Computing Systems. pp. 4015-4020.
In this paper, we present the design of ViVA, a visual vocabulary for aphasia. Aphasia is an acquired language disorder that causes a variable pattern of impairments affecting an individual's ability to speak, comprehend, read, and write. Existing communication aids lack flexibility and adequate customization functionality, failing to address this variability and to satisfy individual user needs. We tackle these shortcomings by incorporating adaptive and adaptable capabilities in ViVA, which is designed to assist communication for users suffering from aphasia. The visual vocabulary for aphasia implements a novel approach that organizes the words in the vocabulary according to user preferences, word usage, and certain semantic measures, thus continuously tailoring the tool to the user's profile.
© All rights reserved Nikolova et al. and/or ACM Press
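The adaptive ordering idea in the ViVA abstract, ranking vocabulary by user preference, usage, and semantic measures, can be sketched as a weighted blend of scores. The fields, weights, and entries below are invented for illustration, not ViVA's actual model:

```python
# Hypothetical vocabulary entries: preference and semantic scores in [0, 1],
# usage as a raw count.
vocabulary = [
    {"word": "water",  "preference": 0.9, "usage": 12, "semantic": 0.4},
    {"word": "doctor", "preference": 0.5, "usage": 30, "semantic": 0.8},
    {"word": "walk",   "preference": 0.2, "usage": 3,  "semantic": 0.1},
]

def score(item, w_pref=0.5, w_use=0.3, w_sem=0.2, max_usage=30):
    """Blend the three signals; usage is normalized into [0, 1]."""
    return (w_pref * item["preference"]
            + w_use * item["usage"] / max_usage
            + w_sem * item["semantic"])

ordered = sorted(vocabulary, key=score, reverse=True)
```

Re-sorting as preferences and usage counts change is what would make such a vocabulary "continuously tailored" to the user.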
Ma, Xiaojuan, Boyd-Graber, Jordan, Nikolova, Sonya and Cook, Perry R. (2009): Speaking through pictures: images vs. icons. In: Eleventh Annual ACM SIGACCESS Conference on Assistive Technologies 2009. pp. 163-170.
People with aphasia, a condition that impairs the ability to understand or generate written or spoken language, are aided by assistive technology that helps them communicate through a vocabulary of icons. These systems are akin to language translation systems, translating icon arrangements into spoken or written language and vice versa. However, these icon-based systems have little vocabulary breadth or depth, making it difficult for people with aphasia to apply their usage to multiple real world situations. Pictures from the web are numerous, varied, and easily accessible and thus, could potentially address the small size issues of icon-based systems. We present results from two studies that investigate this potential and demonstrate that images can be as effective as icons when used as a replacement for English language communication. The first study uses elderly subjects to investigate the efficacy of images vs. icons in conveying word meaning; the second study examines the retention of word-level meaning by both images and icons with a population of aphasics. We conclude that images collected from the web are as functional as icons in conveying information and thus, are feasible to use in assistive technology that supports people with aphasia.
© All rights reserved Ma et al. and/or their publisher
Nikolova, Sonya, Boyd-Graber, Jordan, Fellbaum, Christiane and Cook, Perry (2009): Better vocabularies for assistive communication aids: connecting terms using semantic networks and untrained annotators. In: Eleventh Annual ACM SIGACCESS Conference on Assistive Technologies 2009. pp. 171-178.
The difficulties of navigating vocabulary in an assistive communication device are exacerbated for individuals with lexical access disorders like those due to aphasia. We present the design and implementation of a vocabulary network based on WordNet, a resource that attempts to model human semantic memory, that enables users to find words easily. To correct for the sparsity of links among words, we augment WordNet with additional connections derived from human judgments of semantic similarity collected in an online experiment. We evaluate the resulting system, the visual vocabulary for aphasia (ViVA), and describe its potential to adapt to a user's profile and enable faster search and improved navigation.
© All rights reserved Nikolova et al. and/or their publisher
Changes to this page (author):
23 Nov 2012: Added
10 Nov 2012: Added
10 Nov 2012: Added
03 Nov 2010: Added
03 Nov 2010: Added
13 Feb 2010: Modified
09 May 2009: Added
Page maintainer: The Editorial Team