Publication statistics

Pub. period: 2002-2012
Pub. count: 31
Number of co-authors: 43



Co-authors

Number of publications with his 3 most frequent co-authors:

Yevgen Borodin: 18
Faisal Ahmed: 8
Jalal Mahmud: 7

Productive colleagues

I. V. Ramakrishnan's 3 most productive colleagues, by number of publications:

Jeffrey P. Bigham: 32
Yevgen Borodin: 23
Wei Xu: 15




I. V. Ramakrishnan

Personal Homepage:
http://www.cs.sunysb.edu/~ram/


Publications by I. V. Ramakrishnan (bibliography)

2012

Islam, Muhammad Asiful, Ahmed, Faisal, Borodin, Yevgen and Ramakrishnan, I. V. (2012): Thematic organization of web content for distraction-free text-to-speech narration. In: Fourteenth Annual ACM SIGACCESS Conference on Assistive Technologies 2012. pp. 17-24.

People with visual disabilities, especially those who are blind, have digital content narrated to them by text-to-speech (TTS) engines (e.g., with the help of screen readers). Naively narrating web pages, particularly ones consisting of several diverse pieces (e.g., news summaries, opinion pieces, taxonomy, ads), with TTS engines without organizing them into thematic segments makes it very difficult for a blind user to mentally separate out and comprehend the essential elements in a segment, and the effort to do so can cause significant cognitive stress. One can alleviate this difficulty by segmenting web pages into thematic pieces and then narrating each of them separately. Extant segmentation methods typically segment web pages using visual and structural cues. The use of such cues without taking into account the semantics of the content tends to produce "impure" segments containing extraneous material interspersed with the essential elements. In this paper, we describe a new technique for identifying thematic segments by tightly coupling visual, structural, and linguistic features present in the content. A notable aspect of the technique is that it produces segments with very little irrelevant content. Another interesting aspect is that the clutter-free main content of a web page, as produced by the Readability tool and the "Reader" feature of the Safari browser, emerges as a special case of the thematic segments created by our technique. We provide experimental evidence of the effectiveness of our technique in reducing clutter. We also describe a user study with 23 blind subjects on its impact on web accessibility.

© All rights reserved Islam et al. and/or ACM Press

 

Ahmed, Faisal, Borodin, Yevgen, Soviak, Andrii, Islam, Muhammad, Ramakrishnan, I. V. and Hedgpeth, Terri (2012): Accessible skimming: faster screen reading of web pages. In: Proceedings of the 2012 ACM Symposium on User Interface Software and Technology 2012. pp. 367-378.

In our information-driven web-based society, we are all gradually falling "victims" to information overload [5]. However, while sighted people are finding ways to sift through information faster, Internet users who are blind are experiencing an even greater information overload. These people access computers and the Internet using screen-reader software, which reads the information on a computer screen sequentially using computer-generated speech. While sighted people can learn how to quickly glance over the headlines and news articles online to get the gist of information, people who are blind have to use keyboard shortcuts to listen through the content narrated by a serial audio interface. This interface does not give them an opportunity to know what content to skip and what to listen to. So, they either listen to all of the content or listen to the first part of each sentence or paragraph before they skip to the next one. In this paper, we propose an automated approach to facilitate non-visual skimming of web pages. We describe the underlying algorithm, outline a non-visual skimming interface, and report on the results of automated experiments, as well as on our user study with 23 screen-reader users. The results of the experiments suggest that we have been moderately successful in designing a viable algorithm for automatic summarization that could be used for non-visual skimming. In our user studies, we confirmed that people who are blind could read and search through online articles faster and were able to understand and remember most of what they had read with our skimming system. Finally, all 23 participants expressed genuine interest in using non-visual skimming in the future.

© All rights reserved Ahmed et al. and/or ACM Press
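The kind of automatic summarization the abstract above alludes to can be illustrated with classic frequency-based sentence extraction. The sketch below is illustrative only, not the paper's algorithm: sentences whose words recur often in the document score higher and are kept, in document order, for the screen reader to narrate first.

```python
import re
from collections import Counter

# Illustrative frequency-based extractive summarizer (an assumed
# simplification, not the skimming algorithm from the paper).

def summarize(text, k=2):
    """Return the k highest-scoring sentences, preserving document order."""
    sentences = [s for s in re.split(r"(?<=[.!?])\s+", text.strip()) if s]
    freq = Counter(re.findall(r"[a-z']+", text.lower()))
    def score(sentence):
        terms = re.findall(r"[a-z']+", sentence.lower())
        # Average term frequency, so long sentences are not favored unduly.
        return sum(freq[t] for t in terms) / max(len(terms), 1)
    top = sorted(sentences, key=score, reverse=True)[:k]
    return [s for s in sentences if s in top]

text = ("Screen readers narrate web pages serially. "
        "Serial narration makes skimming web pages hard. "
        "Cats sleep a lot. "
        "Summarization can shorten web pages for narration.")
for sentence in summarize(text):
    print(sentence)
```

A non-visual skimming interface would then let the listener hear only these extracted sentences, with a keystroke to drill back into the full text.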

 

Borodin, Yevgen, Soviak, Andrii, Dimitriyadi, Alexander, Puzis, Yury, Melnyk, Valentyn, Ahmed, Faisal, Dausch, Glenn and Ramakrishnan, I. V. (2012): Universal and ubiquitous web access with Capti. In: Proceedings of the 2012 International Cross-Disciplinary Conference on Web Accessibility W4A 2012. p. 27.

In this paper we present Capti -- a universally and ubiquitously accessible web browsing application enabling intuitive and usable web access for people with and without vision impairments. Capti provides a usable screen-reader interface for web browsing and an accessible listen-to-it-later Playlist (charmtechlabs.com).

© All rights reserved Borodin et al. and/or ACM Press

 

Ahmed, Faisal, Borodin, Yevgen, Puzis, Yury and Ramakrishnan, I. V. (2012): Why read if you can skim: towards enabling faster screen reading. In: Proceedings of the 2012 International Cross-Disciplinary Conference on Web Accessibility W4A 2012. p. 39.

Skimming broadly refers to different speed-reading methods that aim to enhance the rate of reading without unduly compromising on comprehension and retention of information. Skimming of content could be particularly useful for people with vision impairments, who frequently experience information overload when listening to reams of digital content online. Support for usable and useful skimming in modern screen readers remains very poor. This paper explores the user requirements for a usable non-visual skimming interface, informed by a large-scale human-subject experiment with blind individuals. Specifically, the study has: (1) helped identify the type of skimming that can be useful in screen reading main content in web pages; (2) led to the development of a usable interface for accessible online skimming; (3) demonstrated the utility of the accessible skimming interface in two realistic use scenarios; (4) identified automatic summarization techniques that could "closely" approximate skimming methods used by sighted people.

© All rights reserved Ahmed et al. and/or ACM Press

 

Puzis, Yury, Borodin, Yevgen, Ahmed, Faisal and Ramakrishnan, I. V. (2012): An intuitive accessible web automation user interface. In: Proceedings of the 2012 International Cross-Disciplinary Conference on Web Accessibility W4A 2012. p. 41.

In recent years, the Web has become an ever more sophisticated and irreplaceable tool in our daily lives. While the visual Web has advanced at a rapid pace, assistive technology has not been able to keep up, increasingly putting visually impaired users at a disadvantage. Web automation has the potential to bridge the accessibility divide between the ways blind and sighted people access the Web; specifically, it can enable blind people to accomplish web browsing tasks that were previously slow, hard, or even impossible to achieve. In this paper, we propose and evaluate an intuitive and accessible web automation interface. We validate the design in a Wizard-of-Oz user study with visually-impaired subjects and show that the proposed approach has the potential to significantly increase accessibility and usability of web pages, reduce interaction time, and increase user satisfaction. Our findings demonstrate the feasibility of and emphasize the pressing need for truly accessible web automation technologies.

© All rights reserved Puzis et al. and/or ACM Press

2011

Puzis, Yury, Borodin, Yevgen, Ahmed, Faisal, Melnyk, Valentyn and Ramakrishnan, I. V. (2011): Guidelines for an accessible web automation interface. In: Thirteenth Annual ACM SIGACCESS Conference on Assistive Technologies 2011. pp. 249-250.

In recent years, the Web has become an ever more sophisticated and irreplaceable tool in our daily lives. While the visual Web has been advancing at a rapid pace, assistive technology has not been able to keep up, increasingly putting visually impaired users at a disadvantage. Web automation has the potential to bridge the accessibility divide between the ways blind and sighted people access the Web; specifically, it can enable blind people to quickly accomplish web browsing tasks that were previously slow, hard, or even impossible to complete. In this paper, we propose guidelines for the design of intuitive and accessible web automation that has the potential to increase accessibility and usability of web pages, reduce interaction time, and improve user browsing experience. Our findings and a preliminary user study demonstrate the feasibility of and emphasize the pressing need for truly accessible web automation technologies.

© All rights reserved Puzis et al. and/or ACM Press

2010

Borodin, Yevgen, Bigham, Jeffrey P., Dausch, Glenn and Ramakrishnan, I. V. (2010): More than meets the eye: a survey of screen-reader browsing strategies. In: Proceedings of the 2010 International Cross-Disciplinary Conference on Web Accessibility W4A 2010. p. 13.

Browsing the Web with screen readers can be difficult and frustrating. Web pages often contain inaccessible content that is expressed only visually or that can be accessed only with the mouse. Screen-reader users must also contend with usability challenges encountered when the reading content is designed with built-in assumptions of how it will be accessed -- generally by a sighted person on a standard display. Far from being passive consumers of content who simply accept web content as accessible or not, many screen-reader users are adept at developing, discovering, and employing browsing strategies that help them overcome the accessibility and usability problems they encounter. In this paper, we give an overview of the browsing strategies that we have observed screen-reader users employ when faced with challenges, ranging from unfamiliar web sites and complex web pages to dynamic and automatically-refreshing content. A better understanding of existing browsing strategies can inform the design of accessible websites, guide the development of new tools that make experienced users more effective, and help overcome the initial learning curve for users who have not yet acquired effective browsing strategies.

© All rights reserved Borodin et al. and/or their publisher

 

Borodin, Yevgen, Ahmed, Faisal, Islam, Muhammad Asiful, Puzis, Yury, Melnyk, Valentyn, Feng, Song, Ramakrishnan, I. V. and Dausch, Glenn (2010): Hearsay: a new generation context-driven multi-modal assistive web browser. In: Proceedings of the 2010 International Conference on the World Wide Web 2010. pp. 1233-1236.

This demo will present HearSay, a multi-modal non-visual web browser, which aims to bridge the growing Web Accessibility divide between individuals with visual impairments and their sighted counterparts, and to facilitate full participation of blind individuals in the growing Web-based society.

© All rights reserved Borodin et al. and/or their publisher

 

Islam, Muhammad Asiful, Borodin, Yevgen and Ramakrishnan, I. V. (2010): Mixture model based label association techniques for web accessibility. In: Proceedings of the 2010 ACM Symposium on User Interface Software and Technology 2010. pp. 67-76.

An important aspect of making the Web accessible to blind users is ensuring that all important web page elements such as links, clickable buttons, and form fields have explicitly assigned labels. Properly labeled content is then correctly read out by screen readers, a dominant assistive technology used by blind users. In particular, improperly labeled form fields can critically impede online transactions such as shopping, paying bills, etc. with screen readers. Very often labels are not associated with form fields or are missing altogether, making form filling a challenge for blind users. Algorithms for associating a form element with one of several candidate labels in its vicinity must cope with the variability of the element's features, including the label's location relative to the element, its distance to the element, etc. Probabilistic models provide a natural machinery to reason with such uncertainties. In this paper we present a Finite Mixture Model (FMM) formulation of the label association problem. The variability of feature values is captured in the FMM by a mixture of random variables that are drawn from parameterized distributions. Then, the most likely label to be paired with a form element is computed by maximizing the log-likelihood of the feature data using the Expectation-Maximization algorithm. We also adapt the FMM approach for two related problems: assigning labels (from an external Knowledge Base) to form elements that have no candidate labels in their vicinity, and quickly identifying clickable elements such as add-to-cart, checkout, etc., used in online transactions even when these elements do not have textual captions (e.g., image buttons without alternative text). We provide a quantitative evaluation of our techniques, as well as a user study with two blind subjects who used an aural web browser implementing our approach.

© All rights reserved Islam et al. and/or their publisher
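The flavor of likelihood-based label association can be conveyed with a small sketch. The features, Gaussian parameters, and candidate labels below are all hypothetical: the paper learns its parameters with Expectation-Maximization over a Finite Mixture Model, whereas this sketch simply scores candidates under fixed, assumed distributions and picks the maximum-likelihood one.

```python
import math

# Hypothetical parameters for "true label" features. In the paper these
# would be learned by EM; here they are fixed for illustration.
DIST_MEAN, DIST_STD = 20.0, 15.0          # true labels tend to sit close by
POS_PROB = {"left": 0.6, "above": 0.4}    # assumed position prior

def log_likelihood(distance, position):
    """Log-probability that a candidate with these features is the label."""
    gauss = (-0.5 * ((distance - DIST_MEAN) / DIST_STD) ** 2
             - math.log(DIST_STD * math.sqrt(2 * math.pi)))
    return gauss + math.log(POS_PROB[position])

def associate(candidates):
    """Pair the form element with the maximum-likelihood candidate label."""
    return max(candidates, key=lambda c: log_likelihood(c["dist"], c["pos"]))

candidates = [
    {"text": "Email address", "dist": 18.0, "pos": "left"},
    {"text": "Subscribe",     "dist": 90.0, "pos": "above"},
    {"text": "Required",      "dist": 55.0, "pos": "above"},
]
print(associate(candidates)["text"])  # the nearby, left-positioned candidate wins
```

The screen reader would then announce the chosen text whenever the form field receives focus, instead of a bare "edit box".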

 

Ahmed, Faisal, Islam, Muhammad Asiful, Borodin, Yevgen and Ramakrishnan, I. V. (2010): Assistive web browsing with touch interfaces. In: Twelfth Annual ACM SIGACCESS Conference on Assistive Technologies 2010. pp. 235-236.

This demonstration proposes a touch-based directional navigation technique for touch interfaces (e.g., iPhone, MacBook) for people with visual disabilities, especially blind individuals. Such interfaces, coupled with TTS (text-to-speech) systems, open up intriguing possibilities for browsing and skimming web content with ease and speed. Apple's seminal VoiceOver system for iOS is an exemplar of bringing touch-based web navigation to blind people. However, it has two major shortcomings, the "fat finger" and "finger fatigue" problems, which we address with two proposed approaches. A preliminary user evaluation of a system incorporating these ideas suggests that they can be effective in practice.

© All rights reserved Ahmed et al. and/or their publisher

2009

Borodin, Yevgen, Dausch, Glenn and Ramakrishnan, I. V. (2009): TeleWeb: accessible service for web browsing via phone. In: Proceedings of the 2009 International Cross-Disciplinary Conference on Web Accessibility W4A 2009. pp. 96-97.

In this paper, we present TeleWeb -- a telephony service for web browsing via the most ubiquitous communication device, the phone. TeleWeb integrates a simple and usable phone interface with intelligent features such as context-directed browsing, template detection, and macro replaying. We expect that TeleWeb will equally appeal to people with vision impairments, older adults who may not be comfortable using computers, and anyone who wants to access the Web on the move.

© All rights reserved Borodin et al. and/or ACM Press

 

Mahmud, Jalal, Borodin, Yevgen, Ramakrishnan, I. V. and Ramakrishnan, C. R. (2009): Automated construction of web accessibility models from transaction click-streams. In: Proceedings of the 2009 International Conference on the World Wide Web 2009. pp. 871-880.

Screen readers, the dominant assistive technology used by visually impaired people to access the Web, function by speaking out the content of the screen serially. Using screen readers for conducting online transactions can cause considerable information overload, because transactions, such as shopping and paying bills, typically involve a number of steps spanning several web pages. One can combat this overload by using a transaction model for web accessibility that presents only fragments of web pages that are needed for doing transactions. We can realize such a model by coupling a process automaton, encoding states of a transaction, with concept classifiers that identify page fragments "relevant" to a particular state of the transaction. In this paper we present a fully automated process that synergistically combines several techniques for transforming unlabeled click-stream data generated by transactions into a transaction model. These techniques include web content analysis to partition a web page into segments consisting of semantically related content, contextual analysis of data surrounding clickable objects in a page, and machine learning methods, such as clustering of page segments based on contextual analysis, statistical classification, and automata learning. The use of unlabeled click streams in building transaction models has important benefits: (i) visually impaired users do not have to depend on sighted users for creating manually labeled training data to construct the models; (ii) it is possible to mine personalized models from unlabeled transaction click-streams associated with sites that visually impaired users visit regularly; (iii) since unlabeled data is relatively easy to obtain, it is feasible to scale up the construction of domain-specific transaction models (e.g., separate models for shopping, airline reservations, bill payments, etc.); (iv) adjusting the performance of deployed models over time with new training data is also doable.
We provide preliminary experimental evidence of the practical effectiveness of both domain-specific, as well as personalized accessibility transaction models built using our approach. Finally, this approach is applicable for building transaction models for mobile devices with limited-size displays, as well as for creating wrappers for information extraction from web sites.

© All rights reserved Mahmud et al. and/or ACM Press

2008

Borodin, Yevgen, Bigham, Jeffrey P., Raman, Rohit and Ramakrishnan, I. V. (2008): What's new?: making web page updates accessible. In: Tenth Annual ACM SIGACCESS Conference on Assistive Technologies 2008. pp. 145-152.

Web applications facilitated by technologies such as JavaScript, DHTML, AJAX, and Flash use a considerable amount of dynamic web content that is either inaccessible or unusable by blind people. Server side changes to web content cause whole page refreshes, but only small sections of the page update, causing blind web users to search linearly through the page to find new content. The connecting theme is the need to quickly and unobtrusively identify the segments of a web page that have changed and notify the user of them. In this paper we propose Dynamo, a system designed to unify different types of dynamic content and make dynamic content accessible to blind web users. Dynamo treats web page updates uniformly and its methods encompass both web updates enabled through dynamic content and scripting, and updates resulting from static page refreshes, form submissions, and template-based web sites. From an algorithmic and interaction perspective Dynamo detects underlying changes and provides users with a single and intuitive interface for reviewing the changes that have occurred. We report on the quantitative and qualitative results of an evaluation conducted with blind users. These results suggest that Dynamo makes access to dynamic content faster, and that blind web users like it better than existing interfaces.

© All rights reserved Borodin et al. and/or ACM Press

 

Mahmud, Jalal, Borodin, Yevgen and Ramakrishnan, I. V. (2008): Assistive browser for conducting web transactions. In: Proceedings of the 2008 International Conference on Intelligent User Interfaces 2008. pp. 365-368.

People with visual impairments use screen readers to browse the Web. Sequential processing of web pages by screen readers causes information overload, making web browsing time-consuming and strenuous. These problems are further exacerbated in web transactions (e.g., online shopping), which involve multiple steps spanning several web pages. In this paper we present a lightweight approach for doing Web transactions using non-visual modalities. We describe how analysis of the context surrounding a link, coupled with a shallow knowledge base of patterns and keywords, can help identify various concepts (e.g., "add to cart", "item description", etc.) that are important in web transactions. Our preliminary results show promise that presenting such concepts to users can reduce information overload and improve their overall browsing experience.

© All rights reserved Mahmud et al. and/or ACM Press

 

Borodin, Yevgen, Bigham, Jeffrey P., Stent, Amanda and Ramakrishnan, I. V. (2008): Towards one world web with HearSay3. In: Proceedings of the 2008 International Cross-Disciplinary Conference on Web Accessibility W4A 2008. pp. 130-131.

In this paper, we present the key functionalities of HearSay 3, a non-visual web browser designed with the goal of improving web accessibility across the world. The featured functionalities include transparent support for multiple languages, collaborative labeling that harnesses the power of the social web, and a usable interface for handling dynamic web content.

© All rights reserved Borodin et al. and/or ACM Press

2007

Mahmud, Jalal, Borodin, Yevgen, Das, Dipanjan and Ramakrishnan, I. V. (2007): Combating information overload in non-visual web access using context. In: Proceedings of the 2007 International Conference on Intelligent User Interfaces 2007. pp. 341-344.

Web sites are designed for a graphical mode of interaction. Sighted users can visually segment Web pages and quickly identify relevant information. In contrast, visually disabled individuals have to use screen readers to browse the Web. Screen readers process pages sequentially and read through everything, making Web browsing time-consuming and strenuous. The use of shortcut keys and searching offers some improvements, but the problem still remains. In this paper, we address this problem using the notion of context. When a user follows a link, we capture the context of the link, and use it to identify relevant information on the next page. The content of this page is rearranged, so that the relevant information is read out first. We conducted a series of experiments to compare the performance of our prototype system with the state-of-the-art JAWS screen reader. Our results show that the use of context can potentially save browsing time as well as improve the browsing experience of visually disabled individuals.

© All rights reserved Mahmud et al. and/or ACM Press

 

Mahmud, Jalal U., Borodin, Yevgen and Ramakrishnan, I. V. (2007): CSurf: a context-driven non-visual web-browser. In: Proceedings of the 2007 International Conference on the World Wide Web 2007. pp. 31-40.

Web sites are designed for a graphical mode of interaction. Sighted users can "cut to the chase" and quickly identify relevant information in Web pages. In contrast, individuals with visual disabilities have to use screen readers to browse the Web. As screen readers process pages sequentially and read through everything, Web browsing can become strenuous and time-consuming. Although the use of shortcuts and searching offers some improvements, the problem still remains. In this paper, we address the problem of information overload in non-visual Web access using the notion of context. Our prototype system, CSurf, embodying our approach, provides the usual features of a screen reader. However, when a user follows a link, CSurf captures the context of the link using a simple topic-boundary detection technique, and uses it to identify relevant information on the next page with the help of a Support Vector Machine, a statistical machine-learning model. Then, CSurf reads the Web page starting from the most relevant section, identified by the model. We conducted a series of experiments to evaluate the performance of CSurf against the state-of-the-art screen reader, JAWS. Our results show that the use of context can potentially save browsing time and substantially improve the browsing experience of visually disabled people.

© All rights reserved Mahmud et al. and/or International World Wide Web Conference Committee
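Context-directed browsing of this kind can be sketched in miniature: collect the words around the followed link, then rank the sections of the next page by how well they match that context. CSurf itself uses topic-boundary detection plus a trained Support Vector Machine; the window size, sample page, and plain term-overlap scoring below are assumed simplifications for illustration.

```python
# Toy context-directed browsing: capture words around a followed link,
# then rank next-page sections by overlap with that context so the most
# relevant section can be read out first.

def link_context(words, link_index, window=3):
    """Collect the words surrounding the followed link."""
    lo = max(0, link_index - window)
    return set(w.lower() for w in words[lo:link_index + window + 1])

def rank_sections(context, sections):
    """Order next-page sections by term overlap with the link context."""
    def score(section):
        return len(set(section.lower().split()) & context)
    return sorted(sections, key=score, reverse=True)

page_words = "read the full review of this digital camera here".split()
ctx = link_context(page_words, link_index=8)  # the link text is "here"
sections = [
    "Advertisement buy hosting today",
    "Digital camera review full specifications and verdict",
    "Site navigation home contact",
]
print(rank_sections(ctx, sections)[0])  # the camera-review section ranks first
```

A screen reader built on this idea would begin narration at the top-ranked section instead of the top of the page.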

 

Borodin, Yevgen, Mahmud, Jalal, Ramakrishnan, I. V. and Stent, Amanda (2007): The HearSay non-visual web browser. In: Proceedings of the 2007 International Cross-Disciplinary Conference on Web Accessibility (W4A) 2007. pp. 128-129.

This paper describes HearSay, a non-visual Web browser, featuring context-directed browsing, a unique and innovative Web accessibility feature, and an extensible VoiceXML dialog interface. The browser provides most of the standard browsing functionalities, including flexible navigation and form-filling. The paper also outlines future work aiming to make the Web more accessible for individuals with visual impairments.

© All rights reserved Borodin et al. and/or ACM Press

 

Sun, Zan, Mahmud, Jalal, Ramakrishnan, I. V. and Mukherjee, Saikat (2007): Model-directed Web transactions under constrained modalities. In ACM Transactions on the Web, 1 (3) p. 12.

Online transactions (e.g., buying a book on the Web) typically involve a number of steps spanning several pages. Conducting such transactions under constrained interaction modalities as exemplified by small screen handhelds or interactive speech interfaces -- the primary mode of communication for visually impaired individuals -- is a strenuous, fatigue-inducing activity. But usually one needs to browse only a small fragment of a Web page to perform a transactional step such as a form fillout, selecting an item from a search results list, and so on. We exploit this observation to develop an automata-based process model that delivers only the "relevant" page fragments at each transactional step, thereby reducing information overload on such narrow interaction bandwidths. We realize this model by coupling techniques from content analysis of Web documents, automata learning and statistical classification. The process model and associated techniques have been incorporated into Guide-O, a prototype system that facilitates online transactions using speech/keyboard interface (Guide-O-Speech), or with limited-display size handhelds (Guide-O-Mobile). Performance of Guide-O and its user experience are reported.

© All rights reserved Sun et al. and/or ACM
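The automata-based process model described above amounts to a finite-state machine whose states name the page fragments worth delivering at each transactional step. The states, actions, and fragment concepts below are illustrative (a generic shopping-style transaction), not the learned automata from the paper.

```python
# Hypothetical finite-state process model for a shopping transaction.
# Each state maps to the page-fragment concepts that should be delivered
# on the narrow interaction bandwidth; everything else is suppressed.

TRANSITIONS = {
    ("start", "search"): "results",
    ("results", "select_item"): "item_detail",
    ("item_detail", "add_to_cart"): "cart",
    ("cart", "checkout"): "confirmation",
}

RELEVANT_FRAGMENTS = {
    "start": ["search form"],
    "results": ["result list"],
    "item_detail": ["item description", "add-to-cart button"],
    "cart": ["cart contents", "checkout button"],
    "confirmation": ["order confirmation"],
}

def step(state, action):
    """Advance the transaction automaton; unknown actions keep the state."""
    return TRANSITIONS.get((state, action), state)

state = "start"
for action in ["search", "select_item", "add_to_cart", "checkout"]:
    state = step(state, action)
    print(state, "->", RELEVANT_FRAGMENTS[state])
```

At each step the interface (speech or small display) presents only the listed fragments, which is what reduces the overload on constrained modalities.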

2006

Mahmud, Jalal, Borodin, Yevgen, Das, Dipanjan and Ramakrishnan, I. V. (2006): Improving non-visual web access using context. In: Eighth Annual ACM Conference on Assistive Technologies 2006. pp. 273-274.

To browse the Web, blind people have to use screen readers, which process pages sequentially, making browsing time-consuming. We present a prototype system, CSurf, which provides all features of a regular screen reader, but when a user follows a link, CSurf captures the context of the link and uses it to identify relevant information on the next page. CSurf rearranges the content of the next page so that the relevant information is read out first. A series of experiments has been conducted to evaluate the performance of CSurf.

© All rights reserved Mahmud et al. and/or ACM Press

 

Sun, Zan, Mahmud, Jalal, Mukherjee, Saikat and Ramakrishnan, I. V. (2006): Model-directed web transactions under constrained modalities. In: Proceedings of the 2006 International Conference on the World Wide Web 2006. pp. 447-456.

Online transactions (e.g., buying a book on the Web) typically involve a number of steps spanning several pages. Conducting such transactions under constrained interaction modalities as exemplified by small screen handhelds or interactive speech interfaces -- the primary mode of communication for visually impaired individuals -- is a strenuous, fatigue-inducing activity. But usually one needs to browse only a small fragment of a Web page to perform a transactional step such as a form fillout, selecting an item from a search results list, etc. We exploit this observation to develop an automata-based process model that delivers only the "relevant" page fragments at each transactional step, thereby reducing information overload on such narrow interaction bandwidths. We realize this model by coupling techniques from content analysis of Web documents, automata learning and statistical classification. The process model and associated techniques have been incorporated into Guide-O, a prototype system that facilitates online transactions using speech/keyboard interface (Guide-O-Speech), or with limited-display size handhelds (Guide-O-Mobile). Performance of Guide-O and its user experience are reported.

© All rights reserved Sun et al. and/or ACM Press

2005

Enagandula, Vineet, Juthani, Niraj, Ramakrishnan, I. V., Rawal, Devashish and Vidyasagar, Ritwick (2005): BlackBoardNV: a system for enabling non-visual access to the blackboard course management system. In: Seventh Annual ACM Conference on Assistive Technologies 2005. pp. 220-221.

 

Mukherjee, Saikat and Ramakrishnan, I. V. (2005): Browsing fatigue in handhelds: semantic bookmarking spells relief. In: Proceedings of the 2005 International Conference on the World Wide Web 2005. pp. 593-602.

Focused Web browsing activities such as periodically looking up headline news, weather reports, etc., which require only selective fragments of particular Web pages, can be made more efficient for users of limited-display-size handheld mobile devices by delivering only the target fragments. Semantic bookmarks provide a robust conceptual framework for recording and retrieving such targeted content not only from the specific pages used in creating the bookmarks but also from any user-specified page with similar content semantics. This paper describes a technique for realizing semantic bookmarks by coupling machine learning with Web page segmentation to create a statistical model of the bookmarked content. These models are used to identify and retrieve the bookmarked content from Web pages that share a common content domain. In contrast to ontology-based approaches where semantic bookmarks are limited to available concepts in the ontology, the learning-based approach allows users to bookmark ad-hoc personalized semantic concepts to effectively target content that fits the limited display of handhelds. User evaluation measuring the effectiveness of a prototype implementation of learning-based semantic bookmarking at reducing browsing fatigue in handhelds is provided.

© All rights reserved Mukherjee and Ramakrishnan and/or ACM Press

 

Xu, Wei, Sekar, R., Ramakrishnan, I. V. and Venkatakrishnan, V. N. (2005): An approach for realizing privacy-preserving web-based services. In: Proceedings of the 2005 International Conference on the World Wide Web 2005. pp. 1014-1015.

2004
 

Julasana, Nikeeta, Khandelwal, Akshat, Lolage, Anupama, Singh, Prabhdeep, Vasudevan, Priyanka, Davulcu, Hasan and Ramakrishnan, I. V. (2004): WinAgent: a system for creating and executing personal information assistants using a web browser. In: Nunes, Nuno Jardim and Rich, Charles (eds.) International Conference on Intelligent User Interfaces 2004 January 13-16, 2004, Funchal, Madeira, Portugal. pp. 356-357.

WinAgent is a software system for creating and executing Personal Information Assistants (PIAs). These are software robots that can locate and extract targeted data buried deep within a web site. They do so by automatically navigating to relevant sites, locating the correct Web pages (which can be reached either directly by traversing appropriate links or by filling out HTML forms), and extracting, structuring, and organizing data of interest from these pages into XML. The primary thrust of the WinAgent effort was to make these tools easy to use for people who are not necessarily trained in computing. In particular, users create and execute PIAs through a Web browser.

© All rights reserved Julasana et al. and/or ACM Press

 

Mukherjee, Saikat, Ramakrishnan, I. V. and Kifer, Michael (2004): Semantic bookmarking for non-visual web access. In: Sixth Annual ACM Conference on Assistive Technologies 2004. pp. 185-192.

Bookmarks are shortcuts that enable quick access to desired Web content. They have become a standard feature in every browser, and recent studies have shown that they can be very useful for non-visual Web access as well. Current bookmarking techniques in assistive Web browsers are rigidly tied to the structure of Web pages; consequently, they are susceptible to even slight changes in that structure. In this paper we propose semantic bookmarking for non-visual Web access. With the help of an ontology that represents concepts in a domain, content in Web pages can be semantically associated with bookmarks. As long as these associations can be identified, semantic bookmarks are resilient in the face of structural changes to the Web page. The use of ontologies allows semantic bookmarks to span multiple Web sites covered by a common domain, which contributes to the ease of information retrieval and bookmark maintenance. In this paper we describe highly automated techniques for creating and retrieving semantic bookmarks; these techniques have been incorporated into an assistive Web browser. Preliminary experimental evidence suggests the effectiveness of semantic bookmarks for non-visual Web access.

© All rights reserved Mukherjee et al. and/or ACM Press

 

Ramakrishnan, I. V., Stent, Amanda and Yang, Guizhen (2004): Hearsay: enabling audio browsing on hypertext content. In: Proceedings of the 2004 International Conference on the World Wide Web 2004. pp. 80-89.

In this paper we present HearSay, a system for browsing hypertext Web documents via audio. The HearSay system is based on our novel approach to automatically creating audio browsable content from hypertext Web documents. It combines two key technologies: (1) automatic partitioning of Web documents through tightly coupled structural and semantic analysis, which transforms raw HTML documents into semantic structures so as to facilitate audio browsing; and (2) VoiceXML, an already standardized technology which we adopt to represent voice dialogs automatically created from the XML output of partitioning. This paper describes the software components of HearSay and presents an initial system evaluation.

© All rights reserved Ramakrishnan et al. and/or ACM Press

 

Davulcu, Hasan, Kifer, Michael and Ramakrishnan, I. V. (2004): CTR-S: a logic for specifying contracts in semantic web services. In: Proceedings of the 2004 International Conference on the World Wide Web 2004. pp. 144-153.

A requirements analysis in the emerging field of Semantic Web Services (SWS) (see http://daml.org/services/swsl/requirements/) has identified four major areas of research: intelligent service discovery, automated contracting of services, process modeling, and service enactment. This paper deals with the intersection of two of these areas: process modeling as it pertains to automated contracting. Specifically, we propose a logic, called CTR-S, which captures the dynamic aspects of contracting for services. Since CTR-S is an extension of the classical first-order logic, it is well-suited to model the static aspects of contracting as well. A distinctive feature of contracting is that it involves two or more parties in a potentially adversarial situation. CTR-S is designed to model this adversarial situation through its novel model theory, which incorporates certain game-theoretic concepts. In addition to the model theory, we develop a proof theory for CTR-S and demonstrate the use of the logic for modeling and reasoning about Web service contracts.

© All rights reserved Davulcu et al. and/or ACM Press

2003
 

Yang, Guizhen, Ramakrishnan, I. V. and Kifer, Michael (2003): On the complexity of schema inference from web pages in the presence of nullable data attributes. In: Proceedings of the 2003 ACM CIKM International Conference on Information and Knowledge Management November 2-8, 2003, New Orleans, Louisiana, USA. pp. 224-231.

2002
 

Davulcu, Hasan, Mukherjee, Saikat, Seth, Arvind and Ramakrishnan, I. V. (2002): CuTeX: a system for extracting data from text tables. In: Proceedings of the 25th Annual International ACM SIGIR Conference on Research and Development in Information Retrieval 2002. p. 457.

A wealth of information relevant for e-commerce often appears in text form. This includes specification and performance data sheets of products, financial statements, product offerings, etc. Typically these types of product and financial data are published in tabular form, where the only separators between items are white spaces and line breaks. We refer to such tables as text tables. Due to the lack of structure in such tables, the information present is not readily queryable using traditional database query languages like SQL. One way to make it amenable to standard database querying techniques is to extract the data items in the tables and create a database out of the extracted data. But extraction from text tables poses difficulties due to the irregularity of the data in the columns.

© All rights reserved Davulcu et al. and/or ACM Press
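The notion of a text table — columns delimited only by white space — can be made concrete with a small heuristic: treat every character position that is blank in all rows as a column separator. This is a hedged sketch of the general idea, not the CuTeX algorithm itself; it fails on exactly the irregular columns the abstract mentions.

```python
def split_text_table(lines):
    """Split a whitespace-aligned text table into cells using character
    columns that are blank in every row (illustrative heuristic only)."""
    width = max(len(line) for line in lines)
    padded = [line.ljust(width) for line in lines]
    blank = [all(row[i] == " " for row in padded) for i in range(width)]
    spans, start = [], None
    for i, is_blank in enumerate(blank + [True]):  # sentinel closes last span
        if not is_blank and start is None:
            start = i
        elif is_blank and start is not None:
            spans.append((start, i))
            start = None
    return [[row[s:e].strip() for s, e in spans] for row in padded]
```

Once rows are split into cells this way, the result can be loaded into a relational table and queried with SQL, which is the end goal the abstract describes.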

 

Choudhari, Prashant, Davulcu, Hasan, Joglekar, Abhishek, More, Akshay, Mukherjee, Saikat, Patil, Supriya and Ramakrishnan, I. V. (2002): YellowPager: a tool for ontology-based mining of service directories from web sources. In: Proceedings of the 25th Annual International ACM SIGIR Conference on Research and Development in Information Retrieval 2002. p. 458.

The web has established itself as the dominant medium for electronic commerce. Realizing that its global reach provides significant market and business opportunities, service providers, both large and small, advertise their services on the web. A number of them operate their own web sites promoting their services at length, while others are merely listed in referral sites. Aggregating all of the providers into a queryable service directory makes it easy for customers to locate the one best suited to their needs. YellowPager is a tool for creating service directories by mining web sources. Service directories created by YellowPager have several merits compared to those generated by existing practices, which typically require participation by service providers (e.g., Verizon's SuperYellowPages.com). First, the information content is richer. Second, since the process is automated and repeatable, the content can always be kept current. Finally, the same process can be readily adapted to different domains.

YellowPager builds service directories by mining the web through a combination of keyword-based search engines, web agents, text classifiers, and novel extraction algorithms. The extraction is driven by a services ontology consisting of a taxonomy of service concepts, their associated attributes (such as names and addresses), and type descriptions for the attributes. In addition, the ontology associates an extractor function with each attribute; applying the function to a web page identifies all occurrences of the attribute in that page. YellowPager's mining algorithm consists of a training step followed by classification and extraction steps. In the training step, a classifier is trained to identify web pages relevant to the service of interest. The classification step searches for the particular service using a keyword-based web search engine, retrieves all matching web pages, and identifies the relevant ones using the classifier. The final step is extraction of the attribute values associated with the service from these pages: each web page is parsed into a DOM tree and the extractor functions are applied. All of the attributes corresponding to a service provider must then be aggregated, which can pose difficulties, especially in the presence of multiple service providers in a page. Using a novel concept of scoring and conflict resolution to prevent erroneous associations of attributes with service-provider entities in the page, the algorithm aggregates all the attribute occurrences correctly. The extractor function may not be complete, in the sense that it cannot always identify all the attributes in a page; by exploiting the regularity of the sequence in which attributes occur in referral pages, the mining algorithm automatically learns generalized patterns to locate attributes that the extractor function misses.

The distinguishing aspects of YellowPager's extraction algorithm are: (i) it is unsupervised, and (ii) the attribute values in the pages are extracted independent of any page-specific relationships that may exist among the markup tags. YellowPager has been used by a large pet food producer to build a directory of veterinarian service providers in the United States. The resulting database was found to be much larger and richer than those found in Vetquest, Vetworld, and the Super Yellow Pages. YellowPager is implemented in Java and is interfaced to Rainbow, a C library used for classification. The tool will demonstrate the creation of a service directory for any service domain by mining web sources.

© All rights reserved Choudhari et al. and/or ACM Press
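The pipeline the abstract walks through — classify pages for relevance, then apply per-attribute extractor functions from the ontology — can be illustrated with a deliberately tiny sketch. The keyword classifier stand-in, attribute set, and regular expressions below are assumptions for illustration, not YellowPager's actual components.

```python
import re

# Hypothetical miniature of the classify-then-extract pipeline: each ontology
# attribute carries an extractor function applied to pages that pass the
# classifier. Attribute names and patterns here are purely illustrative.
EXTRACTORS = {
    "phone": lambda text: re.findall(r"\(\d{3}\) \d{3}-\d{4}", text),
    "zip": lambda text: re.findall(r"\b\d{5}\b", text),
}

def is_relevant(text, keywords=("veterinarian", "vet clinic")):
    # Stand-in for the trained text classifier (here: naive keyword matching).
    return any(k in text.lower() for k in keywords)

def mine_directory(pages):
    # Apply every attribute extractor to each page the classifier accepts.
    return [{attr: fn(page) for attr, fn in EXTRACTORS.items()}
            for page in pages if is_relevant(page)]
```

The real system adds the parts the sketch omits: DOM-tree parsing, scoring and conflict resolution to group attributes by provider, and learned patterns that recover attributes the extractor functions miss.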

 
 


 
 
 
 


