Publication statistics

Pub. period: 2006-2012
Pub. count: 23
Number of co-authors: 24



Co-authors

Number of publications with his 3 most frequent co-authors:

I. V. Ramakrishnan: 18
Faisal Ahmed: 7
Jeffrey P. Bigham: 5

 

 

Productive colleagues

Yevgen Borodin's 3 most productive colleagues, by number of publications:

Hironobu Takagi: 33
Richard E. Ladner: 32
Jeffrey P. Bigham: 32
 
 
 

Yevgen Borodin


Publications by Yevgen Borodin (bibliography)

2012
 

Islam, Muhammad Asiful, Ahmed, Faisal, Borodin, Yevgen and Ramakrishnan, I. V. (2012): Thematic organization of web content for distraction-free text-to-speech narration. In: Fourteenth Annual ACM SIGACCESS Conference on Assistive Technologies 2012. pp. 17-24.

People with visual disabilities, especially those who are blind, have digital content narrated to them by text-to-speech (TTS) engines (e.g., with the help of screen readers). Naively narrating web pages, particularly ones consisting of several diverse pieces (e.g., news summaries, opinion pieces, taxonomy, ads), with TTS engines without organizing them into thematic segments makes it very difficult for a blind user to mentally separate out and comprehend the essential elements in a segment, and the effort to do so can cause significant cognitive stress. One can alleviate this difficulty by segmenting web pages into thematic pieces and then narrating each of them separately. Extant segmentation methods typically segment web pages using visual and structural cues. The use of such cues without taking into account the semantics of the content tends to produce "impure" segments containing extraneous material interspersed with the essential elements. In this paper, we describe a new technique for identifying thematic segments by tightly coupling visual, structural, and linguistic features present in the content. A notable aspect of the technique is that it produces segments with very little irrelevant content. Another interesting aspect is that the clutter-free main content of a web page, as produced by the Readability tool and the "Reader" feature of the Safari browser, emerges as a special case of the thematic segments created by our technique. We provide experimental evidence of the effectiveness of our technique in reducing clutter. We also describe a user study with 23 blind subjects evaluating its impact on web accessibility.

© All rights reserved Islam et al. and/or ACM Press

 

Ahmed, Faisal, Borodin, Yevgen, Soviak, Andrii, Islam, Muhammad, Ramakrishnan, I. V. and Hedgpeth, Terri (2012): Accessible skimming: faster screen reading of web pages. In: Proceedings of the 2012 ACM Symposium on User Interface Software and Technology 2012. pp. 367-378.

In our information-driven web-based society, we are all gradually falling "victims" to information overload [5]. However, while sighted people are finding ways to sift through information faster, Internet users who are blind are experiencing an even greater information overload. These users access computers and the Internet using screen-reader software, which reads the information on a computer screen sequentially using computer-generated speech. While sighted people can learn to quickly glance over headlines and news articles online to get the gist of the information, people who are blind have to use keyboard shortcuts to listen through the content narrated by a serial audio interface. This interface does not give them an opportunity to know what content to skip and what to listen to. So, they either listen to all of the content or listen to the first part of each sentence or paragraph before skipping to the next one. In this paper, we propose an automated approach to facilitate non-visual skimming of web pages. We describe the underlying algorithm, outline a non-visual skimming interface, and report on the results of automated experiments, as well as on our user study with 23 screen-reader users. The results of the experiments suggest that we have been moderately successful in designing a viable algorithm for automatic summarization that could be used for non-visual skimming. In our user studies, we confirmed that people who are blind could read and search through online articles faster and were able to understand and remember most of what they had read with our skimming system. Finally, all 23 participants expressed genuine interest in using non-visual skimming in the future.

© All rights reserved Ahmed et al. and/or ACM Press

 

Borodin, Yevgen, Sovyak, Andrii, Dimitriyadi, Alexander, Puzis, Yury, Melnyk, Valentyn, Ahmed, Faisal, Dausch, Glenn and Ramakrishnan, I. V. (2012): Universal and ubiquitous web access with Capti. In: Proceedings of the 2012 International Cross-Disciplinary Conference on Web Accessibility W4A 2012. p. 27.

In this paper we present Capti -- a universally and ubiquitously accessible web browsing application enabling intuitive and usable web access for people with and without vision impairments. Capti provides a usable screen-reader interface for web browsing and an accessible listen-to-it-later Playlist (charmtechlabs.com).

© All rights reserved Borodin et al. and/or ACM Press

 

Ahmed, Faisal, Borodin, Yevgen, Puzis, Yury and Ramakrishnan, I. V. (2012): Why read if you can skim: towards enabling faster screen reading. In: Proceedings of the 2012 International Cross-Disciplinary Conference on Web Accessibility W4A 2012. p. 39.

Skimming broadly refers to different speed-reading methods that aim to enhance the rate of reading without unduly compromising on comprehension and retention of information. Skimming of content could be particularly useful for people with vision impairments, who frequently experience information overload when listening to reams of digital content online. Support for usable and useful skimming in modern screen readers remains very poor. This paper explores the user requirements for a usable non-visual skimming interface, informed by a large-scale human-subject experiment with blind individuals. Specifically, the study has: (1) helped identify the type of skimming that can be useful in screen reading main content in web pages; (2) led to the development of a usable interface for accessible online skimming; (3) demonstrated the utility of the accessible skimming interface in two realistic use scenarios; (4) identified automatic summarization techniques that could "closely" approximate skimming methods used by sighted people.

© All rights reserved Ahmed et al. and/or ACM Press

 

Puzis, Yury, Borodin, Yevgen, Ahmed, Faisal and Ramakrishnan, I. V. (2012): An intuitive accessible web automation user interface. In: Proceedings of the 2012 International Cross-Disciplinary Conference on Web Accessibility W4A 2012. p. 41.

In recent years, the Web has become an ever more sophisticated and irreplaceable tool in our daily lives. While the visual Web has advanced at a rapid pace, assistive technology has not been able to keep up, increasingly putting visually impaired users at a disadvantage. Web automation has the potential to bridge the accessibility divide between the ways blind and sighted people access the Web; specifically, it can enable blind people to accomplish web browsing tasks that were previously slow, hard, or even impossible to achieve. In this paper, we propose and evaluate an intuitive and accessible web automation interface. We validate the design in a Wizard-of-Oz user study with visually-impaired subjects and show that the proposed approach has the potential to significantly increase accessibility and usability of web pages, reduce interaction time, and increase user satisfaction. Our findings demonstrate the feasibility of and emphasize the pressing need for truly accessible web automation technologies.

© All rights reserved Puzis et al. and/or ACM Press

2011
 

Bigham, Jeffrey P., Ladner, Richard E. and Borodin, Yevgen (2011): The design of human-powered access technology. In: Thirteenth Annual ACM SIGACCESS Conference on Assistive Technologies 2011. pp. 3-10.

People with disabilities have always overcome accessibility problems by enlisting people in their community to help. The Internet has broadened the available community and made it easier to get on-demand assistance remotely. In particular, the past few years have seen the development of technology in both research and industry that uses human power to overcome technical problems too difficult to solve automatically. In this paper, we frame recent developments in human computation in the historical context of accessibility, and outline a framework for discussing new advances in human-powered access technology. Specifically, we present a set of 13 design principles for human-powered access technology motivated both by historical context and current technological developments. We then demonstrate the utility of these principles by using them to compare several existing human-powered access technologies. The power of identifying the 13 principles is that they will inspire new ways of thinking about human-powered access technologies.

© All rights reserved Bigham et al. and/or ACM Press

2010
 

Borodin, Yevgen (2010): Remote web browsing via the phone with teleweb. In: Proceedings of ACM CHI 2010 Conference on Human Factors in Computing Systems 2010. pp. 4339-4344.

TeleWeb is an assistive voice-enabled application empowering users to remotely access the Web through the most ubiquitous device -- the phone. The uniqueness of the technology is that it enables users to gain access to information from almost anywhere via a plain, old-fashioned telephone. TeleWeb users will be able to call their own personal numbers, authenticate themselves, and then use speech and the phone keypad to remotely browse the Web on their own PCs. TeleWeb may especially appeal to people with vision loss, as well as older adults who may find the phone interface to be more familiar and easier to use. In this paper, I describe the TeleWeb approach and the interface.

© All rights reserved Borodin and/or his/her publisher

 

Borodin, Yevgen, Bigham, Jeffrey P., Dausch, Glenn and Ramakrishnan, I. V. (2010): More than meets the eye: a survey of screen-reader browsing strategies. In: Proceedings of the 2010 International Cross-Disciplinary Conference on Web Accessibility W4A 2010. p. 13.

Browsing the Web with screen readers can be difficult and frustrating. Web pages often contain inaccessible content that is expressed only visually or that can be accessed only with the mouse. Screen-reader users must also contend with usability challenges encountered when the reading content is designed with built-in assumptions of how it will be accessed -- generally by a sighted person on a standard display. Far from passive consumers of content who simply accept web content as accessible or not, many screen-reader users are adept at developing, discovering, and employing browsing strategies that help them overcome the accessibility and usability problems they encounter. In this paper, we overview the browsing strategies that we have observed screen-reader users employ when faced with challenges, ranging from unfamiliar web sites and complex web pages to dynamic and automatically-refreshing content. A better understanding of existing browsing strategies can inform the design of accessible websites, development of new tools that make experienced users more effective, and help overcome the initial learning curve for users who have not yet acquired effective browsing strategies.

© All rights reserved Borodin et al. and/or their publisher

 

Borodin, Yevgen, Ahmed, Faisal, Islam, Muhammad Asiful, Puzis, Yury, Melnyk, Valentyn, Feng, Song, Ramakrishnan, I. V. and Dausch, Glenn (2010): Hearsay: a new generation context-driven multi-modal assistive web browser. In: Proceedings of the 2010 International Conference on the World Wide Web 2010. pp. 1233-1236.

This demo will present HearSay, a multi-modal non-visual web browser, which aims to bridge the growing Web Accessibility divide between individuals with visual impairments and their sighted counterparts, and to facilitate full participation of blind individuals in the growing Web-based society.

© All rights reserved Borodin et al. and/or their publisher

 

Islam, Muhammad Asiful, Borodin, Yevgen and Ramakrishnan, I. V. (2010): Mixture model based label association techniques for web accessibility. In: Proceedings of the 2010 ACM Symposium on User Interface Software and Technology 2010. pp. 67-76.

An important aspect of making the Web accessible to blind users is ensuring that all important web page elements such as links, clickable buttons, and form fields have explicitly assigned labels. Properly labeled content is then correctly read out by screen readers, a dominant assistive technology used by blind users. In particular, improperly labeled form fields can critically impede online transactions such as shopping, paying bills, etc. with screen readers. Very often labels are not associated with form fields or are missing altogether, making form filling a challenge for blind users. Algorithms for associating a form element with one of several candidate labels in its vicinity must cope with the variability of the element's features, including the label's location relative to the element, its distance to the element, etc. Probabilistic models provide a natural machinery to reason with such uncertainties. In this paper we present a Finite Mixture Model (FMM) formulation of the label association problem. The variability of feature values is captured in the FMM by a mixture of random variables that are drawn from parameterized distributions. Then, the most likely label to be paired with a form element is computed by maximizing the log-likelihood of the feature data using the Expectation-Maximization algorithm. We also adapt the FMM approach for two related problems: assigning labels (from an external Knowledge Base) to form elements that have no candidate labels in their vicinity, and quickly identifying clickable elements such as add-to-cart, checkout, etc., used in online transactions even when these elements do not have textual captions (e.g., image buttons without alternative text). We provide a quantitative evaluation of our techniques, as well as a user study with two blind subjects who used an aural web browser implementing our approach.

© All rights reserved Islam et al. and/or their publisher
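The label-association idea above can be illustrated with a toy sketch: model label-to-field pixel distances as a two-component one-dimensional Gaussian mixture fit with Expectation-Maximization, then pick the candidate label that is most likely under the "near" component. All numbers, candidate names, and the two-component choice are invented for illustration; the paper's actual FMM uses richer features and distributions.

```python
import math

# Toy sketch (not the paper's implementation): true labels tend to sit close to
# their form field, while unrelated text lies farther away, so distances split
# into a "near" and a "far" cluster that EM can recover.

def gaussian(x, mu, sigma):
    return math.exp(-((x - mu) ** 2) / (2 * sigma ** 2)) / (sigma * math.sqrt(2 * math.pi))

def em_fit(xs, iters=50):
    mus = [min(xs), max(xs)]            # crude init: centers at the extremes
    sigmas, pis = [10.0, 10.0], [0.5, 0.5]
    for _ in range(iters):
        # E-step: responsibility of each component for each observation
        resp = []
        for x in xs:
            w = [pis[k] * gaussian(x, mus[k], sigmas[k]) for k in range(2)]
            s = sum(w)
            resp.append([wk / s for wk in w])
        # M-step: re-estimate means, variances, and mixing weights
        for k in range(2):
            nk = sum(r[k] for r in resp)
            mus[k] = sum(r[k] * x for r, x in zip(resp, xs)) / nk
            var = sum(r[k] * (x - mus[k]) ** 2 for r, x in zip(resp, xs)) / nk
            sigmas[k] = max(1.0, math.sqrt(var))   # floor keeps the fit stable
            pis[k] = nk / len(xs)
    return mus, sigmas, pis

# Invented distances (px) from one form field to surrounding text nodes
candidates = {"Email address": 8.0, "Subscribe to newsletter": 45.0, "Privacy policy": 120.0}
observed = [8.0, 12.0, 9.0, 45.0, 120.0, 110.0, 15.0, 95.0]

mus, sigmas, pis = em_fit(observed)
near = 0 if mus[0] < mus[1] else 1      # smaller-mean component = "true label" cluster
best = max(candidates, key=lambda c: gaussian(candidates[c], mus[near], sigmas[near]))
print(best)  # "Email address"
```

In the real system the mixture is over several features at once (position, direction, distance), but the mechanics of fitting with EM and scoring candidates by likelihood are the same.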

 

Ahmed, Faisal, Islam, Muhammad Asiful, Borodin, Yevgen and Ramakrishnan, I. V. (2010): Assistive web browsing with touch interfaces. In: Twelfth Annual ACM SIGACCESS Conference on Assistive Technologies 2010. pp. 235-236.

This demonstration proposes a touch-based directional navigation technique for touch interfaces (e.g., iPhone, MacBook) for people with visual disabilities, especially blind individuals. Such interfaces, coupled with TTS (text-to-speech) systems, open up intriguing possibilities for browsing and skimming web content with ease and speed. Apple's seminal VoiceOver system for iOS is an exemplar of bringing touch-based web navigation to blind people. However, it has two major shortcomings, the "fat finger" and "finger fatigue" problems, which this paper addresses with two proposed approaches. A preliminary user evaluation of a system incorporating these ideas suggests that they can be effective in practice.

© All rights reserved Ahmed et al. and/or their publisher

2009
 

Borodin, Yevgen, Dausch, Glenn and Ramakrishnan, I. V. (2009): TeleWeb: accessible service for web browsing via phone. In: Proceedings of the 2009 International Cross-Disciplinary Conference on Web Accessibility W4A 2009. pp. 96-97.

In this paper, we present TeleWeb -- a telephony service for web browsing via the most ubiquitous communication device, the phone. TeleWeb integrates a simple and usable phone interface with intelligent features such as context-directed browsing, template detection, and macro replaying. We expect that TeleWeb will equally appeal to people with vision impairments, older adults who may not be comfortable using computers, and anyone who wants to access the Web on the move.

© All rights reserved Borodin et al. and/or ACM Press

 

Mahmud, Jalal, Borodin, Yevgen, Ramakrishnan, I. V. and Ramakrishnan, C. R. (2009): Automated construction of web accessibility models from transaction click-streams. In: Proceedings of the 2009 International Conference on the World Wide Web 2009. pp. 871-880.

Screen readers, the dominant assistive technology used by visually impaired people to access the Web, function by speaking out the content of the screen serially. Using screen readers for conducting online transactions can cause considerable information overload, because transactions, such as shopping and paying bills, typically involve a number of steps spanning several web pages. One can combat this overload by using a transaction model for web accessibility that presents only fragments of web pages that are needed for doing transactions. We can realize such a model by coupling a process automaton, encoding states of a transaction, with concept classifiers that identify page fragments "relevant" to a particular state of the transaction. In this paper we present a fully automated process that synergistically combines several techniques for transforming unlabeled click-stream data generated by transactions into a transaction model. These techniques include web content analysis to partition a web page into segments consisting of semantically related content, contextual analysis of data surrounding clickable objects in a page, and machine learning methods, such as clustering of page segments based on contextual analysis, statistical classification, and automata learning. The use of unlabeled click streams in building transaction models has important benefits: (i) visually impaired users do not have to depend on sighted users for creating manually labeled training data to construct the models; (ii) it is possible to mine personalized models from unlabeled transaction click-streams associated with sites that visually impaired users visit regularly; (iii) since unlabeled data is relatively easy to obtain, it is feasible to scale up the construction of domain-specific transaction models (e.g., separate models for shopping, airline reservations, bill payments, etc.); (iv) adjusting the performance of deployed models over time with new training data is also doable.
We provide preliminary experimental evidence of the practical effectiveness of both domain-specific and personalized accessibility transaction models built using our approach. Finally, this approach is applicable to building transaction models for mobile devices with limited-size displays, as well as to creating wrappers for information extraction from web sites.

© All rights reserved Mahmud et al. and/or ACM Press
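As a toy illustration of mining a process automaton from unlabeled click-streams, the sketch below counts state transitions across a few invented action sequences and keeps each state's most frequent successor. The action names stand in for the clusters of semantically related page segments the paper derives through contextual analysis; real automata learning is considerably richer.

```python
from collections import defaultdict

# Hypothetical click-streams: each is a sequence of abstract transaction states.
clickstreams = [
    ["search", "item", "add-to-cart", "checkout"],
    ["search", "item", "item", "add-to-cart", "checkout"],
    ["search", "add-to-cart", "checkout"],
]

# Count observed transitions, including synthetic START/END states.
transitions = defaultdict(lambda: defaultdict(int))
for stream in clickstreams:
    prev = "START"
    for action in stream:
        transitions[prev][action] += 1
        prev = action
    transitions[prev]["END"] += 1

# Skeleton of the process automaton: the most likely next step from each state.
model = {state: max(nexts, key=nexts.get) for state, nexts in transitions.items()}
print(model["search"], "->", model["add-to-cart"])  # item -> checkout
```

A screen reader driven by such a model would, at each transaction state, surface only the page fragments classified as relevant to the predicted next step instead of narrating the whole page.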

2008
 

Borodin, Yevgen, Bigham, Jeffrey P., Raman, Rohit and Ramakrishnan, I. V. (2008): What's new?: making web page updates accessible. In: Tenth Annual ACM SIGACCESS Conference on Assistive Technologies 2008. pp. 145-152.

Web applications facilitated by technologies such as JavaScript, DHTML, AJAX, and Flash use a considerable amount of dynamic web content that is either inaccessible or unusable by blind people. Server-side changes to web content cause whole-page refreshes even though only small sections of the page update, forcing blind web users to search linearly through the page to find new content. The connecting theme is the need to quickly and unobtrusively identify the segments of a web page that have changed and notify the user of them. In this paper we propose Dynamo, a system designed to unify different types of dynamic content and make dynamic content accessible to blind web users. Dynamo treats web page updates uniformly and its methods encompass both web updates enabled through dynamic content and scripting, and updates resulting from static page refreshes, form submissions, and template-based web sites. From an algorithmic and interaction perspective Dynamo detects underlying changes and provides users with a single and intuitive interface for reviewing the changes that have occurred. We report on the quantitative and qualitative results of an evaluation conducted with blind users. These results suggest that Dynamo makes access to dynamic content faster, and that blind web users like it better than existing interfaces.

© All rights reserved Borodin et al. and/or ACM Press

 

Kawanaka, Shinya, Borodin, Yevgen, Bigham, Jeffrey P., Lunn, Darren, Takagi, Hironobu and Asakawa, Chieko (2008): Accessibility commons: a metadata infrastructure for web accessibility. In: Tenth Annual ACM SIGACCESS Conference on Assistive Technologies 2008. pp. 153-160.

Research projects, assistive technology, and individuals all create metadata in order to improve Web accessibility for visually impaired users. However, since these projects are disconnected from one another, this metadata is isolated in separate tools, stored in disparate repositories, and represented in incompatible formats. Web accessibility could be greatly improved if these individual contributions were merged. An integration method will serve as the bridge between future academic research projects and end users, enabling new technologies to reach end users more quickly. Therefore we introduce Accessibility Commons, a common infrastructure to integrate, store, and share metadata designed to improve Web accessibility. We explore existing tools to show how the metadata that they produce could be integrated into this common infrastructure, we present the design decisions made in order to help ensure that our common repository will remain relevant in the future as new metadata is developed, and we discuss how the common infrastructure component facilitates our broader social approach to improving accessibility.

© All rights reserved Kawanaka et al. and/or ACM Press

 

Borodin, Yevgen (2008): Automation of repetitive web browsing tasks with voice-enabled macros. In: Tenth Annual ACM SIGACCESS Conference on Assistive Technologies 2008. pp. 307-308.

Non-visual aural web browsing remains inefficient as compared to regular browsing with visual modalities. This paper proposes an approach for automation of repetitive browsing tasks by using personalized macros, which are easy to record and replay with a speech-enabled interface. The prototype system is implemented in the framework of the HearSay non-visual web browser.

© All rights reserved Borodin and/or ACM Press

 

Mahmud, Jalal, Borodin, Yevgen and Ramakrishnan, I. V. (2008): Assistive browser for conducting web transactions. In: Proceedings of the 2008 International Conference on Intelligent User Interfaces 2008. pp. 365-368.

People with visual impairments use screen readers to browse the Web. Sequential processing of web pages by screen readers causes information overload, making web browsing time-consuming and strenuous. These problems are further exacerbated in web transactions (e.g., online shopping), which involve multiple steps spanning several web pages. In this paper we present a lightweight approach for doing web transactions using non-visual modalities. We describe how analysis of the context surrounding a link, coupled with a shallow knowledge base of patterns and keywords, can help identify various concepts (e.g., "add to cart", "item description", etc.) that are important in web transactions. Our preliminary results show promise that presenting such concepts to users can reduce information overload and improve their overall browsing experience.

© All rights reserved Mahmud et al. and/or ACM Press

 

Borodin, Yevgen, Bigham, Jeffrey P., Stent, Amanda and Ramakrishnan, I. V. (2008): Towards one world web with HearSay3. In: Proceedings of the 2008 International Cross-Disciplinary Conference on Web Accessibility W4A 2008. pp. 130-131.

In this paper, we present the key functionalities of HearSay 3, a non-visual web browser designed with the goal of improving web accessibility across the world. The featured functionalities include transparent support for multiple languages, collaborative labeling that harnesses the power of the social web, and a usable interface for handling dynamic web content.

© All rights reserved Borodin et al. and/or ACM Press

2007
 

Mahmud, Jalal, Borodin, Yevgen, Das, Dipanjan and Ramakrishnan, I. V. (2007): Combating information overload in non-visual web access using context. In: Proceedings of the 2007 International Conference on Intelligent User Interfaces 2007. pp. 341-344.

Web sites are designed for a graphical mode of interaction. Sighted users can visually segment web pages and quickly identify relevant information. In contrast, visually disabled individuals have to use screen readers to browse the Web. Screen readers process pages sequentially and read through everything, making web browsing time-consuming and strenuous. The use of shortcut keys and searching offers some improvements, but the problem still remains. In this paper, we address this problem using the notion of context. When a user follows a link, we capture the context of the link and use it to identify relevant information on the next page. The content of this page is rearranged so that the relevant information is read out first. We conducted a series of experiments to compare the performance of our prototype system with the state-of-the-art JAWS screen reader. Our results show that the use of context can potentially save browsing time as well as improve the browsing experience of visually disabled individuals.

© All rights reserved Mahmud et al. and/or ACM Press

 

Mahmud, Jalal U., Borodin, Yevgen and Ramakrishnan, I. V. (2007): CSurf: a context-driven non-visual web-browser. In: Proceedings of the 2007 International Conference on the World Wide Web 2007. pp. 31-40.

Web sites are designed for a graphical mode of interaction. Sighted users can "cut to the chase" and quickly identify relevant information in web pages. In contrast, individuals with visual disabilities have to use screen readers to browse the Web. As screen readers process pages sequentially and read through everything, web browsing can become strenuous and time-consuming. Although the use of shortcuts and searching offers some improvements, the problem still remains. In this paper, we address the problem of information overload in non-visual web access using the notion of context. Our prototype system, CSurf, embodying our approach, provides the usual features of a screen reader. However, when a user follows a link, CSurf captures the context of the link using a simple topic-boundary detection technique and uses it to identify relevant information on the next page with the help of a Support Vector Machine, a statistical machine-learning model. Then, CSurf reads the web page starting from the most relevant section identified by the model. We conducted a series of experiments to evaluate the performance of CSurf against the state-of-the-art screen reader JAWS. Our results show that the use of context can potentially save browsing time and substantially improve the browsing experience of visually disabled people.

© All rights reserved Mahmud et al. and/or International World Wide Web Conference Committee
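The context-directed browsing idea behind CSurf can be sketched minimally: capture the words around a followed link, then start reading the next page from the section that best matches that context. A plain bag-of-words overlap score stands in for the Support Vector Machine classifier the paper actually uses, and the link context and page sections below are invented for illustration.

```python
# Toy sketch of context-directed browsing, not CSurf's real classifier.

def tokens(text):
    return set(text.lower().split())

# Words surrounding the link the user just followed (invented example)
link_context = "read our review of the new noise-cancelling headphones"

# Candidate segments of the next page (invented example)
next_page_sections = [
    "Site navigation: home electronics deals account cart",
    "Review: the new noise-cancelling headphones impress with battery life",
    "Subscribe to our newsletter for weekly deals",
]

ctx = tokens(link_context)
# Rank sections by word overlap with the link's context; narrate the winner first.
best = max(next_page_sections, key=lambda s: len(ctx & tokens(s)))
print(best)
```

With this ordering, a screen reader would begin narration at the review section rather than at the navigation bar, which is the time saving the experiments above measure.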

 

Borodin, Yevgen, Mahmud, Jalal, Ramakrishnan, I. V. and Stent, Amanda (2007): The HearSay non-visual web browser. In: Proceedings of the 2007 International Cross-Disciplinary Conference on Web Accessibility (W4A) 2007. pp. 128-129.

This paper describes HearSay, a non-visual Web browser, featuring context-directed browsing, a unique and innovative Web accessibility feature, and an extensible VoiceXML dialog interface. The browser provides most of the standard browsing functionalities, including flexible navigation and form-filling. The paper also outlines future work aiming to make the Web more accessible for individuals with visual impairments.

© All rights reserved Borodin et al. and/or ACM Press

2006
 

Mahmud, Jalal, Borodin, Yevgen, Das, Dipanjan and Ramakrishnan, I. V. (2006): Improving non-visual web access using context. In: Eighth Annual ACM Conference on Assistive Technologies 2006. pp. 273-274.

To browse the Web, blind people have to use screen readers, which process pages sequentially, making browsing time-consuming. We present a prototype system, CSurf, which provides all the features of a regular screen reader, but when a user follows a link, CSurf captures the context of the link and uses it to identify relevant information on the next page. CSurf rearranges the content of the next page so that the relevant information is read out first. A series of experiments has been conducted to evaluate the performance of CSurf.

© All rights reserved Mahmud et al. and/or ACM Press

 

Borodin, Yevgen (2006): A flexible VXML interpreter for non-visual web access. In: Eighth Annual ACM Conference on Assistive Technologies 2006. pp. 301-302.

VoiceXML (VXML) is a W3C standard for specifying interactive dialogs. It finds multiple uses in various Web applications and can also be used in non-visual Web browsing. However, there is no suitable, complete, open-source, flexible VXML interpreter to process VXML dialogs. My project focuses on developing a VXML interpreter, VXMLSurf, that will be fully compliant with the VXML 2.0 specification and geared toward accessing Web content. VXMLSurf implements a number of extended features that give blind users more control over interactive browsing dialogs. VXMLSurf is part of the HearSay project for developing a non-visual Web browser. The goal of the project is to make the Web more accessible for blind people.

© All rights reserved Borodin and/or ACM Press

 

Page Information

Page maintainer: The Editorial Team
URL: http://www.interaction-design.org/references/authors/yevgen_borodin.html
