Publication statistics

Pub. period: 2010-2012
Pub. count: 4
Number of co-authors: 7



Co-authors

Number of publications with 3 favourite co-authors:

I. V. Ramakrishnan: 4
Yevgen Borodin: 4
Faisal Ahmed: 3

 

 

Productive colleagues

Muhammad Asiful Islam's 3 most productive colleagues in number of publications:

I. V. Ramakrishnan: 31
Yevgen Borodin: 23
Faisal Ahmed: 9
 
 
 



Muhammad Asiful Islam

 

Publications by Muhammad Asiful Islam (bibliography)

2012
 

Islam, Muhammad Asiful, Ahmed, Faisal, Borodin, Yevgen and Ramakrishnan, I. V. (2012): Thematic organization of web content for distraction-free text-to-speech narration. In: Fourteenth Annual ACM SIGACCESS Conference on Assistive Technologies 2012. pp. 17-24. Available online

People with visual disabilities, especially those who are blind, have digital content narrated to them by text-to-speech (TTS) engines (e.g., with the help of screen readers). Naively narrating web pages with TTS engines, particularly pages consisting of several diverse pieces (e.g., news summaries, opinion pieces, taxonomy, ads), without organizing them into thematic segments makes it very difficult for the blind user to mentally separate out and comprehend the essential elements in a segment, and the effort to do so can cause significant cognitive stress. One can alleviate this difficulty by segmenting web pages into thematic pieces and then narrating each of them separately. Extant segmentation methods typically segment web pages using visual and structural cues. The use of such cues without taking into account the semantics of the content tends to produce "impure" segments containing extraneous material interspersed with the essential elements. In this paper, we describe a new technique for identifying thematic segments by tightly coupling visual, structural, and linguistic features present in the content. A notable aspect of the technique is that it produces segments with very little irrelevant content. Another interesting aspect is that the clutter-free main content of a web page, as produced by the Readability tool and the "Reader" feature of the Safari browser, emerges as a special case of the thematic segments created by our technique. We provide experimental evidence of the effectiveness of our technique in reducing clutter. We also describe a user study with 23 blind subjects on its impact on web accessibility.

© All rights reserved Islam et al. and/or ACM Press
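The paper describes its segmentation approach only at the level of the abstract above. Purely as a hedged illustration of the general idea, the following Python sketch couples a structural cue (distance between DOM paths) with a linguistic cue (word overlap) to group adjacent content blocks into thematic segments; the Block record, the Jaccard similarity, and both thresholds are illustrative assumptions, not the authors' actual features or code.

# Hedged sketch of thematic segmentation coupling structural and linguistic cues.
# Not the authors' implementation; Block, the Jaccard measure, and the thresholds
# are assumptions made for illustration only.
from dataclasses import dataclass

@dataclass
class Block:
    dom_path: tuple  # structural position, e.g. ("body", "div#news", "p")
    text: str        # visible text of the block

def lexical_similarity(a: str, b: str) -> float:
    # Jaccard overlap of word sets: a simple stand-in for richer linguistic features.
    wa, wb = set(a.lower().split()), set(b.lower().split())
    return len(wa & wb) / len(wa | wb) if wa | wb else 0.0

def structural_distance(a: Block, b: Block) -> int:
    # Number of DOM-path components outside the longest common prefix.
    common = 0
    for x, y in zip(a.dom_path, b.dom_path):
        if x != y:
            break
        common += 1
    return (len(a.dom_path) - common) + (len(b.dom_path) - common)

def segment(blocks, sim_threshold=0.2, struct_threshold=3):
    # Extend the current segment while the next block is structurally close
    # and lexically related; otherwise start a new thematic segment.
    segments, current = [], []
    for block in blocks:
        if current and (structural_distance(current[-1], block) > struct_threshold
                        or lexical_similarity(current[-1].text, block.text) < sim_threshold):
            segments.append(current)
            current = []
        current.append(block)
    if current:
        segments.append(current)
    return segments

In a TTS pipeline, each returned segment would then be narrated separately, which is the distraction-reduction effect the paper evaluates.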

2010
 

Borodin, Yevgen, Ahmed, Faisal, Islam, Muhammad Asiful, Puzis, Yury, Melnyk, Valentyn, Feng, Song, Ramakrishnan, I. V. and Dausch, Glenn (2010): Hearsay: a new generation context-driven multi-modal assistive web browser. In: Proceedings of the 2010 International Conference on the World Wide Web 2010. pp. 1233-1236. Available online

This demo will present HearSay, a multi-modal non-visual web browser, which aims to bridge the growing Web Accessibility divide between individuals with visual impairments and their sighted counterparts, and to facilitate full participation of blind individuals in the growing Web-based society.

© All rights reserved Borodin et al. and/or their publisher

 

Islam, Muhammad Asiful, Borodin, Yevgen and Ramakrishnan, I. V. (2010): Mixture model based label association techniques for web accessibility. In: Proceedings of the 2010 ACM Symposium on User Interface Software and Technology 2010. pp. 67-76. Available online

An important aspect of making the Web accessible to blind users is ensuring that all important web page elements such as links, clickable buttons, and form fields have explicitly assigned labels. Properly labeled content is then correctly read out by screen readers, a dominant assistive technology used by blind users. In particular, improperly labeled form fields can critically impede online transactions such as shopping, paying bills, etc. with screen readers. Very often labels are not associated with form fields or are missing altogether, making form filling a challenge for blind users. Algorithms for associating a form element with one of several candidate labels in its vicinity must cope with the variability of the element's features, including the label's location relative to the element, its distance from the element, etc. Probabilistic models provide a natural machinery to reason with such uncertainties. In this paper we present a Finite Mixture Model (FMM) formulation of the label association problem. The variability of feature values is captured in the FMM by a mixture of random variables that are drawn from parameterized distributions. Then, the most likely label to be paired with a form element is computed by maximizing the log-likelihood of the feature data using the Expectation-Maximization algorithm. We also adapt the FMM approach for two related problems: assigning labels (from an external Knowledge Base) to form elements that have no candidate labels in their vicinity, and quickly identifying clickable elements such as add-to-cart, checkout, etc., used in online transactions even when these elements do not have textual captions (e.g., image buttons without alternative text). We provide a quantitative evaluation of our techniques, as well as a user study with two blind subjects who used an aural web browser implementing our approach.

© All rights reserved Islam et al. and/or their publisher
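The abstract names the modeling machinery (a finite mixture model fit with Expectation-Maximization) but not the concrete features or code. As a hedged, minimal stand-in, the sketch below uses scikit-learn's GaussianMixture (which is fit with EM) over two illustrative geometric features, a candidate label's (dx, dy) offset from the form field; the training offsets, the feature choice, and n_components=2 are assumptions for illustration and are not taken from the paper.

# Hedged sketch of mixture-model label association, not the authors' implementation:
# fit a Gaussian mixture on feature vectors of known (form field, label) pairs,
# then pick the candidate label whose features get the highest log-likelihood.
import numpy as np
from sklearn.mixture import GaussianMixture

# Hypothetical training data: pixel offsets (dx, dy) of correct labels
# relative to their form fields (labels to the left, or labels above).
train = np.array([
    [-120.0,   0.0], [-110.0,   2.0], [-130.0,  -1.0], [-115.0,   1.0],  # left of field
    [   0.0, -25.0], [   2.0, -22.0], [  -3.0, -27.0], [   0.0, -24.0],  # above field
])

fmm = GaussianMixture(n_components=2, covariance_type="full", random_state=0)
fmm.fit(train)  # parameters are estimated with the EM algorithm

def best_label(candidates):
    # candidates: list of (label_text, dx, dy); return the most likely label text.
    feats = np.array([[dx, dy] for _, dx, dy in candidates])
    scores = fmm.score_samples(feats)  # per-candidate log-likelihood under the mixture
    return candidates[int(np.argmax(scores))][0]

print(best_label([("Email address", -118.0, 1.0),    # plausible label position
                  ("Subscribe now!", 40.0, 60.0)]))   # unlikely label position

The paper's actual FMM uses richer visual and structural features and extends the same scoring idea to knowledge-base labels and caption-less clickable elements.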

 

Ahmed, Faisal, Islam, Muhammad Asiful, Borodin, Yevgen and Ramakrishnan, I. V. (2010): Assistive web browsing with touch interfaces. In: Twelfth Annual ACM SIGACCESS Conference on Assistive Technologies 2010. pp. 235-236. Available online

This demonstration proposes a touch-based directional navigation technique for touch interfaces (e.g., iPhone, MacBook) for people with visual disabilities, especially blind individuals. Such interfaces, coupled with TTS (text-to-speech) systems, open up intriguing possibilities for browsing and skimming web content with ease and speed. Apple's seminal VoiceOver system for iOS is an exemplar of bringing touch-based web navigation to blind people, but it suffers from two major shortcomings, the "fat finger" and "finger fatigue" problems, which this paper addresses with two proposed approaches. A preliminary user evaluation of a system incorporating these ideas suggests that they can be effective in practice.

© All rights reserved Ahmed et al. and/or their publisher

 


Page Information

Page maintainer: The Editorial Team
URL: http://www.interaction-design.org/references/authors/muhammad_asiful_islam.html