Every Friday I pick a paper from the ACM Digital Library that is found by the search term +connected +2005 +"mobile device" +"user interface", and write a brief discussion of it. Why? Because it makes me actually read them.

virtual journal club: "Connected Mobile Devices UI"
Friday, February 11, 2005
Display-agnostic hypermedia 

Unmil P. Karadkar Texas A&M University, College Station, TX
Richard Furuta Texas A&M University, College Station, TX
Selen Ustun Texas A&M University, College Station, TX
YoungJoo Park Texas A&M University, College Station, TX
Jin-Cheon Na Nanyang Technological University, Singapore
Vivek Gupta Texas A&M University, College Station, TX
Tolga Ciftci Texas A&M University, College Station, TX
Yungah Park Texas A&M University, College Station, TX

Conference on Hypertext and Hypermedia
Proceedings of the fifteenth ACM conference on Hypertext & hypermedia
Santa Cruz, CA, USA
SESSION: Novel interfaces
Pages: 58 - 67
Year of Publication: 2004
ISBN: 1-58113-848-2

Abstract:
In the diversifying information environment, contemporary hypermedia authoring and filtering mechanisms cater to specific devices. Display-agnostic hypermedia can be flexibly and efficiently presented on a variety of information devices without any modification of their information content. We augment context-aware Trellis (caT) by introducing two mechanisms to support display-agnosticism: development of new browsers and architectural enhancements. We present browsers that reinterpret existing caT hypertext structures for a different presentation. The architectural enhancements, called MIDAS, flexibly deliver rich hypermedia presentations coherently to a set of diverse devices.

My Discussion:
Mainly a description of MIDAS, a hypermedia system very different from the web we know today. MIDAS stores hypermedia, best described here as linked multimedia presentations, such that the same presentation can carry the same content in different modalities. For example, a URI could abstractly point to a narrative that is available in the repository as a PDF, a spoken audio file, and a plain text file. By exchanging metadata about the content, the capabilities of the client, and the context of the user, MIDAS can choose to present the narrative as audio if the user is on a cellphone, say, or as the PDF if the user is on a laptop. MIDAS servers also synchronize access to the presentation URI across multiple simultaneous browsers and can coordinate what is being shown: a user listening to the narrative on a cellphone could watch the associated pictures on a PDA or the large screen of a public terminal, and since MIDAS knows the user is already listening, it will not spend any of the PDA's screen real estate on the text but will show just the pictures.
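The modality selection and cross-device coordination described above can be sketched in a few lines. This is a toy reconstruction under my own assumptions, not the paper's actual architecture: every name here (PRESENTATION, DEVICE_PREFERENCES, plan_session, the modality labels) is invented for illustration.

```python
# Toy sketch of MIDAS-style modality selection and coordination.
# All names and data structures are hypothetical, not from the paper.

# One abstract presentation, with each content item stored in
# several interchangeable modalities (renditions).
PRESENTATION = {
    "narrative": {"audio": "story.mp3", "pdf": "story.pdf", "text": "story.txt"},
    "pictures":  {"image": "slides.jpg"},
}

# Modalities each device can render, in order of preference.
DEVICE_PREFERENCES = {
    "cellphone": ["audio", "text"],
    "pda":       ["image", "text"],
    "laptop":    ["pdf", "text", "audio"],
}

def plan_session(devices, presentation):
    """Assign each content item to at most one device, in a preferred
    modality, so simultaneous browsers complement rather than duplicate
    each other (e.g. audio on the phone, pictures on the PDA)."""
    assigned = set()                    # items some device already presents
    plan = {device: [] for device in devices}
    for device in devices:
        for item, renditions in presentation.items():
            if item in assigned:
                continue                # another device already shows it
            for modality in DEVICE_PREFERENCES[device]:
                if modality in renditions:
                    plan[device].append((item, modality))
                    assigned.add(item)
                    break
    return plan

# The cellphone takes the narrative as audio; the PDA, seeing the
# narrative is already covered, devotes itself to the pictures.
print(plan_session(["cellphone", "pda"], PRESENTATION))
# {'cellphone': [('narrative', 'audio')], 'pda': [('pictures', 'image')]}
```

A real system would of course negotiate these preference lists dynamically from client-reported capabilities and user context rather than hard-coding them, which is exactly the metadata exchange the paper describes.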
It sounds genuinely advanced and useful, but reading the paper one gets the feeling that authoring media for a system in which the author has so little prior knowledge of how the narratives will be viewed might make it very difficult to craft a specific experience, something current web authors are heavily invested in. Very little is said about what state MIDAS is in, although the figures leave the impression that while the servers may be well specified and built, the client-side browsers are still downright primitive. The system has the potential to give users access to their media in many different ways, but the paper spends little time actually discussing the experiences and feelings of its users, either as browsers or as authors.

