
Every Friday I pick a paper from the ACM Digital Library that is found by the search term +connected +2005 +"mobile device" +"user interface", and write a brief discussion of it. Why? Because it makes me actually read them.

virtual journal club: "Connected Mobile Devices UI"
Sunday, August 29, 2004
Metadata creation system for mobile images 
Link

Risto Sarvas Helsinki Institute for Information Technology (HIIT), HUT, Finland
Erick Herrarte UC Berkeley, Berkeley, CA
Anita Wilhelm UC Berkeley, Berkeley, CA
Marc Davis UC Berkeley, Berkeley

International Conference On Mobile Systems, Applications And Services
Proceedings of the 2nd international conference on Mobile systems, applications, and services
Boston, MA, USA
SESSION: Mobile applications
Pages: 36 - 48
Year of Publication: 2004
ISBN:1-58113-793-1

Abstract:
The amount of personal digital media is increasing, and managing it has become a pressing problem. Effective management of media content is not possible without content-related metadata. In this paper we describe a content metadata creation process for images taken with a mobile phone. The design goals were to automate the creation of image content metadata by leveraging automatically available contextual metadata on the mobile phone, to use similarity processing algorithms for reusing shared metadata and images on a remote server, and to interact with the mobile phone user during image capture to confirm and augment the system supplied metadata. We built a prototype system to evaluate the designed metadata creation process. The main findings were that the creation process could be implemented with current technology and it facilitated the creation of semantic metadata at the time of image capture.

My Discussion:
A neat attempt to get a head start on a problem that will confront all of us engineers of mobile consumer devices: how will we deal with the massive amounts of data that users of cameraphones and other recording devices will generate? Here the authors built a system that lets the user attach keywords ("metadata") to a picture immediately after snapping it on a cameraphone, so that the user can later organize and retrieve it based on attributes like where it was taken, who was in it, and who was nearby. Unfortunately, the system is hampered by very spotty connections to the data network that manages and suggests the annotations, making the experience painful and time-consuming instead of a snappy, immediate part of taking and keeping a picture. And this is in a major urban area in California (Berkeley), using the deployed GPRS data network of a major carrier, AT&T. The paper highlights how far we really are from the ubiquitous wireless cloud of mobile assisting services following us everywhere we go, and how network lag and constrained-UI problems can seriously interfere with design goals, something much ad- and research-copy about the mobile revolution prefers to ignore.
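The capture-time flow the paper describes can be sketched as follows. This is a minimal illustration, not the authors' implementation: the field names, the `confirm` callback, and the cell-ID-as-location idea are my assumptions about what "automatic contextual metadata plus user-confirmed tags" could look like.

```python
from dataclasses import dataclass, field
from datetime import datetime
from typing import Callable, List

@dataclass
class ImageMetadata:
    # Contextual fields the phone can fill in automatically at capture time
    timestamp: datetime
    cell_id: str            # network cell the phone is camped on (a coarse location proxy)
    user: str
    # Semantic fields confirmed or added by the user right after the shot
    tags: List[str] = field(default_factory=list)

def capture_metadata(user: str, cell_id: str, suggested_tags: List[str],
                     confirm: Callable[[str], bool]) -> ImageMetadata:
    """Build metadata for a new photo: automatic context plus user-confirmed tags."""
    meta = ImageMetadata(timestamp=datetime.now(), cell_id=cell_id, user=user)
    # A remote server would suggest tags derived from similar shared images;
    # the user keeps only the suggestions that actually apply.
    meta.tags = [t for t in suggested_tags if confirm(t)]
    return meta

# Example: accept every suggestion except "beach"
meta = capture_metadata("risto", "cell-4711", ["party", "office", "beach"],
                        confirm=lambda t: t != "beach")
print(meta.tags)  # ['party', 'office']
```

The key point of the design, and where the network lag hurts, is that the suggestion round-trip happens in the seconds right after capture, not in a later desktop session.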

Friday, August 13, 2004
Interaction design concepts for a mobile personal assistant 
Link

Stacey F. Nagata Utrecht University, Utrecht, The Netherlands
Herre van Oostendorp Utrecht University, Utrecht, The Netherlands
Mark A. Neerincx Delft University of Technology, Soesterberg, The Netherlands

ACM International Conference Proceeding Series
Proceedings of the conference on Dutch directions in HCI
Amsterdam, Holland
SESSION: HCI and society
Page: 9
Year of Publication: 2004
ISBN:1-58113-944-6

Abstract:
The Personal Assistant for onLine Services (PALS) project aims to develop an intelligent interface that facilitates efficient user interaction through personalization and context awareness with commerce web sites on a handheld device. The types of assistance services and interaction support represented by a mobile personal assistant have been investigated in the PALS project. Scenario Based Design was used to develop the PALS framework for the personal assistance services, generic scenarios and a usage model. The service concepts (e.g. direct, solicited, non-solicited, independent) characterize interaction between the user and virtual assistant during mobile web tasks. The generic scenarios and usage model aid to develop design and interaction of the PALS interface. A theme of "personal customer support" through an attentive interactive display can aid user acceptance of mobile web task assistance.

My Discussion:
At first a technologist reading this paper might wonder: where is the meat? Where is the actual system, where are the user tests, how does this system interact with the actual wired world? The paper seems to be about people sitting in a room saying "Wouldn't it be nice if..." about working on connected handhelds in the real world, and the result looks like a blue-sky agent-based system. But that would miss the point: this paper is a formalization of that blue-sky thinking. As much as our eyes glaze over when told about the beautiful world of the future digital avatar assistant, the reality is that if we want these avatars to actually appear and actually work, nailing down exactly what they should do, and how, before they are built is imperative, lest we once again end up with compromise solutions from trying to cobble systems together bottom-up. The formalization of which interactions and modes of communication need to be supported is thus useful for anyone seriously trying to build usable systems for mobile interaction in the domain of managing your electronic life on the run. These formalizations incorporate the latest ideas: how to manage and block IM, how to deal with lapses in communication, the things that hamper mobile connected usability and are currently taken for granted. Builders of current and future systems could do far worse than browsing these formal scenarios to find out what users actually want.

Slightly disturbing about this paper is how many of the references are about the same work, as if it is happening in complete isolation of other work done in academia.


Friday, August 06, 2004
Modelling internet based applications for designing multi-device adaptive interfaces 
Link

Enrico Bertini Università di Roma "La Sapienza", Roma, Italy
Proceedings of the working conference on Advanced visual interfaces
Gallipoli, Italy
SESSION: Advancing interaction
Pages: 252 - 256
Year of Publication: 2004
ISBN:1-58113-867-9

Abstract:
The wide spread of mobile devices in the consumer market has posed a number of new issues in the design of internet applications and their user interfaces. In particular, applications need to adapt their interaction modalities to different portable devices. In this paper we address the problem of defining models and techniques for designing internet based applications that automatically adapt to different mobile devices. First, we define a formal model that allows for specifying the interaction in a way that is abstract enough to be decoupled from the presentation layer, which is to be adapted to different contexts. The model is mainly based on the idea of describing the user interaction in terms of elementary actions. Then, we provide a formal device characterization showing how to effectively implements the AIUs in a multidevice context.

My Discussion:
This paper is related to the previously discussed paper by Florins & Vanderdonckt and references other work by Vanderdonckt and collaborators. Not surprising, since it is about a model-based approach to user interfaces that have to run on multiple classes of devices. The paper starts by listing references to other work in the field, then explains its own modelling system based on Atomic Interaction Units (AIUs) like 'Browse Text' or 'Interact with Image'. There is no indication that the authors are trying to come up with a canonical set of AIUs; rather, a full task like "make a hotel reservation" or "check the balance of a bank account" is broken down into specific AIUs like "select hotels from a list of hotels". The paper is too short to really discuss how well the AIUs are defined as a formal construct, or whether they are just formal-looking notation for informal, intuitive decompositions of web interfaces.

They couple AIUs with formal descriptions of the output device, like "can display 40 rows" or "can run Java", and based on those transform the display of an AIU into a form appropriate to the device. They do give an example, but do not mention how much human intervention is required to make the same list appear differently on the two devices discussed.
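To make the AIU-plus-device-profile idea concrete, here is a toy sketch of how a 'Browse Text' AIU might be adapted to a screen-size capability. The device names, the profile keys, and the pagination strategy are all my invention for illustration, not anything specified in the paper.

```python
# Hypothetical device profiles: capabilities drive how an AIU is presented.
DEVICES = {
    "phone": {"rows": 6,  "java": False},   # small screen, no Java
    "pda":   {"rows": 20, "java": True},    # larger screen
}

def render_browse_text(text_lines, device):
    """'Browse Text' AIU: paginate when the screen is short, else show one page."""
    rows = DEVICES[device]["rows"]
    if len(text_lines) <= rows:
        return [text_lines]                 # fits on a single screen
    # Split into screen-sized pages the user steps through
    return [text_lines[i:i + rows] for i in range(0, len(text_lines), rows)]

lines = [f"hotel {i}" for i in range(15)]
print(len(render_browse_text(lines, "phone")))  # 3 pages of at most 6 rows
print(len(render_browse_text(lines, "pda")))    # 1 page, it all fits
```

The open question the paper leaves, in these terms, is how much of the mapping from abstract AIU to concrete widget is derived automatically and how much has to be hand-tuned per device.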
