
Every Friday I pick a paper from the ACM Digital Library that is found by the search term +connected +2005 +"mobile device" +"user interface", and write a brief discussion of it. Why? Because it makes me actually read them.

virtual journal club: "Connected Mobile Devices UI"
Sunday, March 20, 2005
User Interfaces for Applications on a Wrist Watch 
Link

M. T. Raghunath Wearable Computing Platforms, IBM TJ Watson Research Center, Yorktown Heights, NY, USA
Chandra Narayanaswami Wearable Computing Platforms, IBM TJ Watson Research Center, Yorktown Heights, NY, USA

Personal and Ubiquitous Computing
Volume 6, Issue 1 (February 2002)
Pages: 17 - 30
Year of Publication: 2002
ISSN:1617-4909

Abstract:
Advances in technology have made it possible to package a reasonably powerful processor and memory subsystem coupled with an ultra high-resolution display and wireless communication into a wrist watch. This introduces a set of challenges in the nature of input devices, navigation, applications, and other areas. This paper describes a wearable computing platform in a wrist watch form-factor we have developed. We built two versions: one with a low resolution liquid crystal display; and another with an ultra high resolution organic light emitting diode display. In this paper we discuss the selection of the input devices and the design of applications and user interfaces for these two prototypes, and compare the two versions.

My Discussion:
A bit misplaced in my ACM binder, since the wristwatches discussed do not have an electronic connection to the net or any other device. This paper is an exploration of one-handed user interfaces in a terribly constrained space, as implemented on two wrist watches: one with an almost standard LCD screen and one with a very high resolution screen. While I never expected to read the words "X11" and "wrist watch" describing the same technology, the strength of this paper is in its thoughts about the affordances of smart wrist watches and what position the UI will take in daily life. Two years later, the strength is certainly not in the actual UI, which comes across as fairly unimaginative from their descriptions. It would be interesting to take this technology and strip out the computing power, making the watch just a wireless display for a personal server worn on the body, interfacing with other personal technology.

Friday, March 11, 2005
Hubbub: a sound-enhanced mobile instant messenger that supports awareness and opportunistic interactions 
Link

Ellen Isaacs AT&T Labs, Menlo Park, CA
Alan Walendowski AT&T Labs, Menlo Park, CA
Dipti Ranganathan AT&T Labs, Menlo Park, CA

Conference on Human Factors in Computing Systems
Proceedings of the SIGCHI conference on Human factors in computing systems: Changing our world, changing ourselves
Minneapolis, Minnesota, USA
SESSION: I Think, therefore IM
Pages: 179 - 186
Year of Publication: 2002
ISBN:1-58113-453-3

Abstract:
There have been many attempts to support awareness and lightweight interactions using video and audio, but few have been built on widely available infrastructure. Text-based systems have become more popular, but few support awareness, opportunistic conversations, and mobility, three important elements of distributed collaboration. We built on the popularity of text-based Instant Messengers (IM) by building a mobile IM called Hubbub that tries to provide all three, notably through the use of earcons. In a 5.5-month use study, we found that Hubbub helped people feel connected to colleagues in other locations and supported opportunistic interactions. The sounds provided effective awareness cues, although some found them annoying. It was more important to support graceful transitions between multiple fixed locations than to support wireless access, although both were useful.


My Discussion:
Many of the innovations in IM have already been taken up into the large systems like Yahoo! and AIM, but with modifications: instead of people having a personal sound, they have personal avatar icons. The systems also play sounds to announce that someone has joined or left (although not personalized to the person in question), and specialised clients like Trillian also allow sounds for when people start or stop being idle. The systems now also give visual feedback when someone is typing. Not implemented (much to my personal dismay) is the innovation described in the paper of letting the same person be logged in at multiple locations, like both a desktop and a mobile device, with the system choosing where to deliver the IM based on which device has been idle least (AIM is coming close to this now, though; Yahoo insists on having the user be logged in on only one device and choosing which one explicitly).
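To make that delivery rule concrete, here is a minimal sketch in Python (my own illustration, not Hubbub's actual code) of routing a message to whichever of a user's logged-in devices has been idle least:

    import time

    class Device:
        def __init__(self, name):
            self.name = name
            self.last_activity = time.time()

        def touch(self):
            # called on any local input: a keypress, a stylus tap, etc.
            self.last_activity = time.time()

        def idle_seconds(self):
            return time.time() - self.last_activity

    def pick_delivery_device(devices):
        # the device idle for the least time, i.e. the most recently used one
        return min(devices, key=lambda d: d.idle_seconds())

    # Example: a desktop untouched for ten minutes loses to a just-used handheld.
    desktop, handheld = Device("desktop"), Device("handheld")
    desktop.last_activity -= 600          # pretend 10 minutes of idleness
    print(pick_delivery_device([desktop, handheld]).name)   # -> handheld

The real system would of course have to weigh more signals than raw idle time (is the device even reachable right now?), but the core rule is this simple comparison.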

This is a clear, well-written, and comprehensive paper: it describes both the system and its use in detail, complete with quantitative and qualitative data. While, as I said, the system may be outdated, the data and recorded attitudes about IM are very valuable to have as a good reference to point to when building future work on mobile IM. The recorded attitudes about the sense of presence, and the discussion of how the IM clients had to be re-designed to deal with spotty mobile connectivity, are also very useful in this regard.
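That connectivity point deserves emphasis: a mobile IM client cannot assume a persistent link. A toy sketch (again my own illustration, not code from the paper) of the kind of store-and-forward buffering such a client needs:

    from collections import deque

    class MobileIMClient:
        def __init__(self, send_fn):
            self.send_fn = send_fn      # actually transmits over the network
            self.connected = False
            self.outbox = deque()

        def send(self, msg):
            self.outbox.append(msg)     # never send directly; always queue
            self._flush()

        def on_link_change(self, up):
            self.connected = up
            if up:
                self._flush()           # drain everything queued while offline

        def _flush(self):
            while self.connected and self.outbox:
                self.send_fn(self.outbox.popleft())

    client = MobileIMClient(send_fn=lambda m: print("sent:", m))
    client.send("hi")                   # queued: the link is down
    client.on_link_change(up=True)      # link restored -> "sent: hi"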

Friday, March 04, 2005
A multimodal interaction manager for device independent mobile applications 
Link

Florian Wegscheider Telecommunications Research Center, Vienna, Austria
Thomas Dangl Siemens Österreich AG, Vienna, Austria
Michael Jank Kapsch CarrierCom AG, Vienna, Austria
Rainer Simon Telecommunications Research Center, Vienna, Austria

International World Wide Web Conference
Proceedings of the 13th international World Wide Web conference on Alternate track papers & posters
New York, NY, USA
Pages: 272 - 273
Year of Publication: 2004
ISBN:1-58113-912-8

Abstract:
This poster presents an overview of the work on an interaction manager of a platform for multimodal applications in 2.5G and 3G mobile phone networks and WLAN environments. The poster describes the requirements for the interaction manager (IM), its tasks and the resulting structure. We examine the W3C's definition of an interaction manager and compare it to our implementation, which accomplishes some additional tasks.

My Discussion:
There is a school of thought that posters are just failed papers that tried again. This poster either invalidates that principle or is an exception, because it is hard to see how a multi-page paper could have been more interesting or even fun. The poster describes a system with very ambitious goals: multimodal, in that it combines GUI interactions with speech; multi-user enabled, for collaborative experiences like gaming; device-independent programming, for display on devices ranging from WAP phones to the regular web; and still allowing the programmer fine-grained control. These goals seem contradictory, and the poster, of course, does not have enough room to show what applications actually look like. Unfortunately there seem to be no other papers by these authors currently in the ACM library that show more of MONA applications.
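To make the device-independence idea concrete, here is a toy sketch (my own illustration; the poster shows nothing of MONA's real API) of a single abstract interactor rendered to three different modalities:

    from dataclasses import dataclass

    @dataclass
    class Prompt:
        # an abstract "ask the user something" interactor
        text: str
        choices: list

    def render(prompt, modality):
        if modality == "gui":
            # a graphical client would draw a button per choice
            return prompt.text + " " + " ".join("[%s]" % c for c in prompt.choices)
        if modality == "speech":
            # a voice client would synthesize this and listen for an answer
            return "%s Say one of: %s." % (prompt.text, ", ".join(prompt.choices))
        if modality == "wap":
            # a WML deck would list the choices as numbered links
            return "\n".join("%d. %s" % (i + 1, c) for i, c in enumerate(prompt.choices))
        raise ValueError("unknown modality: " + modality)

    p = Prompt("Play another round?", ["yes", "no"])
    for m in ("gui", "speech", "wap"):
        print(render(p, m))

The tension the poster leaves unresolved is visible even here: the moment the programmer wants fine-grained control over one modality, the single abstract description stops being enough.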
