
Every Friday I pick a paper from the ACM Digital Library that is found by the search term +connected +2005 +"mobile device" +"user interface", and write a brief discussion of it. Why? Because it makes me actually read them.

virtual journal club: "Connected Mobile Devices UI"
Friday, March 16, 2007
Ok, as you can tell, ever since I left Nokia I have not updated this blog. I think we can officially call this journal club dead. I could restart it, but I would want to either refocus it more on papers and articles about mobile visualizations, or broaden it into mobile tech blog number six million. I am not sure yet.

Friday, December 09, 2005
Need for non-visual feedback with long response times in mobile HCI 
Link

Virpi Roto Nokia Research Center, Nokia Group, Finland
Antti Oulasvirta Helsinki Institute of Information Technology, HUT, Finland

International World Wide Web Conference
Special interest tracks and posters of the 14th international conference on World Wide Web
Chiba, Japan
SESSION: Embedded web papers
Pages: 775 - 781
Year of Publication: 2005
ISBN: 1-59593-051-5

Abstract:
When browsing Web pages with a mobile device, the system response times are variable and much longer than on a PC. Users must repeatedly glance at the display to see when the page finally arrives, although mobility demands a Minimal Attention User Interface. We conducted a user study with 27 participants to discover the point at which visual feedback stops reaching the user in mobile context. In the study, we examined the deployment of attention during page loading to the phone vs. the environment in several different everyday mobility contexts, and compared these to the laboratory context. The first part of the page appeared on the screen typically in 11 seconds, but we found that the user's visual attention shifted away from the mobile browser usually between 4 and 8 seconds in the mobile context. In contrast, the continuous span of attention to the browser was more than 14 seconds in the laboratory condition. Based on our study results, we recommend mobile applications provide multimodal feedback for delays of more than four seconds.

My Discussion:
A neat paper with some intriguing recommendations for mobile UI designers. Minimal-attention UIs are a pretty unmined field, since most UIs assume they have the user's full attention, and the proper use of attention-getting messaging is still a large unknown. I liked the paper, if only for its description of the field methods.
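
The four-second rule is easy to act on in code. A minimal sketch of my own (Python, purely illustrative, not anything from the paper): start a timer alongside the page fetch and fire a non-visual cue if the page has not arrived by the threshold.

```python
# A minimal sketch of the recommendation above (my own illustration, not the
# paper's code): if a page load takes longer than ~4 seconds, stop relying on
# the display and give non-visual feedback instead.
import threading
import time

FEEDBACK_THRESHOLD_S = 4.0  # visual attention shifted away after 4-8 s in the field study

def still_loading_cue():
    print("*vibrate* still loading, no need to keep glancing at the screen")

def page_arrived_cue():
    print("*tone* page has arrived")

def load_page(fetch):
    timer = threading.Timer(FEEDBACK_THRESHOLD_S, still_loading_cue)
    timer.start()
    try:
        return fetch()
    finally:
        timer.cancel()       # harmless if the cue already fired
        page_arrived_cue()   # non-visual "done" signal either way

# Stand-in for the ~11 s it typically took the first part of a page to appear.
load_page(lambda: time.sleep(11) or "<html>...</html>")
```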

Friday, November 18, 2005
The value of mobile applications: a utility company study 
Link

Fiona Fui-Hoon Nah University of Nebraska-Lincoln
Keng Siau University of Nebraska-Lincoln
Hong Sheng University of Nebraska-Lincoln

Communications of the ACM
Volume 48, Issue 2 (February 2005)
Pages: 85 - 90
Year of Publication: 2005
ISSN: 0001-0782

Abstract:
Mobile and wireless devices are enabling organizations to conduct business more effectively. Mobile applications can be used to support e-commerce with customers and suppliers, and to conduct e-business within and across organizational boundaries. Despite these benefits, organizations and their customers still lack an understanding of the value of mobile applications. Value is defined here as the principles for evaluating the consequences of action, inaction, or decision [4]. The value proposition of mobile applications can be defined as the net value of the benefits and costs associated with the adoption and adaptation of mobile applications [2].

My Discussion:
The first time I read this article, directly in CACM, I wondered why anyone bothered. The result seemed to be a very obvious network of goals that people in corporations want to achieve with mobile applications, plus a list of problems in creating such applications that the mobile applications community has known about for ages. Re-reading it now, I realize it is a fundamental paper to reference when a researcher needs solid empirical justification for why certain problems are being worked on, and it gives good pointers to what people in the field say the problems with mobile applications are. They may seem obvious to us in the field, but they still need to be properly discussed when doing science.

Friday, November 04, 2005
An agent-based approach to dialogue management in personal assistants 
[My apologies for being so very very late, but I had this website to launch.]

Link

Anh Nguyen University of New South Wales, Sydney, Australia
Wayne Wobcke University of New South Wales, Sydney, Australia

International Conference on Intelligent User Interfaces
Proceedings of the 10th international conference on Intelligent user interfaces
San Diego, California, USA
SESSION: Long papers: natural language and gestural input
Pages: 137 - 144
Year of Publication: 2005
ISBN: 1-58113-894-6

Abstract:
Personal assistants need to allow the user to interact with the system in a flexible and adaptive way such as through spoken language dialogue. In this research we focus on an application in which the user can use a variety of devices to interact with a collection of personal assistants each specializing in a task domain such as email or calendar management, information seeking, etc. We propose an agent-based approach for developing the dialogue manager that acts as the central point maintaining continuous user-system interaction and coordinating the activities of the assistants. In addition, this approach enables development of multi-modal interfaces. We describe our initial implementation which contains an email management agent that the user can interact with through a spoken dialogue and an interface on PDAs. The dialogue manager was implemented by extending a BDI agent architecture.

My Discussion:
The paper is very useful for showing where current thinking about agents and agent-based technologies is, going briefly over current definitions of agents and supplying references for further reading. The same goes for the linguistic structure of dialogs. If you have spent the last decade of your UI career not following these fields but focusing on WWW technologies, it is a good glimpse of what went on. The paper tries to show in a concise format how the system derives meaning from the user's utterances, and how it keeps track of which objects are in play and likely to be acted on. The paper also has the perfunctory paragraphs about related systems, so as to satisfy reviewers who want to see some form of scientific context for what is the description of a technical system.
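
To make the 'objects in play' idea concrete, here is a tiny sketch of my own (Python, purely illustrative, nothing to do with the paper's BDI implementation): the dialogue manager keeps a salience list of recently mentioned objects so a later utterance like "forward it to Anna" can be resolved against it.

```python
# A minimal dialogue-state sketch: keep the objects "in play" ordered by how
# recently they were mentioned, and resolve references against that list.
class DialogueState:
    def __init__(self):
        self.salient = []  # most recently mentioned objects, newest first

    def mention(self, obj):
        # Move a newly mentioned object to the front of the salience list.
        if obj in self.salient:
            self.salient.remove(obj)
        self.salient.insert(0, obj)

    def resolve(self, kind):
        # Resolve a pronoun or definite reference to the most salient object
        # of the requested kind, e.g. "it" -> the email last talked about.
        for obj in self.salient:
            if obj["kind"] == kind:
                return obj
        return None


state = DialogueState()
state.mention({"kind": "email", "from": "Bob", "subject": "Q3 report"})
state.mention({"kind": "meeting", "with": "Anna", "time": "14:00"})

# "Forward it to Anna" -- "it" resolves to the most salient email.
print(state.resolve("email"))
```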

Friday, September 30, 2005
eBag: a ubiquitous Web infrastructure for nomadic learning 
Link

Christina Brodersen University of Aarhus, Aarhus N, Denmark
Bent G. Christensen University of Aarhus, Aarhus N, Denmark
Kaj Grønbæk University of Aarhus, Aarhus N, Denmark
Christian Dindler University of Aarhus, Aarhus N, Denmark
Balasuthas Sundararajah University of Aarhus, Aarhus N, Denmark

International World Wide Web Conference
Proceedings of the 14th international conference on World Wide Web
Chiba, Japan
SESSION: Web-based educational applications
Pages: 298 - 306
Year of Publication: 2005
ISBN: 1-59593-046-9

Abstract:
This paper describes the eBag infrastructure, which is a generic infrastructure inspired from work with school children who could benefit from a electronic schoolbag for collaborative handling of their digital material. The eBag infrastructure is utilizing the Context-aware HyCon framework and collaborative web services based on WebDAV. A ubiquitous login and logout mechanism has been built based on BlueTooth sensor networks. The eBag infrastructure has been tried out in field tests with school kids. In this paper we discuss experiences and design issues for ubiquitous Web integration in interactive school environments with multiple interactive whiteboards and workstations. This includes proposals for specialized and adaptive XLink structures for organizing school materials as well as issues in login/logout based on proximity of different display surfaces.

My Discussion:
Hey, it's our HyCon friends again, this time with an equally forward-looking scenario for connected mobile productivity. They have left the ambulant restaurant user-review business, and in this paper look at mobile classroom learning. They use their mobile hypertext sensor-enabled (they call that 'context-aware') framework to create an application whereby students have document spaces on the web that are accessible from multiple kinds of terminals, and use mobile devices to add data to their own space. The facilities HyCon has for changing what is shown on any point of contact, based on that point's capabilities and responses from nearby sensors, are cleverly used to create a system where a pupil can see their document space on a big screen when they approach it with their Bluetooth-enabled phone. When a group approaches, all their document spaces become available on the same screen, making it very easy to group them together and make shared folders accessible to all pupils in the group.

The scenario in this paper doesn't go into the problems of maintaining usability when a point of service, like say the mobile handset the pupil is carrying, loses connectivity, but the tools used mean this is not a tremendous problem -- the pupil would just upload the gathered data when connectivity resumed, and connectivity can be assumed to exist once on the well-defined school grounds or at home. The use of the BT sensors for creating proximity-based logins on terminals does have some privacy issues, but again seems appropriate for the locations. (Do I want automatic login when I am walking past a computer in an airport? No. Do I want automatic login when I sit down in a classroom and the teacher needs to look at my stuff? Yes.) There are ample mentions of related work and references on issues like proximity-based logins. Alas, like the last paper, there is very little on actual experiences, since the system is not yet finished. There is some discussion of prototype experiences that seem to indicate HyCon doesn't scale well, but the experiences gathered with it are useful.
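
For flavour, here is a rough sketch of the proximity-login logic as I read it (my own Python illustration; the real system uses HyCon and Bluetooth sensor networks, and every address and name below is made up).

```python
# A display surface shows the document spaces of whichever pupils' phones are
# currently detected nearby; a group triggers a shared view.

# Hypothetical mapping from Bluetooth addresses to pupils.
KNOWN_PHONES = {
    "00:11:22:33:44:55": "alice",
    "66:77:88:99:AA:BB": "bob",
}

class Whiteboard:
    def __init__(self):
        self.logged_in = set()

    def sensor_update(self, nearby_addresses):
        present = {KNOWN_PHONES[a] for a in nearby_addresses if a in KNOWN_PHONES}
        for pupil in present - self.logged_in:
            print(f"login: showing {pupil}'s document space")
        for pupil in self.logged_in - present:
            print(f"logout: hiding {pupil}'s document space")
        self.logged_in = present
        if len(self.logged_in) > 1:
            print(f"group view: shared folder for {sorted(self.logged_in)}")

board = Whiteboard()
board.sensor_update(["00:11:22:33:44:55"])                       # Alice walks up
board.sensor_update(["00:11:22:33:44:55", "66:77:88:99:AA:BB"])  # Bob joins
board.sensor_update([])                                           # both walk away
```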

Saturday, August 13, 2005
User and Concept Studies as Tools in Developing Mobile Communication Services for the Elderly 
Link

M. Mikkonen Nokia Mobile Phones, Oulu, Finland
S. Väyrynen Work Science Laboratory, University of Oulu, Oulu, Finland
V. Ikonen Work Science Laboratory, University of Oulu, Oulu, Finland and Nokia Mobile Phones, Oulu, Finland
M. O. Heikkilä Nokia Mobile Phones, Oulu, Finland

Personal and Ubiquitous Computing
Volume 6, Issue 2 (April 2002)
Pages: 113 - 124
Year of Publication: 2002
ISSN: 1617-4909

Abstract:
The basis of this study was the ageing of the population all over the world. The study concentrated on finding out the key service needs of elderly people. The service needs from the end users’ as well as the experts’ perspective were gathered by means of various group methods such ideation sessions. Four mobile communication service concepts were created using these groups’ opinions. After diverse communication, these concepts were tested by the elderly. The research methods comprised a user study and a concept study. Based on the results, the needs could be prioritised. Additionally, the main trend of the results confirmed the opinions presented in the literature. One important finding was the positive opinions about additional value of wireless devices and services. This knowledge can be used in mobile communication product development. Most of the elderly are ready to accept new forms of mobile communication service. Ease of use and actual need of the services are important criteria. The elderly are ready to begin using the services as long as they truly facilitate independent living.

My Discussion:
Utterly by-the-numbers exploration of services for the elderly -- and that is why the paper is so remarkable. It captures a process that doesn't actually happen in most software creation efforts: that of properly finding out what the target audience wants and what context they live in. The paper describes the formats for this soliciting of ideas, and has pointers to more information about them, while listing some points they found make the exercises more useful.

Friday, July 22, 2005
Location management for mobile commerce applications in wireless Internet environment 
Link

Upkar Varshney Georgia State University, Atlanta, GA

ACM Transactions on Internet Technology (TOIT)
Volume 3, Issue 3 (August 2003)
Pages: 236 - 255
Year of Publication: 2003
ISSN: 1533-5399

Abstract:
With recent advances in devices, middleware, applications and networking infrastructure, the wireless Internet is becoming a reality. We believe that some of the major drivers of the wireless Internet will be emerging mobile applications such as mobile commerce. Although many of these are futuristic, some applications including user-and location-specific mobile advertising, location-based services, and mobile financial services are beginning to be commercialized. Mobile commerce applications present several interesting and complex challenges including location management of products, services, devices, and people. Further, these applications have fairly diverse requirements from the underlying wireless infrastructure in terms of location accuracy, response time, multicast support, transaction frequency and duration, and dependability. Therefore, research is necessary to address these important and complex challenges. In this article, we present an integrated location management architecture to support the diverse location requirements of m-commerce applications. The proposed architecture is capable of supporting a range of location accuracies, wider network coverage, wireless multicast, and infrastructure dependability for m-commerce applications. The proposed architecture can also support several emerging mobile applications. Additionally, several interesting research problems and directions in location management for wireless Internet applications are presented and discussed.

My Discussion:
Sometimes a paper I select turns out to be really relevant for the UI part of mobile interfaces; sometimes it isn't. This paper does start out by stating that the term "wireless Internet" should, among other concepts, explicitly include "user interfaces". The author then proceeds to state that mobile commerce will drive the wireless Internet, survey areas of local commerce that would require location identification, propose an architecture for comprehensive location identification, and not mention how any of this will integrate with the user interface on the device at all. The survey of location-aware mobile commerce applications and the needs they impose on the location identification component is interesting. The architecture proposed is less so, because it seems more a statement of which diagram boxes should fit together in which figure and what their real-world effects should be, than an actual solution stating how things should work and how we get there from the current context. But again, the requirements this architecture has to fulfill seem to have been crafted without a sentence about what the users and agents in this scheme need to see, and be told by their technologies, to be confident enough to take part in mobile commerce -- confident that their transactions will do what they intend and no more, and that their money and services will be secure and effective. Does the consumer need to see any identifiers in the small-cell local scenario, on their device and at the location of the service provider? How will opting in actually work without having the user opt in to everything in a square kilometer, or nothing in a tiny area? The paper doesn't say, and doesn't seem to care.

Friday, July 08, 2005
A reflective framework for discovery and interaction in heterogeneous mobile environments 
Link

Paul Grace Lancaster University, Lancaster, UK
Gordon S. Blair Lancaster University, Lancaster, UK
Sam Samuel Lucent Technologies, Swindon, UK

ACM SIGMOBILE Mobile Computing and Communications Review
Volume 9, Issue 1 (January 2005)
COLUMN: Papers from MC2R open call
Pages: 2 - 14
Year of Publication: 2005

Abstract:
To operate in dynamic and potentially unknown environments a mobile client must first discover the local services that match its requirements, and then interact with these services to obtain the application functionality. However, high levels of heterogeneity characterize mobile environments; that is, contrasting discovery protocols including SLP, UPnP and Jini, and different styles of service interaction paradigms e.g. Remote Procedure Call, Publish-Subscribe and agent based solutions. Therefore given this type of heterogeneity, utilizing single discovery and interaction systems is not optimal as the client will only be able to use the services available to that particular platform. Hence, in this paper we present an adaptive middleware solution to this problem. ReMMoC is a Web-Services based reflective middleware that allows mobile clients to be developed independently of both discovery and interaction mechanisms. We describe the architecture, which dynamically reconfigures to match the current service environment. Finally, we investigate the incurred performance overhead such dynamic behaviour brings to the discovery and interaction process.

My Discussion:
The technical section seems really wordy for describing a technology that abstracts several service discovery and invocation protocols behind a new layer based on Web Services WSDL bindings -- after all, creating an abstraction layer is one of the oldest comp-sci tricks in the book. It is too bad that very little time is spent showing where the abstraction will not properly fit the abstracted protocols. There is a nice section demonstrating that adding a dynamically changing environment on top of these service protocols isn't that much of an imposition, but from a mobile user interface POV this paper is not that interesting.
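
The core trick is that age-old abstraction layer, which is easy to illustrate. A hedged sketch of my own (Python; ReMMoC itself exposes a WSDL-based API, which this does not attempt to reproduce): the client codes against one discovery interface, pluggable backends map it onto SLP, UPnP, and so on, and the 'reflective' part is reduced to swapping the backend at runtime.

```python
from abc import ABC, abstractmethod

class DiscoveryBackend(ABC):
    @abstractmethod
    def find(self, service_type):
        ...

class UPnPBackend(DiscoveryBackend):
    def find(self, service_type):
        # Real code would issue an SSDP search; here we just pretend.
        return [f"upnp://printer-1/{service_type}"]

class SLPBackend(DiscoveryBackend):
    def find(self, service_type):
        return [f"slp://printer-2/{service_type}"]

class ServiceBrowser:
    """Client-facing layer; swaps backends as the environment changes."""
    def __init__(self, backend: DiscoveryBackend):
        self.backend = backend

    def reconfigure(self, backend: DiscoveryBackend):
        self.backend = backend   # the "reflective" reconfiguration step

    def find(self, service_type):
        return self.backend.find(service_type)

browser = ServiceBrowser(UPnPBackend())
print(browser.find("printing"))
browser.reconfigure(SLPBackend())   # walked into an SLP-only environment
print(browser.find("printing"))
```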

Friday, July 01, 2005
ACM's <interactions> July 2005 
Link

interactions
Volume 12, Issue 4 (July + August 2005)
Year of Publication: 2005
ISSN: 1072-5520

Abstract:
[Editor note: instead of the standard choice of a paper from ACM's digital library, this week's edition of comouipapers urges you as a UI practitioner to read the linked issue of ACM's <interactions> bulletin, specifically the articles 'CHI: the practitioner's dilemma' by Arnowitz & Dykstra-Erickson, 'Why doesn't SIGCHI eat its own dog food?' by Morris, and most certainly 'Human-centered design considered harmful' by Donald A. Norman.]

My Discussion:
These articles may be the harbinger of a revolution in which the UI community confronts the fact that User-Centered Design, UCD, is simply not going to give us the results we need. The Morris article highlights that, as UI practitioners, the CHI community couldn't even get itself to use its own tools and methodologies for something as obvious as creating a good conference for ourselves. The Arnowitz & Dykstra-Erickson article then describes an actual revolt in the CHI community during a planning session between practitioners and academics, with practitioners being clear they weren't getting much out of the academic tracks, and the academics feeling threatened by too much focus on practitioners. The Norman article, written by the grand master and authority on clear design that instantly communicates its function to the user and is a joy to use, actually makes the point that UCD is not leading to the best products out there, but that genius design vision very often trumps painstaking UCD protocol.

This last part is something practitioners already quietly know but cannot say out loud, because we are not all design genius visionaries, and we need something to guide us to good results in our efforts. However, academic inquiry into UCD has not delivered a clear and workable set of guidelines and practices, and in practice UCD clashes badly with business constraints on product design. We may see UCD being replaced as a guide, or at least have to consider replacing it.

Friday, June 24, 2005
Toss-it: intuitive information transfer techniques for mobile devices 
Link

Koji Yatani University of Tokyo, Chiba, Japan
Koiti Tamura University of Tokyo, Chiba, Japan
Keiichi Hiroki University of Tokyo, Chiba, Japan
Masanori Sugimoto University of Tokyo, Chiba, Japan
Hiromichi Hashizume National Institute of Informatics, Tokyo, Japan

Conference on Human Factors in Computing Systems
CHI '05 extended abstracts on Human factors in computing systems
Portland, OR, USA
SESSION: Late breaking results: short papers
Pages: 1881 - 1884
Year of Publication: 2005
ISBN: 1-59593-002-7

Abstract:
In recent years, mobile devices have rapidly penetrated into our daily lives. However, several drawbacks of mobile devices have been mentioned so far. The proposed system called Toss-It provides intuitive information transfer techniques for mobile devices, by fully utilizing their mobility. A user of Toss-It can send information from the user's PDA to other electronic devices with a toss or swing action, as the user would toss a ball or deal cards to others. This paper describes the current implementation of Toss-It and its user studies.

My Discussion:
The new idea for a method to quickly transfer data from one device to another is laid out well here, from the initial thinking and requirements, through the work done to make a hardware implementation, all the way to a short user study. It is presented clearly, and it quickly becomes just as clear that the method as described poses enormous problems for the many people in the population who do not have very accurate motor skills. The issue of how intuitive it actually is to fling data from one device to another is not addressed -- is it, for example, something users would come up with to try? How much visual or audio cueing does it require? -- but the requirement of being able to fling data over one user to the user behind them, in line of sight, such that the person in the middle does not receive it, puts a very big burden of accurate motion on the user. As their user studies point out, they could not achieve a reliability better than about 70% on average, using able-bodied users in a very controlled environment. If my Copy/Paste system only worked 7 out of 10 times in the best of conditions because I couldn't type at exactly the right speed, I would probably Toss the system.

Friday, June 10, 2005
Using treemaps to visualize threaded discussion forums on PDAs 
Link

Björn Engdahl Royal Institute of Technology, Stockholm, Sweden
Malin Köksal Royal Institute of Technology, Stockholm, Sweden
Gary Marsden University of Cape Town, Cape Town, South Africa

Conference on Human Factors in Computing Systems
CHI '05 extended abstracts on Human factors in computing systems
Portland, OR, USA
SESSION: Late breaking results: short papers
Pages: 1355 - 1358
Year of Publication: 2005
ISBN: 1-59593-002-7

Abstract:
This paper describes a new way of visualizing threaded discussion forums on compact displays. The technique uses squarified treemaps to render the threads in discussion forums as colored rectangles, thereby using 100% of the limited screen space. We conducted a preliminary user study, which compared the treemap version and a traditional text based tree interface. This showed that the contents of the discussion forum were easily grasped when using a treemap, even though there were in excess of one hundred threads. In particular our technique showed a significant improvement in time for finding the largest and most active threads. Overall, it was shown that the benefits derived from using treemaps on desktop computers are still valid for small screens.

My Discussion:
Short and sweet results paper describing an experiment with a visualization technique. It highlights that, even with a test population of 6 computer-savvy users, a little user testing can quickly bring a UI designer's feet back firmly on the ground with regard to something like color coding (the users did not immediately understand the coding, much in line with earlier but often neglected experimental results). The visualization technique used, treemaps, can be useful for more than just forums, so papers like this can serve as creative stimulation when having to tackle design problems.
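
As a taste of the technique, here is a toy treemap layout of my own (Python; the paper uses the more sophisticated squarified algorithm, while this is plain slice-and-dice): thread sizes become rectangles that fill the whole of a small screen.

```python
# Toy slice-and-dice treemap: map item sizes to rectangles covering 100% of
# the available (small) display area.
def treemap(items, x, y, w, h, horizontal=True):
    """items: list of (label, size). Returns (label, x, y, w, h) rectangles."""
    total = sum(size for _, size in items)
    rects = []
    for label, size in items:
        frac = size / total
        if horizontal:
            rects.append((label, x, y, w * frac, h))
            x += w * frac
        else:
            rects.append((label, x, y, w, h * frac))
            y += h * frac
    return rects

# Forum threads sized by post count, laid out on a 176x208 PDA screen.
threads = [("Help with sync", 40), ("Off-topic", 25), ("Firmware 2.1", 10)]
for label, rx, ry, rw, rh in treemap(threads, 0, 0, 176, 208):
    print(f"{label:15s} -> rect at ({rx:.0f},{ry:.0f}) size {rw:.0f}x{rh:.0f}")
```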

Friday, June 03, 2005
A context based storage system for mobile computing applications 
Link

Sharat Khungar University of Oulu, Finland
Jukka Riekki University of Oulu, Finland

ACM SIGMOBILE Mobile Computing and Communications Review
Volume 9, Issue 1 (January 2005)
COLUMN: Special feature on MOBICOM 2004 posters
Pages: 64 - 68
Year of Publication: 2005

Abstract:
In this paper, we describe a novel context based storage system that use context to manage user data and make it available to him based on his situation. First, we examine several existing systems that use context with documents. Subsequently, a new storage system is presented that uses context to aid in the capture of and access to documents in mobile environment. We describe file browser and calendar applications that we have developed within our mobile computing infrastructure utilizing the features of context based storage. Novel features of our system include the access rights mechanism for data and support for group activity.

My Discussion:
A nice paper about a system that actually implements some ideas about context-based computing. The system described, CBS, is layered on top of a replicated distributed storage system that can work on wired computers and handhelds, making it appropriate for experiments in storing contextual cues like location, time, and participants as metadata on files. The actual applications do not get many page inches, but already hint at interesting crosslinks between data retrieval and occasions -- like storing files with pointers to a meeting, so that returning to the meeting becomes an entry portal to the files -- that currently are not so easily done. I would like to see future work on this system.
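
A small sketch of the storage idea as I understand it (my own Python illustration, not the CBS implementation; all names and records are hypothetical): documents are stored with contextual metadata, and querying by the same context later pulls them back up.

```python
# Store documents tagged with contextual metadata (location, time, meeting,
# participants) and retrieve them by matching on that context.
from datetime import datetime

class ContextStore:
    def __init__(self):
        self.entries = []  # list of (filename, context dict)

    def put(self, filename, **context):
        context.setdefault("time", datetime.now().isoformat())
        self.entries.append((filename, context))

    def find(self, **query):
        # Return every file whose context matches all given key/value pairs.
        return [f for f, ctx in self.entries
                if all(ctx.get(k) == v for k, v in query.items())]

store = ContextStore()
store.put("minutes.txt", location="room 3B", meeting="weekly sync",
          participants=("anna", "sharat"))
store.put("whiteboard.jpg", location="room 3B", meeting="weekly sync")
store.put("receipt.pdf", location="airport")

# Walking back into the weekly sync pulls up everything captured there.
print(store.find(meeting="weekly sync"))
```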

Change In Search 
Because my binder in the ACM digital library was not updating, I decided to create a new one. Since mobile technology changes so fast, I decided to focus on modern papers. My search string is no longer UI connected mobile devices but is now +connected +2005 +"mobile device" +"user interface".

Friday, May 20, 2005
FReCon: a fluid remote controller for a FReely connected world in a ubiquitous environment 
Link

Alexandre Sanguinetti Department of Knowledge Engineering and Computer Science, Doshisha University, 1-3 Miyakotani, Tatara, Kyotanabe, Japan
Hirohide Haga Department of Knowledge Engineering and Computer Science, Doshisha University, 1-3 Miyakotani, Tatara, Kyotanabe, Japan
Aya Funakoshi Department of Knowledge Engineering and Computer Science, Doshisha University, 1-3 Miyakotani, Tatara, Kyotanabe, Japan
Atsushi Yoshida Department of Knowledge Engineering and Computer Science, Doshisha University, 1-3 Miyakotani, Tatara, Kyotanabe, Japan
Chiho Matsumoto Department of Knowledge Engineering and Computer Science, Doshisha University, 1-3 Miyakotani, Tatara, Kyotanabe, Japan

Personal and Ubiquitous Computing
Volume 7, Issue 3-4 (July 2003)
Pages: 163 - 168
Year of Publication: 2003
ISSN: 1617-4909

Abstract:
In this paper, we propose a Fluid Remote Controller, a general-purpose remote controller based on the ubiquitous computing view. FReCon offers remote control features over a wide range of appliances located within a room, with a unique controller implemented on portable devices like PDAs, handheld PCs, mobile phones, etc. More than the controller itself, FReCon means the whole FReely Connected world in which FReCon-enabled users and appliances interact: though acting naturally, the user can freely connect to the desired appliance, control it, disconnect from it and start communicating with another. A prototype implementation in the form of a smart TV remote controller is also described. This simple prototype makes it possible to understand the validity and the limits of our view, and give clues for further improvements.

My Discussion:
The paper starts out all right, making its case for a remote control that can control any device the user comes into contact with, in a world where computing is ubiquitous and has receded into the background. But when in the next section the design requirements include "use small web servers embedded in all the applications to be controlled" and "reduce user's interactions to none but natural ones" [italics mine], without justifying why web servers are required, or how the learned behavior of using a remote -- like that it needs to be pointed at the device, or that buttons can be pressed and held -- is somehow natural, it looks like the reader is in for a bumpy ride. It is only near the end that the need for a complicated configuration, which includes devices constantly broadcasting over both IrDA and Bluetooth so that the remote can fetch a page from an embedded web server in every device, becomes clear: this is a system in which every device to be controlled can send a UI -- in this case, a web page -- to the device being used as a 'remote'. But the limitations their setup runs into (for example, the remote has to be pointed at a device to select it, so no two devices can be located close to each other), limitations bound to become relevant in almost every home with a home-theater cluster of AV devices, seriously made me wonder why they bothered implementing their flawed design.
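
For what it's worth, the basic flow is simple enough to sketch (my own Python illustration of how I read the paper; the identifiers and URLs are invented): the appliance in line of sight announces itself over the short-range link, the 'remote' fetches that appliance's control page from its embedded web server, and two appliances side by side break the scheme.

```python
# A hedged sketch of the FReCon flow as I read it (illustrative only; the
# appliance IDs and URLs are made up).
APPLIANCE_UI_URLS = {
    "tv-livingroom": "http://192.168.1.20/ui.html",
    "stereo":        "http://192.168.1.21/ui.html",
}

def point_and_connect(visible_ids):
    # The prototype needs exactly one appliance in line of sight; with a
    # home-theater stack, two may answer and the remote cannot choose.
    if len(visible_ids) != 1:
        return None
    appliance = visible_ids[0]
    # The real remote would fetch this page and render it as its UI;
    # here we just return the URL it would load.
    return APPLIANCE_UI_URLS.get(appliance)

print(point_and_connect(["tv-livingroom"]))            # -> the TV's control page
print(point_and_connect(["tv-livingroom", "stereo"]))  # -> None, ambiguous target
```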

Friday, May 06, 2005
Challenge: recombinant computing and the speakeasy approach
Link

W. Keith Edwards Palo Alto Research Center, Palo Alto, CA
Mark W. Newman Palo Alto Research Center, Palo Alto, CA
Jana Sedivy Palo Alto Research Center, Palo Alto, CA
Trevor Smith Palo Alto Research Center, Palo Alto, CA
Shahram Izadi University of Nottingham

International Conference on Mobile Computing and Networking
Proceedings of the 8th annual international conference on Mobile computing and networking
Atlanta, Georgia, USA
SESSION: Challenges
Pages: 279 - 286
Year of Publication: 2002
ISBN: 1-58113-486-X

Abstract:
Interoperability among a group of devices, applications, and services is typically predicated on those entities having some degree of prior knowledge of each another. In general, they must be written to understand the type of thing with which they will interact, including the details of communication as well as semantic knowledge such as when and how to communicate. This paper presents a case for “recombinant computing”—a set of common interaction patterns that leverage mobile code to allow rich interactions among computational entities with only limited a priori knowledge of one another. We have been experimenting with a particular embodiment of these ideas, which we call Speakeasy. It is designed to support ad hoc, end user configurations of hardware and software, and provides patterns for data exchange, user control, discovery of new services and devices, and contextual awareness.

My Discussion:
From the perspective of serving the user, this paper has it completely backwards. It proposes a framework for making ubiquitous computing extensible, starting off with formulating premises about what technology should be like to interconnect all kinds of devices with each other to allow users to do things. One of the premises is even that the users will be the ultimate arbiters of which connections will be made and are desirable, since in this extensible framework no functions, capabilities, or permissions can really be defined for new classes of devices and data transfer. The paper then ends with the note, in the challenges section, that making the UI for this extensible framework will be hard. The paper describing this framework to allow users to navigate the landscape of ubiquitous electronic devices thus proves itself to not be user-centered at all, raising the question of whether this framework will actually give the user enough usable, clear, and humane information to make informed decisions about which devices should be connected and how.

Friday, April 22, 2005
Integrating context information into enterprise applications for the mobile workforce - a case study 
Link

A. Spriestersbach SAP AG
H. Vogler SAP Labs
F. Lehmann TU-Dresden
T. Ziegert TU-Dresden

International Conference on Mobile Computing and Networking
Proceedings of the 1st international workshop on Mobile commerce
Rome, Italy
Pages: 55 - 59
Year of Publication: 2001
ISBN: 1-58113-376-6

Abstract:
The integration of context information (especially location information) into mobile applications and services is one of the most crucial requirements to achieve a broader usability and hence acceptance of these. So far location information is used for typical business-to-consumer applications such as mobile-MapQuest or ATM-finder. The application of location awareness in typical enterprise or business applications, such as logistics or Customer Relationship Management (CRM), is currently addressed rather poor.

In this paper we discuss the enhancement of mobile enterprise applications by context information. Starting from a customer demand and for a mobile sales force scenario, our objective was the improvement of the usability of mobile enterprise applications by introducing context information to these.

My Discussion:
The paper makes the process they went through seem straightforward. Create an IR-based location beacon for a shop, and have the shopper's handheld pick up the location ID and fetch a customized form for that shop from the back-end. Add to that the knowledge of who the shopper is, and the user barely has to enter anything to make a purchase from the form. What I find interesting is how a 2001 paper already reads as old because they couldn't find the handhelds they liked -- now Symbian phones with GPRS connections are all over the place.
The paper has a short but good discussion of how location is a subset of context, and useful references to context work at the time.
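
The mechanics are easy to sketch (my own Python illustration of the flow described above; every identifier and record here is hypothetical): the handheld reads the shop's beacon ID, sends it along with the sales rep's identity to the back-end, and gets back an order form that is already filled in.

```python
# Beacon ID + user identity -> a pre-filled form from the back-end.
CUSTOMERS_BY_BEACON = {
    "beacon-0042": {"shop": "Muller GmbH", "last_order": ["espresso beans"]},
}

SALES_REPS = {"rep-17": "J. Smith"}

def prefilled_order_form(beacon_id, rep_id):
    shop = CUSTOMERS_BY_BEACON.get(beacon_id)
    rep = SALES_REPS.get(rep_id)
    if shop is None or rep is None:
        return None
    return {
        "customer": shop["shop"],
        "sales_rep": rep,
        "suggested_items": shop["last_order"],  # rep only confirms quantities
    }

print(prefilled_order_form("beacon-0042", "rep-17"))
```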

Friday, April 15, 2005
Zippering: managing intermittent connectivity in DIANA 
Link

Arthur M. Keller Stanford Univ., Stanford, CA
Owen Densmore Sun Microsystems
Wei Huang Sun Microsystems
Behfar Razavi Sun Microsystems

Mobile Networks and Applications
Volume 2, Issue 4 (January 1998)
Special issue on personal communications services
Pages: 357 - 364
Year of Publication: 1997
ISSN: 1383-469X

Abstract:
This paper describes an approach for handling intermittent connectivity between mobile clients and network-resident applications, which we call zippering. When the client connects with the application, communication between the client and the application is synchronous. When the client intermittently connects with the application, communication becomes asynchronous. The DIANA (Device-Independent, Asynchronous Network Access) approach allows the client to perform a variety of operations while disconnected. Finally, when the client reconnects with the application, the operations performed independently on the client are replayed to the application in the order they were originally done. Zippering allows the user at the client to fix errors detected during reconciliation and continues the transaction gracefully instead of aborting the whole transaction when errors are detected.


My Discussion:
Although pretty old, this paper caught my eye because it explicitly addresses the problem so many papers about mobile UIs ignore: walking into a connectivity dead spot. The paper describes the error-handling system called 'zippering' for this disconnection problem, built into DIANA, a method and implementation for information retrieval competing with HTML/HTTP and the agent-based Telescript. (In fact, the paper is so old it discusses HTML/HTTP by the name of 'Mosaic', NCSA's name for its WWW browser.) Zippering works by having the application being accessed over the network also have a small surrogate available on the client. For example, a group calendaring application written with DIANA would also have, on the client, a calendaring surrogate that approves all meetings requested. When the user, while using the remote application, gets disconnected, the surrogate takes over responding and approves every request. When the mobile device then re-connects, the surrogate replays the information exchange between UI and surrogate to the remote application, and alerts the user when the remote application reacts differently (denies a meeting, in this example) from the surrogate. The user gets a chance to change that request, after which the surrogate replays the other requests, and so on until the user is synchronised.
So is this paper now just a curiosity in a world where HTML/HTTP forms have become ubiquitous, and all infrastructure is based on web servers, network caching, and Javascript? Not really; zippering is suddenly relevant again for mobile SOAP-based applications. When data access is done via simple SOAP messages, writing a client that detects network breaks and then routes the SOAP messages to a surrogate becomes feasible, since the application developer is writing a custom SOAP client anyway. The SOAP surrogate on the client can then synchronise when the network is back, and catch up that way. Since SOAP has many interesting qualities for the mobile space, zippering may yet become necessary again.
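
To make the zippering idea concrete, here is a sketch of my own (Python, transplanting the idea to a generic client rather than DIANA or actual SOAP): while connected, calls go to the remote service; while disconnected, a local surrogate answers optimistically and queues the calls; on reconnect, the queue is replayed and any disagreement between surrogate and server is surfaced to the user.

```python
class CalendarSurrogate:
    def request_meeting(self, when):
        return "approved"          # optimistic local answer while offline

class CalendarServer:
    def request_meeting(self, when):
        return "denied" if when == "Fri 13:00" else "approved"

class ZipperingClient:
    def __init__(self, server, surrogate):
        self.server, self.surrogate = server, surrogate
        self.connected = True
        self.queue = []            # calls the surrogate answered offline

    def request_meeting(self, when):
        if self.connected:
            return self.server.request_meeting(when)
        answer = self.surrogate.request_meeting(when)
        self.queue.append((when, answer))
        return answer

    def reconnect(self):
        # Replay the offline operations in order; flag disagreements.
        self.connected = True
        for when, local_answer in self.queue:
            real_answer = self.server.request_meeting(when)
            if real_answer != local_answer:
                print(f"conflict: {when} was {local_answer} offline,"
                      f" server says {real_answer}; ask the user")
        self.queue.clear()

client = ZipperingClient(CalendarServer(), CalendarSurrogate())
client.connected = False
client.request_meeting("Fri 13:00")   # surrogate approves while offline
client.reconnect()                    # replay reveals the server denies it
```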

Friday, April 08, 2005
An adaptive viewing application for the web on personal digital assistants 
Link

Kwang Bok Lee Rensselaer Polytechnic Institute, Troy, NY
Roger A. Grice Rensselaer Polytechnic Institute, Troy, NY

ACM Special Interest Group for Design of Communications
Proceedings of the 21st annual international conference on Documentation
San Francisco, CA, USA
PANEL SESSION: Getting and giving information
Pages: 125 - 132
Year of Publication: 2003
ISBN: 1-58113-696-X

Abstract:
With the proliferation of Personal Digital Assistants (PDAs), people are using such small devices to access the web; however, the web is not accommodating such access. Here, for small devices' users, we present an efficient method for extracting readable documents from XML-based files, which will be used for information streams for mobile Internet access. We designed a selector for handling information streams to extract the customized information based on the user request for the small screen devices. The selector's attributes can be adapted from an XML document, and then used for translating information streams into the new file that will be displayed on the devices. Also, the selector has visual menu interfaces so that users can easily choose each attribute according to their preferences. This is developed to devise an efficient method for the small screen computers' problems. Furthermore, we prepared usability testing for the application in order to find usability problems, and then we offer further progress to improve the usability of working on devices. The prototype and implementation of this approach will be also provided in this paper.

My Discussion:
Technical paper demonstrating a system to hide or disclose more or less of an XML-based document on a PDA. The user can manipulate the viewer on the PDA to select which level of elements to see, with choices like 'headlines only' or 'pictures as icons'. Sometimes a paper tries so hard to be clear about every step of a technological solution that the exposition ends up being more confusing. This paper does not make clear why the XML needs to be loaded off a PC instead of being retrieved directly over the internet by the PDA, nor what transformation the PC is doing to the XML before the selector program on the PDA can handle the document. Seeing also that the corpus of (badly formatted) XML-related documents out there -- the WWW, most of which is in HTML -- is never mentioned, the paper seems oddly abstract and ungrounded.
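
The selector idea itself is straightforward; here is a small sketch of my own (Python, illustrative only, not the paper's implementation): strip an XML document down to the element types the user's viewing preference asks for.

```python
# Filter an XML document to only the element types the user wants to see,
# e.g. a 'headlines only' view for a small PDA screen.
import xml.etree.ElementTree as ET

DOC = """<article>
  <headline>Budget approved</headline>
  <summary>The council approved the 2005 budget.</summary>
  <body>Long body text that will not fit on a PDA screen...</body>
  <image href="chart.png"/>
</article>"""

def select(xml_text, keep_tags):
    root = ET.fromstring(xml_text)
    out = ET.Element(root.tag)
    for child in root:
        if child.tag in keep_tags:
            out.append(child)
    return ET.tostring(out, encoding="unicode")

# 'Headlines only' preference on the device:
print(select(DOC, {"headline"}))
# A slightly richer view with summaries:
print(select(DOC, {"headline", "summary"}))
```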

Sunday, March 20, 2005
User Interfaces for Applications on a Wrist Watch 
Link

M. T. Raghunath Wearable Computing Platforms, IBM TJ Watson Research Center, Yorktown Heights, NY, USA
Chandra Narayanaswami Wearable Computing Platforms, IBM TJ Watson Research Center, Yorktown Heights, NY, USA

Personal and Ubiquitous Computing
Volume 6, Issue 1 (February 2002)
Pages: 17 - 30
Year of Publication: 2002
ISSN: 1617-4909

Abstract:
Advances in technology have made it possible to package a reasonably powerful processor and memory subsystem coupled with an ultra high-resolution display and wireless communication into a wrist watch. This introduces a set of challenges in the nature of input devices, navigation, applications, and other areas. This paper describes a wearable computing platform in a wrist watch form-factor we have developed. We built two versions: one with a low resolution liquid crystal display; and another with a ultra high resolution organic light emitting diode display. In this paper we discuss the selection of the input devices and the design of applications and user interfaces for these two prototypes, and the compare the two versions.

My Discussion:
A bit misplaced in my ACM binder, since the wristwatches discussed do not have an electronic connection to the net or any other device. This paper is an exploration of one-handed user interfaces in a terribly constrained space, as put on two wrist watches, one with an almost standard LCD screen and one with a very high resolution screen. While I never expected to ever read the words "X11" and "wrist watch" describing the same technology, the strength of this paper is in the thoughts about the affordances of smart wrist watches and what position in daily life the UI will take. It is, two years later, certainly not in the actual UI, which comes across as fairly unimaginative from their descriptions. It would be interesting to take this technology and strip out the computing power, to just make it a wireless display for a personal server worn on the body, interfacing with other personal technology.

Friday, March 11, 2005
Hubbub: a sound-enhanced mobile instant messenger that supports awareness and opportunistic interactions 
Link

Ellen Isaacs AT&T Labs, Menlo Park, CA
Alan Walendowski AT&T Labs, Menlo Park, CA
Dipti Ranganthan AT&T Labs, Menlo Park, CA

Conference on Human Factors in Computing Systems
Proceedings of the SIGCHI conference on Human factors in computing systems: Changing our world, changing ourselves
Minneapolis, Minnesota, USA
SESSION: I Think, therefore IM
Pages: 179 - 186
Year of Publication: 2002
ISBN: 1-58113-453-3

Abstract:
There have been many attempts to support awareness and lightweight interactions using video and audio, but few have been built on widely available infrastructure. Text-based systems have become more popular, but few support awareness, opportunistic conversations, and mobility, three important elements of distributed collaboration. We built on the popularity of text-based Instant Messengers (IM) by building a mobile IM called Hubbub that tries to provide all three, notably through the use of earcons. In a 5.5-month use study, we found that Hubbub helped people feel connected to colleagues in other locations and supported opportunistic interactions. The sounds provided effective awareness cues, although some found them annoying. It was more important to support graceful transitions between multiple fixed locations than to support wireless access, although both were useful


My Discussion:
Many of the innovations in IM have already been taken up into large systems like Yahoo! and AIM, but with modifications: instead of people having a personal sound, they have personal avatar icons. These systems also include sounds to announce someone has joined or left (although not personalized to the person in question), and specialised clients like Trillian also allow sounds for when people start or stop being idle. The systems now also give visual feedback of when someone is typing. Not implemented (much to my personal dismay) is the innovation described in the paper of making it possible for the same person to be logged in at multiple locations, like both a desktop and a mobile device, with the system choosing where to deliver the IM based on which device has been idle the least (AIM is coming close to this now, though; Yahoo! insists on having the user be logged in on only one device and choosing which explicitly).
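
That delivery rule is trivial to state in code; a one-function sketch of my own (Python, purely illustrative of the idea, not Hubbub's implementation): route the message to whichever of the user's logged-in devices has been idle the shortest time.

```python
# Deliver an IM to the endpoint that was most recently active.
def pick_endpoint(endpoints):
    """endpoints: dict of device name -> seconds since last user activity."""
    return min(endpoints, key=endpoints.get)

sessions = {"desktop": 1800, "phone": 12}   # phone was used 12 s ago
print(pick_endpoint(sessions))              # -> 'phone'
```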

This is a clear, well-written, and comprehensive paper in that it describes the system and its use clearly, complete with quantitative and qualitative data. While, as I said, the system may be outdated, the data and recorded attitudes about IM are very valuable to have now as a good reference to point to when building future work on mobile IM. The attitudes recorded about the sense of presence, and the mention of how the IM clients had to be re-designed to deal with spotty mobile connectivity, are also very useful in this regard.
