Saturday, March 5, 2011

Paper Reading #14: Bonfire: a nomadic system for hybrid laptop-tabletop interaction



Comments
Cindy Skach
Evin Schurchardt

Reference Information
Title: Bonfire: a nomadic system for hybrid laptop-tabletop interaction
Authors: Shaun K. Kane  University of Washington, Seattle, WA, USA
              Daniel Avrahami  Intel Research Seattle, Seattle, WA, USA
              Jacob O. Wobbrock  University of Washington, Seattle, WA, USA
              Beverly Harrison  Intel Research Seattle, Seattle, WA, USA
              Adam D. Rea  Intel Research Seattle, Seattle, WA, USA
              Matthai Philipose  Intel Research Seattle, Seattle, WA, USA 
              Anthony LaMarca  Intel Research Seattle, Seattle, WA, USA
             
Presentation Venue: UIST 2009: 22nd Annual ACM Symposium on User Interface Software and Technology; Date: 2009;
Location: New York, NY, USA

Summary
This paper presents Bonfire, a system that uses two laptop-mounted laser micro-projectors to project an interactive display space to either side of a laptop keyboard. Coupled with each micro-projector is a camera to enable hand gesture tracking, object recognition, and information transfer within the projected space. The authors explain that the idea is novel because thus far work has addressed improving laptops and tabletop systems separately, whereas this project aims to merge the two ideas and make a mobile solution out of it.

Bonfire currently uses two laser-based micro-projectors, two cameras, two custom mounts, and two mirrors. This setup allows the use of computer vision techniques to recognize objects in the projection space and to track hands and gestures on the table. These techniques were implemented using the Intel OpenCV computer vision libraries and Python 2.5. To identify objects on the tabletop, the authors are also using a background subtraction technique that is constantly monitoring the projection space.
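To get a feel for what that background subtraction step is doing, here is a minimal sketch of the idea in plain Python/NumPy (the paper actually used OpenCV; the function name, thresholds, and single-object assumption here are mine, not from the paper): keep a reference frame of the empty tabletop and diff each new camera frame against it to find newly placed objects.

```python
import numpy as np

def find_new_objects(background, frame, thresh=30, min_pixels=500):
    """Return bounding boxes (x, y, w, h) of regions that differ from
    the stored background frame. Hypothetical single-object sketch."""
    # Per-pixel absolute color difference, summed over the RGB channels.
    diff = np.abs(frame.astype(int) - background.astype(int)).sum(axis=2)
    mask = diff > thresh
    ys, xs = np.nonzero(mask)
    # Ignore tiny changes (sensor noise, shadows) below a size cutoff.
    if ys.size < min_pixels:
        return []
    # Crude bounding box of the changed region (assumes one object).
    return [(int(xs.min()), int(ys.min()),
             int(xs.max() - xs.min() + 1), int(ys.max() - ys.min() + 1))]

# Synthetic demo: empty table vs. table with one bright "object".
bg = np.zeros((240, 320, 3), np.uint8)
frame = bg.copy()
frame[80:140, 100:160] = 255
print(find_new_objects(bg, frame))  # → [(100, 80, 60, 60)]
```

A real system would also need to update the background model over time (lighting drifts, the projector itself changes the scene) and segment multiple objects, which is where OpenCV's contour-finding routines come in.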

As for user input methods, Bonfire currently supports four: tapping, dragging, flicking, and crossing.
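One plausible way to tell these gestures apart from tracked fingertip positions is by displacement and speed. This is my own illustrative sketch, not the paper's classifier, and the thresholds are invented: a stroke that barely moves is a tap, a slow stroke is a drag, and a fast one is a flick (crossing would additionally test whether the path intersects a target's edge).

```python
import math

def classify_stroke(points, dt=1 / 30):
    """Classify a fingertip stroke from per-frame (x, y) samples.
    Hypothetical thresholds; dt is the camera frame interval in seconds."""
    x0, y0 = points[0]
    x1, y1 = points[-1]
    dist = math.hypot(x1 - x0, y1 - y0)      # total displacement in pixels
    if dist < 10:
        return "tap"                          # barely moved
    duration = (len(points) - 1) * dt
    speed = dist / duration                   # pixels per second
    return "flick" if speed > 600 else "drag"

print(classify_stroke([(0, 0), (2, 2), (3, 1)]))            # → tap
print(classify_stroke([(x, 0) for x in range(0, 101, 4)]))  # → drag
print(classify_stroke([(0, 0), (40, 0), (80, 0), (120, 0)]))  # → flick
```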

Some usage scenarios the authors list are:
  • physical objects as contextual cues: I could be listening to music with headphones on, and when I set my headphones down on the table, the music would stop.
  • seamless cross-device interaction: the user transfers photos between their mobile device and their laptop by interacting in the physical space between them.
  • enhanced gameplay experience: a laptop displays the main game screen while a map is projected to the right of the laptop and an inventory is projected to the left.
  • object capture and share: the system captures a photo from a travel magazine, which then gets put into an e-mail.
  • social interaction through physical interaction: you have a cup of coffee on the table. Bonfire recognizes it and gives you the option of sharing this information with your social network.
Future work, the authors explain, may include adding cameras or other sensors to learn more about the user's activities. Other work may use Bonfire to enhance co-located collaboration. They plan to explore the possibilities for control, sharing, and collaboration that emerge from overlapping camera and projection views.
Discussion

This paper was an easy read, as well as interesting. The authors merged two existing technologies into one to implement a mobile solution, and I like that approach. I think Bonfire has a lot of potential; however, the authors will have to conduct user studies to determine whether people will be on board with the idea. Speaking of being on board, I wonder how one could use this on a plane, considering you need table space for the projections. Lighting conditions in the surrounding area could also pose a problem.

1 comment:

  1. I agree with you that the user study was lacking in this paper. It almost seems as if the initial appeal of the ideas presented is warrant enough for the design, and that it will find acclaim with users.
