Sunday, January 31, 2010

Ethnography Idea

I'd like to do something having to do with YouTube. Maybe looking at ratings, view counts, and comment counts; I'm not sure yet.
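If I go this route, the per-video records might look something like the sketch below. The field names and numbers are made-up placeholders, not real YouTube data; this is just to pin down what I'd collect and summarize.

videos = [
    {"title": "video A", "rating": 4.5, "views": 120000, "comments": 340},
    {"title": "video B", "rating": 3.1, "views": 8200, "comments": 12},
    {"title": "video C", "rating": 4.9, "views": 560000, "comments": 2100},
]

# Simple summary of each metric across the sample.
for field in ("rating", "views", "comments"):
    values = [v[field] for v in videos]
    print(field, "mean:", sum(values) / len(values), "min:", min(values), "max:", max(values))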

Tuesday, January 26, 2010

UIST 09- Collabio

Summary
Collabio is a Facebook game built around creating social tags for people. The idea of tagging is for a user community to create tags that describe information; social tagging applies this to people, with tags describing a person's interests, affiliations, hobbies, and personality. To encourage tag creation, the authors wrapped the tagging in a game. They also hypothesized that the social implications would deter people from posting inaccurate or offensive tags. In their study they let the game run live as a Facebook app and logged all the interactions, finding that users of the system had an average of 9 tags for themselves and that no incorrect tags were created.
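Just to make the mechanic concrete for myself, here's a tiny sketch of how I imagine the tag data could be stored, with a tag's weight being the number of distinct friends who entered it. This is my own guess at a data model, not Collabio's actual implementation.

from collections import defaultdict

# My guess at a tag store: person -> tag -> set of friends who entered it.
tags = defaultdict(lambda: defaultdict(set))

def add_tag(person, tag, tagger):
    """Record that `tagger` applied `tag` to `person`; return the tag's weight."""
    tags[person][tag.lower()].add(tagger)
    return len(tags[person][tag.lower()])

add_tag("alice", "HCI", "bob")
weight = add_tag("alice", "hci", "carol")
print(weight)  # 2 -- two friends agree, so the tag carries more weight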

Thoughts
Tagging is a useful thing, but I think it is silly to put tags on a Facebook page. Someone's Facebook page is already tagged in many different ways by the user themselves; isn't that the purpose of the site? Using a game to accomplish the goal was a good idea and proved once again that games can serve a useful purpose.

Monday, January 25, 2010

UIST 09- Ripples

Daniel Wigdor, Sarah Williams, Michael Cronin, Robert Levy, Katie White, Maxim Mazeev, Hrvoje Benko
Microsoft Surface | Microsoft Corp. | Microsoft Research

Summary
This research is aimed at solving a tricky issue with multitouch and touchscreen devices: touch screens lack the inherent tactile, audio, and visual feedback of a traditional mouse. When using a mouse you know that a click has been made because of the clicking sound and the depression of the button, and you always know exactly where on the screen the pointer is because of the cursor. On a touchscreen you do not get these forms of feedback, which creates ambiguity and can lead to user frustration. To address this issue the authors created a visualization technique that provides feedback about finger location and pressure. The simple rippling and trailing effects remove the ambiguity and create a less frustrating environment.
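A minimal sketch of the idea as I understand it: each touch spawns a ring that expands and fades over its lifetime, with the starting radius scaled by contact pressure. The parameters here are my own placeholders, not values from the paper.

from dataclasses import dataclass

@dataclass
class Ripple:
    x: float
    y: float
    pressure: float        # normalized 0..1
    age: float = 0.0       # seconds since touch-down
    lifetime: float = 0.4  # seconds until fully faded

    def radius(self):
        # start at a pressure-scaled base radius and expand outward
        return (10 + 20 * self.pressure) + 40 * (self.age / self.lifetime)

    def alpha(self):
        # fade linearly to transparent over the ripple's lifetime
        return max(0.0, 1.0 - self.age / self.lifetime)

# Touch-down at (100, 200) with medium pressure, sampled at 60 Hz:
r = Ripple(x=100, y=200, pressure=0.5)
while r.age <= r.lifetime:
    print("t=%.2fs radius=%.1fpx alpha=%.2f" % (r.age, r.radius(), r.alpha()))
    r.age += 1 / 60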

Thoughts
I am curious as to why they did not include a sound component. I think the clicking noise of a button or key is a significant feedback element; why did they choose to leave it out? Also, with respect to future work, I wonder how they will choose to address the issue of mouse-in and mouse-out.

Thursday, January 21, 2010

UIST 09- Tongue Gestures

T. Scott Saponas, Daniel Kelly, and Babak A. Parviz (University of Washington, Seattle) and Desney S. Tan (Microsoft Research)
Summary
This paper explores new ways of interaction for those with severe physical handicaps. In most cases of gross motor loss, the ability to control the eyes, jaw, and tongue is unaffected. Much work has been done on eye tracking and speech recognition; this paper explores the largely untouched area of tongue input. Prior work on tongue sensing has used devices like a miniature joystick in the mouth controlled by the tongue, or pressure-sensitive buttons on a dental retainer. This paper asserts that these devices treat the tongue only like a finger, when in fact the tongue is a complex muscle often used to perform feats of dexterity like swallowing or generating speech. The device they created is an optical tongue-sensing retainer (pictured above). Their optical approach allows for tongue gesture recognition. In their experiment they used four gestures: left swipe, right swipe, tap on the palate, and hold on the palate. The qualitative results exposed some interesting tongue-unique problems. Gesturing with the tongue causes it to deform and change shape, and the gestures they implemented could be performed in multiple ways, e.g. a swipe against your front teeth or a swipe in the back of your mouth. Also, variation in mouth and tongue shape across participants required a large amount of custom configuration and calibration.
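To make the gesture set concrete, here's a toy sketch of how one might separate the four gestures given activation events from two hypothetical sensors ("L" and "R"). The two-sensor model and thresholds are entirely my invention; the paper's actual recognizer works on the raw optical signal.

HOLD_SECONDS = 0.5

def classify(events):
    """events: time-ordered list of (timestamp, sensor) activations."""
    sensors = [sensor for _, sensor in events]
    duration = events[-1][0] - events[0][0]
    if "L" in sensors and "R" in sensors:
        # both sensors fired: direction comes from which fired first
        return "swipe right" if sensors[0] == "L" else "swipe left"
    # one sensor fired: short contact is a tap, sustained contact a hold
    return "hold" if duration >= HOLD_SECONDS else "tap"

print(classify([(0.00, "L"), (0.12, "R")]))  # swipe right
print(classify([(0.00, "R"), (0.10, "L")]))  # swipe left
print(classify([(0.00, "L"), (0.08, "L")]))  # tap
print(classify([(0.00, "R"), (0.70, "R")]))  # hold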

Thoughts
Although I was not previously aware of tongue sensing technologies, after hearing about the approaches of prior work I was intrigued by this paper's view of the tongue as a unique modality instead of as a finger. I was disappointed, however, by the uninventive gestures. Swipes, taps, and holds seem to me to be finger-centric designs; after they defined the tongue as a unique, non-finger-like control, I was hoping for some more innovative tongue gestures. I'm not sure, but I would maybe use the tongue motions you naturally make when speaking. That way you could map gestures to different sounds like 'eeh', 'ooh', 'sst', 'ess', 'mmm', etc. I think this would be far more intuitive and useful.


UIST 09- Always On Muscle Input

T. Scott Saponas and James A. Landay (University of Washington, Seattle), Desney S. Tan and Dan Morris (Microsoft Research), Ravin Balakrishnan (University of Toronto), and Jim Turner (Microsoft)
Summary
The focus of this paper is combining interface design principles with prior work on muscle sensing and gesture recognition to create an always-available muscle input for real-world use. To do this effectively the researchers had to answer an important question about detecting gestures versus relaxation: registering a correct gesture at the correct time and not registering a gesture when the user is in a transition state. One of the major complications is that EMG sensing of small finger-muscle movements is not nearly as fast or accurate as sensing of large muscle movements, while prior work has shown that large muscle movements alone lack the wide gesture set needed for everyday use. Building on past discoveries, the researchers came up with a unique bi-manual solution: the user forms gestures with the dominant hand and then uses the non-dominant hand to signal when a gesture is formed. This method allows the input to be always available without registering incorrect gestures. They used a set of four gestures for their experiment (pinching thumb and index, thumb and middle, etc.), while the non-dominant hand used only one gesture, squeezing a fist, which is a large muscle movement. In their experiment they tested gesture recognition accuracy over three hand states: free hand, holding a mug, and carrying a weighted bag. The system's gesture accuracy for each case was 79% free hand, 85% holding a mug, and 88% carrying a weighted bag.
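The bi-manual clutch is simple to sketch: commit the dominant hand's current gesture only on the rising edge of the non-dominant fist squeeze. The sample-stream format below is hypothetical, not the paper's actual pipeline.

def committed_gestures(samples):
    """samples: iterable of (dominant_gesture_or_None, fist_squeezed)."""
    was_squeezed = False
    for gesture, squeezed in samples:
        # rising edge of the squeeze commits whatever gesture is formed
        if squeezed and not was_squeezed and gesture is not None:
            yield gesture
        was_squeezed = squeezed

stream = [
    ("pinch-index", False),   # forming a gesture, not committed yet
    ("pinch-index", True),    # fist squeeze begins -> commit
    ("pinch-index", True),    # still squeezing -> no double commit
    (None, False),            # relaxed transition state -> ignored
    ("pinch-middle", True),   # new squeeze -> commit the new gesture
]
print(list(committed_gestures(stream)))  # ['pinch-index', 'pinch-middle']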


Thoughts
I was very impressed with the user study results; they were very thorough in their testing. The fact that they tested the gestures while holding a cup and carrying a bag answered a lot of questions about possible functionality, and I also liked that they included a real-world scenario in their experiment. The gesture set is not yet complete enough to be truly useful, but it is encouraging that they are focusing on questions of actual usability. I was disappointed that they don't actually have an EMG armband like the one in the picture; instead they used a series of electrodes, and the participants had to remain seated next to a large EMG device. Although the technology they used is not at a real-world level, this paper answered a significant number of important questions with a unique solution.

Intro Poster

My intro poster.


I couldn't resist adding some pictures of my cat, Holland. They were taken at the Houston Cat Show a few weekends ago. My wife and I dressed him up as a wizard for the costume contest. He was a very angry wizard, but still managed to win second place out of three.