Last week I had my PhD proposal defense, and I passed! I was a little nervous, but I think it went as well as could be expected, and my committee members gave great feedback. Of course, there's no rest for the weary; with that over, I'm back to literature review and working on content for the user study.

One point that was emphasized to me: it's likely that my study will not have positive results, because it's very difficult to demonstrate user adherence and retention in this area. I was strongly encouraged to focus on a solid experimental design, and not to be frustrated by results that are not significant. A strongly designed study contributes more to academic research because it allows others to follow your design in similar projects, and negative or neutral results still provide new information.

Of course I'm hoping my hypotheses will prove true, but I've taken their advice and continue working hard on this pilot. Which mostly involves sitting in a room while people play dance games, and that's pretty awesome.

This week was all about organizing how this user study will actually take place. I used my skills from being a program manager two summers ago to treat this like a software design project, and I created a User Flow for the study. From a participant's perspective, I wrote out every meeting and action they would be involved in. This turned out to be extremely helpful. Not only can I use the diagram in my proposal presentation, but it brought to light several areas that still need to be figured out. I won't be posting this diagram online until after the study is done, but here is some of what I learned from it.
Time per Participant: We have a condition that relates to monitored time, where half the participants will be moderated and half will work on their own. The in-lab participants could take 16.5-31.5 hours each of moderated time, depending on their own initiative. The other participants will take 6.5 hours in the lab. With this in mind, we can see how much in-lab time we have and how many participants can be run in a given number of weeks. Task for next week: find out how this maps out in weeks of moderator hours.
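The hour estimates above make that "task for next week" a back-of-the-envelope calculation. Here's a minimal sketch; the per-participant hours are from the numbers above, while the moderator availability per week and the participant split are hypothetical placeholders:

```python
# Rough scheduling math for the pilot, using the per-participant
# hour estimates above. Moderator capacity per week is an assumption.
MODERATED_HOURS = (16.5, 31.5)   # min/max in-lab hours per moderated participant
UNMODERATED_HOURS = 6.5          # in-lab hours per self-guided participant
MODERATOR_HOURS_PER_WEEK = 20    # hypothetical weekly availability

def weeks_needed(n_moderated, n_unmoderated,
                 hours_per_week=MODERATOR_HOURS_PER_WEEK):
    """Return (best-case, worst-case) weeks of moderator time."""
    lo = n_moderated * MODERATED_HOURS[0] + n_unmoderated * UNMODERATED_HOURS
    hi = n_moderated * MODERATED_HOURS[1] + n_unmoderated * UNMODERATED_HOURS
    return lo / hours_per_week, hi / hours_per_week

# e.g. a hypothetical split of 5 moderated + 5 unmoderated participants:
best, worst = weeks_needed(5, 5)   # -> (5.75, 9.5) weeks
```

The wide best/worst spread comes entirely from how much initiative the moderated participants take, which is exactly why this estimate is worth doing before recruiting.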

Data per Participant: Between website time, physical data, and surveys, there is a lot of information being generated. There are 8 body measurements being taken, and each session will also have several heart rate measurements. On top of that, there are four different data logs of their activity on the website.

How many documents needed: We estimate 6 questionnaires which need to be drafted up soon. These are based on previous studies so it shouldn’t take too long.

How much story content needs to be written: For one of our potential conditions we are creating lots of narrative content. I'm trying to write enough to encourage someone who is engaged to work out every day, or push themselves for longer sessions, in order to finish the storyline. I'm aiming to finish a scene a day in order to be all set for the beginning of the pilot. Even if some of the final chapters aren't finished, this should still be fine, because no one will be able to reach those last chapters in the first couple weeks, buying me more time.

As we approach the pilot study, I'm thinking more and more about which conditions we will keep. We are piloting a whopping 10 conditions, and I'm hoping to keep four of them at the most. Well, I guess if I already knew the best course of action, it wouldn't be research, right?

Time for the weekly update! So most of my focus this week is on preparing to do a pilot study. This pilot study is really important because the planned study will be 6 weeks long per participant. That’s a lot of time to make up if there’s an error in the format!
Here are some things I’ve been working on this week:

  1. Paperwork: Without going in much detail, I’ve had a lot of paperwork to fill out regarding my degree status, and I’m working hard to make sure everything is settled and I can graduate on time. A classmate has thoughtfully put up a guide to doing everything for the proposal presentation, so I’m grateful for that!
  2. Running Participants: We will need 10 participants for the pilot study to try out all the conditions we are considering for the real study. Running ten conditions in the full study would be a large time commitment and would risk failure, so we are hoping that the pilot will illuminate which conditions are most interesting. I've also been trying to figure out the logistics of how many people can be run in a week from my side of things.
  3. Equipment: For this study, we will need to lend out Kinect/Xbox360 bundles to many participants. We can’t use only people that already own Kinects, because that might skew our results. So I’m in the process of tracking down how we might be able to get these at a discount, and rent them out in a logical way.
  4. Scheduling: One of the biggest pain points of running a user study is managing a calendar of appointments. A typical study for us had 30-40 people, and this one could be much larger than that. We not only have to worry about cancellations and changes, but in this case each person has to come in multiple times and for multiple types of appointments. Luckily, I think I found a good solution with acuityscheduling.com. This site is free if you have one person's calendar, and it allows anyone to sign up for time periods that are listed as free. I'm hoping this will streamline the process and allow me to simply show up for the study sessions without too much overhead.
  5. Art Assets: I've found someone to help me with art assets for one of the modes we are testing. This mode requires a lot of nice artwork in order to be competitive with similar games made by professional companies. We should be adding another 64 items soon. Very excited to have the help!

Next on my list: I still have a lot of story content to write for the narrative portion of our investigation. So next week will be about working out the details of using the lab for pre-experiment sessions, and getting the timeframe for this pilot figured out.

I am in the last stretch of my PhD work, so I thought I’d try to keep track of how things are going again!

It's a rough time, the calm before the storm, as I plan the logistics of a pilot study. There are two parts I am launching. One is a straight-up pilot of what the real study will be: a longitudinal experiential study examining different conditions of playing a dance game. The other part will be setting up several informal interviews to get feedback on the types of rewards and motivations people would like in fitness games.

Here’s what’s on my immediate task list:

  • Double check my paperwork is in order with the school
  • Sync up with my committee members, make sure they know about this blog (Hello!)
  • Figure out what I need to compile for IRB submission
  • Figure out logistics of running study, estimate workload

Doesn’t look so hard, but there’s a lot to figure out before getting all these participants invested!

An outtake from our screen captures for our upcoming paper deadline. I had no idea that this was going to be the picture saved for use in our paper! I keep asking to have it changed since I’m not doing anything, but I was told ‘it looks cool’…

From our latest dance game prototype. Data from the Optrima depth camera in our lab. Sparkly thanks to the DPSF plugin.


My colleague Juliet Norton and I visited the Monterey Bay Aquarium for fun and for research. The aquarium is a beautiful mix of animal exhibits and interactive technology. Here is a brief video and our thoughts on the experience.

0:13 The Otter Habitat

These are sea otters, which are quite different from river otters. This habitat had two levels. The bottom level had an underwater view only, but there were few fish and almost no flora. It was rather unexciting because the otters were playing at the surface. The top level showed the top 4 feet of water and the surface. The land part of the habitat wasn't terribly large (only about 20 x 5 feet and mostly rocky), but the water was quite deep. The entire habitat had a very "man made" feel. The otters were sleeping in the water on their backs, but had plastic toys to play with when awake. We didn't see a feeding, but this camera shows it daily.

0:35 Pledge to help the environment

This interactive interface aims to motivate children to help keep the environment clean by taking a pledge and cleaning up a virtual environment. On a touch-screen proximal display, a user chooses his or her pledge (e.g. picking up trash, recycling, reducing fuel use) and their picture is taken. The headshot is placed on the user's avatar in a virtual town (presumably of Monterey) shown on a large global display. Each user's avatar begins working towards their chosen pledge. The camera focus routinely changes between users so each person can see their involvement in keeping Monterey clean. The user can continue to interact with the proximal display by exploring available trivia about the current pledge, or decide to pursue another pledge.

There were a lot of people around this station. The children were the ones interacting and pledging, but parents were enthusiastically helping them or watching.

1:00 Death of a Whale

This kiosk educates children about the circle of life under the sea. The global display shows a whale that has died and sunk to the sea floor, and a narrator explains how the whale died and the technology used to monitor it on the ocean bottom. There were several proximal stations where the user's interaction occurred. Users are asked to collect video of eight different organisms found around the decaying whale (all virtual assets). When you spot an organism, you select it on the touch screen and can then collect the video recording. A pop-up window shows a video clip of the corresponding live animal to indicate that you have collected the video. This is the extent of the interaction with the proximal display; the global narrator is where the bulk of the information came from. There were also plastic buttons to the side of the touch-screen proximal displays, but they were non-functioning and kind of distracting.

1:30 Magnifying Glass

There was an open aquarium of water where coral and other small organisms were living, but on top were several floating magnifying glasses anchored to the side wall with wire. There was a sign that said "Do Not Touch," but it was unclear whether the sign referred to the magnifying glasses or to the water and organisms. Forbidding use of the magnifying glasses would remove all possible interaction, rendering the station less than purposeful. I took the risk and moved the glass around, and was able to find a tiny little crab that would have otherwise gone unnoticed.

1:40 Safe Wildlife Encounters

This station aimed to educate users about how humans can harm sea life when walking around tide pools and picking up animals. There are four user stations around a large global display. The global display features a virtual tide pool and footprints that represent the four users. Each user station has a joystick for moving their footprints and a small touch screen that reports warnings when you're about to step on sensitive areas full of life. When you get close enough, you are given the choice to take a photo or to pick up an organism. If you pick up an organism that can be harmed, you are told so and asked to only look at these creatures when you encounter them in real life.

2:22 Making Informed Decisions About Seafood

This experience informs visitors of the consequences of eating certain kinds of seafood. Users sit down at a diner counter and order from a touch-screen menu by selecting one of three available options on the menu. The restaurant staff informs you about your decision and tells you why it is safe for you and the ecosystem to eat that item or not. There are many menu stations (proximal displays) but the restaurant staff can only address one order at a time, so the orders are queued.

3:15 Explore the Habitat of Tiny Hermit Crabs

This aquatic tank featured a rocky habitat in which hermit crabs live. This was only an exploratory experience for users, via a movable camera. The camera had a joystick to translate on the XZ plane and a switch that controlled translation in the Y direction (assuming Y is up). I discovered a third control that allowed the camera to pivot, and instinctively tried to get my friend in the frame. If the pivot feature had been removed, it is likely I would have focused on the mysterious lives of the hermits.

Conclusion

This video has provided a glimpse of what can be found at the Monterey Bay Aquarium. Thought for the visitor's experience can be found in all design aspects of this establishment, from the interactive learning experiences featured in this film to placing deep-sea exhibits on the bottom floor and coral and shoreline exhibits on the top floor. For further questions please contact Emiko and Juliet or visit the Monterey Bay Aquarium website.

Howdy from Monterey! Specifically Asilomar. I am in a giant barn surrounded by tired but talkative game design people. This is my second time at Foundations of Digital Games, and so far it has been a blast; I'm really glad I was able to come back this year. Yesterday I attended the Intelligent Narrative Technologies Workshop, a fascinating look at storytelling technology which pretty much broke my brain by the end of the day. My pen is almost dry from taking notes.

Reasons why the last two days have been pretty awesome:

  • We are right next to the ocean. Like, the sound of waves is everywhere. It’s so ‘wild’ we found a dead seal and watched a deer take a pee.
  • We went to the Monterey Aquarium to take in how they incorporate technology in their animal exhibits. Blown away! Lots of cool education games working side-by-side with the animal environments! Video coming.
  • Great people are pretty much everywhere. I need a mixed reality address book HMD to keep track of everyone, my brain is tired.
  • Catered food and wine: YES! It certainly dissolved our reluctance to talk to strangers.
  • Adding to my reading list as I try to keep track of all the cool things people are involved with…
  • As usual at conferences, I get a lot of brainstorming done during the downtimes. First draft of proposed dissertation work is looming over my head, after all!

Reasons why the last two days could have been better:

  • Being from Florida, I expected it to be windy but mild weather. Now I'm very cold.
  • It could have been two days on a boat to the Bahamas, like last year. But I’m sure I’m not the only one sad about that.
  • There's so much going on that it's been difficult to maintain my P90X workout schedule. No, I'm not joking.

If you happen to be at the convention, come see my colleague, Juliet Norton, speak on her paper “Exploring Strategies and Guidelines for Developing Full Body Video Game Interfaces” at 7:30 pm on Sunday.

One of my advisors, Dr. Joseph LaViola Jr., was asked to write an article for Gamasutra on 3D interaction in games. We’ve experienced some serious troubles trying to communicate between industry folk and academia and this is one way that he is trying to increase awareness of what has been going on in research over the years.

You can read it here.

If you enjoy the article, he has appended a reading list with some related papers from our ISUE lab, including my first SIGGRAPH Sandbox paper on dance game interfaces.

Mentioned before in this blog and elsewhere, but here is some footage of my mixed reality painting prototype. This project was created in order to facilitate physical therapy for stroke patients recovering from upper arm impairments.

So far this demonstration has been part of ISMAR 2009 and Otronicon 2010. The focus of this project is on making therapy a less difficult and tedious experience through enjoyable virtual activities. It is designed for three different skill levels based on the mobility of the patient. This was a challenge because at the lowest level patients may have essentially a frozen limb that can barely move millimeters.

Some specific features of this system:

  • 3D Stereo: Using the same technology found in current movie theaters, we project the virtual world onto the real space in 3D.
  • Head Tracking: The stereo glasses are tracked so that the user's head movements translate the view of the virtual world, allowing them to move and change brush color or look behind the canvas at the fruit bowl they are painting.
  • Many Activities: For variety in context, we designed nine distinct games that may appeal to different people.
  • Same Gameplay: The main purpose of this system is to ease the burden of physical therapy drills, so despite the contextual differences in gameplay between activities, the controls are almost the same.
  • Passive Haptics: The feel of the real paintbrush against the clear acrylic pane gives the user the tactile sensation of painting.

Work on this project is ongoing as we wrap up the software and plan future user testing. This project was made possible by Emiko Charbonneau, Steven Braeger, Daniel Mapes, Eileen Smith, and Charles E. Hughes.
