[Written by Noah Liebman] During my internship at FUSE Labs, Jürgen Brandstetter and I deployed Fidgebot for about 2.5 weeks at MSR NYC and at FUSE Labs in Redmond, WA. We collected data on system use to examine how Fidgebot supported players' physical-activity-at-work goals, and we interviewed participants about their experience. This post focuses on a preliminary analysis of the quantitative data.
Did players use Fidgebot? People came and went for conferences and vacation, so the orange shows the number of possible players each day, while the blue shows how many people actually engaged with Fidgebot that day.
In general, fewer than half the eligible players were active on any given day. There is also a clear trend: participation is highest at the beginning of each week and drops off as the week progresses. This may be partially attributable to the reminder emails we sent each participant, though we also sent emails at other points in the week.
Goal achievement was split between two kinds of goals: micro-exercise goals and standing goals. We looked at each by individual and over time.
The number of micro-exercise breaks each person aimed to complete over the 2.5-week period was based on both their daily goal and the number of days they were active. In this graph, the orange bars show the number of micro-exercises each person wanted to do; since each person's goal was ten per day, the orange bars effectively tell us how many days each person was active. The blue bars show how many micro-exercises were actually completed.
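The arithmetic behind the orange and blue bars can be sketched in a few lines. The field names and sample figures below are hypothetical; the real logs differ.

```python
# Minimal sketch of the target/achievement computation described above.
DAILY_GOAL = 10  # every player kept the default of ten micro-exercises per day

# Hypothetical per-player activity: days active and micro-exercises completed
players = {
    "player_a": {"days_active": 8, "completed": 31},
    "player_b": {"days_active": 3, "completed": 12},
}

for name, p in players.items():
    target = p["days_active"] * DAILY_GOAL        # the orange bar
    achieved_pct = 100 * p["completed"] / target  # blue relative to orange
    print(f"{name}: target={target}, achieved={achieved_pct:.0f}%")
```

This also makes the point in the text concrete: with a fixed daily goal, the target scales directly with days active, so the orange bars double as a participation measure.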
There are a few things to notice here. First, the two teams on the left (both located at FUSE Labs in Redmond, WA) were far more active than the two teams on the right (both located at MSR NYC). The FUSE teams also outperformed the NYC teams in goal achievement, although user 34 brings NYC's percentage roughly on par with Redmond's. Second, with the exception of a few outliers, people didn't come close to meeting their micro-exercise goals. This is less than impressive, and may offer system designers insight into how goals and progress should be made apparent. The fact that no player changed their number of micro-exercise breaks per day from the default also suggests that we set this goal too high.
Reviewed over time, micro-exercise goal achievement shows the opposite of the participation trend: it increases through each week. This is probably because, as less interested or less motivated people drop off during the week, the average rises among those who remain active.
Viewed by player, standing-desk usage paints a similar picture. This time, orange is the number of hours people wanted to stand, which again increased with the number of days they used Fidgebot, and blue is the actual time spent standing. Where blue exceeds orange, a person exceeded their goal. Again, the teams at FUSE Labs in Redmond are more active than the teams at MSR NYC (that is, their orange bars are higher), but goal achievement is about the same.
The main difference, in absolute terms, is that goal achievement was much higher. Unfortunately, this is not because people were much better standers; it is because people frequently forgot to log when they came or went from their desks, or when they sat or stood. We were able to remove the most egregious cases, like people apparently standing at their desks all night, but we were still left with quite a bit of what is presumably excess standing time.
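The cleaning step described above might look something like the following. The session format and the four-hour cutoff are assumptions for illustration, not our actual pipeline.

```python
# Hedged sketch of one way to drop implausibly long standing sessions,
# such as an "overnight" stand caused by a forgotten log entry.
from datetime import datetime, timedelta

MAX_PLAUSIBLE_STAND = timedelta(hours=4)  # assumed cutoff for a single session

# Hypothetical (start, end) standing sessions for one player
sessions = [
    (datetime(2014, 8, 4, 10, 0), datetime(2014, 8, 4, 11, 30)),  # 1.5 h: keep
    (datetime(2014, 8, 4, 17, 0), datetime(2014, 8, 5, 9, 0)),    # overnight: drop
]

cleaned = [(s, e) for s, e in sessions if e - s <= MAX_PLAUSIBLE_STAND]
total_hours = sum((e - s).total_seconds() for s, e in cleaned) / 3600
print(f"standing time after cleaning: {total_hours:.1f} h")
```

A hard cutoff like this only catches the extreme cases; shorter stretches of forgotten logging slip through, which is why excess standing time remains in the data.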
Standing goal achievement over time looks substantially different from micro-exercise achievement: it declines for the first 1.5 weeks, then heads back up. We're still working out the reason, but this may be one case where the back-to-back emails on the final Monday and Tuesday of the pilot affected activity. Alternatively, the trend might be driven overwhelmingly by just a few individuals. We still need to tease this apart.
Finally, when we look at overall goal achievement (micro-exercises and standing combined) broken down by team, we can see a few things. One is that it falls within a fairly narrow range (55%–63%), suggesting that, while there's a lot of variation between people, it tends to average out over the four or five members of a team. We also see that both Redmond and New York have one relatively high-performing team and one relatively low-performing team. While not conclusive, this suggests that although participants in Redmond are more active in the system, they do not perform any better. That engagement and goal achievement may be independent is interesting and worth investigating further.
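One plausible way to arrive at a combined team figure like those above: compute each player's micro-exercise and standing achievement ratios, average the two, then average across the team. The sample data, the equal weighting of the two goal types, and the cap on standing overshoot are all assumptions, not our actual method.

```python
# Hypothetical team of players with per-person goals and achievements
team = [
    {"micro_done": 30, "micro_goal": 80, "stand_done": 12.0, "stand_goal": 15.0},
    {"micro_done": 50, "micro_goal": 60, "stand_done": 9.0, "stand_goal": 14.0},
]

def player_achievement(p):
    """Average of a player's two goal-achievement ratios (assumed weighting)."""
    micro = p["micro_done"] / p["micro_goal"]
    stand = min(p["stand_done"] / p["stand_goal"], 1.0)  # cap overshoot at 100%
    return (micro + stand) / 2

team_pct = 100 * sum(player_achievement(p) for p in team) / len(team)
print(f"team goal achievement: {team_pct:.0f}%")
```

Averaging per-person ratios first, rather than pooling raw counts, keeps one highly active player from dominating the team's number, which matters when teams have only four or five members.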
Keep a lookout for Jürgen's post on the team that had a humanoid robot encourage their exercise breaks at the office.
Jürgen and I are looking forward to diving into further analysis for a CHI Work In Progress paper.
Be sure to check out our other posts on why, the robot, and qualitative results (upcoming).