At Cornell University in Ithaca, there’s a group of researchers working to teach electronic devices what makes people tick. The group, called People-Aware Computing, is headed by Tanzeem Choudhury, a professor of information science at Cornell. Lately, Choudhury’s electronic device of choice is the mobile phone. Beyond being a gadget for making calls and checking email, a smartphone is a collection of sensors that sits at most people’s side for most of the day. Currently, though, a large portion of a phone’s sensor data is ignored.
But what if phone sensors were programmed to pay better attention to us? We are currently slaves to our phones, constantly checking for updates. But what if our phones constantly checked for updates on us? One of Choudhury’s latest projects, an Android app called BeWell, is trying to answer these questions.
A watchful phone might sound scary to some. After all, even consumer-favorite Apple experienced a vocal backlash last year when people learned location data collected by their iPhones could easily be exposed. But Choudhury believes that there are privacy-sensitive ways that phones can act as windows into people’s habits, health, and happiness. Her software specifically lets people control who has access to this information.
BeWell is designed to passively collect data about a person’s overall health. The app infers the state of a person’s health by looking at three different measures: social interaction, physical activity, and hours of sleep. Impressively, it does all this without any input from the user. As long as the phone is in your pocket, purse, or resting nearby (and picked up when you move), it’s able to get a read on these three dimensions.
I caught up with Choudhury to talk about her goals for BeWell, how it works, and why designing an app like this is such a challenge. An edited version of our conversation follows.
Kate Greene: Why is now a good time for an Android phone app that passively tracks your health like this?
Tanzeem Choudhury: There have already been similar consumer apps with some traction. People are familiar with apps like Runkeeper and also with external devices like the Fitbit [pedometer]. But a lot of these apps are focusing on one dimension or requiring the user to engage in terms of entering their information. Pretty much none of them look at health as a multidimensional problem and try to actively push behavior change. We’re interested in giving people the right cues to make changes in a positive direction.
KG: But in order for this to work, you have to assume something about the way people use their phones. How can you be sure you’re picking up the right information about social interaction, activity, and sleep?
TC: The most extensive testing we’ve done is with people carrying phones in their pockets. But I don’t often carry my phone in my pocket; a lot of the time it’s in my purse. In that case, there’s a lot more noise. Noise is a reason we limit our analysis to coarse measurements. If you keep things coarse, you don’t need perfect data.
KG: Why are you focusing on social interactions, physical activity and sleep?
TC: We chose three dimensions because we wanted to go beyond physical wellbeing and physical fitness. One area that I’ve been interested in is mental health. It tends to remain hidden for a long time. The three things that came up in our discussions with doctors were physical activity, social interaction, and changes in sleep patterns. They’re all indicative of mental health. We felt like we could measure these things, and they would be key to understanding mental wellbeing.
KG: How does the phone measure physical activity?
TC: We are looking at physical activity purely using the phone’s accelerometer. We have three categories: sedentary, walking, or running. Walking includes jogging. It’s mostly about how quickly a person takes steps, and that’s different for different people. It’s not as fine-grained or accurate as it would be if the device were positioned in a specific location [on the body]. But it does allow us to classify the data into these three categories, which correspond to a metabolic equivalent used to estimate the total energy spent.
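The approach Choudhury describes can be sketched in a few lines of code. This is an illustrative sketch only, not BeWell’s actual algorithm: the motion thresholds and MET values below are assumptions chosen for demonstration, and a real system would calibrate them per user.

```python
# Illustrative sketch: classify accelerometer windows into the three
# coarse activity categories described above, then map each category
# to an assumed metabolic-equivalent (MET) value to estimate energy.
# Thresholds and MET values are made-up assumptions, not BeWell's.

def magnitude_stddev(samples):
    """Standard deviation of accelerometer magnitude over a window
    of (x, y, z) readings."""
    mags = [(x * x + y * y + z * z) ** 0.5 for x, y, z in samples]
    mean = sum(mags) / len(mags)
    var = sum((m - mean) ** 2 for m in mags) / len(mags)
    return var ** 0.5

def classify_activity(samples):
    """Return 'sedentary', 'walking', or 'running' for one window,
    using assumed thresholds on motion variability."""
    sd = magnitude_stddev(samples)
    if sd < 0.5:       # barely any motion
        return "sedentary"
    elif sd < 3.0:     # moderate, step-like motion (includes jogging)
        return "walking"
    return "running"

# Assumed MET value per coarse category (illustrative only).
METS = {"sedentary": 1.3, "walking": 3.5, "running": 8.0}

def energy_estimate(windows, window_minutes=1):
    """Rough total energy in MET-minutes across classified windows."""
    return sum(METS[classify_activity(w)] * window_minutes
               for w in windows)
```

The point of keeping only three coarse classes is exactly the robustness Choudhury mentions earlier: with categories this broad, noisy phone placement (pocket versus purse) is far less likely to flip a window into the wrong class.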
KG: And for social measurements?
TC: For social, we’re looking at face-to-face social interactions as well as activity on the phone. We’re looking at whether you’re having conversations with people. We’re just looking at the total number of interactions you have; at this point we’re not looking at the nature of those interactions, whether they’re stressful or positive or negative.
KG: So the phone’s microphone is an important sensor for social measurements. How do you ensure privacy?
TC: We’re not looking directly at the audio content: all of the processing is done on the phone. Anything that leaves the phone can’t be used to reconstruct words. We’re looking for a certain energy profile in a human voice. Over time you can determine that it’s bimodal, which tells you it’s a conversation between two people. We also look at texting, but we’re not looking at content, just how much time you spend texting and the number of messages.
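The two steps Choudhury describes — finding voiced frames by energy, then checking whether a voice-related feature is bimodal — can be sketched as follows. This is a toy illustration under assumed thresholds, not BeWell’s pipeline; note that only derived frame features appear, never audio content.

```python
# Illustrative sketch of privacy-preserving conversation detection:
# work only with per-frame features (energy, a pitch-like value), not
# the audio itself. The noise floor, separation threshold, and crude
# two-means split are assumptions for demonstration.

def speech_frames(energies, noise_floor=0.1):
    """Indices of frames whose energy exceeds an assumed noise floor."""
    return [i for i, e in enumerate(energies) if e > noise_floor]

def is_bimodal(values, separation=20.0, iters=10):
    """Crude two-means split: True if the feature values (e.g.
    per-frame pitch estimates) form two well-separated clusters,
    suggesting two alternating speakers rather than one."""
    c1, c2 = min(values), max(values)
    for _ in range(iters):
        g1 = [v for v in values if abs(v - c1) <= abs(v - c2)]
        g2 = [v for v in values if abs(v - c1) > abs(v - c2)]
        if g1:
            c1 = sum(g1) / len(g1)
        if g2:
            c2 = sum(g2) / len(g2)
    return abs(c1 - c2) >= separation
```

For example, pitch estimates clustering around two distinct values would register as bimodal (a two-person conversation), while a single speaker’s narrow range would not.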
Privacy overall is important. BeWell is in two versions. There’s BeWell on the phone and BeWell Plus that lets you see your data online. But the data never has to leave your phone. If people just want to see how they’re doing, data stays on the phone. But if they want to see trends, compare to others, and get more accurate measurements of their own health, then they can send it off their phone. [If people choose to send their data off their phone to a server, then software analyzes it to classify a person into a group of people with similar profiles. These profiles are used to refine individual measurements and suggested lifestyle changes. Medical students, for example, have different profiles than nursing home residents, and this knowledge helps the system’s accuracy.] Also, we’re not doing anything with location data.
KG: How do you monitor sleep without input from a person?
TC: You’re probably familiar with the sleep cycle alarm clock. You put your phone on your bed and it measures how much you move. When we started, we found a lot of people didn’t want to put their phone on their bed. So what we do is called best-effort sleep sensing. It’s not the most accurate. In our testing, it was accurate to within about an hour. So it’s coarse, but we don’t require the user to do something specific.
Essentially, we combine sound and the person’s interaction with the phone. We assume that you charge your phone overnight when you’re sleeping. We assume you don’t interact with your phone when you’re sleeping. Also, a lot of people interact with their phone just before sleep and right when they wake up, but not everyone does. So we look at whether your phone is being charged, if it’s in an environment that’s quiet, and the time of day.
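Combining those cues — charging, quiet, no interaction, time of day — amounts to a simple scoring heuristic over hourly observations. The sketch below is an assumed illustration of that idea (the cue weighting and the 10 p.m.–8 a.m. nighttime window are my assumptions, not BeWell’s parameters):

```python
# Illustrative sketch of "best-effort" sleep sensing: score each hour
# by cheap signals the phone already has, then take the longest run of
# sleep-like hours. Cue counts and the nighttime window are assumed.

def looks_asleep(hour, charging, quiet, screen_on):
    """True if at least three of the four sleep cues hold this hour."""
    cues = 0
    if charging:
        cues += 1
    if quiet:
        cues += 1
    if not screen_on:
        cues += 1
    if hour >= 22 or hour < 8:   # assumed typical nighttime window
        cues += 1
    return cues >= 3

def estimate_sleep_hours(observations):
    """Longest consecutive run of sleep-like hours.
    observations: list of (hour, charging, quiet, screen_on)."""
    best = run = 0
    for obs in observations:
        run = run + 1 if looks_asleep(*obs) else 0
        best = max(best, run)
    return best
```

A heuristic like this is coarse by design; as Choudhury notes, what matters is that it tracks the same person consistently from night to night, so trends remain visible.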
Even though it’s not the most accurate, having people use it in their daily lives so they can see trends over time will be very important. You can see a change that indicates a decline. That’s our overall philosophy. Ultimately it’s the change that’s going to be more important than the absolute measures. And if we can make it easy for people, then we’ll still be able to provide a lot of interesting feedback.
KG: Tell me about the feedback.
TC: One type of BeWell feedback is active wallpaper. It comes from UbiFit, a project from Sunny Consolvo at Intel. She designed a flower garden as a representation of physical activity level, with different types of flowers for different activities. It’s a great example of taking advantage of opportune moments [such as people looking at their phone] rather than actively prodding people [such as with a phone alert].
Since we are monitoring three dimensions, we decided to use a school of fish for activity and social interaction and light in water to represent sleep. Click on the fish and you get your scores and suggestions.
KG: What makes creating an app like that a challenge?
TC: BeWell is robust enough that we feel we can put it out there. But there are fragmentation issues with some of the Android phones. We’ve experimented with the Samsung Galaxy and with those we get about 16 to 17 hours of battery life, but then there are other phones where we get pop-up messages that the battery will die in about 6 or 7 hours. That’s a problem that we’ll fix in updated versions of the app. We also want to get more data for how to improve the app overall, something that will come from more users.
Battery drain is why people uninstall apps on phones. But if we purely optimize on battery power, then the scores are meaningless, so there’s a tradeoff. A lot of the time you don’t actually need to have the sensors on. Silence is an example, or when someone’s sedentary. That’s what we do now. We’re not doing continuous sensing. The other thing we do is coarser and coarser sensing based on battery power. But there’s a lot more we can do. Once we know a person’s lifestyle, we can adapt sampling based on the person.
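The two tactics Choudhury mentions — coarser sampling as the battery drops, and skipping sensing during uneventful stretches — can be sketched as a small duty-cycling policy. The interval values and the three-window rule here are illustrative assumptions, not BeWell’s actual parameters:

```python
# Illustrative sketch of battery-aware duty cycling: sample sensors
# less often as the battery drains, and skip the next sensing window
# when recent windows were uneventful. All numbers are assumptions.

def sampling_interval(battery_pct):
    """Seconds between sensor windows, coarser as battery drops."""
    if battery_pct > 50:
        return 30       # full-rate sensing
    elif battery_pct > 20:
        return 120      # coarser sensing
    return 600          # near-empty: very sparse sensing

def should_sense(recent_states, battery_pct):
    """Skip the next window if the last few were uneventful
    (e.g. the user was sedentary) and the battery is low."""
    uneventful = (len(recent_states) >= 3 and
                  all(s == "sedentary" for s in recent_states[-3:]))
    return not (uneventful and battery_pct <= 20)
```

The tradeoff Choudhury describes is visible directly in the code: dropping to a 600-second interval saves power but makes the resulting scores coarser, which is acceptable only because the app reports trends rather than precise absolute values.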
KG: Any plans for an iPhone version of BeWell?
TC: One of the great things about iPhones is that they have the best sensors. The accelerometer on Android phones varies by platform, and then so does the sampling rate, which matters even if you’re going to do a simple estimation of walking. Whereas with iPhone, you never have that problem. You don’t have the fragmentation issue.
We use Android because we can do the continuous monitoring that we need to do: record microphone data and process data during a phone call, for instance. It’s important to have a computer be an open book, but with the iPhone, you only have access to certain things. From an academic perspective, you don’t want to spend too much time reverse engineering, which we did in the beginning. You want to spend time in actual research.
Apple doesn’t really engage with academics in research. It’s hard to know if they’ll open up. If they did, they’d be an excellent platform in terms of building stable apps that you know will work in the same way on all phones.