When we breathe, we draw oxygen down into our lungs, where it is picked up by red blood cells and transported around the body. In a sense, oxygen is a fuel that enables the body to function, and we need a steady supply of it to survive. Most healthy people have an oxygen saturation between 95% and 100% at all times, but when disease strikes, saturation can drop, sometimes dangerously low.
In a proof-of-concept study, researchers from the University of Washington and the University of California San Diego have shown that regular smartphones can be used to detect blood oxygen saturation levels down to 70%, a key threshold recommended by most health bodies.
Oxygen saturation is the fraction of oxygen-saturated hemoglobin relative to total hemoglobin in the blood, and it can be a valuable indicator and even a predictor of disease symptoms.
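Expressed as a simple formula, a saturation reading is SpO₂ = HbO₂ / (HbO₂ + Hb) × 100%, where HbO₂ is oxygenated hemoglobin and Hb is deoxygenated hemoglobin.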
“It was actually a fairly timely paper,” says Shwetak Patel, one of the study authors, speaking at the Heidelberg Laureate Forum. “When the COVID situation happened, blood oxygenation was a big predictor of whether you’re going to be hospitalized or not, but most people didn’t have oxygenation monitors.”
In a hospital environment, doctors can monitor oxygen saturation with ease, but monitoring oxygen saturation at home can also be useful to keep an eye on potential symptoms. So Patel and colleagues proposed using phones.
As it turns out, by placing your finger over the camera and flash of a smartphone, you can get a good enough reading of your blood oxygen levels. This is powered by a deep-learning algorithm that converts the camera footage into a blood oxygen estimate.
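The raw signal behind this is a photoplethysmogram (PPG): with the flash shining through the fingertip, the brightness the camera records fluctuates slightly with each heartbeat. The sketch below is not the authors’ pipeline; it is a minimal Python illustration of the general idea, with a hypothetical placeholder function standing in for the trained model:

```python
import numpy as np

def frames_to_ppg(frames: np.ndarray) -> np.ndarray:
    """Collapse each video frame into one brightness sample.

    `frames` is an (n_frames, height, width, 3) RGB array recorded
    with a fingertip pressed over the camera and flash. Averaging the
    red channel across each frame yields a photoplethysmogram (PPG):
    a waveform that pulses as the blood volume in the finger changes
    with every heartbeat.
    """
    return frames[..., 0].mean(axis=(1, 2))

def estimate_spo2(ppg_window: np.ndarray) -> float:
    """Hypothetical stand-in for the study's deep-learning model.

    The real system learns the mapping from PPG windows to saturation
    from labeled pulse-oximeter data. This placeholder only mimics the
    shape of the textbook calibration SpO2 ~ 110 - 25*R, using a crude
    single-channel pulsatile-to-baseline ratio.
    """
    ac = ppg_window.std()   # pulsatile (heartbeat-driven) component
    dc = ppg_window.mean()  # steady baseline component
    return float(np.clip(110.0 - 25.0 * (ac / dc), 70.0, 100.0))

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    # Simulated 10-second clip at 30 fps with a faint ~1.2 Hz pulse.
    t = np.arange(300) / 30.0
    pulse = 0.02 * np.sin(2 * np.pi * 1.2 * t)
    frames = (0.6 + pulse[:, None, None, None]
              + 0.01 * rng.standard_normal((300, 8, 8, 3)))
    print(f"Estimated SpO2: {estimate_spo2(frames_to_ppg(frames)):.1f}%")
```

In the real system, the placeholder would be replaced by a deep network trained on paired smartphone and pulse-oximeter recordings; the crude ratio used here is only a caricature of that learned mapping.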
“So if you have a phone and you put your finger over the camera, you can instantly get a saturation reading to see how well you’re doing, and so you even have the ability to be able to screen for some of these things well before somebody is symptomatic. It is a very powerful thing that you can do,” Patel adds.
This isn’t the first time such an approach has been tried, but until now, smartphones were only able to detect readings down to 80-90%. That range isn’t particularly informative: your saturation can drop to around 90% just from flying on a plane, so readings in that band don’t really indicate anything. For the method to work clinically, it needs to be able to read down to at least 70%.
To test their method, the researchers delivered a controlled mixture of nitrogen and oxygen to gradually lower the blood oxygen levels of six participants, repeating the observation several times. Three participants identified as female and three as male; five identified as Caucasian, while one identified as African American. The participants were monitored with a medical pulse oximeter as well as with the smartphone camera approach.
“The camera is recording a video: Every time your heart beats, fresh blood flows through the part illuminated by the flash,” said senior author Edward Wang, who started this project as a UW doctoral student studying electrical and computer engineering and is now an assistant professor at UC San Diego’s Design Lab and the Department of Electrical and Computer Engineering.
The researchers report that 80% of the time, the smartphone correctly predicted whether the subjects had low blood oxygen levels. In total, the team acquired over 10,000 blood oxygen readings, ranging from 61% to 100%.
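To make the 80% figure concrete: it can be read as the rate at which the phone and a medical pulse oximeter agree on a binary “low oxygen” call. A minimal sketch, assuming a hypothetical 90% cutoff and made-up paired readings:

```python
import numpy as np

def low_spo2_agreement(phone: np.ndarray, oximeter: np.ndarray,
                       threshold: float = 90.0) -> float:
    """Fraction of paired readings where the phone agrees with the
    medical pulse oximeter on whether saturation is below `threshold`.
    The 90% cutoff is illustrative, not taken from the paper."""
    return float(((phone < threshold) == (oximeter < threshold)).mean())

# Toy paired readings (percent saturation), purely for illustration.
oximeter = np.array([97.0, 95.0, 88.0, 72.0, 91.0, 85.0])
phone = np.array([96.0, 93.0, 90.0, 75.0, 89.0, 84.0])
print(f"Agreement on low-oxygen calls: {low_spo2_agreement(phone, oximeter):.0%}")
```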
“Other smartphone apps that do this were developed by asking people to hold their breath. But people get very uncomfortable and have to breathe after a minute or so, and that’s before their blood-oxygen levels have gone down far enough to represent the full range of clinically relevant data,” said co-lead author Jason Hoffman, a UW doctoral student in the Paul G. Allen School of Computer Science & Engineering. “With our test, we’re able to gather 15 minutes of data from each subject. Our data shows that smartphones could work well right in the critical threshold range.”
It’s remarkable that something as ubiquitous as a smartphone can be used for medical-grade readings, but this is not a solved problem by any means. While the method is promising, it has so far only been tested on six participants, and there can be significant variations from person to person.
“One of our subjects had thick calluses on their fingers, which made it harder for our algorithm to accurately determine their blood oxygen levels,” Hoffman said. “If we were to expand this study to more subjects, we would likely see more people with calluses and more people with different skin tones. Then we could potentially have an algorithm with enough complexity to be able to better model all these differences.”
Skin color is another potential problem: camera-based imaging has proven, in some cases, to be more problematic for darker skin tones. Patel says the team is well aware of this and is working to address it, and a key part of that effort is conducting larger studies.
“I think that the bigger the data set, the better these models become. We focus on balanced datasets, making sure it’s equitable and works just as well with light skin tones and dark skin tones. That’s active research that we do,” Patel added.
“But there might be some biological or physiological sensing problems where you don’t even know how to balance. There’s male and female, skin tone, those kinds of things, but there are other physiological phenomena where we don’t know how to balance.”
Ultimately, the potential of using smartphones (along with machine learning algorithms) for medical readings is well worth pursuing, and this is a promising step towards that goal, but more work is needed to ensure reliability.
“It’s so important to do a study like this,” Wang said. “Traditional medical devices go through rigorous testing. But computer science research is still just starting to dig its teeth into using machine learning for biomedical device development and we’re all still learning. By forcing ourselves to be rigorous, we’re forcing ourselves to learn how to do things right.”
The results have been published in the journal npj Digital Medicine.