
Evaluating Mobile Health Tools Is Comparing Apples to Oranges


Like a doctor on one’s wrist, mobile health (mHealth) tools offer the promise of providing personal physiological data on demand.

How many steps did you take today? Press the button. What’s your current heart rate or amount of oxygen in your blood? Push the button. Glucose level? Scan the sensor on your arm.

Unlike a human physician or nurse, though, these digital tools and the mobile apps they often pair with have no way of identifying how each user might respond to them. What might be motivating for one person, such as a personalized encouraging message, may seem intrusive to another. A study published in Management Information Systems Quarterly in February 2022 examined a digital diabetes management tool through the experiences of 1,070 patients in Asia. The study’s authors found that a generic SMS messaging scheme was 18 percent more effective at lowering patients’ glucose levels than personalized, patient-specific messages.

Moreover, the authors found, “personalization is not as effective as non-personalization if we try to improve diabetes patients’ engagement with the app usage or general life style (i.e., sleeping behavior or movement habits). This is likely because patients might perceive frequent personalized SMS messages as intrusive and annoying.”

The authors—a group of researchers at Carnegie Mellon University, in Pittsburgh, Harbin Institute of Technology, in China, and New York University—add that “these findings are surprising and suggest personalized messaging may not always work in the context of mHealth, and the design of the mHealth platform is critical in achieving better patient health outcomes.”

And, according to those who study the field, there isn’t yet a common approach to assessing how effectively developers and researchers culturally adapt these tools, including aspects of personalization, for different user bases.

“The implementation science around how to translate digital health tools that perform well in silico into real-world utility, in the form of desired behavior change and better patient outcomes, is still a very nascent field. There is still a lot of work to be done,” says Jayson Marwaha, a postdoctoral research fellow at Harvard University.

Some researchers have tackled the issue of adapting mHealth tools to different cultures. For instance, in 2020 a team at the Zurich University of Applied Sciences published a comparison survey of Swiss and Chinese consumers and found that the reasons a person might adopt an mHealth tool differed markedly between the two countries.

A Swiss consumer might start using an mHealth tool based on a physician’s endorsement and evidence that the device was accurate, they found. A Chinese consumer, however, would be more likely to weigh the opinions of members of their social circle and employers, and to value devices that could augment a stretched-thin health system with credible advice.

The Zurich University team hasn’t pursued the cultural components of Internet or mHealth tool acceptance further. But another group, at the University of Freiburg and Ulm University, in Germany, has done so in several meta-analyses. Those analyses have found that evaluating the efficacy of these interventions is still very much an apples-to-oranges exercise, which may be inhibiting faster and wider adoption of the tools.

For example, one of the Ulm researchers, Sümeyye Balci, said the tone of SMS messaging is just one aspect of keeping participants motivated to continue with a trial in which they are enrolled.

“The bigger issue in cultural adaptation studies is that we still don’t know to what extent we should adapt an intervention’s content or delivery method, and for which population,” Balci said. “So it’s not entirely clear what works best for which group and what behavior. That’s what we’re trying to understand in our group.”

Balci and her colleagues have laid out 17 discrete components of cultural adaptation that should be considered when deploying a digital tool (Internet-based or mobile) among disparate cultural groups. They outlined these components in the context of mental-health tools, but Balci says they could be applied to any tool for any condition; some elements could be given less weight or discarded entirely, depending on the tool’s purpose. For instance, intense personalization may be deemed intrusive in a diabetes tool, as the Carnegie Mellon/Harbin/New York University group found, but expected and welcomed in a behavioral therapy app.

Marwaha recently coauthored an editorial in NPJ Digital Medicine that called for digital health-tool developers to use those 17 components as a guide when deploying tools across disparate populations. “I think it is a very helpful initial attempt at reducing heterogeneity in how people do these kinds of adaptation efforts,” he said. “Identifying a comprehensive list of all the things you should consider is an incredibly important start.”

The Freiburg/Ulm group’s latest study is a meta-analysis of 13 studies of mHealth cultural-adaptation efforts across healthy eating, physical activity, alcohol consumption, sexual-health behavior, and smoking cessation. The results led the group to conclude that those adaptations have not yet proved worth the effort (only culturally adapted physical-activity platforms outperformed control groups). But neither Balci nor Marwaha says that means cultural adaptations aren’t important. Balci says the paper isn’t meant as an argument to halt them entirely, but rather as a call to find common ground on how best to measure their effectiveness: “We should work on specifying to what extent we should do it, or for which population we should do it.”

Likewise, Marwaha says it would be wrong to conclude that such personalization and adaptation isn’t important. Instead, he said, “it will just take further study to figure out how to do it right and how to do it in a standardized, consistent fashion. The way researchers are doing it now—at least as seen in the data—doesn’t seem to be improving the clinical impact of these tools, and the clinical impact is what really matters.”
