
Bias in Digital Mental Health

Originally written for AID 4 Mental Health


COVID-19 has undeniably brought Digital + Health closer than ever, powering everything from digitally augmented trials to remote assessment and care delivery. However, uniform use of health tech by a diverse and representative population should not be taken for granted.


There is a need for increased transparency - about who benefits from health-tech solutions and the cost of leaving some behind - to help us build more inclusive & equitable solutions.


Here is a quick readout (admittedly biased) of what we are doing to keep bias and equity front and center in digital health and to enable the development of equitable & inclusive, people-centered solutions.


Digital + health research


The growing ubiquity of connected devices (e.g., smartphones and wearables) and their ability to capture people's day-to-day lived experiences have drawn a lot of attention from healthcare researchers. High-frequency, continual smartphone usage allows for objective quantification of personalized daily behavior using multimodal real-world data (RWD) streams.


Through this decentralized approach, researchers can also cast a wider net and access larger and more diverse portions of the population at a fraction of the cost of traditional brick-and-mortar clinical trials. One of the key promises of RWD is to help us build a comprehensive picture of an individual’s unique lived experience. Ultimately, the goal is to know what works and for whom, to guide the development of new patient-centric treatments.


The Global South also stands to benefit from this a great deal due to regional challenges - inadequate health systems and infrastructure, low doctor-patient ratios, and expensive healthcare. The rising penetration of smartphones makes digital health a potentially viable solution to these challenges.


Beyond the Technology


However, expecting diverse and representative use of technology for health may be wishful thinking, at least in the short term. There are several real-world challenges - such as people's reluctance to participate, stay engaged, and share their health-related data in decentralized research trials, whether due to privacy concerns or the digital divide.






There is a risk for researchers to jump at the opportunity to have this wealth of data at arm’s reach and perhaps unintentionally overlook the need for stringent privacy policies. Recent misadventures in data sharing could seriously impact the willingness of people to participate in research that collects sensitive health-related data.


The digital divide is another major concern - the disparity between people who readily adapt to new technology and those who stay on the periphery, typically older and less tech-savvy people. The divide could exclude the latter from equitable representation in digital health research, making the learnings derived from such studies less applicable to them, and so leaving them behind as the world advances.


The digital divide is also widened by the gap in internet penetration between the Global North and the Global South. This poses further challenges, such as digital health tools not functioning properly due to lack of internet access, low data bandwidth, or varying hardware specifications.


So while going digital can prove to be an effective and scalable solution to address the massive supply and demand gap in healthcare in the Global South, we need to be more mindful in the design and development of decentralized studies to collect representative data that can inform the development of equitable health policies.




The interplay of such real-world factors can lead to bias that prevents digital health from reaching its full potential by leaving a significant chunk of the global population behind - i.e., the less tech-savvy, the reluctant, and the disconnected.


And bias can originate at many stages of research, from unbalanced recruitment to differential long-term retention of participants in longitudinal studies. Underrepresentation can make the resulting outcomes inapplicable - and therefore biased - for those who weren’t included in the trials.


A well-known phenomenon is algorithms learning bias from datasets of human actions and responses, which may themselves be biased. The use of such algorithms in health tools and applications sets a dangerous precedent for the perpetuation of bias.
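To make the mechanism concrete, here is a minimal toy sketch (in Python; all cohort sizes, distributions, and the simple threshold "model" are hypothetical, not drawn from any real study) of how unbalanced recruitment alone can produce a model that works well for the majority group but poorly for the underrepresented one:

```python
import random

random.seed(0)

def make_group(n, mean, label_rule):
    """Simulate one subgroup: a 1-D feature plus a group-specific true label."""
    return [(x, label_rule(x)) for x in (random.gauss(mean, 1.0) for _ in range(n))]

# Hypothetical cohorts: group A dominates recruitment (900 vs 100 participants),
# and the two groups have different true decision boundaries (0.0 vs 1.0).
group_a = make_group(900, 0.0, lambda x: int(x > 0.0))
group_b = make_group(100, 1.0, lambda x: int(x > 1.0))
train = group_a + group_b

def accuracy(data, t):
    """Fraction of points whose label matches the rule 'predict 1 if x > t'."""
    return sum(int(x > t) == y for x, y in data) / len(data)

# "Train" the simplest possible model: pick the threshold that maximizes
# accuracy on the unbalanced pooled dataset.
t = max((x for x, _ in train), key=lambda c: accuracy(train, c))

print(f"group A accuracy: {accuracy(group_a, t):.2f}")
print(f"group B accuracy: {accuracy(group_b, t):.2f}")
```

Because group A supplies 90% of the training data, the fitted threshold lands near group A's boundary, and group B silently pays the price in accuracy - exactly the kind of disparity that per-subgroup evaluation is meant to surface.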


We think there is a critical and unmet need to explore bias in digital health research in order to develop long-term equitable and inclusive technology-augmented solutions. We aim to quantify these biases and develop methods to correct them, generating robust, generalizable, and, most importantly, transparent insights. We also create strategies to improve cohort diversity and boost long-term engagement in decentralized studies by working closely with patients, families, and providers.

©2022 by aditisurendra.