
Digital Therapeutics' Responsible Integration

Han Zuyderwijk

Digital therapeutics' responsible integration into mental health care is becoming an increasingly important topic. As digital tools grow more popular, it is critical to think about how to integrate them into mental health care responsibly. The way forward demands continuous attention to proper oversight and care models, as well as to data protection and equity challenges. The key ethical issues for digital mental health solutions are:

Safety & surveillance: digital therapeutics, especially those that involve machine learning, pose challenges for accountability and effective regulation.

Privacy & data protection: most of the data obtained by digital therapeutics is likely to be treated as health records, but patients will still need to be informed about the risks associated with data sharing and usage.

Accessibility: will health insurers cover digital therapeutics? What discrepancies currently exist in the resources and infrastructure required to implement them?

Bias & equity: the design and development of digital therapeutics need to address whether the tools are equally effective across a variety of demographics and circumstances.

Safety And Surveillance

Many consumer mental health apps are unregulated, and concerns have been raised about the lack of a research base behind them. Digital therapeutics, by contrast, are regulated as medical devices. In the US, the FDA is therefore in charge of ensuring that these products are safe and effective. In the EU, compliance with CE marking under the Medical Devices Directive (93/42/EEC) is required.

Two difficult tasks remain. First, formulating and implementing quality control procedures for the algorithms used in digital therapeutics is challenging. Second, analyzing the essential external factors (such as operating systems or network connections) involved in delivering digital therapeutics remains difficult. Many digital therapeutics are designed to evolve over time, which may necessitate re-evaluation after initial certification.

We need to integrate digital therapeutics into mental health care responsibly | Photographer: Emily Underworld | Source: Unsplash

In the US and EU, medical device regulation focuses on the product: the digital tool itself. It is vital to remember, however, that a digital tool will be used within the framework of a health care delivery system, for purposes and goals defined within that system, such as allocating available resources or treating a specific patient group. As a result, thoroughly assessing the safety and effectiveness of a digital tool also requires a systems perspective on how that tool will be used. This demands a shift in perspective: from evaluating products (medical AI/ML-based products) to assessing systems. Such a shift is central to maximizing the safety and efficacy of Artificial Intelligence (AI) and Machine Learning (ML) in health care. The (US-based) National Center for Biotechnology Information (NCBI) has offered several suggestions for US and EU regulators to make this challenging but important transition. The NCBI is aware of the significant difficulties this poses for regulators, who are used to regulating products, not systems. It is evident that digital therapeutics' responsible integration into mental health care is becoming increasingly important.

Machine Learning

Machine-learning-based digital tools pose additional regulatory challenges. With machine learning algorithms, it can be difficult to determine why certain data inputs produced particular outputs or findings. As a result, evaluating and addressing systematic flaws in the results can be difficult; think of biases that have a disproportionate influence on certain groups of people. Even though there are efforts to develop algorithms that are more explainable, the best approaches for detecting and correcting potential biases are still being developed.
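To make this concrete, the sketch below shows one simple way such group-level bias might be surfaced: comparing the rate of positive model outputs across demographic groups. It is a minimal illustration in Python, not a method from the article; the records, group names, and the follow-up-flagging scenario are all hypothetical.

    # Minimal sketch: compare positive-output rates across demographic groups.
    # All data below is hypothetical; real inputs would come from a digital
    # therapeutic's screening or triage model.
    from collections import defaultdict

    def positive_rate_by_group(records):
        """Return the fraction of positive model outputs per demographic group."""
        totals = defaultdict(int)
        positives = defaultdict(int)
        for group, flagged in records:
            totals[group] += 1
            if flagged:
                positives[group] += 1
        return {group: positives[group] / totals[group] for group in totals}

    # Hypothetical (group, model_flagged_for_follow_up) pairs.
    records = [
        ("group_a", True), ("group_a", True), ("group_a", True), ("group_a", False),
        ("group_b", True), ("group_b", False), ("group_b", False), ("group_b", False),
    ]

    rates = positive_rate_by_group(records)
    gap = max(rates.values()) - min(rates.values())
    print(rates)                                           # {'group_a': 0.75, 'group_b': 0.25}
    print(f"Flagging-rate gap between groups: {gap:.2f}")  # large gaps warrant review

A large gap alone does not prove a model is unfair, but it is exactly the kind of systematic pattern that developers and regulators would want to investigate and explain.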

There have been proposals (by the NCBI) for more transparency in health algorithms, such as allowing third-party assessment of algorithms by developers. Clinicians must also carefully consider how to advise patients about the risks and limitations of digital treatment tools in order to obtain informed consent. Clinicians themselves may need training to appreciate the limits of digital tools. Relevant stakeholders, from physicians to patients and community members, need to be involved in planning the adoption and implementation of digital therapeutics in a health care system; they can also help resolve concerns about fairness.

Privacy And Data Protection With Digital Therapeutics' Responsible Integration

Mental health data is usually regarded as more sensitive and potentially stigmatizing than other types of health data. Consider the catastrophic data breach of October 2020, in which hackers exploited a security weakness in a popular Finnish psychotherapy app, resulting in the blackmailing of thousands of users over their personal information. The security flaw opened the gateway for hackers to obtain an entire patient database, including e-mail addresses, social security numbers and, perhaps worst of all, the actual written notes that therapists had taken. This incident underlined the significance of effective data security procedures, as well as the value of behavioral data and of digital therapeutics' responsible integration.

Personal and biometric data regulations are being reviewed by an increasing number of jurisdictions. In light of this, patients must understand the risks and benefits of data obtained through digital therapeutics, and clinicians should communicate these through an informed consent process. Moreover, certain digital therapeutics continuously monitor patients, generating a large amount of personal information. Further research is needed to understand how such ubiquitous surveillance affects patients and the therapeutic alliance.

Bias And Fairness In Digital Therapeutics' Responsible Integration

Not only the COVID-19 pandemic but also recent social justice movements have put a spotlight on bias and inequities in the health care system. Because of historical health care inequities, Black and Latinx people are more likely to express worries about privacy and the quality of digital mental health services. The shift to telehealth has shown that not all communities or populations have the means or infrastructure to benefit from digital tools. In the US, the necessary equipment is less likely to be available in community mental health facilities, which disproportionately serve Black and Latinx patients. If digital therapeutics are to fulfill the promise of increased access, improvements are needed in infrastructure, training, and the availability of clinician oversight to better serve low-income demographics. Additional resources, such as an internet connection or hardware, may also be required.

Machine learning and digital health technologies also raise issues of racial bias and fairness. Bias can take many forms: an insufficient fit between the data obtained and the research aim; datasets that lack representative samples of the target population; and digital technologies whose effects vary depending on how they are used. There are several approaches to tackling bias in digital health tools, including technical changes to datasets and algorithms as well as defining fairness principles for algorithmic tools; a simple representativeness check is sketched below.
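As a minimal illustration of the second form of bias, the sketch below compares a training dataset's demographic composition against the population a tool is meant to serve. All shares, group names, and the 75% threshold are hypothetical; in practice the benchmark would come from census or epidemiological data.

    # Minimal sketch: flag demographic groups that are under-represented in a
    # training dataset relative to the target population. All numbers and
    # group names are hypothetical.

    dataset_shares = {"group_a": 0.70, "group_b": 0.20, "group_c": 0.10}     # share in training data
    population_shares = {"group_a": 0.55, "group_b": 0.30, "group_c": 0.15}  # share in target population

    THRESHOLD = 0.75  # flag groups whose dataset share is below 75% of their population share

    for group, pop_share in population_shares.items():
        data_share = dataset_shares.get(group, 0.0)
        ratio = data_share / pop_share
        status = "UNDER-REPRESENTED" if ratio < THRESHOLD else "ok"
        print(f"{group}: dataset {data_share:.0%} vs population {pop_share:.0%} ({status})")

A check like this only catches under-representation; it says nothing about whether the data fits the research aim or how a tool's effects vary once deployed, so it complements rather than replaces the other approaches above.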

Machine learning and digital health technologies also raise issues of racial bias and fairness. | Photographer: h heyerlein | Source: Unsplash

Conclusion

Telehealth and digital therapeutics show considerable promise for improving mental health care. However, we must strive for digital therapeutics' responsible integration in ways that enhance the therapeutic interaction and provide fair care. Digital therapeutics raise concerns about proper lines of supervision and responsibility, and they may affect the nature of the fiduciary relationships involved. Frameworks for how digital therapeutics can address preventative care, patients in crisis, or special populations also need to be developed and implemented.


[Reference: Dr Martinez-Martin is an assistant professor at the Stanford Center for Biomedical Ethics and in the Department of Pediatrics, with a secondary appointment in the Department of Psychiatry at Stanford University's School of Medicine.]

Source: psychiatrictimes.com