Unlocking the potential of digital health: innovative approaches to evidence generation
Generating strong evidence for digital health solutions is a challenge for innovators. Novel approaches such as simulation, real-world evidence, and platform trials can help to clear this hurdle.
Key messages:
Conventional research methodologies, such as RCTs, are not always suitable for the evaluation of digital health solutions.
We are beginning to see a shift in attitudes about the importance of evidence in digital health. In a recent survey of over 50 top digital health investors, 94% indicated that a measurable return on investment (ROI) is “important” or “very important” for the success of digital health companies, while 79% said the same of clinical evidence and trials.¹
As evidence moves up the agenda in digital health, it will be important to adopt innovative and pragmatic approaches to evidence generation. Used alongside established methodologies, new approaches can help generate evidence that is robust and appropriate for the stage of development.
Emerging methods in digital health include using simulation, real-world evidence (RWE), and platform trials (PTs).
Simulation allows cost-effective evaluation of digital solutions in a safe environment using synthetic patient data.
Real-world evidence (RWE) is evidence generated from the vast amounts of real-world data (RWD) gathered by digital solutions. RWE is becoming increasingly important for decision makers for both regulation and reimbursement.
Platform trials (PTs) are an adaptive form of randomised controlled trial (RCT) that could enable effective comparison between various iterations of a digital solution.
Evidence generation presents a paradox for digital health innovators
Developers of digital solutions are familiar with the following paradox: healthcare providers require evidence supporting a solution before they will use it, but it is often very difficult to generate this evidence before a solution is in use. This remains a major obstacle to widespread adoption. Although most innovators are aware of the rising demand for evidence in this field, understanding which types of evidence, and which methodologies, to pursue presents a real challenge.
Methodological challenges contribute to the evidence deadlock, but new approaches can help
Conventional research methodologies, such as randomised controlled trials (RCTs), are not always suitable for the evaluation of digital solutions. These trials are resource-intensive, demanding significant amounts of money and time. In comparison with drug trials—where RCTs are the gold standard—software development moves rapidly and is much more iterative in nature. Digital products, including those on the market, undergo frequent changes through updates and bug fixes. The timescale of traditional trials is largely out of step with this process, with RCTs lasting an average of five and a half years until publication.² In addition, the typically stringent eligibility criteria of RCTs mean that results may not be representative of how a solution will perform when deployed in real-world populations. New evaluative approaches are required to address the unique complexity of digital technology.³ While conducting research for a recent white paper, we spoke to leading global experts who emphasised this point. Promising approaches are emerging, including simulation, real-world evidence (RWE), and platform trials (PTs).⁴
Simulation
Simulation has emerged as a promising tool that enables safe, efficient and cost-effective evaluation of digital health solutions in a controlled environment. Simulation studies can help break the evidence generation deadlock, which is most often encountered in the early to mid stages of development of new solutions.
“Clinical simulation” allows observation of clinical tasks and responses to scenarios designed to reflect real-world practice in a low-risk environment. This is used widely in clinical education, for clinicians to practise and improve decision-making and technical skills, without compromising patient safety. The use of clinical simulation has been extended to evaluate digital health solutions. This is achieved by closely replicating their intended real-world use in a controlled environment. Remote, multi-site trials can be conducted at relatively low cost using virtual communication platforms. Simulation studies are highly scalable and flexible and study designs can easily be adapted to keep up with the frequent updates to digital solutions.
An additional benefit of simulation is the ability to use synthetic patient data. Realistic, synthetic datasets can be modelled on real data in a way that minimises privacy concerns while preserving the complexities of the data.⁵ Simulation studies may also allow researchers to test solutions with data representative of higher-risk patients, who are often excluded from traditional trials because of safety risks. They also allow more extensive testing of subpopulation data, helping to mitigate biases (e.g., by ethnicity or gender) that are often encountered in other types of studies.
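To make the idea of synthetic patient data concrete, the sketch below fits a simple statistical model (means, spreads, and one correlation) to a toy "real" cohort of age and systolic blood pressure values, then samples new records from that model. The variable names and cohort are invented for illustration; real synthetic-data pipelines use far richer models, but the principle is the same: the synthetic records reproduce the statistical structure of the originals without copying any individual patient.

```python
import random
import statistics

def fit_gaussian(records):
    """Estimate means, standard deviations, and the age/SBP correlation
    from a list of (age, systolic_bp) records."""
    ages = [r[0] for r in records]
    sbps = [r[1] for r in records]
    mu = (statistics.mean(ages), statistics.mean(sbps))
    sd = (statistics.stdev(ages), statistics.stdev(sbps))
    n = len(records)
    cov = sum((a - mu[0]) * (s - mu[1]) for a, s in records) / (n - 1)
    rho = cov / (sd[0] * sd[1])
    return mu, sd, rho

def sample_synthetic(mu, sd, rho, n, rng):
    """Draw n synthetic (age, systolic_bp) pairs that reproduce the fitted
    means, spreads, and correlation (via a 2x2 Cholesky factor)."""
    out = []
    for _ in range(n):
        z1, z2 = rng.gauss(0, 1), rng.gauss(0, 1)
        age = mu[0] + sd[0] * z1
        sbp = mu[1] + sd[1] * (rho * z1 + (1 - rho ** 2) ** 0.5 * z2)
        out.append((age, sbp))
    return out

rng = random.Random(42)
# Toy "real" cohort: older patients tend to have higher blood pressure.
real = [(a, 90 + 0.6 * a + rng.gauss(0, 8))
        for a in [rng.uniform(30, 80) for _ in range(500)]]
mu, sd, rho = fit_gaussian(real)
synthetic = sample_synthetic(mu, sd, rho, 200, rng)
```

Because the synthetic records are drawn from the fitted model rather than copied from the cohort, they can be shared or stress-tested more freely, including oversampling subgroups that the original data under-represents.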
The utility of this evaluation method is limited primarily by the accuracy of simulations, which must be high-fidelity (mimic real-world settings closely) in order to produce reliable evidence. Ultimately, evidence generated in simulation studies is unlikely to be sufficient on its own to support decisions around regulatory approval for higher-risk solutions. However, it is a pragmatic adjunct to established methods that can be used to generate evidence of reasonable strength.
Real-world evidence
Digital health solutions generate vast amounts of real-world data (RWD) in the course of their routine use. RWD can be analysed to generate insights known as "real-world evidence" (RWE).
Up to now, the vast majority of RWD collected by digital health solutions has not been used to generate RWE. However, sustained progress in AI—particularly in the field of machine learning (ML)—is making it possible to analyse increasingly vast quantities of this data and gain insights previously unattainable. In comparison with RCTs, which may inform researchers about how an intervention performs for a specific group (under tightly controlled conditions), RWE can provide more certainty about how effective a solution is when deployed in the real world. An additional benefit is that real-world studies usually cost significantly less than RCTs.
There are limitations to real-world studies. Of course, solutions must be in use to generate RWD, and therefore RWE is not relevant for early-stage solutions. While real-world studies can provide powerful insights about overall effectiveness, RCTs are still best when it comes to identifying the effects of specific features or variables (establishing causal relationships). Furthermore, the generation of RWE is extremely dependent on the availability of data and data access is often a significant barrier to conducting these studies.
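The causal-inference caveat above is worth making concrete. In the hypothetical simulation below, sicker patients are both less likely to adopt a digital solution and more likely to have poor outcomes, so a naive treated-versus-untreated comparison of real-world data exaggerates the solution's benefit. Stratifying on the confounder (a basic adjustment technique; real studies use richer methods such as propensity scoring) recovers something close to the true effect. All numbers are invented for illustration.

```python
import random

rng = random.Random(7)

# Simulated observational cohort: severity == 1 patients are less likely
# to adopt the solution AND have worse outcomes, confounding naive analysis.
TRUE_EFFECT = -2.0  # the solution lowers the outcome score by 2 points
cohort = []
for _ in range(20000):
    severity = 1 if rng.random() < 0.5 else 0
    p_use = 0.2 if severity else 0.8
    used = rng.random() < p_use
    outcome = 5 + 4 * severity + TRUE_EFFECT * used + rng.gauss(0, 1)
    cohort.append((severity, used, outcome))

def mean(xs):
    return sum(xs) / len(xs)

# Naive estimate: ignore severity entirely.
naive = (mean([o for s, u, o in cohort if u])
         - mean([o for s, u, o in cohort if not u]))

# Adjusted estimate: compare like with like within each severity stratum,
# then average the within-stratum differences.
diffs = []
for stratum in (0, 1):
    treated = [o for s, u, o in cohort if s == stratum and u]
    control = [o for s, u, o in cohort if s == stratum and not u]
    diffs.append(mean(treated) - mean(control))
adjusted = mean(diffs)
```

The naive estimate comes out far larger than the true two-point benefit, while the stratified estimate lands close to it. This is the core reason RCTs remain the reference standard for causal claims: randomisation removes such confounding by design, whereas RWE analyses must model it explicitly.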
Regulators in several countries have begun to encourage the use of RWE to inform regulatory decisions in the post-market phase.⁶ As regulated digital solutions (e.g., SaMD) are constantly updated, it is necessary to conduct ongoing post-market surveillance and clinical validation of these solutions to ensure that they remain safe and effective despite any iterations.
Platform trials
Platform trials (PTs) are an adaptive form of RCT. While RCTs are typically "intervention-focused", PTs are "disease-focused" (or "multi-arm") trials: they allow multiple versions of a solution to be tested against a constant control. This adaptability makes PTs potentially very useful for the evaluation of digital solutions that are updated regularly, as they can help determine which iterations of a solution are most effective. In theory, PTs allow more digital solutions (or versions of the same solution) to be tested efficiently in one study, with new interventions, and new patients, added on an ongoing basis.
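The multi-arm, adaptive mechanics described above can be sketched in a toy simulation. Two invented iterations of a solution (v1, v2) are compared against a shared control; at each interim analysis a crude futility rule drops clearly underperforming arms, and a newly released iteration (v3) joins mid-trial. The arm names, response rates, and dropping threshold are all hypothetical; real platform trials use formal statistical stopping rules, not this simple cutoff.

```python
import random

rng = random.Random(1)

# Hypothetical binary response rates: "control" is standard care; v1 and v2
# are two iterations of the digital solution (v2 underperforms).
TRUE_RATES = {"control": 0.30, "v1": 0.36, "v2": 0.16}
active = ["control", "v1", "v2"]
results = {arm: [] for arm in TRUE_RATES}

def enrol(arm, n):
    """Randomise n participants to an arm and record binary outcomes."""
    results[arm].extend(1 if rng.random() < TRUE_RATES[arm] else 0
                        for _ in range(n))

def rate(arm):
    return sum(results[arm]) / len(results[arm])

for interim in range(3):
    if interim == 1:
        # A new iteration is released mid-trial: add it as a fresh arm.
        TRUE_RATES["v3"] = 0.34
        results["v3"] = []
        active.append("v3")
    for arm in active:
        enrol(arm, 400)
    # Interim analysis: drop any experimental arm whose observed response
    # rate falls well below the shared control (crude futility rule).
    active = [a for a in active
              if a == "control" or rate(a) > rate("control") - 0.05]
```

In this run the weak v2 arm is dropped early, freeing recruitment capacity for the stronger iterations, which is exactly the efficiency argument for evaluating fast-iterating digital solutions this way.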
Despite the potential advantages, PTs can be extremely complex to design and execute. The multi-arm complexity and length of such trials would mean increased risks of introducing biases or confounding factors that could impact the validity of results. This complexity also necessitates significant levels of coordination between stakeholders (researchers, sponsors, and regulators), to ensure the trial is run according to a consistent protocol. Conducting this type of research would be particularly challenging for smaller organisations with limited resources.
Prova Health supports digital health innovators with evidence generation. To discuss how we can help with evidence generation for your digital solutions, email hello@provahealth.com
Dr Des Conroy is a Digital Health Consultant at Prova Health. As a medical doctor, he has worked in clinical practice in the UK and Ireland. He has experience developing and clinically validating artificial intelligence-based Software as a Medical Device (SaMD) products, and supporting their deployment at a global scale. At Prova Health, he has led research into evolving evidence standards and reimbursement models in digital health.
Dr Saira Ghafur is Co-founder and Chief Medical Officer of Prova Health. She is an honorary consultant Respiratory Physician at St Mary’s Hospital, London, and a digital health expert who has published on topics such as cybersecurity, digital health adoption and reimbursement, data privacy and commercialising health data. She is Co-founder of mental health start-up Psyma and holds a MSc in Health Policy from Imperial. She was a Harkness Fellow in Health Policy and Practice in New York (2017).
Gianluca Fontana is Co-founder and Chief Executive Officer of Prova Health. He has overseen the delivery of high-profile research, education and consulting projects, including the NHS Digital Academy, the REACT Covid-19 prevalence study, the World Innovation Summit for Health and the Centre for Health Policy of Imperial College London. He started his career as a management consultant at McKinsey & Company.
References:
1. PR Newswire, 2022. GSR Ventures Survey: Digital Health Investors Believe ROI and Clinical Validation Will be the Greatest Markers of Success in 2023. Available at: https://www.prnewswire.com/news-releases/gsr-ventures-survey-digital-health-investors-believe-roi-and-clinical-validation-will-be-the-greatest-markers-of-success-in-2023-301691869.html (Accessed: 5 June 2023)
2. Pham, Q., Wiljer, D. and Cafazzo, J.A., 2016. Beyond the randomized controlled trial: a review of alternatives in mHealth clinical trial methods. JMIR mHealth and uHealth, 4(3), p.e5720.
3. Guo, C., Ashrafian, H., Ghafur, S., Fontana, G., Gardner, C. and Prime, M., 2020. Challenges for the evaluation of digital health solutions—A call for innovative evidence generation approaches. npj digital medicine, 3(1), p.110.
4. Conroy, D., Fontana, G., Prime, M. and Ghafur, S., 2023. Generating evidence for digital health solutions. Available at: https://healthcaretransformers.com/digital-health/current-trends/generating-evidence-for-digital-health-solutions (Accessed: 15 March 2023).
5. Tucker, A., Wang, Z., Rotalinti, Y. and Myles, P., 2020. Generating high-fidelity synthetic patient data for assessing machine learning healthcare software. npj Digital Medicine, 3(1), p.147.
6. US Food and Drug Administration, 2018. Framework for FDA’s real-world evidence program. Available at: https://www.fda.gov/media/120060/download (Accessed: 5 June 2023).