Given the complexity and biological diversity of human beings, robust clinical trials play an essential role in testing new synthetic drugs before licensing and commercialization.
Now, some of us may raise an eyebrow at just how much “testing” is actually taking place, given that certain drugs known as “COVID-19 vaccines” were operation-warp-speeded into the arms of over 5 billion people worldwide.
Recruiting participants for clinical trials can be challenging, particularly for rare diseases. Patients with the underlying condition of research interest are usually the participants; they are randomly assigned either the new medicine under investigation or a placebo, with those receiving the latter being part of the “control group.”
Now, can you imagine a world where patients suffering from a disease are replaced with computerized data that mimic their unique characteristics and symptoms?
Lo and behold—introducing the “artificial patient.”
An artificial intelligence (AI) program generates an “artificial patient” after processing an exhaustive database of patient information compiled from previous clinical trials.
Also known as a “synthetic patient,” it is a set of computerized data that, ideally, represents the distinguishing characteristics of a patient with the underlying disease of interest, e.g., common symptoms, weight, blood pressure, and even genetic factors.
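To make the concept concrete, here is a deliberately minimal sketch, in Python, of how such a generator might work in principle: fit simple per-feature statistics to a handful of invented “real” records, then sample new ones. Real synthetic-data systems are far more sophisticated; the cohort values, feature choices, and independence assumption below are all illustrative assumptions, not data from any actual trial.

```python
import random
import statistics

# Tiny illustrative "real" cohort: (weight in kg, systolic BP in mmHg).
# Values are invented for demonstration, not drawn from any real trial.
real_cohort = [(82, 135), (76, 128), (91, 142), (68, 131), (88, 138)]

def fit_gaussian(values):
    """Estimate mean and standard deviation for one feature."""
    return statistics.mean(values), statistics.stdev(values)

def synth_patient(params, rng):
    """Sample one synthetic record, feature by feature (independence assumed)."""
    return tuple(rng.gauss(mu, sd) for mu, sd in params)

weights, bps = zip(*real_cohort)
params = [fit_gaussian(weights), fit_gaussian(bps)]

rng = random.Random(0)
synthetic_control = [synth_patient(params, rng) for _ in range(3)]
for w, bp in synthetic_control:
    print(f"weight={w:.1f} kg, systolic BP={bp:.1f} mmHg")
```

Note what this toy version quietly assumes away: it treats weight and blood pressure as independent, when in real physiology they are correlated, which is precisely the kind of intricacy the article goes on to question.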
If you’re feeling somewhat uneasy or downright outraged with the idea of AI-generated data replacing actual patients—even for the control group—you’re not alone.
However, know that there is growing interest in, and growing money injected into, the research and development of “synthetic patients.”
During the annual World Economic Forum (WEF) meeting in Davos, Switzerland, political, business, and cultural leaders worldwide gathered to discuss AI as a “driving force for the economy and society” that could “benefit all.”
Indeed, this particular theme occupied 32 sessions throughout the five-day convention; it is therefore unsurprising to learn that the WEF is currently funding research on using AI to “create artificial patients with similar health information to real patients in clinical trials” through a “partnership” with the United Kingdom (U.K.) that began in 2019.
Those who support using AI to generate such medical data claim it could help overcome problems associated with recruiting participants, particularly for rare diseases, as well as the cost and duration of trials.
Yet, the realm of human physiology, interwoven with mindset and attitude toward managing illness, is remarkably intricate and highly diverse. Even two individuals, such as genetically identical twins or parent and child, can differ significantly in their immune systems and susceptibility to diseases.
As explained in a previous article, the resultant output of an AI program reflects its underlying design but, above all else, the amount, quality, and diversity of data used in developing that program.
A large and diverse database containing thousands of medical records is needed to develop a reliable AI program. Generating “synthetic patients” at scale therefore demands a database with a proper distribution of demographics, medical conditions, and dietary intake, as well as past and present lifestyle choices and social determinants of health, which are not typically recorded in medical data.
Still, proponents of integrating computer-generated data that resemble the profiles of disease-stricken patients into clinical trials will persevere whenever adequate funding is provided.
Given the arrangement between the U.K. and WEF to foster “a ‘regulation revolution’ that meets the challenges and opportunities of technological advances,” the Medicines and Healthcare Products Regulatory Agency (MHRA), the U.K. government’s pharmaceutical regulatory authority, received over $950,000 to fund 12-to-18-month projects currently ongoing at two U.K. universities.
“In the future,” reads the project summary, “these approaches could be combined with, or even replace, real patient information.”
Well, isn’t that just great? Let’s do away with patients—real human beings as extraordinarily complex organisms—and replace them with AI-generated data that might supposedly “change the way clinical trials are performed in common and rare diseases, lowering their cost and improving how new treatments are tested.”
As part of this cozy arrangement to promote public-private partnerships, the country from which America sought independence became the first to partner with the highly influential WEF’s Centre for the Fourth Industrial Revolution. Indeed, let us not forget that the leader of the WEF once boasted, “We penetrate the [government] cabinets.”
And for WEF respecters like pharmaceutical giant Pfizer’s CEO Albert Bourla, having to rely upon the arduous, expensive, and time-consuming process of recruiting patients and gathering their data with informed consent might be somewhat frustrating—if not annoying.
In contrast, using “artificial patients” as a substitute for the control group can be a more convenient—and lucrative—option for pharmaceutical companies as the data is anonymous and lowers business expenses.
Using AI-generated data can also lead to faster clinical trials by reducing the time and cost required for enrolling participants compared to conventional trials. On this note, it may ensure that all recruited participants comprise the experimental group and receive the new medicine under investigation.
As the WEF and U.K. government-backed project overview points out:
“[P]atients are either given a treatment or not by random selection. This can be challenging in some health conditions, as random assignment to a control group could deny patients access to treatments that could extend their life or improve symptoms.”
Thus, advocates may argue that “synthetic patient-led” control groups improve health equity in clinical research.
And yet, replacing real patient data with artificially generated “alternatives” is extremely irresponsible—and dangerous.
Research literature reports that 90 percent of clinical drug development fails, with at most 50 percent attributed to “lack of clinical efficacy” and 30 percent to “unmanageable toxicity.”
In an attempt to improve the success rate of drug development, “tremendous” effort has been reportedly “devoted” to selecting the most suitable participants through testing for both gene expression and the presence of specific genetic sequences used to predict the risk of disease. And yet, many of these candidates still fail during clinical phase I, II, and III studies.
Supporters of employing “artificial patients” in drug development may genuinely believe in its potential to help save lives. However, with time, they may realize the significant challenges that AI programs face in achieving the complexity of accurately representing a group of individuals, especially given high human genetic variability and in the context of rare diseases.
In a 2022 preprint by the U.K.’s MHRA—the very government agency in receipt of funding to promote AI for medical trials—the authors concluded with “two primary regulatory questions” as follows:
“[U]nder what circumstances, if any, would it be acceptable for AIaMD [artificial intelligence as a medical device] to be trained or tested upon synthetic data versus real data, and are there opportunities to use synthetic data to better validate or test AIaMD models?”
Rather than entertain the thought of reducing, or downright eliminating, patients from clinical trials, scientists may want to focus on using AI to improve the process of identifying, reaching out to, and selecting the most suitable trial participants through rigorous analysis of electronic health records or social media platforms, given informed consent.
Clinical research needs more suitable patients to test new medicine, not fewer.
Scientific findings in biomedical research are not always reproducible; thus, replacing raw, gruesome human physiology with “synthetic participants” in a clinical trial could lead to unreliable results on the effectiveness and safety of the new drug being tested.
A further danger lies in the “biases” within synthetic medical data. A recent publication by a researcher in receipt of MHRA funding noted that, despite the availability of large-scale patient datasets for developing AI programs, “the risk of biases being carried over to [synthetic] data generators still exists.”
In other words, such programs “can exhibit bias in various forms, including the under-representation of certain groups in the [original patient] data. This can lead to missing data and inaccurate correlations and distributions, which may also be reflected in synthetic data.”
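The carry-over mechanism can be illustrated with a toy example. Assume a hypothetical cohort in which one demographic group makes up only 10 percent of the records: a generator that simply learns the empirical frequencies will faithfully reproduce that under-representation in the synthetic data it emits. The cohort and the trivial “generator” below are both invented for illustration.

```python
import random

# Hypothetical source cohort: group "B" is under-represented (10 of 100 records).
source = ["A"] * 90 + ["B"] * 10

def fit_generator(data):
    """'Train' a trivial generator: memorize the empirical group frequencies."""
    freq_b = data.count("B") / len(data)
    return lambda rng: "B" if rng.random() < freq_b else "A"

gen = fit_generator(source)
rng = random.Random(1)
synthetic = [gen(rng) for _ in range(10_000)]
share_b = synthetic.count("B") / len(synthetic)
print(f"Group B share in synthetic data: {share_b:.1%}")  # stays near 10%
```

However many synthetic records are drawn, group B remains at roughly its original 10 percent share: the generator cannot invent representation that the source data never contained.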
Bottom line: AI-generated “artificial” or “synthetic patients” are a poor and potentially dangerous substitute for real human beings, especially patients with one or more underlying conditions.
The power of AI is dependent upon its algorithm and data, but much remains unknown about human biology, especially at a population level. It would be unwise to ignore this fact and assume otherwise; such arrogance could endanger those who are unaware of or underestimate the potential risks of this technology, leading to hazardous outcomes.
Second bottom line: human beings need to test medicine developed for human beings, and human beings need to serve as the control group to maintain consistency of the common denominator—being human.
Content syndicated from Dear Rest of America with permission