How Artificial Intelligence can be used to innovate clinical trial design

Posted by Michelle Petersen on 18 May 2020

The novel coronavirus outbreak has disrupted much of the world and its markets, with a second wave of the pandemic expected imminently and some forecasters warning that a new strain of the pathogen could emerge within the next 4-5 years. The pandemic has forced social distancing and lockdowns, and with them a more environmentally friendly, health-conscious social mood. It has also added urgency to the push to convert multi-site clinical trials into virtual or decentralized trials, in which participants take part from the comfort of their own homes while staying in remote contact with clinical staff.

Currently, the majority of clinical trials are run out of large medical facilities, often based in the center of busy cities. This creates accessibility problems for participants who live more than two hours from a trial site or who are too ill to travel, undermines social distancing measures, and limits patient diversity. Virtual trials that can achieve high patient retention and accurate results, while providing maximum benefit to their participants, are therefore highly desirable.

Artificial Intelligence, or AI, is the practice of building computational systems capable of intelligent reasoning. AI is viewed by many as a magic bullet for the inception and optimization of completely decentralized trials, as well as for patient diversity, recruitment, and adherence. However, discussion of this technology is often muddied by unstandardized definitions and by the slew of software currently available on the market.

Simplifying healthcare AI

The first step is to 'unclutter' the terminology: people often get bogged down by the types, techniques, and methods that sit under the umbrella of AI. So let's start by uncomplicating things - there are only a small number of AI architectures, and fewer still that are applicable to healthcare. If you're currently overawed by massive lists of 'AI', the chances are you're looking at a table of software or apps that all use the same small set of AI mathematical models, sometimes in combination with each other.

For health the main types of AI used are:

  • Robotic Process Automation (RPA): a basic algorithm performing repetitive administrative tasks, sometimes not classed as AI.
  • Natural Language Processing (NLP): intelligent systems able to understand written language.
  • Machine Learning (ML): intelligent systems with the capability to learn from experience.
  • Computer Vision (CV): an ML system enabling computers to understand and label digital images and videos.
  • Transfer Learning (TL): an ML system capable of learning from a problem and applying the knowledge to a different yet related problem.
  • Neural Networks (NN): ML algorithms loosely modeled on the human brain; shallow networks of this kind are the older precursors of today's deep learning systems.
  • Deep Learning (DL): many-layered artificial neural networks, a subset of ML, that loosely imitate the data-processing function of the human brain. This is the most evolved family of AI systems; its newest hardware additions are neuromorphic chips, whose spiking neural networks fire in a way similar to their natural neuronal counterparts.

These systems are then mixed and matched depending on the task such as:

  • NLP and ML: intelligent systems with the ability to learn how to understand written language.
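To make the 'mix and match' idea concrete, here is a minimal, hypothetical sketch of NLP combined with ML: a tiny text classifier that learns to flag clinically relevant sentences. The scikit-learn calls (TfidfVectorizer, LogisticRegression) are real; the sentences and labels are invented purely for illustration.

```python
# Minimal NLP + ML sketch: learn to recognize simple clinical phrases.
# Illustrative only - the sentences and labels below are invented.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

sentences = [
    "patient reports severe joint pain",
    "blood pressure measured at 150 over 95",
    "the weather was pleasant on the drive in",
    "meeting rescheduled to next tuesday",
]
labels = [1, 1, 0, 0]  # 1 = clinically relevant, 0 = not relevant

model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(sentences, labels)  # NLP features feed an ML classifier

print(model.predict(["patient complains of chest pain"]))  # expected: [1]
```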

Disrupting clinical trials using AI

AI has the potential to disrupt every stage of the clinical trial process, from matching eligible patients to studies, to monitoring adherence and data collection. Patient recruitment is a highly selective process, even where participants have volunteered to take part, with the prerequisite that all candidates must be verified through intensive analysis of electronic records obtained from medical facilities. NLP and ML methods are currently earmarked to improve processes such as electronic phenotyping, a patient recruitment method focused on reducing heterogeneity in clinical trials through the analysis of Electronic Health Records (EHR). It is thought that, by gathering and dissecting the relevant health data held in EHRs, AI could eventually make electronic phenotyping sensitive enough to mirror the results of a genetic test.
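
As a rough illustration of what a rule-based electronic phenotype looks like in code, the sketch below filters a toy EHR table for a hypothetical type 2 diabetes cohort (a diagnosis code plus an HbA1c threshold). The column names, codes, and threshold are assumptions made for illustration; real phenotyping algorithms are far more elaborate and typically layer NLP and ML on top of rules like these.

```python
# A toy electronic-phenotyping rule over structured EHR fields (illustrative only).
import pandas as pd

ehr = pd.DataFrame({
    "patient_id": [101, 102, 103, 104],
    "diagnosis_code": ["E11.9", "I10", "E11.9", "E11.9"],  # ICD-10-style codes
    "hba1c_percent": [8.1, 5.4, 6.2, 9.3],
    "age": [54, 61, 47, 68],
})

# Hypothetical phenotype: coded type 2 diabetes with poorly controlled HbA1c.
phenotype = ehr[(ehr["diagnosis_code"] == "E11.9") & (ehr["hba1c_percent"] >= 7.0)]

print(phenotype["patient_id"].tolist())  # candidate cohort: [101, 104]
```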

I won't start reeling off stats for patient recruitment and adherence, as we all know the figures are dire and currently cost hundreds of millions of dollars, so let's jump straight into the causes and the available and hypothesized solutions. Public awareness of clinical trials and what they entail is incredibly low, which greatly affects patient recruitment; mining patients' EHRs is therefore often posited as the solution. This would confirm patient suitability by using NLP and ML to scan EHRs for key phrases, disease stages, and diagnostics, while providing individualized patient selection as well as meticulous quality checks for participant retention and adherence. However, this method is fraught with hurdles involving data protection and the undefined information that makes up an EHR.
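
The key-phrase scanning described above can be sketched very simply. The example below uses plain regular expressions to flag notes that mention a target disease stage and to screen out an exclusion criterion; in practice this layer would be replaced or augmented by trained NLP/ML models, and the phrases, stages, and note text here are invented for illustration.

```python
# Naive key-phrase eligibility screen over free-text notes (illustrative only).
import re

notes = {
    201: "Stage II breast cancer, no prior chemotherapy, eGFR normal.",
    202: "Stage IV breast cancer with prior chemotherapy in 2018.",
    203: "Benign fibroadenoma noted on imaging, no malignancy.",
}

include = re.compile(r"stage\s+(i{1,3}|iv)\s+breast cancer", re.IGNORECASE)
exclude = re.compile(r"prior chemotherapy", re.IGNORECASE)

for patient_id, text in notes.items():
    eligible = bool(include.search(text)) and not exclude.search(text)
    print(patient_id, "eligible" if eligible else "not eligible")
```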

Structured and unstructured data

IBM has estimated that medical data was doubling every seventy-three days in the run-up to 2020, and Accenture predicted a 300% rise in health data between 2017 and 2020. The long-feared big data tsunami has therefore arrived, overwhelming clinical staff and leaving crucial medical information washed up, missed, and inaccessible in its wake. Luckily AI never fatigues: it works through masses of data with the same consistency, extracting salient patient details with ease. However, it can only do so when structured data is used and there is express permission to use it.

Structured data is information stored and displayed in a consistent manner, using organized databases and spreadsheets with labeled fields such as weight, disease type, or dates. Unfortunately, it is currently estimated that a staggering 80% of all existing medical information is recorded as typed or written text, photos, radiological images, pathology slides, video, audio, streaming device data, PDF files, faxes, PowerPoint slides, and emails. This is known as unstructured data: highly valuable, yet unlabelled and difficult to store, search, and interpret. This unspecified material is usually converted into labeled, structured data using DL, NLP, and CV, a time-consuming process that can cause a clinical trial to miss its enrollment deadlines, a massive issue for the pharmaceutical industry. Another solution is therefore needed, such as ensuring clinical trials give patients the greatest benefit and the best experience possible; only then will optimal results follow. Let's discuss how to make this happen.
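
To make the structured/unstructured distinction tangible, here is a hypothetical sketch that pulls two labeled fields (visit date and weight) out of a free-text note and stores them as a structured record. Real pipelines use DL, NLP, and CV models rather than hand-written patterns, and the note text and field names below are invented.

```python
# Turning one unstructured note into a structured record (illustrative only).
import re

note = "Follow-up visit on 2020-03-14. Patient weight recorded as 72.5 kg. Mood good."

weight_match = re.search(r"weight recorded as\s+([\d.]+)\s*kg", note)
date_match = re.search(r"\b(\d{4}-\d{2}-\d{2})\b", note)

structured_row = {
    "visit_date": date_match.group(1) if date_match else None,
    "weight_kg": float(weight_match.group(1)) if weight_match else None,
}

print(structured_row)  # {'visit_date': '2020-03-14', 'weight_kg': 72.5}
```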

There is no doubt the future of gathering health data lies with wearables and mobile apps containing inbuilt personal data permission systems, such as those found in Apple Health. However, the sensitivity of current mobile health devices limits the amount and precision of data collected, a problem for active clinical trials where medically approved diagnostics are a prerequisite for providing continuous data. Continuous data is where multiple data values and different types of measurements are taken continuously over a set range, in this case the lifetime of the clinical trial. In practice, this means a wearable that never switches off, monitoring multiple health data fields over many years; because the data is unceasing, it provides a range rather than a single point in time.
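
The difference between a point-in-time measurement and the continuous data described above can be sketched with a rolling summary over a simulated wearable stream. The sampling rate, window length, and heart-rate values below are invented for illustration.

```python
# Point-in-time reading vs a rolling range over continuous wearable data (illustrative only).
import numpy as np
import pandas as pd

rng = np.random.default_rng(0)
timestamps = pd.date_range("2020-05-18 08:00", periods=120, freq="min")
heart_rate = pd.Series(70 + rng.normal(0, 5, size=120).cumsum() * 0.1, index=timestamps)

snapshot = heart_rate.iloc[-1]                   # single clinic-style reading
rolling_min = heart_rate.rolling("30min").min()  # continuous view: a range...
rolling_max = heart_rate.rolling("30min").max()  # ...rather than one point

print(f"snapshot: {snapshot:.1f} bpm")
print(f"last 30 min range: {rolling_min.iloc[-1]:.1f} - {rolling_max.iloc[-1]:.1f} bpm")
```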

This continuous data is expected to precede the dawn of Continuous Learning (CL), an expanded version of TL using NN or DL in which the machine learns progressively across effectively unlimited data fields and measurements, constantly updating and analyzing, and able to pick up any discrepancies within milliseconds. CL is expected to produce incredibly sensitive, precise raw data for participants, furnishing them with individualized whole-system information, as well as raising quality control for adherence and endpoint results until wearables reach the required level of exactitude.
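
A heavily simplified way to picture Continuous Learning is a model that is updated incrementally as each new batch of wearable data arrives, flagging readings that look inconsistent with what it has seen so far. The sketch below uses scikit-learn's SGDClassifier with partial_fit on synthetic data; it is a stand-in for the idea of CL described above, not an implementation of any specific CL system.

```python
# Incremental (online) learning over streaming batches, with a crude discrepancy check.
# Synthetic data, features, and labels are invented for illustration.
import numpy as np
from sklearn.linear_model import SGDClassifier

rng = np.random.default_rng(1)
model = SGDClassifier()
classes = np.array([0, 1])  # e.g. 0 = expected reading, 1 = unexpected reading

for batch in range(5):
    X = rng.normal(size=(50, 3))                   # three wearable-derived features
    y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(int)  # synthetic ground truth
    if batch > 0:
        preds = model.predict(X)
        discrepancy = (preds != y).mean()          # disagreement with incoming labels
        print(f"batch {batch}: discrepancy rate {discrepancy:.2f}")
    model.partial_fit(X, y, classes=classes)       # model keeps learning, never restarts
```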

Using AI and biometrics to raise patient adherence

The sensitivity of data collection and adherence monitoring using approved medical devices could improve further through the use of synthetic biology to build DNA computers that live in the body, able to monitor the whole host system, sense disease, and release drugs on command. Easy-to-swallow pill trackers are also being investigated with regard to patient adherence, as are hard-to-fool nanopore wearables offering nanoscale precision for various analytes.

Readily available AI biometrics could be used to check dosage adherence where present-day urine samples or wearables may be easily tampered with. Biometrics allows a person to be identified from a unique set of biological data specific to them, and is now thought of as a natural part of everyday life, with many companies and personal products employing these verification systems. Biometrics is not itself based on AI; however, DL, ML, or TL is usually integrated into the technology to train, analyze, and decode these systems, making them much smarter and a lot quicker.

The main types of biometrics are based on:

  • Physical attributes = these are unique features specific to one person only, which remain unchanged throughout one's lifetime. Physically-based elements falling under this category include fingerprints, eye scans, digital sound codes sourced from ears, DNA codes, or palm vein technology.
  • Behavioral attributes = these idiosyncrasies rely on personality traits or patterns, and/or body language, varying from one individual to another. The characteristics falling under this classification are writing patterns, gait, gestures, keyboard strokes, or voice recognition.

Biometrics, in turn, can be mixed and matched with AI such as:

  • Behavioral biometrics and AI = integrating AI into behavioral biometrics adds multi-layered security, sensitivity, and analysis to the system. AI will learn from a person's behavior every time the machine authenticates them, adapting to minute changes in the user's patterns every session.
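
As a toy illustration of behavioral biometrics, the sketch below compares a new typing session against a user's stored keystroke-timing profile and flags it when it drifts too far from the baseline. The timings, threshold, and profile format are all assumptions made for illustration; production systems layer ML or DL on top of far richer behavioral signals.

```python
# Toy keystroke-dynamics check: compare a session to a stored baseline (illustrative only).
import numpy as np

# Baseline inter-key intervals (seconds) collected at enrollment - invented values.
baseline_intervals = np.array([0.21, 0.19, 0.23, 0.20, 0.22, 0.18, 0.21])
baseline_mean, baseline_std = baseline_intervals.mean(), baseline_intervals.std()

def verify_session(session_intervals, z_threshold=3.0):
    """Return True if the session's typing rhythm matches the enrolled user."""
    z = abs(np.mean(session_intervals) - baseline_mean) / baseline_std
    return z < z_threshold

print(verify_session([0.20, 0.22, 0.19, 0.21]))  # similar rhythm -> True
print(verify_session([0.55, 0.60, 0.48, 0.52]))  # very different rhythm -> False
```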

Regarding clinical trials, fingerprint technology could be made available for participants to sign into their records and results, as well as dosage verification systems. Most importantly this ensures only the patient can sign into their trial results outside of the main site, giving them peace of mind over their data. As people now use fingerprint technology multiple times a day to sign into their smartphone they should be fully accustomed to this added security, and may even be uncomfortable with their data left vulnerable without added protective measures.

Like a fingerprint, an iris scan also provides unique biometric data that is very difficult to duplicate and remains the same over a lifetime. An iris scan can sometimes be difficult for children or the infirm; however, AI can encode the iris-recognition data into a barcode format to add extra security to portable carded-entry systems. This commonplace technology already exists as a biometric voter-verification system requiring the insertion of a unique barcode card and a fingerprint to vote.

Friendly virtual avatars could also work well for patient adherence, innocuously interacting without judgment whilst evaluating patient behavioral attributes and patterns. The U.S. Department of Homeland Security has funded research for Discern Science International's deception detection tool named 'AVATAR'. This features a virtual border guard that asks travelers questions to detect any suspicious behavior consistently without fatiguing, with AVATAR's basic interview and decision-making facility taking seconds.

AVATAR has been tested since 2011 by border services in airports across the U.S., Canada, and the European Union in separate trials of different durations resulting in a deception detection rate of 80-85%, far outperforming human agents. AVATAR combines advanced statistics, ML, sensors, and biometrics flagging untruthful individuals or those posing a potential risk. It does so by filming a person’s responses whilst analyzing information including their facial expressions, tone of voice, and verbal responses, to flag deception signals.

Homeland Security states that AVATAR can easily be adapted, allowing different interview content to be scientifically designed and tested for contrasting situations, and there are plans to commercialize the technology across multiple industries. Initial markets for the AVATAR technology will be airports, municipal buildings, large stations, and sports stadiums, with plans to expand.

As the system is kiosk-based it can be transported and installed in almost any location, perhaps a local hospital, doctor's office, or other rented amenity of your choice, although it is unclear whether this AI is transferable to personal computers. That aside, a person's use of biometric technology could also signal a greater dedication to the trial; patients may react more positively to an interactive avatar than to a real person watching them via a video system to check their adherence every day, which can feel quite Orwellian. Overall, it is clear these biometric systems are quietly working their way into every aspect of our lives, both at work and at home.

Simulation of AI-based clinical trials to predict outcomes

Despite the many promising software solutions available, there is no way to avoid the most crucial aspect of evolving AI here: the simulation of tests or trials using open repositories containing ML datasets and/or non-identifying, labeled patient data. This is a method other industries have used for many years, and pharmaceutical companies are quietly consulting and partnering with virtual reality producers known to have mastered AI simulation, in some cases even allowing ML to help run their companies. Simulation and open repositories are the only way forward for improving clinical investigations while raising the level of AI integration in increasingly decentralized trial systems, and because the data is open and de-identified, no patient approval is needed to use it.

There is a multitude of free simulation datasets easily accessible online, such as those found at OpenML, where ML files can be downloaded and uploaded. There are also countless open-source labeled medical databases, such as the useful list built on GitHub by Dr. Andrew Beam, Assistant Professor at Harvard School of Public Health, as well as an open-access medical image library with a small number of ML datasets included, curated by Stephen R. Aylward, Ph.D., of Kitware. These simulations mean you can still design, train, evolve, and integrate AI into clinical trials that are so patient-centric that the participants you do have permission to treat won't leave early and will readily adhere to treatment, all while avoiding data protection restrictions through the use of these open datasets.
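
To show how easily such open repositories can be tapped, here is a minimal sketch that pulls a public dataset from OpenML with scikit-learn and fits a throwaway model on it. fetch_openml is a real scikit-learn function and requires an internet connection; "diabetes" is used here only as an example of a dataset name hosted on OpenML, and the modelling step is deliberately trivial.

```python
# Download an open dataset from OpenML and train a simple model on it.
# Requires internet access; the dataset name is used only as an example.
from sklearn.datasets import fetch_openml
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

X, y = fetch_openml("diabetes", version=1, return_X_y=True, as_frame=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = LogisticRegression(max_iter=1000)
model.fit(X_train, y_train)
print(f"held-out accuracy: {model.score(X_test, y_test):.2f}")
```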

This is the first step in the solution, followed by the good old-fashioned practice of consumer reviews and word-of-mouth spreading the news about great clinical trials that participants find highly constructive, easy to follow, and genuinely interesting, while gaining unparalleled whole-body results to take away with them.

Remember in my earlier posts I mentioned a computerized induction into clinical trials for participants by way of online gamification, where patients get to act out their trial virtually? Well, the information from this individualized induction could not only be upgraded via immersive technologies such as Virtual Reality (VR); the results could also be fed back into your AI simulations to help predict levels of patient adherence and endpoint results. Immersive technologies are also expected to greatly enhance the patient experience in clinical trials.
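
If induction and engagement data were fed back into a simulation, a first-pass adherence predictor might look something like the sketch below, which trains on entirely synthetic engagement features. The features, labels, and model choice are invented purely to illustrate the feedback loop the paragraph describes.

```python
# Hypothetical adherence predictor trained on synthetic induction/engagement features.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(2)
n = 200
# Invented features: minutes spent in the VR induction, quiz score, logins per week.
X = np.column_stack([
    rng.uniform(5, 60, n),
    rng.uniform(0, 100, n),
    rng.poisson(4, n),
])
# Synthetic label: more engaged participants are more likely to adhere.
adhered = (0.02 * X[:, 0] + 0.01 * X[:, 1] + 0.2 * X[:, 2]
           + rng.normal(0, 0.5, n) > 2.0).astype(int)

model = RandomForestClassifier(random_state=0).fit(X, adhered)
new_participant = [[45, 80, 5]]  # strong induction engagement
print("predicted adherence probability:", model.predict_proba(new_participant)[0][1])
```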

Integrating immersive technologies and AI into clinical trials

Immersive technology, or Extended Reality (XR), is software that emulates either the physical world or a simulated world via a headset feeding sensory information to the eyes, while adding haptic and/or auditory feedback to immerse the user in an alternate reality. XR can be combined with AI; however, it is not itself based on artificial intelligence.

The main types of XR are:

  • Virtual Reality (VR): a simulated environment artificially introducing stimuli to the senses, with sight, sound, touch, smell, and taste all able to be synthetically stimulated.
  • Augmented Reality (AR): digital content superimposed over a live view of the real environment.
  • Mixed Reality (MR): an integration of virtual content and the real-world environment enabling the interaction of both elements.

XR, in turn, can be mixed and matched with AI such as:

  • AR and AI: produces holographic telepresence technology where 3D holograms of live people can interact with each other in real-time, irrespective of their locations.

XR has also been shown to be a positive distraction technique, with AR and VR content used to provide stress relief. This is achieved by applying ML, together with facial, emotion, and voice recognition software, to create an immersive game experience that adapts to the patient's response and emotional state to ease stress and anxiety. This could prove to be a game changer for patient retention in clinical trials.
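
The adaptive loop described above can be sketched in a few lines: a stress estimate (assumed here to come from upstream emotion-recognition models) drives a simple rule that adjusts the pacing of the immersive content. Everything below, from the stress scores to the pacing levels, is a made-up placeholder for illustration.

```python
# Toy adaptation loop: soften the immersive experience when estimated stress rises.
# The stress scores stand in for the output of emotion/voice-recognition models.

def choose_pacing(stress_score: float) -> str:
    """Map an estimated stress score (0 = calm, 1 = highly stressed) to content pacing."""
    if stress_score > 0.7:
        return "calming scene, slow narration, no time pressure"
    if stress_score > 0.4:
        return "normal scene, gentle prompts"
    return "full interactive scene"

for score in [0.2, 0.5, 0.85]:  # simulated readings across a session
    print(f"stress {score:.2f} -> {choose_pacing(score)}")
```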

The most obvious application for XR is holographic telepresence between participants and their clinical team, making the interaction more personalized and immersing both parties in the meeting. Virtual diagnostics could also be incorporated to explain disease status and trial results in the comfort of the participant's home. It is hoped XR will give rise to self-service tools and diagnostics capable of virtual interaction, using same-day pickup drones, XR-based instructions, and virtual practice runs, with this kind of technology shown to enhance learning and knowledge uptake.

The patient's own home could also be integrated into the clinical training and observation via AR to completely immerse the individual into their trial. This could involve converting continuous data collected from wearables into whole body avatars of the participant's health status that can be checked at any time via AR or VR, making the experience a lot easier to follow and more interesting due to the ease of knowledge uptake. These virtual avatars may also give participants the option to shield their identity, as some patients may find it uncomfortable speaking with new and unknown medical teams, offering them anonymity after agreeing to give up so much personal information, so much of themselves.

There is also much interest in Brain-Computer Interfaces (BCI), an invasive implant in the brain paired with NN or DL to amalgamate the central nervous system with machines. BCIs are primarily used to integrate prosthetics into the nervous system transforming them into neuroprosthetics, allowing the patient to control artificial limbs or implants using thought alone. These machine-neural interfaces have also been used to restore the sense of touch to paralyzed human limbs and to help those patients without the capacity for speech to communicate.

This field has been slowly moving towards non-invasive BCIs that use electroencephalograms to reach circuits deep within the brain to control a robotic arm, opening many doors for trials investigating neurological disorders. These neuroscience-slanted investigations are paramount to clinical trial development, as it is anticipated they will be among the very first fully individualized trials: the uncharacteristic behaviors and emotions exhibited by these patients have long prevented a one-size-fits-all experience with rule-based diagnostics and a standard treatment path.
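
A very stripped-down flavour of non-invasive BCI signal processing is shown below: a synthetic EEG-like trace is generated, its power in two classic frequency bands is computed with an FFT, and a simple band-power ratio is used as a stand-in feature. The band boundaries, sampling rate, and decision feature are textbook-style assumptions, not taken from any specific BCI system.

```python
# Minimal EEG-style band-power feature, computed on a synthetic signal (illustrative only).
import numpy as np

fs = 256                     # sampling rate in Hz (assumed)
t = np.arange(0, 4, 1 / fs)  # 4 seconds of signal
rng = np.random.default_rng(3)
# Synthetic EEG: strong 10 Hz (alpha) component plus noise.
signal = np.sin(2 * np.pi * 10 * t) + 0.5 * rng.normal(size=t.size)

spectrum = np.abs(np.fft.rfft(signal)) ** 2
freqs = np.fft.rfftfreq(t.size, d=1 / fs)

def band_power(low, high):
    mask = (freqs >= low) & (freqs < high)
    return spectrum[mask].sum()

alpha = band_power(8, 13)   # classic alpha band
beta = band_power(13, 30)   # classic beta band
print("alpha/beta ratio:", alpha / beta)  # high here because of the 10 Hz component
```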

It is predicted that non-invasive BCIs with the capacity to record activity in the deep recesses of the brain will enable a more automated classification of patients with mental disorders, through the identification of individual components and characteristics associated with their disease severity. These BCIs could also be incorporated into XR systems to stimulate the brain regions engaged by 3D and 4D devices, heightening immersion.

Through the incorporation of the above-mentioned technologies and mathematical frameworks, those who participate in clinical trials can expect a more holistic and immersive experience, providing opportunities for continuous learning while receiving quality individualized care with exclusive results that cannot be gained from any other medical facility. Personalized interactive medical reports, virtual health data and status visualizations that can be uploaded or downloaded, and an introduction to high-end approved technology, some of which may not be readily available on the open market, all help ensure your participants gain the most from trials, raising adherence.

To conclude, it is these symbiotic changes involving patient-centricity and improved clinical trial design that will also afford the medical team the most accurate results possible across larger datasets. This, in turn, should furnish patients with an empowering and eye-opening experience to carry forward, gifting them greater ownership over their health and data.

ABOUT THE AUTHOR: 
Michelle Petersen is the founder of Healthinnovations, having worked in the health and science industry for over 21 years, including medical and scientific posts within the NHS and Oxford University. She has held positions in private, non-profit, and academic laboratories, and for over four years taught Oxford undergraduates across the spectrum of biological sciences, integrating physics.

Healthinnovations is a publication that has reported on, influenced, and researched current and future innovations in health for the past decade. The success of this brand has resulted in Michelle Petersen currently being featured and indexed by numerous prestigious brands and publishers worldwide. You can follow her on Twitter at @OriginateHealth
