The 6th Annual Pain Therapeutics Summit, held 3-4 October in San Jose, California, US, and organized by Arrowhead Publishers and Conferences, brought together approximately 100 researchers from industry and academia to hear reports from the leading edge of pain drug development. The following article recaps two talks that generated considerable discussion, both exploring a concern that plagues any drug developer: the challenge of conducting good clinical trials.
Getting a new drug to patients requires a compound that is effective and the trial data to prove it. But energy is gathering around the idea that, currently, analgesic clinical trials are not up to the task. Instead, myriad factors depress the likelihood that a trial will detect a difference between the study drug and a placebo (the “assay sensitivity”). Thus, researchers fear, some trials may be derailing drug development rather than spurring it forward.
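To make "assay sensitivity" concrete, consider a minimal power simulation in Python. The numbers below are hypothetical and are not drawn from any trial discussed here; the point is simply that anything that shrinks the true drug-placebo difference also shrinks the chance that a trial of fixed size will detect the drug.

```python
# A minimal power simulation (hypothetical numbers) illustrating "assay
# sensitivity": anything that shrinks the apparent drug-placebo difference
# also shrinks the chance that a trial of fixed size detects the drug.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

def simulated_power(true_diff, sd=2.0, n_per_arm=100, n_trials=2000, alpha=0.05):
    """Fraction of simulated trials in which a two-sample t-test finds the drug."""
    hits = 0
    for _ in range(n_trials):
        placebo = rng.normal(0.0, sd, n_per_arm)      # change in pain score, placebo arm
        drug = rng.normal(-true_diff, sd, n_per_arm)  # drug arm: larger pain reduction
        if stats.ttest_ind(drug, placebo).pvalue < alpha:
            hits += 1
    return hits / n_trials

# Halving the true effect (as the confounds described below can do) costs far
# more than half the power.
for diff in (1.0, 0.5):
    print(f"true difference {diff} points: power ~ {simulated_power(diff):.2f}")
```

In this toy model, halving the true effect cuts the detection rate by well over half, which is exactly the kind of loss trialists worry about.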
All clinical trials are tricky endeavors, but trials for pain face the special complication that pain is a subjective human experience, and human beings are complex creatures. People rate their pain imperfectly, for instance, and their experience of pain can be influenced by placebo effects. In addition, clinical investigators may unwittingly exert enormous influence on outcomes by bolstering participants’ expectations that the treatment will, in fact, ease their pain.
In response, efforts such as IMMPACT (Initiative on Methods, Measurement, and Pain Assessment in Clinical Trials) and ACTTION (Analgesic, Anesthetic, and Addiction Clinical Trial Translations, Innovations, Opportunities, and Networks) are bringing together academics, industry scientists, and regulators to probe study design, identify best practices, and recommend research priorities. And beyond study design, researchers are realizing that previously unexamined aspects of study conduct—including how well subjects understand the study, how accurately they are able to use pain rating scales, and how subjects and investigators interact—can dramatically affect a trial’s outcome and the reproducibility of results.
At the Pain Therapeutics Summit, two speakers addressed how trial design and conduct can make or break a new treatment, and outlined their efforts to improve the situation.
Considering concomitant analgesics
Trial design encompasses at least a dozen elements that can profoundly influence observed drug effects and trial outcomes, according to Nathaniel Katz of Analgesic Solutions, Natick, Massachusetts, US. In his talk, Katz highlighted one such factor: the frequent practice of allowing patients to take other analgesics alongside an experimental treatment. The concern is that concomitant analgesics reduce patients’ pain so that it becomes difficult to detect any effect from the study drug. But in a recent review, IMMPACT said the data so far have been mixed, with some trials and analyses suggesting that concomitant analgesics did reduce the measured pain relief from a study drug, and others finding that concomitant analgesics did not affect trial outcomes (Dworkin et al., 2012).
In his presentation, Katz, who sits on the IMMPACT and ACTTION committees, presented new, unpublished data indicating that concomitant analgesics can indeed be enough to make a good treatment look lackluster. He and his colleagues performed a meta-analysis of seven trials (1,425 patients total) testing a capsaicin patch for either post-herpetic neuralgia or HIV-associated neuropathy. They found that in trials where patients took concomitant analgesics, the patch provided only half as much pain relief as it did in trials where concomitant analgesics were prohibited.
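The stratified comparison behind such a meta-analysis can be sketched with a toy fixed-effect pooling in Python. The per-trial numbers below are invented for illustration (they are not the capsaicin-patch data), and inverse-variance weighting is a standard method rather than necessarily the one Katz's group used.

```python
# A toy fixed-effect meta-analysis (invented numbers, not the capsaicin data)
# contrasting pooled drug effects in trials that allowed concomitant
# analgesics with trials that prohibited them.
import numpy as np

def pooled_effect(effects, ses):
    """Inverse-variance-weighted pooled estimate and its standard error."""
    effects, ses = np.asarray(effects), np.asarray(ses)
    w = 1.0 / ses**2
    est = np.sum(w * effects) / np.sum(w)
    return est, np.sqrt(1.0 / np.sum(w))

# Hypothetical per-trial pain-relief effects (points on a 0-10 scale) and SEs.
allowed = pooled_effect([0.6, 0.4, 0.5], [0.20, 0.25, 0.20])
prohibited = pooled_effect([1.1, 0.9, 1.0, 1.2], [0.20, 0.30, 0.25, 0.20])
print("concomitant analgesics allowed:    %.2f points (SE %.2f)" % allowed)
print("concomitant analgesics prohibited: %.2f points (SE %.2f)" % prohibited)
```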
Katz’s findings may give trialists pause. Based on his data, the single factor of whether or not patients take additional pain relievers could be enough to make a trial fail, Katz said.
Problems with pain reporting
Even after researchers settle the issue of concomitant drugs and other matters of trial design, their challenges are far from over. “There’s no study design that’s so optimized that it can’t be defeated by poor study conduct,” Katz said.
In pain trials, much of the difficulty stems from the fact that outcomes are based almost entirely on patients’ subjective reports—and when subjects report their pain inaccurately, it becomes harder to detect a treatment effect. Researchers are familiar with that situation, but Katz showed something novel: Some patients are inherently better than others at reporting their pain, and it is possible to find out who they are.
In a small, unpublished study, Katz and colleagues assembled a group of patients with osteoarthritis of the knee, and asked them to rate the painfulness of experimental heat pulses applied to their hands. Using a visual analog scale, the subjects rated the pain of seven different temperatures, with each stimulus applied seven times. Some subjects were consistent, rating a particular temperature at the same pain level time after time. For other subjects, pain ratings were less consistent.
Katz and his colleagues then asked whether the subjects’ reliability in rating the experimental pain stimuli correlated with their ability to distinguish naproxen from placebo in a blinded treatment test. Averaged over the group as a whole, naproxen had the expected pain-relieving effect compared to the placebo, Katz said. But among the one-quarter of subjects who were most consistent in reporting experimental pain, naproxen showed a huge effect. In the other three-quarters, on average, the drug showed no pain-relieving effect.
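The first half of that analysis, scoring each rater's consistency and splitting off the most consistent quartile, can be sketched on simulated data. The consistency metric below (the average spread of a subject's repeated ratings of the same stimulus) is an assumption; the talk did not specify the scoring method.

```python
# A sketch, on simulated data, of scoring rater consistency and taking the
# top quartile. The metric here (mean within-temperature SD of repeated
# ratings) is an assumed stand-in for whatever Katz's group actually used.
import numpy as np

rng = np.random.default_rng(1)
n_subjects, n_temps, n_reps = 40, 7, 7

# Each subject rates 7 temperatures, 7 times each, on a 0-100 visual analog
# scale; subjects differ in how noisily they report (hypothetical model).
true_vas = np.linspace(10, 70, n_temps)            # "true" pain per temperature
noise_sd = rng.uniform(2, 15, n_subjects)          # per-subject reporting noise
ratings = true_vas[None, :, None] + rng.normal(
    0.0, noise_sd[:, None, None], size=(n_subjects, n_temps, n_reps))

# Consistency score: average spread of a subject's repeated ratings of the
# same stimulus (lower = more reliable rater).
consistency = ratings.std(axis=2, ddof=1).mean(axis=1)
most_consistent = consistency <= np.quantile(consistency, 0.25)
print(f"{most_consistent.sum()} of {n_subjects} subjects fall in the top quartile")

# In a blinded treatment test, the drug-placebo comparison would then be run
# within this subgroup (and in the remainder) to compare signal quality.
```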
Importantly, Katz said that based on other data such as measures of physical function, he thinks both better and worse pain reporters benefited from the drug—the top quarter were just better able to report that benefit.
Based on the study’s results, he said, pain researchers are diluting good data with bad in their trials.
If that is the case, then how can clinical investigators ensure that all of their subjects are contributing useful information? Katz said that in a real trial, it probably would not be acceptable to exclude three-quarters of potential participants because they were poor pain raters. But what might work, he said, is to train patients to hone their pain-reporting abilities before they start a trial. However, he said that remains to be studied.
Eliminating expectations
Researchers have also started to realize that trials conducted at just one or a few sites tend to yield bigger drug effects than those spread out over many sites—a phenomenon that Katz documented in a 2005 review (Katz, 2005). But exactly why does increasing the number of sites seem to degrade the signal? Katz and others have proposed that when trials involve many sites, and particularly multiple low-enrolling sites, variations in investigator training lead to inconsistent trial conduct and decreased sensitivity.
In his talk, Katz focused on one crucial factor: how clinical investigators interact with study subjects and shape expectations of relief. The positive expectations that investigators convey to subjects in drug trials, intentionally or not, may make placebo responses rise—and make it hard to discern true effects of the study drug.
Katz proposed that training clinical investigators to temper their behavior will help level out patients’ expectations, reduce placebo effects, and boost drug effects compared to placebo. To test that idea, his group conducted a small study in which the researchers enrolled 40 subjects in a mock drug trial and randomized staff and subjects to a training program aimed at neutralizing expectations, or to no special preparation. Katz presented unpublished data from the study that suggest the training did reduce patients’ expectations of treatment benefit. Now, the question remains whether reducing expectations will translate into greater sensitivity for drug effects. To find out, Katz said he is conducting a randomized, controlled trial to see whether the training affects the outcome of a trial of pregabalin treatment in patients with painful peripheral diabetic neuropathy.
Teaching trial conduct
One person who has been scrutinizing trial conduct—and doing something about it—is Neil Singla, head of Lotus Clinical Research, Pasadena, California, US. Lotus is a contract research organization that conducts trials at its 40-bed facility and coordinates trials at other sites. In his talk, Singla presented techniques and tools that his group is developing to evaluate trial sites and to educate both investigators and subjects.
Like Katz, Singla is convinced that much of the variability in trial outcomes arises from poor trial conduct—and that, to get good outcomes, study coordinators have to teach investigators and subjects how trials work. Towards that end, in the last year Singla and his company have developed a suite of training videos and quizzes that they use in their trials.
In his videos for patients, Singla covers basic trial concepts such as how to use pain scales. And, to combat inflated placebo responses, he tackles the topic head on. In one video (viewable on the Lotus website), Singla tells subjects that one reason why people sometimes report pain relief from placebos is that they “are just trying to be nice. They want to please their doctor and the research staff, and be supportive of the study team that is caring for them.” But there is no need to be “nice,” he tells patients, because they may be receiving either the drug or a placebo. “There is no right or wrong answer.”
After the videos, Singla follows up with quizzes to make sure patients understand the basic principles of placebo-controlled trials. For example, in his presentation Singla showed one question that asks patients how their judgments of treatments contribute to overall trial outcomes. What if the treatment they receive does not help their pain, and they rate it as “poor”? Will that hurt the trial because they gave the drug a bad mark, or will it help because it was an accurate answer? The quizzes give investigators a way to see how much their subjects understand, to discuss points that are unclear, and to consider excluding subjects who do not understand trial concepts.
Singla is also working on training for study doctors. Even experienced clinical investigators, he said in his presentation, have a hard time mustering the detachment needed to carry out proper research. As physicians, they are accustomed to giving patients the best possible care. But in a trial, the doctor may be providing a placebo, or an experimental drug that is ineffective. “It’s very hard for investigators to admit that to themselves,” Singla said. When doctors believe they are giving their patients a promising new treatment, they transmit that hope to the patients, which can boost patients’ responses to placebo.
In addition to educating investigators about maintaining their objectivity, Singla said he works to train and motivate investigators and study staff to improve other areas of trial conduct as well. Before a trial starts, he uses clinical case studies to train staff about issues such as how to decide whether a patient should be enrolled in the trial—particularly when patients fall in “gray areas” of the inclusion criteria. Then, while the trial is going on and data are still blinded, he monitors trial conduct by looking for incongruent data patterns that suggest subjects or investigators may be misunderstanding the protocol. When the trial is complete, he compares sites according to the effect sizes they measured for the study drug.
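That last step, comparing sites by measured effect size, can be sketched in a few lines of Python. The data below are hypothetical, and Cohen's d is an assumed choice of statistic; the source does not say which effect-size measure Lotus uses.

```python
# A sketch of a post-trial site comparison: once data are unblinded, compute
# a standardized effect size per site (Cohen's d here, an assumption) and
# look for sites where the drug and placebo arms barely separate.
import numpy as np
import pandas as pd

def cohens_d(drug, placebo):
    """Standardized mean difference between two arms."""
    pooled_sd = np.sqrt((drug.var(ddof=1) + placebo.var(ddof=1)) / 2)
    return (placebo.mean() - drug.mean()) / pooled_sd  # positive = drug beats placebo

# Hypothetical unblinded data: pain-score change per subject, by site and arm.
df = pd.DataFrame({
    "site": ["A"] * 8 + ["B"] * 8,
    "arm": (["drug"] * 4 + ["placebo"] * 4) * 2,
    "change": [-3.1, -2.8, -2.5, -3.4, -1.0, -0.8, -1.2, -0.9,   # site A separates
               -1.1, -1.3, -0.9, -1.0, -1.0, -1.2, -0.8, -1.1],  # site B does not
})

for site, g in df.groupby("site"):
    d = cohens_d(g.loc[g.arm == "drug", "change"], g.loc[g.arm == "placebo", "change"])
    print(f"site {site}: effect size d = {d:.2f}")
```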
From theory to practice
One of IMMPACT’s prescriptions for improving trial sensitivity, outlined in its recent review, is the need to develop subject and staff training protocols to manage expectations and increase understanding of trial methods (Dworkin et al., 2012). Singla’s program is a real-world embodiment of those recommendations.
And so far, no one else is doing anything like it, said conference chair William Schmidt, NorthStar Consulting, Davis, California, US. Schmidt said he is working with Singla to create videos for doctors and patients in a Phase 3 trial, planned to start later this year in South Korea, of a novel dual inhibitor of cyclooxygenase (COX) and carbonic anhydrase (CA) from CrystalGenomics, Seoul, South Korea.
Schmidt was just one of many meeting attendees pleased to hear about Katz's and Singla’s work to rigorously improve clinical trials. Ronald Fletcher, of Jazz Pharmaceuticals, Palo Alto, California, US, said that the discussions about clinical trial design and conduct were a highlight of the meeting. Fletcher said that from his experience in testing drugs, he has seen first-hand that placebo effects present a huge problem. If investigators can eliminate confounding factors in their clinical trials, he said, the quality of their data will improve—enabling them to bring better drugs to patients.