Magazine Article | May 4, 2015

Measuring Quality In Clinical Trials: Why You're Probably Doing It Wrong

Source: Life Science Leader

By Ed Miseta, Chief Editor, Clinical Leader

Quality is undoubtedly one of the top concerns you will hear cited by pharma executives when it comes to clinical trials. When you talk to sponsors about what they look for in a service provider, quality is always at or near the top of the list.

In fact, discussions about “culture of quality” are pervasive in the industry, and it’s rare to attend a show or conference where this is not a popular topic of discussion. Where we may run into some disagreement is on how to best measure the level of quality in a clinical trial.

A few months ago when I decided to produce an article on measuring quality in clinical trials, the first person I turned to was Michael Howley. He has authored or co-authored several papers on the topic and has appeared in Outsourced Pharma, Clinical Leader, and Applied Clinical Trials. Howley has a B.S. in biology, an MBA, a Ph.D. in business administration, and currently serves as associate clinical professor in the LeBow College of Business at Drexel University. His passion is measuring quality in trials, and if he is right, you are probably doing it all wrong.

The way you measure the quality of a product is vastly different from how you would measure the quality of a service. I would certainly not measure the quality of a car I bought the same way I would measure the quality of a visit to my doctor. Unfortunately, many pharma companies may be guilty of making that mistake.

IF YOU’RE MEASURING QUALITY, YOU’RE PROBABLY NOT DOING IT RIGHT
Howley’s research and experience in this space have led him to believe that there is a right and a wrong way to measure the quality sponsors are getting from a CRO. “Clinical trials are very different from manufacturing,” says Howley. “In the pharma industry, most companies are manufacturers of a pill, but the clinical trial is a service. Assessing the quality of a service from a CRO must be done differently than assessing the quality of your CMO. That is the message I am trying to get out to companies. They need to think about quality differently because what they are currently doing is not working.”

There is a science that has developed over the last 30 years on how to measure quality in a service industry. The methodology has been successfully applied in other industries, yielding great improvements in efficiency and productivity and significant reductions in cost.

The process of developing measures to assess service quality in trials is well established. Still, Howley notes companies are free to develop their own. When developing an assessment, Howley recommends you:

  • Define what you are measuring
  • Decide what specific items will be measured (cost, productivity, reliability, etc.)
  • Assess the validity and reliability of what you’re measuring
  • Link what you’re measuring to the overall quality of the trial
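Those last two steps are where the statisticians and psychometricians come in. As a rough illustration of the kind of reliability check they would run on a set of survey items, the sketch below computes Cronbach's alpha, a standard internal-consistency statistic; the ratings are invented for illustration:

```python
import numpy as np

def cronbach_alpha(items):
    """Internal-consistency reliability of a set of survey items.

    items: 2-D array, one row per respondent, one column per item.
    """
    items = np.asarray(items, dtype=float)
    k = items.shape[1]                         # number of items
    item_vars = items.var(axis=0, ddof=1)      # sample variance of each item
    total_var = items.sum(axis=1).var(ddof=1)  # variance of the summed scale
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

# Five respondents rating three related items on a 1-5 scale (hypothetical)
ratings = np.array([
    [4, 5, 4],
    [3, 3, 3],
    [5, 5, 4],
    [2, 2, 3],
    [4, 4, 5],
])
print(round(cronbach_alpha(ratings), 2))  # values near 0.9 suggest the items hang together
```

Alpha close to 1 suggests the items are measuring the same underlying construct; a low value is a signal that the scale needs rework before its scores can be trusted.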

“I have found that pharma performs well on the first two steps,” says Howley. “They do a pretty good job defining what needs to be measured, and an amazing job identifying items for what they want to measure. Where pharma companies stumble is on the last two steps. Companies do well when collaborating with each other, but do poorly when collaborating with statisticians and psychometricians.”

When performing the first two steps, sponsors might end up with 300 or 400 different measures and decide to collect data on all of them. They then have to benchmark all of those metrics and spend millions on a dashboard to tell them how they’re performing against the averages. According to Howley, they are spending millions on software to perform 1940s-type statistics. “They are taking averages and comparing them to the mean,” he says. “I tell them we can do better than that.”

METRICS ARE IMPORTANT, NOT RANKINGS
In discussing his research and its findings, Howley is quick to point out that his focus is on monitoring, not on producing rankings. In fact, he notes companies would refuse to share their data if they knew it would result in rankings. He also believes rankings would do nothing to improve the level of quality in trials.

By now you may be wondering why companies should do all this work if not for the rankings. Howley’s hope is to be able to perform predictive analytics. “Our focus should be on monitoring the trial as it unfolds, looking at leading indicators that may eventually lead to degradation in quality,” he says. “We want to try and identify quality issues before the whole trial goes off the tracks.”
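Leading-indicator monitoring of the kind Howley describes can start as simply as watching a rolling enrollment rate against plan. A minimal sketch, with invented counts, window, and threshold:

```python
# Weekly enrollment counts versus a planned rate (hypothetical numbers)
planned_per_week = 10
actual = [11, 10, 9, 9, 7, 6, 5]

def flag_weeks(actual, planned, window=3, threshold=0.8):
    """Flag weeks where the rolling-average enrollment rate
    falls below threshold * planned rate."""
    flags = []
    for i in range(window - 1, len(actual)):
        rate = sum(actual[i - window + 1 : i + 1]) / window
        if rate < threshold * planned:
            flags.append(i + 1)  # 1-based week number
    return flags

print(flag_weeks(actual, planned_per_week))  # → [6, 7]
```

The point is the timing: the downward drift is flagged mid-trial, while there is still time to intervene, rather than showing up in a benchmark report after the trial is over.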

In the future, if a CRO receives an RFP and wants to bid on a trial, Howley’s research would allow the CRO to predict the quality of its trial. “Based on the data from 10,000 trials that we have in our database, we could predict what that final outcome would be,” he states. “While that may sound futuristic, it’s exactly what we do today in many other areas, including academia.”

Howley notes this is exactly the approach many school districts are using to evaluate teachers. “Schools are using a value-added measurement system,” he says. “Given all of the variables of a class (socio-economic status, past performance on tests, etc.), administrators can predict what students will learn. This is how you get to the essence of performance, not by comparing a class to an industry average. What if the whole industry is below average? That statistic really doesn’t tell you anything. In clinical trials, a CRO could find itself in a situation where it is above average, but still underperforming.”
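The value-added idea translates into a simple regression: predict the score you would expect given a trial’s circumstances, then judge performance by the gap between observed and expected. A minimal sketch using ordinary least squares; the covariates and scores are entirely hypothetical:

```python
import numpy as np

# Hypothetical history: each row is one past trial
# columns: intercept, protocol difficulty (1-5), number of sites
X = np.array([[1, 2.0, 10],
              [1, 3.0, 20],
              [1, 1.0,  5],
              [1, 4.0, 15],
              [1, 2.5, 12]])
# Final quality scores for those trials (invented)
y = np.array([75.0, 60.0, 87.5, 52.5, 69.0])

# Ordinary least squares: beta estimates the score expected for any circumstances
beta, *_ = np.linalg.lstsq(X, y, rcond=None)

# A new trial: difficulty 3.0, 20 sites
expected = np.array([1, 3.0, 20]) @ beta
observed = 66.0
value_added = observed - expected  # positive = better than expected in context
print(round(value_added, 1))       # → 6.0
```

A trial can beat the industry average and still carry a negative value-added score, or trail the average and carry a positive one; the residual, not the raw mean, is what reflects performance.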

MEASURING THE QUALITY
It’s the customer who always determines service quality. In pharma, that is the drug sponsor. When a trial is outsourced, the sponsor assigns functional area executives to oversee specific areas of the trial. Howley’s measurement strategy has those managers evaluate how the CRO performed in the areas they directly oversee. Some of the questions they could be asked are:

  • How did the project manager perform?
  • How was their general knowledge?
  • How was their knowledge of your specific trial?
  • How was their GCP and regulatory knowledge?

If he were attempting to evaluate recruitment, he might ask:

  • How well did they do enrolling patients?
  • How did they do on first and last patient in?
  • How did they perform in regard to keeping you informed on how enrollment was progressing?
  • How did they perform in enrolling patients who met your criteria?
  • How did they perform at retaining patients once they were enrolled?
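Ratings like these can be rolled up into per-area composite scores and screened for trouble spots. A minimal sketch, with hypothetical data and an arbitrary threshold:

```python
# 1-5 ratings from functional-area managers, one per question (hypothetical)
responses = {
    "project_management": [4, 5, 4, 4],
    "recruitment":        [3, 2, 3, 2, 3],
}

def composite_scores(responses):
    """Average each area's item ratings into a single 1-5 composite."""
    return {area: sum(items) / len(items) for area, items in responses.items()}

def flag_areas(scores, threshold=3.0):
    """Areas scoring below the threshold warrant a closer look."""
    return [area for area, score in scores.items() if score < threshold]

print(flag_areas(composite_scores(responses)))  # → ['recruitment']
```

A composite is only as good as the items behind it, which is why the validity and reliability checks from the earlier steps matter before any rollup like this is trusted.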


“Those are informative questions that will provide more value than asking the number of days to enroll a patient,” says Howley. “Days to enroll is a common metric because sponsors believe it is a reliable statistic. I don’t agree that it is. Are people going to go back and look in their calendar for the exact date they started enrolling and then count forward? Even if you know it took 73 days to enroll a patient, is that good or bad? For a pediatric oncology trial, you would be a rock star. For an eczema trial, it’s terrible. But you see the problem here: Any metric whose meaning depends on the individual context inevitably lacks validity.”

The industry is still a long way from adopting uniform standards on quality, but Howley believes we are moving in the right direction. Sponsors are beginning to see the value in monitoring trials for signs of trouble, and increased access to data will help researchers like him provide better information to sponsors and CROs. “Sponsors will be able to find CROs that are predicted to do well in the types of studies they are conducting and at the same time, these predictive models could help CROs focus on the types of trials where they are most likely to succeed,” adds Howley. “We hope this will be a win for both sides.”