Magazine Article | December 24, 2014

Quality Agreements In Clinical Development: A Road Map Toward A Successful Partnership

Source: Life Science Leader

By Jonathan Lee, VP of development operations and Mary Chow, director of contracts management, Cerexa

With clinical research continuing the trend of outsourcing more clinical trial activities to clinical service providers such as CROs and other vendors, it has become critical for sponsors to ensure that the activities they outsource are performed in accordance with the sponsor’s quality expectations. Yet because of shortened development timelines, the clinical service provider often begins work on the project immediately after a rigorous selection process, without the benefit of both parties first discussing their expectations of quality.

Based on the data collected in a 2011 Avoca Industry Survey of sponsor organizations, only about 65 percent of respondents had written quality agreements with their CROs. However, in the same survey, 94 percent of the respondents that used a quality agreement were satisfied with their CRO’s performance, while only 59 percent of the respondents that did not use a quality agreement were satisfied with their CRO’s performance. This supports the idea that establishing a quality agreement is crucial in building a strong relationship between sponsor and CRO.

Quality agreements are well established in other industries, such as manufacturing and finance; however, such agreements are just starting to be applied to the conduct of clinical studies. A number of sponsor companies participate in various consortia, such as the Avoca Quality Consortium, which developed a standard quality agreement template with metrics. Examples of these metrics, or Key Quality Indicators (KQIs), include the turnover rate for key personnel, a commitment to hold quarterly risk management meetings, the number of corrective and preventive actions (CAPAs) implemented, and CAPAs resolved within the time frame specified in the CAPA.
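To make the KQIs above concrete, here is a minimal sketch of how a sponsor might compute two of them across outsourced studies. The data, field layout, and function names are hypothetical illustrations, not taken from the Avoca template.

```python
# Hypothetical sketch: computing two KQIs mentioned above — CAPA on-time
# resolution and key-personnel turnover. All records below are invented.
from datetime import date

# (study, CAPA due date, date resolved or None if still open)
capas = [
    ("Study-001", date(2014, 3, 1), date(2014, 2, 20)),
    ("Study-002", date(2014, 4, 15), date(2014, 5, 2)),
    ("Study-003", date(2014, 6, 30), date(2014, 6, 25)),
]

def capa_on_time_rate(capas):
    """Fraction of resolved CAPAs closed within their specified time frame."""
    resolved = [(due, done) for _, due, done in capas if done is not None]
    if not resolved:
        return None
    on_time = sum(1 for due, done in resolved if done <= due)
    return on_time / len(resolved)

def turnover_rate(departures, average_headcount):
    """Key-personnel turnover over a reporting period."""
    return departures / average_headcount

print(f"CAPA on-time rate: {capa_on_time_rate(capas):.0%}")  # 2 of 3 on time
print(f"Key-personnel turnover: {turnover_rate(2, 25):.0%}")
```

Aggregating such measures across all studies under a quality agreement is what lets both parties spot patterns rather than arguing over single incidents.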

The quality agreement provides an avenue and structure to set the expectations of the parties, identify any deviations from these expectations, and specify an escalation process to address/mitigate these deviations. The key to successfully negotiating the quality agreement is clear communication of expectations between the parties and the emphasis on each party having a sense of ownership over the study through mutually created, agreed-upon language.

We found that when introducing the quality agreement and associated metrics to our company, we first needed to evaluate as a team how it fit into our overall clinical service provider governance model. This led to discussions of our company’s expectation of quality, followed by significant negotiations regarding the proposed metrics. Briefly, our company’s clinical service provider governance model is centered upon a master service agreement (MSA), with each specific study’s scope of work described in an addendum. The quality agreement is executed as a separate contract that leverages the MSA, and it details expectations of quality, with associated metrics, for all projects outsourced to the specific clinical service provider. The metrics contained within the quality agreement represent items agreed upon as indicators of quality aggregated across all of the outsourced studies. Each MSA addendum describing an individual study has a service level agreement (SLA) that contains agreed-upon expected service level, penalty, and bonus language for that particular study.

We found our quality agreement discussions to be an enlightening process, which created dialogue with our vendors and CROs and built more meaningful relationships among the parties. For example, one metric in our quality agreement required the reporting of all critical audit findings from regulatory authorities or sponsors within a contracted region regarding a contracted service. The clinical service provider’s QA representative stated that they were unable to agree to this metric because of their obligation to maintain the confidentiality of their clients. This led to a discussion about the intent of the metric, which was to provide assurance that a robust process is followed when assessing the potential impact of critical audit findings upon other programs within the region or other programs that use the same service. We explained to the QA representative what we would expect of a robust process, and the clinical service provider assured us that it had SOPs and work practices (WPs) governing the assessment of potential impact upon other programs. However, while the essence of what we described was detailed within their SOPs and WPs, the QA representative agreed that the process could be enhanced to meet our expectations, which led to a revision of their SOPs and WPs.

Another example of how our quality agreement negotiations facilitated collaboration was our discussion with a CRO around the commitment to hold, at a minimum, a quarterly risk management meeting to identify risks and set up mitigation plans. In this discussion, we were able to introduce the CRO to the use of the failure mode and effects analysis (FMEA) tool in risk management for our studies. Both parties committed in the quality agreement to setting up a process to manage risk, with the specific details to be discussed in future meetings.
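FMEA ranks potential failure modes by a risk priority number (RPN), the product of severity, occurrence, and detectability scores, so that mitigation effort goes to the highest-scoring risks first. As a minimal sketch of how such scoring might feed the quarterly risk meetings described above (the failure modes, scores, and threshold below are hypothetical, not drawn from our studies):

```python
# Illustrative FMEA-style risk scoring. Failure modes and scores are invented
# examples for a clinical study; the RPN threshold of 100 is an assumption.

def risk_priority_number(severity, occurrence, detection):
    """RPN = severity x occurrence x detection, each scored 1 (low) to 10 (high).
    For detection, a higher score means the failure is harder to detect."""
    for score in (severity, occurrence, detection):
        if not 1 <= score <= 10:
            raise ValueError("FMEA scores must be between 1 and 10")
    return severity * occurrence * detection

# Hypothetical failure modes: (name, severity, occurrence, detection)
failure_modes = [
    ("Site enrollment shortfall", 7, 6, 4),
    ("Key CRO personnel turnover", 6, 3, 2),
    ("Late data query resolution", 5, 5, 5),
]

# Rank by RPN; modes above the threshold get a documented mitigation plan.
ranked = sorted(
    ((name, risk_priority_number(s, o, d)) for name, s, o, d in failure_modes),
    key=lambda item: item[1],
    reverse=True,
)
for name, rpn in ranked:
    action = "mitigation plan required" if rpn >= 100 else "monitor"
    print(f"{name}: RPN={rpn} -> {action}")
```

Re-scoring the same failure modes each quarter gives both parties an objective record of whether agreed mitigations are actually reducing risk.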

Vendors raised two common objections during the negotiation of the metrics: that the resources required to collect and manage the metrics might detract from the vendor’s actual performance of its study-related tasks, and that certain metrics may be affected by forces outside the vendor’s control. On the first point, we made an effort when negotiating the metrics to continually assess which metrics the clinical service provider typically collects, pursuing additional metrics only when we felt it was absolutely necessary. For metrics that may be influenced by forces beyond a vendor’s direct control, we were careful to allow “carve outs,” which included “acts of God,” natural catastrophes, and unforeseen regulatory authority changes. Aside from these common objections, both parties acknowledged that the goal of the quality agreement metrics is to help assess whether there is a pattern across all of our studies that can be learned from and applied to ongoing and new studies, in order to avoid a repeat of common sponsor/vendor grievances.

Critical to the successful negotiation and implementation of the quality agreement and metrics was the participation of senior management from each company, along with their respective clinical and quality teams. This was important because it surfaced what each party considered essential to conducting a quality trial.

The collaborative approach of establishing a quality agreement creates a sense of ownership of the project by all parties involved, with the common goal of conducting a quality study. This ensures a “win-win” situation in which both parties are committed to meeting their responsibilities, without finger-pointing or one-sided blame placed on the vendor for failing to meet sponsor standards. As a result of implementing a quality agreement, the sponsor has a study that meets its expectations, and the vendor has gained the sponsor’s trust, hopefully leading to more successful future collaborations.