Magazine Article | September 29, 2011

How IT Helps Solve Pharma's R&D Problems

Source: Life Science Leader

A Q&A With Matt Segall, CEO, Optibrium

What do pharma executives need from information technology?
To some extent this depends on the definition of the term “executives.” Senior executives need to make long-term strategic decisions about allocation of resources to projects, therapeutic areas, and technologies. They also need to make go/no-go decisions on projects at key transition points (e.g. hit-to-lead, preclinical development candidate selection, and clinical candidate selection). At very early target selection/validation stages, bioinformatics and systems biology can help to mitigate risk due to biological mechanism (particularly for novel targets) by maintaining a balanced portfolio that targets multiple biological mechanisms, where possible, for a therapeutic area.

In the early stages, IT can help to monitor the progress of projects, for example by assessing the quality of compounds against the project’s target product profile (i.e. the profile of properties required in a successful compound). This can help to identify projects where progress is proving difficult and flag when a project is unlikely to achieve its objectives. In later stages, data from physiological or clinical trial simulations can help to identify potential risks to the clinical success of a compound, supporting decisions about which projects to take into clinical trials.

At the level of project leader, the challenge is to use all of the available data effectively to make confident decisions and quickly achieve a project’s objective. Management and visualization of the data, for example by LIMS (lab information management system), database, and graphing software, are necessary but not sufficient. The quantity, complexity, and uncertainty inherent in drug discovery data mean that help is required to guide the decisions being made. People find it challenging to make drug discovery decisions involving complex, uncertain data, particularly when there is a lot at stake. Therefore, further analysis, using decision analysis techniques, can help to balance the many, often competing, requirements for a safe and efficacious drug.

One important source of data for the selection and design of high-quality compounds comes from predictive models. These allow the likely properties of compounds to be predicted prior to synthesis, guiding decisions about which chemistries to focus synthetic and experimental efforts on to yield the highest chances of success. Predicted data has significant uncertainty due to predictive error, making a probabilistic approach all the more valuable in order to use this information effectively. It is also important that a model’s “domain of applicability” is well understood. Models are typically not able to make confident predictions for chemistry that differs significantly from the compounds used to train the model. Predictions for compounds outside of the domain of applicability should be used with extreme caution or disregarded.
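As a rough illustration of the probabilistic approach, the short Python sketch below converts a predicted value and an assumed prediction error into the probability that a compound actually meets a project criterion; the property, threshold, and error values are purely illustrative and not taken from any particular model or product.

    from math import erf, sqrt

    def prob_meets_criterion(predicted, threshold, sigma, greater_is_better=True):
        """Probability that the true value meets a threshold, assuming the
        prediction error is normally distributed with standard deviation sigma."""
        z = (predicted - threshold) / sigma          # signed distance to the threshold
        p_above = 0.5 * (1.0 + erf(z / sqrt(2.0)))   # standard normal CDF
        return p_above if greater_is_better else 1.0 - p_above

    # Illustrative example: predicted logS = -4.2, criterion logS > -5, model RMSE ~0.7 log units
    print(prob_meets_criterion(-4.2, -5.0, 0.7))     # ~0.87, i.e. probably acceptable

A compound with a marginal probability would then be a natural candidate for early experimental confirmation rather than outright rejection.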

What are the key forces driving the adoption of in-silico technologies by the life sciences industry?

The cost of discovery and development of a new drug continues to soar, while the failure rate of compounds reaching the clinic has remained stubbornly high over the last decade. In an attempt to “fail fast, fail cheap,” drug discovery organizations have pushed property assessments earlier and earlier in the pipeline, using in-silico and high-throughput in-vitro methods. For example, the proportion of compounds failing in clinical trials due to poor pharmacokinetics was estimated to be approximately 40% in the late 1990s, prompting the adoption of “early ADME (absorption, distribution, metabolism, and excretion)” technologies to weed out compounds with poor properties in early discovery; this also led to the adoption of “rules of thumb,” such as Lipinski’s rule of five. The result is that the proportion of failures attributed to poor pharmacokinetics has dropped to approximately 10%. However, the overall failure rate in development is essentially unchanged. Now, a larger proportion of failures is due to toxicity, resulting in a more recent drive to introduce approaches for early toxicity screening.
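For readers unfamiliar with such rules of thumb, the sketch below applies Lipinski’s rule of five to pre-computed descriptors: it simply counts violations of the criteria (molecular weight over 500, logP over 5, more than 5 hydrogen-bond donors, more than 10 hydrogen-bond acceptors). The example descriptor values are hypothetical.

    def rule_of_five_violations(mol_weight, logp, h_bond_donors, h_bond_acceptors):
        """Count Lipinski rule-of-five violations from pre-computed descriptors."""
        violations = 0
        if mol_weight > 500: violations += 1
        if logp > 5: violations += 1
        if h_bond_donors > 5: violations += 1
        if h_bond_acceptors > 10: violations += 1
        return violations

    # Illustrative descriptor values for a hypothetical compound
    print(rule_of_five_violations(mol_weight=487.2, logp=4.1,
                                  h_bond_donors=2, h_bond_acceptors=7))   # 0 violations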

Unfortunately, another result of introducing more barriers to the progress of compounds in drug discovery has been an increase in the cost and time of drug discovery and a decrease in productivity. Furthermore, an unanswerable question is, “How many opportunities to identify good drugs have been missed because compounds were incorrectly eliminated on the basis of mismeasurement or misprediction?” This opportunity cost of discarding valuable compounds may, in many cases, outweigh the cost of late-stage failure.

Therefore, in-silico technologies must help to make better decisions to reduce waste, shorten timelines, and improve the overall quality of compounds reaching the clinical development phase.

What are the crucial issues that perhaps impede the use of computer models and simulations during drug discovery?
Paradoxically, the issue that most impedes the effective use of computer models and simulations during drug discovery is that some people trust models too much, while others do not trust them enough (or at all, in some cases).

Computer models and simulations are sources of data that help with the selection and design of compounds. However, there is a large degree of uncertainty in the predictions of models in drug discovery; prediction of a biological property within an order of magnitude is often the best that can currently be achieved. Contrast this with many models used in engineering design, where the behavior of a new design can be predicted with an accuracy of a fraction of a percent. Thus, computer models in drug discovery cannot be used to design the perfect compound but, instead, to bias the odds in favor of selecting a successful compound. Furthermore, given this inherent uncertainty, it is risky to simply filter out compounds on the basis of predicted data; unless there are many possible alternatives, the opportunity cost of discarding a valuable compound because of a misprediction may be high.

Conversely, some discount the use of models and simulations altogether due to the fact that they are not perfect. This neglects the valuable information they provide for the prioritization of compounds and the experimental effort to study them. It is commonly forgotten that all data used in drug discovery comes from a model, whether an in-silico, in-vitro, or animal model of the ultimate human patient. Each type of model has different characteristics regarding its cost, relevance, and reliability, but all have important roles to play.

What are the benefits of using in-silico tools, and how can you get the most out of them?
In-silico tools can be used to guide the selection of compounds throughout the drug discovery process. It is essential to use them within the context of the drug discovery project’s objectives and, as discussed above, to give them the appropriate degree of weight in the decisions being made. A high-quality lead or candidate drug must have a balance of many properties, including potency, selectivity, ADME (absorption, distribution, metabolism, and excretion), and safety. Therefore, the weight given to the results from predictive models must reflect both the importance of that property to the overall success of the project as well as the confidence in the predicted value.
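One simple way to picture such weighting (a generic sketch, not Optibrium’s actual scoring algorithm) is to combine each property’s probability of meeting its criterion, discounted by an importance weight, into a single score:

    # A minimal multi-parameter scoring sketch: each property contributes the probability
    # that the compound meets its criterion, down-weighted by how important that property
    # is to the project. All weights and probabilities here are illustrative.
    def weighted_score(prop_probabilities, importance):
        """Combine per-property probabilities of success into one score (0 to 1).
        A weight of 1.0 means failing that criterion is fatal; a weight near 0.0
        means the property barely affects the score."""
        score = 1.0
        for name, p in prop_probabilities.items():
            w = importance.get(name, 1.0)
            score *= (1.0 - w) + w * p   # an unimportant property cannot drag the score to zero
        return score

    probabilities = {"potency": 0.9, "solubility": 0.6, "hERG": 0.8}
    importance = {"potency": 1.0, "solubility": 0.5, "hERG": 0.9}
    print(round(weighted_score(probabilities, importance), 2))   # ~0.59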

In-silico models can also help to guide the design of compounds with improved properties. A model captures information about the relationship between the structure of a compound and its properties. Sometimes this can be interpreted simply; for example, a docking model helps to visualize the 3-D relationship between a ligand and the pocket of a protein into which it binds, guiding the design of compounds with increased binding affinity. However, even so-called “black box” models such as quantitative structure-activity relationship (QSAR) models contain information about the relationship between a compound’s structure and its properties (as the name suggests). This information can be much more difficult to extract, but techniques such as the Glowing Molecule implemented in StarDrop can be used to interpret and visualize these relationships, showing regions of a compound that have a significant influence on a predicted property and where changes are likely to significantly improve the compound.
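The Glowing Molecule itself is proprietary to StarDrop, but the general idea of interrogating a “black box” model can be illustrated with a simple sensitivity analysis: perturb one input descriptor at a time and record how much the prediction moves. The toy linear model below is only a stand-in for a real QSAR model.

    # Generic sensitivity-analysis sketch: zero out one descriptor at a time and see
    # how much the prediction shifts. This is NOT the Glowing Molecule algorithm,
    # just an illustration of extracting structure-property signal from an opaque model.
    def feature_sensitivities(model_predict, descriptors):
        """Return the change in prediction when each descriptor is removed."""
        baseline = model_predict(descriptors)
        sensitivities = {}
        for name in descriptors:
            perturbed = dict(descriptors, **{name: 0.0})
            sensitivities[name] = model_predict(perturbed) - baseline
        return sensitivities

    # Toy "QSAR model": a linear combination of descriptors (stand-in for a real model)
    def toy_model(d):
        return 0.8 * d["aromatic_rings"] - 1.2 * d["rotatable_bonds"] + 0.3 * d["logp"]

    print(feature_sensitivities(toy_model, {"aromatic_rings": 2, "rotatable_bonds": 5, "logp": 3.1}))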

Models can also help to select and design the most appropriate experiments to conduct in order to select compounds and mitigate risks. For example, a model prediction that suggests a potential issue for a compound may not be sufficient to reject the compound, but it could indicate that an experimental measurement of that property should be prioritized to confirm or refute the prediction, rather than postponing the experiment and risking an expensive late-stage failure. In-silico models can also be used to avoid wasted effort screening large numbers of compounds through unnecessary assays when a model prediction already indicates the level of risk. Similarly, in the later stages, clinical trial simulations that take into account population variability can be used to design trials that will demonstrate efficacy with statistical significance and also have the statistical power to detect rare adverse events, improving the safety of potential drugs.
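As a very simplified illustration of the statistical side of trial simulation, the Monte Carlo sketch below estimates the power of a two-arm trial to detect an assumed treatment effect given between-patient variability; the effect size, variability, and sample size are illustrative assumptions only.

    # Simulate many two-arm trials and count how often a t-test reaches significance.
    import numpy as np
    from scipy import stats

    def estimated_power(n_per_arm, effect, sd, alpha=0.05, n_trials=5000, seed=0):
        rng = np.random.default_rng(seed)
        hits = 0
        for _ in range(n_trials):
            placebo = rng.normal(0.0, sd, n_per_arm)      # assumed placebo response
            treated = rng.normal(effect, sd, n_per_arm)   # assumed treatment response
            _, p = stats.ttest_ind(treated, placebo)
            hits += p < alpha
        return hits / n_trials

    print(estimated_power(n_per_arm=60, effect=0.5, sd=1.0))   # roughly 0.7-0.8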

Can you provide an insight into the in-silico technology competitive landscape?
The in-silico technology landscape can be broken down into several regions:

  • Information management, e.g. LIMS, ELN (electronic laboratory notebook), and databases: These help to gather and manage compound data, providing convenient access for scientists.
  • Data sources for existing compounds: This area is now dominated by open platforms.
  • Data visualization: These software applications allow visualization of chemical or biological data.
  • Computational chemistry tools: These are designed for expert computational scientists and allow a variety of different modeling activities.
  • ADME models: These predict ADME properties for compounds.
  • Model building: These allow predictive models to be built from experimental data sets (a minimal illustration follows below).
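As a minimal illustration of that last category, the sketch below builds a predictive model from a made-up experimental data set by fitting a regression model to simple molecular descriptors; commercial model-building tools wrap this kind of workflow with validation, domain-of-applicability checks, and chemistry-aware descriptors.

    import numpy as np
    from sklearn.ensemble import RandomForestRegressor

    # Descriptor matrix (rows = compounds; columns = e.g. molecular weight, logP,
    # hydrogen-bond donors) and measured property values; all values are made up.
    X_train = np.array([[320.4, 2.1, 1], [450.9, 4.3, 2], [288.3, 1.2, 3], [512.7, 5.0, 1]])
    y_train = np.array([-3.1, -5.4, -2.2, -6.0])   # e.g. measured log solubility

    model = RandomForestRegressor(n_estimators=200, random_state=0).fit(X_train, y_train)
    print(model.predict(np.array([[360.0, 2.8, 2]])))   # prediction for a new compound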


How can the right computational tools and techniques benefit drug discovery projects, including earlier introduction of drugs to the market?
Computational tools and techniques can help to reduce wasted effort, improve timelines, and increase the quality of compounds resulting from drug discovery projects in the ways described above. These benefits will help to reduce costs, improve productivity, and accelerate the time to market for new drugs.