By Manish Sood
Let’s acknowledge some hard realities: The population at large is aging, the market needs newer therapies, and while the R&D pipeline must stay vibrant, the price tag keeps climbing. In fact, the cost of introducing new drugs, along with medical devices and systems used to deliver them, is on a constant upward spiral.
In this ongoing transformation in life sciences, the single most important factor in developing and executing strategies to move the industry forward is. . .data. However, the term is now so broad as to be a cliché. Not all data is the same, and it’s time to put a sharper focus on those aspects of data gathering and optimization that perhaps need a major overhaul.
That gets us directly to the elephant in the room: third-party data. We all know it's vital; few organizations can organically generate and collate all the data they need. And yet, while so much else has changed, the process for acquiring third-party data has remained essentially the same for decades.
Why is this? And what can we do about it?
To put the problem in context, physician information changes all the time; new practices emerge on a regular basis; licenses for any number of functions reach their expiration date. For these and other reasons, organizations must constantly acquire third-party data—sometimes to move processes forward, at other times just to keep up.
This is why pharmaceutical companies and others in the chain develop detailed selection criteria featuring demographics and a range of other filters. They then buy Health Care Professional (HCP) lists, Health Care Organization (HCO) lists, and many other kinds of data. Information technology teams take over, uploading the data with ETL (Extract, Transform, and Load) tools. This happens on a semi-regular cadence, anywhere from once a week to once a month.
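The batch pattern described above can be sketched in a few lines. This is a minimal, hypothetical illustration, not any vendor's actual pipeline: the field names (`npi`, `name`, `specialty`, `state`) and the filter criteria are assumptions chosen for the example.

```python
import sqlite3

def load_hcp_batch(conn, records):
    """Upsert a purchased HCP list into a local table (illustrative sketch)."""
    conn.execute(
        """CREATE TABLE IF NOT EXISTS hcp (
               npi TEXT PRIMARY KEY,
               name TEXT,
               specialty TEXT,
               state TEXT
           )"""
    )
    # Transform: keep only rows matching the selection criteria,
    # e.g. cardiologists in a target state (hypothetical filter).
    filtered = [r for r in records
                if r["specialty"] == "cardiology" and r["state"] == "CA"]
    # Load: replace rows wholesale. The weakness of batch ETL is that
    # nothing changes between scheduled runs, however stale the data gets.
    conn.executemany(
        "INSERT OR REPLACE INTO hcp (npi, name, specialty, state) "
        "VALUES (:npi, :name, :specialty, :state)",
        filtered,
    )
    conn.commit()
    return len(filtered)

conn = sqlite3.connect(":memory:")
batch = [
    {"npi": "100", "name": "Dr. Lee", "specialty": "cardiology", "state": "CA"},
    {"npi": "200", "name": "Dr. Patel", "specialty": "oncology", "state": "NY"},
]
print(load_hcp_batch(conn, batch))  # 1 row survives the filter
```

The key point is in the comments: between runs of this job, the local copy drifts further from reality, which is the staleness problem the article describes next.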
In other words, the data is far from real time; in fact, it's invariably stale and/or inaccurate. Sadly, this deficiency is taken for granted. Many companies simply accept that access to real-time data is out of reach, particularly within the applications a frontline business user relies on most.
The manual processes don’t help either. For example, conglomerates with big budgets—which enable large marketing and sales teams—often reach out to customers in person or via email. In this digital age, that’s a crude but still effective way to gain first-hand knowledge to supplement out-of-date or otherwise inaccurate data. But then they keep it to themselves—there’s no easy mechanism to provide updates to the third-party provider that offered the data in the first place. Meanwhile, those third-party providers would love such an arrangement—they could receive and verify those updates from their customers, while having a virtual army of data-quality experts working for them. With constant collaboration, the entire system can benefit.
However, as with so many other technology-enabled capabilities, there have been advances. In the new generation of master data management, data actually flows in both directions in real time, and this is changing the way data is used.
Some current technology providers enable third-party data partners to deliver data directly to frontline business users in sales, marketing and compliance. Those users can search for and access data in real time, based on specific criteria, with nothing more sophisticated than Google-style search queries. They can even preview the data in a ‘try before you buy’ mode, and comparison shop between different vendors. Once they find the data that suits their criteria, they can purchase it through a single click, just like shopping on Amazon.
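The search-and-preview experience described above can be sketched as follows. This is an in-memory toy, not a real vendor API: the records, field names, and the rule that a preview hides identifying details before purchase are all illustrative assumptions.

```python
# Hypothetical HCP records a data partner might expose for search.
RECORDS = [
    {"id": 1, "name": "Dr. Lee", "specialty": "cardiology", "city": "Boston"},
    {"id": 2, "name": "Dr. Cho", "specialty": "cardiology", "city": "Austin"},
    {"id": 3, "name": "Dr. Diaz", "specialty": "oncology", "city": "Boston"},
]

def search(query):
    """Google-style query: every term must match some field value."""
    terms = query.lower().split()
    return [r for r in RECORDS
            if all(any(t in str(v).lower() for v in r.values()) for t in terms)]

def preview(results, limit=1):
    """'Try before you buy': show a truncated sample, withholding the record id."""
    return [{k: v for k, v in r.items() if k != "id"} for r in results[:limit]]

hits = search("cardiology boston")
print(len(hits))     # 1 matching record
print(preview(hits))
```

Comparison shopping would amount to running the same `search` against several vendors' catalogs and previewing each result set before committing to a purchase.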
In this age of ‘likes’ and ‘dislikes,’ companies can also collaborate on the quality of the data and convey their opinion back to the provider in exchange for possible credits. It may sound like a forum for venting, but the process actually gives the provider more up-to-date data, another avenue to license its content and gain exposure, and a way to stop sending the customer the same bad data batch after batch. That is exactly what happened under traditional methods: without corrective mechanisms, the same mistakes were repeated over and over.
Also remember that third-party data doesn’t function in a vacuum—enterprises combine what they’re getting from these providers with social media content, the information already in their applications, and the legacy data within their own networks. So in addition to reliable and real-time data, they need the ability to seamlessly blend it with all other sources to uncover relationships between people, products, places and more.
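The blending step above can be illustrated with a deliberately simple sketch. Matching records on a normalized name is an assumption made for brevity; real master data management uses far more sophisticated matching (identifiers, fuzzy rules, survivorship policies), and the source names here are hypothetical.

```python
def normalize(name):
    """Crude match key: lowercase, strip periods, collapse whitespace."""
    return " ".join(name.lower().replace(".", "").split())

def blend(*sources):
    """Merge records from multiple sources into unified profiles."""
    profiles = {}
    for source in sources:
        for record in source:
            key = normalize(record["name"])
            profile = profiles.setdefault(key, {})
            for field, value in record.items():
                profile.setdefault(field, value)  # earlier sources win ties
    return profiles

# Illustrative inputs: a purchased third-party list and an internal CRM.
third_party = [{"name": "Dr. Jane Lee", "npi": "100"}]
crm = [{"name": "dr jane  lee", "email": "jlee@example.org"}]
merged = blend(third_party, crm)
print(len(merged))  # 1 unified profile from two differently formatted records
```

Even this toy version shows the payoff: two records that look different on the surface resolve to one profile that carries attributes from both sources.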
Most importantly, easy (but authorized) access to clean data is best enabled through enterprise-class applications that business users use every day. There must be a rock-solid foundation of modern data management with full security, privacy and auditability.
With that in mind, here are some questions that life-sciences companies need to ask their data providers. Some are self-explanatory, while a few deserve greater context.
There’s now so much data swirling around the life sciences ecosystem that taking it for granted has become the norm. However, the data must meet multiple criteria before it truly serves its purpose. For a start, it must be accurate, up to date, easily accessible while secure from outside forces, customizable and relatable to other data streams.
Third-party data always has been and will remain essential to this ecosystem. But it’s time for an upgrade.
About the author
Manish Sood is the CEO of Reltio (www.reltio.com), the creator of data-driven applications. Prior to founding Reltio, he led product strategy and management for the Master Data Management (MDM) platform at Informatica and Siperian. He is the co-author of the patent that revolutionized MDM through a global business identifier. During his career, Manish has architected some of the largest and most widely used data management solutions utilized by Fortune 100 companies today.