
How A Unifying ‘Language Of Medicine’ Has The Potential To Make (And Break) Healthcare.

The unintended impact of healthcare data interoperability.



Consider a world in which we all understand one another; a world in which we not only speak the same language but also have the vocabulary to comprehend those whose working lives are profoundly different to our own. In such a world, economists could communicate with politicians; politicians could communicate with voters; voters could even discuss Minecraft with their children. Whether or not you consider this prospect attractive, it’s not a scenario that seems likely to come about any time soon. The world is now too complex, and our lives too short, for any one individual to master it all. Even Einstein, despite his extraordinary grasp of physics, still needed help with his maths.

In fact, ‘too much knowledge’ can be problematic even within a single field, including medicine. In the 19th century, medical understanding was sufficiently slender that a single physician could command it all; yet managing a serious condition today can require specialist doctors, specialist nurses and even specialist operators of medical machinery. Often, it’s not only a question of visiting different departments within the same institution, but of visiting different institutions or even different countries.

Hardly surprising, then, that regulatory authorities charged with licensing medicines, and the national health authorities responsible for coordinating patient care, have sought to digitize their worlds. Here, ‘digitize’ can mean the creation of unstructured electronic text (an email or PDF, for example) or the creation of more organized, structured databases (more like an Excel file). While the former has greatly facilitated communication between healthcare professionals, only the latter can readily be read by machines. Machines are far better suited to handling large volumes of data, but far more reliant on everyone using the same syntax and vocabulary. Computers, it seems, need a common language of medicine. They need ‘interoperable’ data.
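To make that distinction concrete, here is a minimal sketch in Python, using an entirely hypothetical record layout, of the same prescribing fact expressed first as unstructured text and then as a structured record that a machine can query directly:

```python
# Unstructured: easy for a human to read, opaque to a simple program.
free_text = "Patient started on ibuprofen 400 mg, three times daily, for back pain."

# Structured: a hypothetical record with agreed field names and units.
structured = {
    "substance": "ibuprofen",     # ideally a code from a shared vocabulary
    "strength_mg": 400,
    "doses_per_day": 3,
    "indication": "back pain",
}

# A machine can answer questions about the structured record directly...
if structured["substance"] == "ibuprofen" and structured["strength_mg"] >= 400:
    print("High-strength ibuprofen prescription found.")

# ...whereas the free text would need natural-language processing, and would
# break as soon as another system wrote "Brufen 0.4 g t.d.s." instead.
```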

There is now so much healthcare information being exchanged that a shift toward greater healthcare interoperability is as inexorable and as necessary as was specialization within the medical profession during the 20th century. And while, in one sense, this is a response to the challenges of the modern healthcare system, in another, it is likely to be the force that shapes it. 

Pharma’s grand opening

The pharmaceutical industry’s experience with interoperability has so far centered on making the clinical trial data associated with any given product more accessible and interpretable to regulators (most conspicuously through the CDISC standards). Now, however, we are seeing the roll-out of initiatives directed toward standardizing information across products and manufacturers and making that information much more widely available, most notably:

• Transparency initiatives that require the results of clinical trials (or even their clinical study reports in the case of EMA’s Policy 70) to be made available to the public;
• The Identification of Medicinal Products (IDMP) standard, designed to provide rigid descriptions of drug composition, manufacture, licensing and clinical characteristics;
• Serialization (also known as ‘Track-and-Trace’) programs requiring that unique identifiers be assigned to individual packets of medicines (i.e. a number for each box, not just each batch).

While the stated purposes of these initiatives (chiefly, to facilitate meta-analysis, improve pharmacovigilance and combat the falsification of medicines) have unanimous support in principle, the huge cost of implementing them has led some to question whether they can ever deliver a return on that investment. In reply, it’s necessary to consider some of their other, unintended consequences. Given the myriad stakeholders among whom our health data must now flow (see thedatamap.org for a dizzying perspective), there are plenty of opportunities for such consequences to arise.

Being more choosy

Earlier this year, Australian regulators censured the makers of the over-the-counter painkiller, Nurofen, describing the practice of targeting identical formulations toward different ailments (Nurofen for migraine, for back-pain, for period-pain etc.) as misleading. Of course, the amount of ibuprofen contained in Nurofen is detailed within the packaging and labeling, as are the ingredients of all pharmaceuticals, but in this unstructured, non-digitized format the information is not readily available for analysis or comparison. However, given an exhaustive and interoperable dataset of product compositions, it would take mere hours to create an app that identifies the various alternatives and allows savvy consumers to side-step premium branding.
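A minimal sketch of how such an app might work, assuming a hypothetical, interoperable dataset of product compositions (the brands, strengths and prices below are purely illustrative, not real market data):

```python
# A hypothetical, interoperable catalogue of product compositions.
# All brands, strengths and prices are illustrative only.
products = [
    {"brand": "PremiumBrand Migraine", "substance": "ibuprofen", "strength_mg": 200, "price": 5.99},
    {"brand": "PremiumBrand Back Pain", "substance": "ibuprofen", "strength_mg": 200, "price": 5.99},
    {"brand": "Generic ibuprofen", "substance": "ibuprofen", "strength_mg": 200, "price": 1.49},
]

def equivalents(chosen, catalogue):
    """Return products with the same active substance and strength, cheapest first."""
    matches = [p for p in catalogue
               if p["substance"] == chosen["substance"]
               and p["strength_mg"] == chosen["strength_mg"]]
    return sorted(matches, key=lambda p: p["price"])

# Scan a premium-branded product and list the equivalents, cheapest first.
for alternative in equivalents(products[0], products):
    print(f"{alternative['brand']}: {alternative['price']:.2f}")
```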


If this possibility seems unlikely to have much impact, it probably won’t, at least for over-the-counter products: as evidenced by the continued market diversification of toasted flakes of corn, few of us are likely to monitor what goes into our shopping baskets so assiduously. High-cost prescription medicines, however, may be a different story. While it’s true that national health policy plays a significant role in determining whether prescribers or dispensers can substitute generics or biosimilars for innovator products, lack of awareness of these alternatives may be just as influential. Would interoperability in prescribing data promote the use of generics? It seems highly likely that cash-strapped payers would seek to exploit the possibility.

As it happens, it is in prescribing and dispensing, and in the ability to automate and monitor these processes with a high degree of precision, that some of the greatest disruption is likely to be observed. Once it becomes possible to reliably track who prescribed a medicine and for what, who shipped that medicine and where, and who dispensed that medicine and to whom, all sorts of possibilities present themselves.

At the very beginning of this chain, the increasingly complex and time-consuming nature of the prescribing process makes greater usage of 'automation' seem inevitable. Multiple factors need to be taken into account when devising a prescription – diagnosis, comorbidity, age, sex, weight, ethnicity, metabolizer genotypes, disease genotypes, lab results, and concomitant medications – thus requiring considerable time to resolve and raising the possibility of errors. The process would be immeasurably easier if a machine could automatically compare a patient’s clinical particulars against the prescribing rules for various medicines and thereby generate a recommendation.
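As a sketch of what such automation might look like, the snippet below checks a patient’s particulars against a set of prescribing rules; the patient data, product name, thresholds and interactions are all invented for illustration rather than drawn from any real label:

```python
# Invented patient record; a real system would pull this from the EHR.
patient = {
    "age": 72,
    "egfr": 25,                        # renal function, mL/min/1.73 m^2
    "current_medications": {"warfarin"},
}

# Invented prescribing rules for a hypothetical product; in practice these
# would come from structured product information (an IDMP/SPL-style dataset).
candidate = {
    "name": "example-antibiotic",
    "min_egfr": 30,                    # caution advised below this threshold
    "interacts_with": {"warfarin"},
}

warnings = []
if patient["egfr"] < candidate["min_egfr"]:
    warnings.append("renal function below recommended threshold")
interactions = patient["current_medications"] & candidate["interacts_with"]
if interactions:
    warnings.append("interacts with " + ", ".join(sorted(interactions)))

if warnings:
    print(f"Review {candidate['name']} before prescribing: " + "; ".join(warnings))
else:
    print(f"No rule violations found for {candidate['name']}.")
```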

This idea of tailoring treatments to individuals (‘personalized medicine’) has become a hot topic in its own right, but it will not be possible on a large scale without greater data interoperability. The FDA has made some headway into this territory, albeit not specifically for this purpose, with its structured product labeling (SPL). Likewise, the EMA previously initiated, but did not progress, its own project to create an electronic summary of product characteristics (SPC). With the coming of IDMP in Europe and beyond, we can expect to see the resurgence and advancement of such initiatives. Certainly, e-prescribing is one of the use cases identified for IDMP (even if this is oriented more toward the physician-dispenser interaction than toward the process of selecting an appropriate treatment).

Watching the watchers

The issue of prescribing raises another way in which interoperability may prove disruptive: off-label tracking. Irrespective of the indication that a medicine is actually approved to treat, physicians are free to prescribe that medicine ‘off-label’ for whatever condition they choose. There are qualifications to this, most significantly the influence of what various payers are prepared to reimburse, but the central tenet remains true. Furthermore, tracking of this off-label use, although varying between countries and depending on whether the care is public or private (there is almost no centralized tracking of usage in private care), is far from complete. What are the consequences if such information could be gathered globally, accurately and cheaply?

One such consequence could well be the sporting community coming under closer scrutiny. It is well known that some medications are used more widely off-label than on-label, including growth hormone and the ‘blood-doping’ agent erythropoietin. Unless organized crime is involved (e.g. smuggling and betting-related activity), the use of banned substances by athletes is a professional misdemeanour, dealt with by sports governing bodies, rather than a legal one. But given the recent scandals and the apparent impotence of these agencies, there is growing pressure to criminalize drug use in sport. Germany did exactly that earlier this year, and others are likely to follow suit. In those circumstances, the case for giving law-enforcement agencies access to the data would be compelling.

And if law-enforcement agencies are making use of this information, regulators and health policy architects must surely do so too. Usage is a vitally important component of post-authorization monitoring and one that may influence whether or not authorization can be granted at all. Adaptive licensing – the proposal that a narrow indication might be approved earlier in a drug’s development and then expanded and/or reassessed as new evidence becomes available – has two principal obstacles. The first is that payers may not accept the reduced evidential weight; the second is that, even if the risk-benefit is acceptable in the initial, narrow indication, unsupported off-label use might be extensive. Were regulators to have instant, reliable oversight of who was prescribing such medicines and for what, at least the second of these obstacles could be ameliorated.

The big issue

Of course, adaptive licensing is positioned to focus on rarer diseases. Are there any conceivable benefits for more widely used medications? Likely enough, yes, and one area of need – the rise of antimicrobial resistance (AMR) – is considered by some to pose as great a threat to humanity as global warming. The gravity of the situation has perhaps been best conveyed by the Director-General of the WHO, Margaret Chan. Opening an EU conference in February 2016, Chan described AMR as a 'slow-moving tsunami' and highlighted only the most recent of the many 'alarm bells' sounding within the medical and research communities: the emergence of a new resistance mechanism to colistin.

Colistin has been around for more than half a century but fell out of use in the 1970s due to documented nephrotoxicity. The development of multidrug resistance to other antibiotics has since forced its return, despite these ill effects, as the 'antibiotic of last resort' for a number of life-threatening infections. The implications of observing resistance to such a drug are stark: if we lose colistin, even minor operations could result in fatalities. A sobering review commissioned by the British Government projects that AMR will account for more deaths than cancer by 2050.

Attempts to address AMR, however, including incentivizing pharma companies through legislation (such as the US’s 2012 GAIN Act) and even paying doctors not to prescribe antibiotics (as has been seen in the UK), have not been successful. Faced with such an intractable risk, the time must surely come when we concede that the carrot is ineffective and that only the stick can bring about change. If so, regulators will need a reliable way of tracking how these drugs are being used in order to devise and enforce appropriate action. This would be an immensely difficult task requiring international cooperation, but the groundwork is being laid: IDMP is an international standard, and one that can be applied to both human and veterinary medicinal products, both of which are implicated in AMR.

HCPs unite!

As noted earlier, it’s not only pharmaceutical regulators who are pushing for greater interoperability; so too are healthcare providers (HCPs) and those who orchestrate them.

In the early 1990s, a technical standard called HL7 Version 2 began to permeate hospital IT systems, providing the format by which blood results are transferred from laboratory to Emergency Room, for example. Despite being a milestone on the road to full interoperability, Version 2 is actually quite a ‘loose’ standard, since it must accommodate vastly different IT systems around the world. Such flexibility cannot support the ultimate goal, an entirely paperless and interconnected healthcare system, and so a more rigid but expandable successor was developed.
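For a flavour of what such an exchange looks like, here is a short, illustrative HL7 Version 2 laboratory-result fragment (every patient detail and test value is invented), parsed naively in Python; real-world messages vary considerably between systems, which is precisely the looseness described above:

```python
# An illustrative HL7 v2 lab-result message (ORU^R01); all values are invented.
# Segments are separated by carriage returns, fields by the '|' character.
message = "\r".join([
    r"MSH|^~\&|LAB|HOSP|ER|HOSP|20160601120000||ORU^R01|00001|P|2.3",
    r"PID|1||123456^^^HOSP||DOE^JANE||19800101|F",
    r"OBX|1|NM|GLU^Glucose||5.4|mmol/L|3.9-6.1|N|||F",
])

# A naive parse: pull the test name, value, units and reference range
# out of each OBX (observation/result) segment.
for segment in message.split("\r"):
    fields = segment.split("|")
    if fields[0] == "OBX":
        test = fields[3].split("^")[-1]          # e.g. "Glucose"
        value, units, ref_range = fields[5], fields[6], fields[7]
        print(f"{test}: {value} {units} (reference {ref_range})")
```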

This successor, HL7 Version 3, is in the very early stages of adoption by HCPs, most significantly via the roll-out of V3-based electronic healthcare records (EHRs) in the US. And yet, despite the US government’s good intentions, many physicians say that administering their newly installed EHR systems is so time-consuming that it prevents them from actually spending time with patients. A 2013 study by Hill et al. found doctors in a busy emergency department spending 44% of their time on data entry.

If echoed across other departments and hospitals, such figures are indefensible. Healthcare faces an increasingly challenging business model, and for clinical expertise to be wasted on data entry is not sustainable. It seems it is HCPs who are in greatest need of interoperability, and their cause would certainly be aided if the pharmaceutical industry, with all its might and the (comparative) ease with which it can implement new technology, were supporting the same standards.

Encouragingly, HL7 V3 is indeed already in use in pharma. It supports multiple regulatory processes in the US, such as SPL and drug establishment listings, and underpins the individual case safety report (ICSR) used around the world. The standard will now, via IDMP, extend into processes as diverse as clinical trial authorizations, marketing applications, GMP inspections, serialization and, of course, electronic prescribing. Thus, by implementing IDMP and serialization, the pharmaceutical industry is facilitating interoperability projects aimed at driving efficiency and freeing up capital within its own client base.

A web of health data

But why stop there? Everyone recognizes the growing importance of ‘real world evidence’ – data collected from patients outside structured clinical trials – and that information from HCPs, pharma, payers and patients must, therefore, coalesce. Can our efforts to further interoperability encompass all four of these stakeholders and not just the first two?

In answer to this question, EHRs will, of course, be utilized by insurance companies – this is already happening. As for patients, the healthcare apps market continues to bloom, with more than 150,000 available at last count (IMS Health). Stitching everything together will not be an easy task, but HL7 has been working on FHIR (Fast Healthcare Interoperability Resources), a framework that allows health IT developers to readily build products that can understand one another. While the normative edition of FHIR will not be formally released until early next year, manufacturers have already begun incorporating it into their wares, meaning the market may feel its impact sooner than expected.
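As a minimal sketch of what such mutual understanding might look like in practice, the snippet below searches a FHIR server for patient records over its standard REST interface; the base URL is a placeholder, and a production server would of course sit behind authentication and consent controls:

```python
import requests  # third-party HTTP library

# Placeholder endpoint; any FHIR server exposes the same RESTful pattern.
FHIR_BASE = "https://example.org/fhir"

# Search for Patient resources by family name; the server replies with a
# Bundle resource that wraps the matching records.
response = requests.get(
    f"{FHIR_BASE}/Patient",
    params={"family": "Smith"},
    headers={"Accept": "application/fhir+json"},
)
response.raise_for_status()
bundle = response.json()

# Each entry in the Bundle contains one Patient resource.
for entry in bundle.get("entry", []):
    patient = entry["resource"]
    name = patient.get("name", [{}])[0]
    print(name.get("family"), " ".join(name.get("given", [])), patient.get("birthDate"))
```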

What major obstacles or threats ought we to consider before we enter a new era of digitized, interconnected health? Well, while the ecosystem must be open enough for us to realize the expected benefits in efficiency, costs, treatment outcomes and research, the availability and use of patient-level data must be closely monitored. This is not merely a question of personal privacy but an issue with the potential to affect the very viability of our healthcare model. Health data is increasingly detailed, increasingly available and of increasing predictive value, and that translates into an increasing potential to undermine the health insurance market. A 2016 study by Blenner et al., for example, found that, of 211 diabetes apps studied, 81% had no privacy policy whatsoever.

It seems unlikely that the market could reach an acceptable and sustainable equilibrium on its own, but a vigilant government could legislate to overcome such problems, just as GINA in the US and similar (less focused) European laws have almost totally outlawed the use of genetic information in the calculation of health-cover premiums.

Whatever the solution to the insurance question turns out to be, we in the pharmaceutical and healthcare industries should not delay our interoperability objectives awaiting its creation. The interests of insurers, patients, HCPs, and pharma have always been entwined; it now seems that those interests rely on their systems coming together also.


About the Author:

Tom MacFarlane is a drug development professional who is as passionate about innovation in information science and regulatory science as he is about medical science. He has held positions in both industry and CROs, having previously worked as a senior consultant within Parexel's Integrated Product Development practice, and now leads Regulatory Affairs on behalf of Accenture Life Sciences.


