At risk of stating the obvious, I’ll point out an accepted truth in healthcare: sharing health data is hard. It is highly personal, highly confidential, highly protected, and yet this data is mission-critical to healthcare providers. Because of the added security required around healthcare data, data exchange moves at a pace that can seem painfully archaic and nearly impossible to overcome.
As we are all patients ourselves, we understand how our data is spread across many different systems: an electronic health record (EHR) at one physician, a different EHR at a specialist, and a pharmacy record at CVS. All of these systems use slightly different standards for entering and interpreting our data, from how they store our street address to how they format clinical notes. Without common standards or a common data framework across these many different systems, getting them to agree on the meaning of a string of text is a seemingly impossible feat.
Integration engines have stepped into this ecosystem as a middle layer that translates among the many different standards and formats to arrive at a common understanding of the data. That integration engine is likely managed by an IT team that is also juggling many large-scale, complex implementations across the health system. Because a perfectly tuned integration engine requires many resources and a provider organization typically has few, aspects of the engine that work “just fine” are unlikely to be prioritized for follow-up.
One of the biggest culprits left as “good enough” is the patient matching component of a provider organization’s data infrastructure. Patient matching – the process of ensuring that data is correctly linked to the right patient – is a major challenge when data spans many sources. A single misspelled patient name in a hospital, lab, pharmacy, physician, or public health data system can cause the integration engine to treat that patient as a new person, creating a duplicate patient medical record.
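To see why one typo is enough, here is a minimal, hypothetical sketch (the names and fields below are invented for illustration): a naive exact-match comparison treats a misspelled name as a brand-new patient, while a fuzzy comparison combined with another demographic field catches the likely match.

```python
# Hypothetical sketch: why a single typo defeats exact-match patient linking.
from difflib import SequenceMatcher

hospital_record = {"name": "John R. Smith", "dob": "1980-04-12"}
pharmacy_record = {"name": "Jon R. Smith", "dob": "1980-04-12"}  # name misspelled

# Naive exact matching sees two different patients -> a duplicate record is created.
exact_match = hospital_record["name"] == pharmacy_record["name"]

# A fuzzy name comparison, cross-checked against date of birth, flags the likely match.
name_similarity = SequenceMatcher(
    None, hospital_record["name"].lower(), pharmacy_record["name"].lower()
).ratio()
likely_same = name_similarity > 0.9 and hospital_record["dob"] == pharmacy_record["dob"]

print(exact_match)   # False
print(likely_same)   # True
```

Real matching engines weigh many more fields (address, phone, identifiers) and tune thresholds carefully; this sketch only shows the core failure mode of exact matching.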
When duplicate patient medical records are introduced, there are numerous downstream effects: a patient’s payment may not be posted to the correct record, causing that patient to be double-billed, or a critical piece of a patient’s history could be missing from the duplicate record, causing missteps in diagnosis and treatment plans. A potentially worse problem arises if the integration engine decides that two John R. Smiths are actually the same person and overlays clinical information from one patient onto the other’s record – this can cause extensive privacy, legal, and clinical issues.
Now that the risks of failed patient matching are clear, how do we solve the problem? The best approach is to implement an enterprise master patient index (EMPI) that sits alongside the integration engine to ensure that patient matching decisions are correct, eliminating duplicate patient medical records and false patient overlays.
An EMPI addressing the vast scale of patient matching decisions posed to an integration engine works best when it has itself faced patient matching at massive scale. A newer approach to patient matching, called Referential Matching, is well suited to these challenges.
Referential Matching combines big data and cloud technologies with sophisticated algorithms to create an “answer key” for patient matching challenges across the United States. It can identify the differences between those two John R. Smith records and keep them separate as two distinct records.
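The core idea can be illustrated with a small, hypothetical sketch (the reference identities, IDs, and fields below are invented for illustration; real reference data sets contain far richer demographic history): instead of comparing two source records only to each other, each record is resolved against a reference set of known identities, and records link only when they resolve to the same reference identity.

```python
# Hypothetical sketch of the referential-matching idea: resolve each incoming
# record against a reference "answer key" of known identities.
reference_identities = [
    {"ref_id": "R-001", "name": "John R. Smith", "dob": "1980-04-12", "zip": "02139"},
    {"ref_id": "R-002", "name": "John R. Smith", "dob": "1975-09-30", "zip": "60611"},
]

def match_to_reference(record):
    """Return the ref_id of the reference identity this record resolves to."""
    for identity in reference_identities:
        if (identity["name"].lower() == record["name"].lower()
                and identity["dob"] == record["dob"]):
            return identity["ref_id"]
    return None  # unknown patient; a real system would enroll a new identity

incoming_a = {"name": "John R. Smith", "dob": "1980-04-12"}
incoming_b = {"name": "John R. Smith", "dob": "1975-09-30"}

# Same name, but the two records resolve to different reference identities,
# so they stay separate instead of being wrongly merged into one record.
print(match_to_reference(incoming_a))  # R-001
print(match_to_reference(incoming_b))  # R-002
```

Because the linkage runs through the reference identity rather than a pairwise comparison, two same-named patients are kept apart, and a misspelled record can still find its true identity via the other demographic fields.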
To learn more about this groundbreaking approach to patient matching, read the landmark report from The Pew Charitable Trusts that recommends organizations adopt Referential Matching.