November 22, 2011
This past week, I was jolted out of my chair by news that a Pfizer-led group plans to buy access to patient data in hospitals. My initial reaction was anger, on a variety of levels: as a researcher, as one who is increasingly wary of the reach of huge corporations, and as an individual.
Actually, it is not just Pfizer doing this. A new consortium called the Partnership to Advance Clinical Electronic Research (PACeR) includes Merck, Roche, Johnson & Johnson, Bayer, Hoffmann-La Roche, Quintiles, and Oracle. Their pitch sounds very reasonable, with the noble goal of speeding clinical research and bringing new medicines to market. The Business Week article describing this initiative aptly captured the business advantages: delays in drug development are estimated to cost $1 million per day. More rapid enrollment and clinical trial completion increase the time a drug remains on patent—read, profitable—for the pharmaceutical sponsor. They also help the sponsor stay competitive with its rivals. And hospitals stand to earn $75 million annually in exchange for patient data. What could possibly go wrong?
There is only one little thing standing between the companies and patient data—concerns about the Health Insurance Portability and Accountability Act of 1996, or HIPAA, which includes onerous privacy protections for patients. I suppose that HIPAA has value, at least in its good intent to protect patient privacy, and its boost to job security for medical records clerks, accountants, attorneys, and the cottage industry of trying to explain the rules. Otherwise, I have yet to see the value and, as a physician and clinical researcher, I have had only negative experiences with it.
Background: HIPAA and HITECH
HIPAA, for the uninitiated, prevents disclosure of Protected Health Information (PHI), which is defined as “information that can be linked to a particular person (i.e., is person-identifiable) that arises in the course of providing a health care service.” “Individually identifiable health information” is information, including demographic data, that relates to:
- the individual’s past, present, or future physical or mental health or condition,
- the provision of health care to the individual, or
- the past, present, or future payment for the provision of health care to the individual,
and that identifies the individual or for which there is a reasonable basis to believe it can be used to identify the individual. Individually identifiable health information includes many common identifiers (e.g., name, address, phone number, birth date, Social Security Number, medical record number). If you have health insurance, you immediately waive all of these privacy protections in order to file any claim. Ironically, it seems health insurers are the most likely to abuse personal health information by asking intrusive questions and denying claims or care.
The Health Information Technology for Economic and Clinical Health (HITECH) Act is part of the American Recovery and Reinvestment Act (ARRA, also known as the “stimulus package”), which provided $10 billion for “scientific research and facilities” through September 2010. One of the specified intents of the HITECH Act was to facilitate health outcomes and clinical research. Healthcare providers are being pushed into using electronic medical records. Medicare reimbursements to providers will increase significantly if there is “meaningful use” of the electronic medical records (EMRs), defined as data used for health purposes (e.g., public health, quality reporting, or research), and decrease if there is not.
It seemed like a good idea…
Electronic medical records do have advantages for research, particularly for timely recognition of adverse events that might otherwise remain undetected in postmarketing surveillance. An example is using EMRs to identify patients with genetic mutations that are associated with specific serious adverse events.
As PACeR recognized, EMRs have the particularly promising potential to help identify and recruit study participants. Inclusion and exclusion criteria are becoming increasingly restrictive, resulting in expected accrual rates of less than one patient per month on many trials for even common illnesses. However, lab data can be successfully and efficiently used to screen large numbers of prospective patients. For example, University of South Carolina researchers screened 7,296,708 lab results from 69,288 patients, identifying 70 potential candidates who met automated criteria, 3 of whom ultimately participated in the trial. Since current research regulations preclude a third party from alerting an investigator about a potential study volunteer without that patient’s advance consent, however, the researchers developed a compliant but convoluted work-around with the IRB—similar to the process PACeR is trialing now. At USC, if the lab identified a potential subject, the ordering physician was notified of the patient’s potential eligibility. Then the ordering physician had to decide whether to make the effort to contact the patient to obtain permission to contact the clinical trial staff and then to follow through.
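The screening funnel the USC team describes can be sketched in a few lines. Everything below (field names, the lab criterion, the physician routing) is a hypothetical illustration of the general approach, not their actual system; the key property is that candidate identities are routed only to the ordering physician, never to the study team, preserving the no-advance-consent rule.

```python
# Hypothetical sketch of automated lab-based trial screening: filter lab
# results against a protocol criterion, collapse to unique patients, and
# route each alert to the patient's own ordering physician rather than to
# the investigators. All names and thresholds here are illustrative.

def meets_criteria(result):
    """Hypothetical automated inclusion criterion (e.g., a creatinine cutoff)."""
    return result["test"] == "serum_creatinine" and result["value"] >= 2.0

def screen(lab_results):
    """Return {patient_id: ordering_physician} for qualifying patients,
    so only the treating physician learns the patient's identity."""
    candidates = {}
    for r in lab_results:
        if meets_criteria(r):
            candidates[r["patient_id"]] = r["ordering_physician"]
    return candidates

results = [
    {"patient_id": "P1", "ordering_physician": "Dr. A",
     "test": "serum_creatinine", "value": 2.4},
    {"patient_id": "P2", "ordering_physician": "Dr. B",
     "test": "serum_creatinine", "value": 0.9},
    {"patient_id": "P1", "ordering_physician": "Dr. A",
     "test": "wbc", "value": 14.2},
]
print(screen(results))  # only P1 qualifies; the alert goes to Dr. A
```

The steep attrition in the USC numbers (69,288 patients screened, 70 automated matches, 3 enrollees) happens downstream of a filter like this, in the manual physician-contact steps.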
Screening health information is also particularly promising at sites that conduct multiple trials because it can alert investigators to multiple opportunities and guide patients to the most appropriate study. One solution to the various obstacles is to incorporate alerts about possible clinical trials into the EMR used at the time of a patient’s encounter with a physician. While still cumbersome, this method has the advantage of reminding physicians about trials while minimizing the additional work for them. It also overcomes HIPAA concerns because the physicians communicate directly with their patients.
But then reality sets in…EMRs
Electronic medical records may be a boon for hospital reimbursement and administrators, but they appear to be a nightmare for physicians and patients. In my experience:
However, EMRs also pose unique problems for research. Privacy issues have received the greatest attention. These affect researchers’ ability to review records, recruit patients, and monitor study participants. Confusion also results from the different consent requirements of different groups and because the standard consent clause that allows the sponsor’s representatives to review the records does not meet the HIPAA rule’s requirements.
EMRs also pose problems for research monitors, both because monitors have limited access to electronically stored data and because of difficulty verifying that the data have not been altered. The electronic date- and time-stamped audit trails are important here. While log-on names and passwords are not supposed to be shared, this probably happens routinely during monitoring visits, since there is no other practical way to get timely read-only access to records for auditing.
It gets worse with HIPAA and research…the Unintended Consequences
HIPAA requirements are extremely difficult to understand and subject to misinterpretation, and mistakes carry the chilling specter of disproportionately high penalties. Even the feds understood the need to identify potential subjects in order to do research, so they put in a carve-out allowing “waiver of authorization” with the IRB’s approval. (Full details are available in the Code of Federal Regulations, Title 45—Public Welfare.)
HIPAA, the ill-considered privacy rule, has had several unintended consequences (beyond the nuisance factor), the most serious of which is its negative impact on research. While those of us in the trenches immediately and directly felt the burden, a report from the Association of Academic Health Centers (AAHC), The HIPAA Privacy Rule: Lacks Patient Benefit, Impedes Research Growth, affirms our suspicions about its chilling effect on research.
Let me share my own experiences with HIPAA and research. I already knew that HIPAA sharply reduced volunteer referrals from my local hospital. For example, even when the Institutional Review Board (IRB) provided a “carve-out” allowing us to be alerted about potential patients for a sepsis study, many hospital staff members had knee-jerk “we can’t tell you anything” reactions, fearing for their jobs. Some staff seemingly deliberately fomented misunderstandings about HIPAA as a way of derailing a study. Mostly, HIPAA caused rampant confusion that cost us a number of potential patients, which is especially painful given that qualified candidates were as rare as hen’s teeth—as they often were for the studies I was asked to do, with an expected accrual of 1-2 participants per month.
This past summer, I went to India to volunteer at a hospital and to try to help with its self-identified tuberculosis problem. There was considerable debate over whether IRB approval was necessary. My infectious disease colleagues felt it was not, as the work was part of a public health initiative and the “research” was no different from that conducted every day in public health departments. The social science types at the U.S. university I was working with all insisted we obtain IRB approval, a time-consuming and, in some settings, expensive process. (Many IRBs levy an administrative charge of $1,000-$2,000 per study.) The folks in India, meanwhile, couldn’t have cared less, nor did they understand the fuss: there is next to no patient privacy in their crowded facility, and the concept was not culturally relevant. All they wanted was help caring for their patients.
As mentioned above, an AAHC study confirms these subjective findings: the HIPAA rules are unclear and subject to misinterpretation. Many researchers don’t understand that a waiver of authorization can be provided by the IRB. As the AAHC notes in HIPAA Creating Barriers to Research and Discovery, “The fear of regulatory punishment is driving IRB, Privacy Officer and Organizational decision-making in clinical research.” The fear of liability dissuades many other parties from supporting research and distracts everyone from the goal of developing new treatments. In addition, valuable personnel time and money are wasted on unnecessary and excessive new administrative burdens.
Another example of privacy regulation run amok comes from the Office for Human Research Protections (OHRP), which recently extended privacy rules to “research” done as part of infection control and quality improvement activities. In an irrational and counterproductive move, it shut down research at Johns Hopkins University and a network of hospitals throughout Michigan on the use and efficacy of a checklist in reducing life-threatening hospital-acquired infections. The data from each hospital were deidentified before being sent to Hopkins for analysis, yet the OHRP ruled that individual consents were required. See Dr. Atul Gawande’s excellent and scathing review for details.
Studies have demonstrated a dramatic reduction in research recruitment rates since HIPAA was introduced. One University of Pittsburgh study cited by the AAHC showed recruitment was slashed by more than 50 percent after HIPAA. Similarly, a University of Michigan study showed volunteer consents dropped from 96 percent to 34 percent after HIPAA. An American Society of Clinical Oncology paper reports that a “reliance on consent impedes valuable research…sometimes causes physicians and entire hospitals to opt out of research.” Sometimes it seems the only beneficiaries of HIPAA are insurers, from whom we ironically have no privacy. The AAHC report concludes, “Finally, the patient whom HIPAA is designed to protect does not appear to recognize, understand, or care about this complex law as it applies to research.”
There was an interesting review of HIPAA complaints related to clinical research between 2003 and 2007. Of the 32,487 privacy complaints to the Department of Health and Human Services during this period, guess how many were related to clinical research? A whopping 17! Intriguingly, the author also extrapolates that, if obtaining a HIPAA consent takes 5 minutes and a research site’s time is valued at $60/hour, this translates to at least $10 million per year spent just to obtain this cumbersome, and often misunderstood, authorization.
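The arithmetic behind that extrapolation is easy to check. At $60/hour, five minutes of site time costs $5 per authorization; working backward, the $10 million figure implies on the order of 2 million research authorizations per year. That annual volume is my inferred assumption, chosen to reproduce the cited total; the source gives only the per-consent inputs.

```python
# Back-of-the-envelope check of the cited extrapolation: 5 minutes per HIPAA
# authorization at a postulated $60/hour site rate. The 2-million-per-year
# authorization volume is an inferred assumption that reproduces the
# article's "at least $10 million" figure; it is not stated in the source.
minutes_per_consent = 5
site_rate_per_hour = 60.0

cost_per_consent = site_rate_per_hour / 60 * minutes_per_consent  # $5.00
annual_consents_assumed = 2_000_000
annual_cost = cost_per_consent * annual_consents_assumed

print(f"${cost_per_consent:.2f} per authorization")  # $5.00 per authorization
print(f"${annual_cost:,.0f} per year")               # $10,000,000 per year
```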
A report from the prestigious Institute of Medicine, Beyond the HIPAA Privacy Rule: Enhancing Privacy, Improving Health Through Research, further expounds on HIPAA’s unintended interference with research and gives several recommendations, concluding that the Common Rule’s (Federal policy, codified in the Code of Federal Regulations 45 CFR part 46) human research protections be applied to interventional clinical research and that there be new federal oversight of the information-based research.
Problems with PACeR
Access to all this patient data would be extraordinarily helpful to companies to enable them to identify sites for research and even specific patients to target. The initial trial of this model begins this month in 13 hospital systems in New York, working in collaboration with PACeR.
Now, since this identifying data can’t legally be shared with drug companies, PACeR has come up with a nifty work-around, similar to that at the University of South Carolina but on a much larger scale: PACeR will pay hospitals to act as intermediaries. For between $50,000 and $200,000 per query, hospitals will search their medical records databases to identify patients who fit a particular protocol and tell the company who each patient’s doctor is—without the specific PHI. The physician would then contact the patient and obtain consent to release any PHI.
This procedure for contacting patients is cumbersome and time-consuming for the physician in the trenches. In my setting, it would be unworkable for a variety of reasons, including the uncompensated time of the primary physician, the hassle factor, and the narrow time window for enrollment on trials for acute infections. In addition, many physicians are familiar with neither the needs of research nor the benefits to their patients. This would put a large administrative burden on the physician’s practice, both in playing middleman and in maintaining the required logs of PHI releases. Add this to the pressure already on physicians to be “productive” and see patients quickly in a brief encounter. It seems the only ones profiting under the PACeR model are the corporations—certainly there is no direct benefit to the patient. This seems akin to the exploitation of patients like Henrietta Lacks, done without either consent or compensation.
Of broader concern, of course, is mistrust over industry’s access to vast amounts of health data. Although these data would be deidentified, there have been too many news reports of security breaches exposing large amounts of private health information on the internet.
Another source of my mistrust is the June Supreme Court decision in Sorrell v. IMS Health, in which the Supremes, “by a 6-3 vote struck down a Vermont law that barred pharmacies, drug makers and others from buying or selling prescription records from patients for marketing purposes. Vermont’s physicians had sought passage of the law, arguing that their prescriptions were intended for private use of patients and should not become a marketing tool.” So much for patient privacy.
One of my other concerns is that PACeR appears to tilt the playing field towards a few giant pharmaceutical companies. As an individual researcher, I am frustrated that, because of HIPAA, I can no longer access the data I need to recruit patients in a timely fashion. And, lacking industry’s deep pockets, I have neither the clout nor the funds to buy this access. Nor can I even do chart reviews to describe patient outcomes. Frankly, rather than this type of industry-hospital consortium, I would rather see HIPAA revamped to allow everyone better access to data for research, provided the data are held securely. Living in a small rural community, I’m not sure I would even require IRB approval (cumbersome and costly) for things like record reviews of patients with a particular condition or public health issues. Another alternative would be to offer all patients a release on hospital admission or at an office visit, indicating whether their data would be accessible to researchers, with appropriate privacy safeguards. This would be important, as many potential volunteers are now lost, especially on acute care studies, because of time constraints in the enrollment criteria.
So on the one hand, we have the push from the government and insurers for electronic medical records and health outcomes research (the HITECH Act), the Sentinel Initiative for postmarketing surveillance of electronic medical records for adverse events, and Medicare reimbursements linked to “meaningful use” (i.e., providing data) of the EMR. On the other hand, we have the specter of HIPAA and ever more draconian penalties for breaches of personal privacy. Now we have industry making deals with hospital systems to buy data. While I have misgivings about this approach, there needs to be better access to medical records for research, given appropriate safeguards for privacy and permissions for reuse. We need to find a way to boost the current dismal participation rate in clinical trials—less than 5 percent—if medical research in the U.S. is to succeed.
Clinical researchers in the trenches, reviews of patient complaints, surveys of academicians, and now the imprimatur of the nation’s leading scientists all point to the same conclusion: HIPAA not only fails to protect clinical research subjects but increases research costs and probably reduces participation. We can only hope that reason will prevail and that the HIPAA rules will be eliminated for clinical research.
Cartoons: from Rogue Medic