Introduction

Health information exchange (HIE) encompasses a variety of technological approaches to improve provider access to patient information collected and maintained by other organizations [1]. By facilitating access to timely and comprehensive patient information, HIE is an intervention intended to address the threats to quality, safety, and efficiency posed by inaccessible or missing information at the point of care [2]. Due in part to substantial public and private funding [3, 4] and encouraging public policies [5, 6], HIE activity is growing [7, 8], and is an increasingly important component of the business of health care [9].

Research to date has largely relied upon secondary data sources to examine adoption and utilization [10, 11, 12], or has examined single HIEs [13, 14, 15, 16]. More recently, research has begun to compare multiple HIEs within or across states to examine processes and outcomes [17, 18]. However, recent systematic reviews of the literature have been fairly critical, concluding that the existing evidence base for HIE as an effective intervention to change utilization, cost, and quality is insufficient [13, 21, 23], and that HIE is falling short of expectations regarding the ability of data stored in HIEs to support population-level analyses.

The less than desirable state of the evidence is a disappointment, because a fertile environment for informative HIE research and evaluation would appear to exist. For example, the Office of the National Coordinator’s $540 million State HIE Cooperative Agreement Program to create the technical infrastructure necessary for HIE in every state included evaluation requirements [4]. Localities and states have also specifically funded HIE evaluation and research [22]. Furthermore, at last count, the Agency for Healthcare Research and Quality (AHRQ) reported funding more than 100 HIE research and development projects [23]. Finally, the number of organizations facilitating HIE proliferated during the 2000s, increasing the opportunities for research, and many of these efforts had connections to academic research institutions.

Why then, despite these apparent opportunities, does the number of empirical studies on the impact of HIE remain relatively small when compared to the financial investments in HIE [19, 20]? Why are qualitative, survey-based, and descriptive studies much more common [19]? Although valuable, such studies do not necessarily advance the science of whether HIE can be a strategy to support better health. The answers to such questions lie in the kind of day-to-day, on-the-ground information about the process of research and evaluation that often goes unreported in the literature. Insights into successful, and less than successful, HIE research and evaluation studies would offer immense practical and theoretical guidance to researchers, funders, and those working to facilitate HIE.

This study aims to explore the characteristics of the HIE research and evaluation environment. More specifically, qualitative interviews were conducted with HIE researchers, evaluators, and organizational leaders to provide context and insights into the activities, situations, and experiences of HIE research and evaluation. Findings from this study can be used to identify strategies and approaches to strengthen future work in this area.

Methods

Through qualitative data collection, we obtained the perspectives of individuals who lead HIE research and evaluation projects. We also obtained perspectives from leaders of the HIE facilitating organizations that serve as the sites or subjects of research and evaluation projects.

Sampling and Recruitment

The total sample included 23 key informants (19 researchers or evaluators and four leaders of HIE efforts). Through a combination of individual interviews and a focus group, the key informants represented academic, public sector, and private sector organizations. We used a convenience sampling approach aimed at achieving saturation, and identified potential participants based on contributions to the HIE literature and professional association memberships. Consideration was given to ensure that the sample included participants with experience from geographically diverse areas of the United States, local and national HIE evaluations, research with community and Enterprise HIEs, and HIE efforts in various stages of development. For example, several of the evaluators had worked with HIE efforts in place for multiple years prior to the HITECH Act, while others’ work began with efforts that had started more recently. The remaining key informants (n=4) were leaders of HIE efforts. These practitioners were identified with the assistance of the Healthcare Information & Management Systems Society. Many of the evaluators and practitioners had worked with more than one HIE effort or project, so interviews reflected experiences with over 25 different HIE evaluations. Given these considerations, a list of 28 potential participants was compiled, and these individuals were invited to participate in the study via email.

Data Generation

Interviews were conducted between October 2014 and January 2015. Interviews were semi-structured (i.e., based on a set of general discussion questions with flexibility to let topics and discussion evolve naturally) and encouraged informants to recount their experiences with HIE evaluations (see Appendix A). Prompts were employed to redirect participants to the general discussion topics if necessary. All interviews were conducted via telephone with a minimum of two members of the research team on each call. All interviews were recorded with permission and were transcribed prior to qualitative analysis. Interviews averaged 39 minutes.

In addition to the individual interviews, a focus group that included six participants was held with the assistance of the Evaluation Working Group of the American Medical Informatics Association. Two focus group participants had been previously interviewed individually, and their participation served as a member check, validating and/or expanding the preliminary findings. A variation of the semi-structured interview guide was used to facilitate the focus group discussion.

Analysis

Three authors (author initials will be added following blinded peer review) jointly read a subset (one third) of transcripts and employed inductive coding to identify tentative themes reflecting the barriers, challenges, and enablers associated with HIE evaluations. Potential themes were then discussed in an iterative manner among the coders to reach consensus on code definitions. Through this process, a coding dictionary comprising seven themes emerged from the data. The dictionary was then applied to the remaining transcripts, which were divided among the coders. As necessary, discussions were held to resolve any discrepancies in coding. Interrater reliability ranged from 0.47 to 0.58 across coders, within the 0.40–0.75 range generally characterized as fair to good agreement [25]. Data were managed and coded in NVivo 10 [25]. Ethical approval was granted by the institutions of the two lead researchers on this study (university names to be added following review). All participants provided consent to be recorded and to participate in this study.
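For context, the 0.40–0.75 benchmark for fair to good agreement is conventionally associated with the kappa statistic; assuming kappa was the agreement measure used here, it is defined as

\[ \kappa = \frac{p_o - p_e}{1 - p_e} \]

where \(p_o\) is the observed proportion of agreement between coders and \(p_e\) is the proportion of agreement expected by chance.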

Results

Our analysis reached thematic saturation, and we report the themes and challenges common across HIE evaluations. These were grouped into seven themes: HIE maturity, data availability, goal alignment, cooperation, data quality, methodology, and health policy.

HIE Maturity

Several interviewees noted that work with mature HIE efforts (that is, where the technology was fully developed, system usage was widespread, and information systems were populated with data) facilitated robust outcome studies. Not surprisingly, immaturity along any of these dimensions placed evaluations at risk for failure. As one statewide HIE evaluator observed, “You don’t have much to evaluate if the implementation doesn’t proceed very deeply.” Other respondents described maturity problems as “the rate limiting step,” or noted that HIEs “didn’t get that far” or were “too underdeveloped” to evaluate.

HIE maturity recurred as a limitation to research and evaluation across the majority of participants. However, the most common maturity problem was an insufficient level of HIE usage. As described by one participant, “The big obstacle we had with the evaluation is that they build the technology and they started to employ it out, but they haven’t gotten people to actually use it yet.” Likewise, a federally funded evaluation of the impact of HIE on quality, safety, and efficiency “wound up being able to basically do none of that work…When it did finally get implemented, usage was a major issue.”

Data Availability

HIE research and evaluation studies required diverse types and sources of data: clinical indicators, patient-level demographics, claims, characteristics of users, system usage statistics, descriptions of participating health organizations, and more. These types of data were necessary for measuring both the intervention (e.g., HIE activity) and the outcomes (e.g., costs, readmissions, quality of care). Somewhat ironically, evaluations of technologies intended to improve data aggregation and sharing among different organizations faced significant data availability challenges.

Sometimes the data simply did not exist. An HIE leader stated, “what you think would exist does not exist. In many cases [vendors] can’t produce the data you would want.” When no quantitative data were available, researchers often moved to qualitative approaches or descriptive studies as alternatives. Other times, data were not in a usable format or were not even understandable. One evaluator reported how “most of the data we got was high-level and it wasn’t really clear, even then, what the metrics we got were.” Evaluators felt that for a rigorous evaluation with strong inferences about causality, “granular” data are necessary.

Even more challenging, researchers recounted how they knew the data the evaluation required existed, but that obtaining the data was difficult or prohibited. For example, data are often stored and managed by a technology vendor. As a result, the HIE organization being studied or evaluated does not always maintain direct access to the data. Data ownership is organized this way for various reasons, including the amount of insurance and security needed to protect individual patient health information. However, HIE vendors that own the data have no direct obligation to researchers contracted by an HIE organization to evaluate the HIE. One researcher with experience across multiple evaluations explained: “Rarely does the [HIE] vendor back off and not want to charge us to do something [like query the data]. I wish vendors would say it’s in the patient’s, it’s in the population’s best interest to create these interfaces and share data freely so that we can improve health, but people are often concerned primarily with their bottom line.”

It was frequently noted that good working relationships (e.g., trusted or established relationships) and formal partnerships with HIE organizations, for example being at an academic medical center that participated in the HIE effort, helped address data accessibility issues. However, the lack of a formal tie, or of funding to pay for data access, prevented some researchers from working with HIEs and kept some investigations from occurring at all.

Goal Alignment

Several researchers echoed the idea that the challenges of HIE were less technological than the result of the various objectives and needs of different stakeholders. HIE evaluation projects all involved the HIE organization, the HIE vendor, policymakers, and funders to differing extents. Each of these stakeholders had its own priorities and objectives, but was also accountable to its own internal stakeholders (e.g., organizational boards, organization or agency leaders, Congress, taxpayers). With such varied stakeholders, the differing goals, needs, and accountabilities eventually came into conflict with research. More than one informant recounted how their goals as researchers did not align with the goals of the HIE organization: “[R]esearch seems to be a low priority for the HIEs at this point. The HIEs are so busy trying to get the operations going, trying to get a product that’s going to be useful, trying to improve usability, all of this sucks all of the air out of the room…because operational work is so challenging I think [research] ends up falling by the wayside…”

Cooperation

Cooperation, the willingness of all stakeholders (i.e., HIE organizations, evaluators/research teams, third-party vendors) to collaborate, was another socio-political determinant of successful research projects. Researchers primarily needed cooperation for access to data and to people. Researchers also needed HIE organizations to work with them to establish the necessary legal agreements and protections to access patient information, but they indicated that they had no leverage and often struggled to get vendors to share data.

Conversely, successful research often hinged on leveraging existing relationships to secure cooperation in evaluation efforts. A researcher with multiple publications on quantitative evaluations noted that previously established relationships were a major facilitator. He or she said, “trust is the number one issue…getting access to the information and to the people didn’t get thwarted by concerns about what [we] were going to do with their data.” Similarly, another evaluator noted, “Our group has worked with a lot of these HIEs extensively for a variety of evaluations, so it wasn’t hard to know the right person to talk to.” He or she went on to note that government agencies and funders can help foster cooperation: “[T]here was [a state official] who was requesting the evaluation. There were several phone calls where they were essentially mediated by this state official who was well respected and who everybody knew…who was at the table was largely successful because this particular official was there.”

Data Quality

Researchers indicated that when they could get data, they had concerns about their accuracy, completeness, and validity. For example, one evaluator said, “Data quality issues persist in the exchange I am working with…and I think it’s the biggest barrier, the biggest threat to the success of these projects. Lack of comprehensiveness, lack of concordance, just major data quality issues where what the exchange thinks it has is far different from what it actually has when you start drilling into it…” Another recounted how in a less than successful project, “we couldn’t always tell exactly what all the elements even meant. Generally, [the] data were almost entirely meaningless.” Similarly, a different participant stated, “You can build the most robust health information exchange and have complete access to all the data and put it into the most scalable and most adaptable and flexible analytics tool imaginable, but if the data [being collected in the HIE] is crap you’re not going to be able to find anything in your evaluation.”

Methodology

Several interviewees voiced major concerns about the methodological approaches and theoretical underpinnings needed to model and measure the complexities involved in HIE research. As one interviewee put it, “The causal chain is often convoluted and distal. Linking HIE to changes in outcome is difficult to do.” In particular, identifying valid and reliable data concerning HIEs seems to be a key issue plaguing this area of research. For example, one interviewee noted that “surveys can’t ask complex, technical questions because users just don’t understand the technology.” Adding to this point, a different interviewee highlighted the lack of reliability of key concepts: “Definitions of HIE are so different that it is hard to do comparisons.” Another evaluator added to this concern over reliable measures: “What does it mean to do a ‘look-up’? Should we be looking at every single data element that was brought up to the screen level or should we say that if you clicked on the screen and stayed on the screen for more than ten seconds, you saw everything?”

In addition to expressing concerns about the reliability of concepts and measures, an HIE leader questioned the validity of measures given the wide variety of organizational contexts: “There are all of these metrics out there trying to measure the size and capacity of the health information exchanges [number of HIE registered users, number of HIE logons in a defined period]. I always struggle with those because those questions seem on the outside like they should have clear answers but they don’t and everybody answers them differently. So when you really start to evaluate the size or the efficacy of these exchanges, everybody uses a wide variety of different measures.”

Health Policy

Two health policies directly influenced HIE research and evaluation activities: the State HIE Cooperative Agreement Program and the HITECH Act. While evaluation was a requirement for state HIEs that received federal funding from the Office of the National Coordinator for Health Information Technology (ONC), the actual guidance for evaluators was perceived as minimal and variable. For example: “I think [ONC] wanted, in some sense, to let a thousand flowers bloom. They did want to know what the impacts were on utilization and outcomes and some knowledge about processes, but we did not get a specific list….” In addition, “The major change in terms of the methodology in the cooperative agreement grant was after the first year, ONC shifted from supporting health information exchange the noun to encouraging health information exchange the verb. That really meant that our methodology had to change from just talking about [HIE] use.”

While not directly an impetus for evaluations or a source of research funding, the pervasiveness of Meaningful Use influenced even HIE research, as it dictated organizational priorities. As one HIE leader supportive of evaluation noted, “I think that meaningful use has hampered our ability to do evaluation and that is because all of our hospital partners have been so busy with meaningful use, trying to comply with meaningful use, or focused on meaningful use compliance. In my opinion, some of our innovative research ideas have been grounded to a halt because we’ve had to shift gears to focus on meaningful use.”

Discussion

Across a myriad of approaches to HIE, those conducting HIE research and evaluation in the United States face similar challenges. Overall, these findings suggest that the ingredients necessary for successful and informative research on the ability of HIE efforts to improve efficiency and quality are frequently absent. With uncertain data quality, both research and day-to-day operations risk being mired in a “garbage in, garbage out” morass (i.e., if the data entering the HIE are meaningless or not valuable, what comes out is also meaningless or not valuable). Variations in methodology are appropriate for identifying different phenomena, features, and situations. However, our current methods, particularly in the area of measurement, are too underdeveloped to support generalizability and translation of findings [21, 26]. Even more critically, immature organizations with little usage have effectively nothing to evaluate, and without available data, research cannot take place [27, 28]. Even if this were not the case, organizational factors, such as misaligned goals and lack of cooperation, as well as policy factors, can hinder research and evaluation. If these are the challenges inhibiting the HIE evidence base, what options exist? In the current political and economic environment, practical options are limited. New sources of funding specifically dedicated to HIE are not likely, and federal policies are specific to data sharing for clinical or public health reporting purposes, not to supporting research.

First, in relation to the findings about methodology, the research and evaluation community can take concrete steps to address the challenges described around causal frameworks, constructs, and measures. One approach would be for the HIE community to follow the example of others by convening panels to support research agenda development and conceptual thinking for secondary data [29, 30]. Additionally, HIE researchers and evaluators could make better use of existing metrics. Recently, the ONC commissioned a report on measurement issues that includes numerous examples of metrics in use by various organizations and evaluators. While metric lists from organizations like the ONC [31] may not be the last word on measurement, such a compendium provides a starting point grounded in approaches already in use. There are already calls for greater attentiveness to designs, levels of measurement, and conceptual frameworks for outcome evaluations [21, 26], but our findings suggest the research and evaluation community should also attend to the glaring need for greater conceptual clarity about HIE in general. Our interviewees’ experiences differed on multiple dimensions: state and local; community and enterprise; large and small; with and without academic participation; and with and without significant public funding. Inherent in these differences is variability in research capacity. For example, Enterprise HIEs predominantly leveraging DIRECT Secure Messaging do not enable population health analytics, yet community HIEs that store data in a central repository can support research efforts [32, 33]. Nevertheless, despite this variation, the experiences of our sample of evaluators were similar. As a research community, we have neither a good handle on this variation nor a clear method of categorizing this information. The need for better categorization of HIE efforts is only growing as Enterprise HIEs and vendor-based solutions (e.g., Epic’s Care Everywhere and the CommonWell Health Alliance) become more common.

Second, integrating academic researchers and evaluators into HIE organizations’ regular operations and planning activities through formal partnerships could mitigate the cooperation challenges and goal alignment issues noted in the findings. Such arrangements may not directly resolve the problem of misaligned goals among all players, but researchers may at least be able to navigate these differences better by being attuned to the nuances and politics of the organization. Tighter collaboration works: multiple early HIE efforts were closely aligned with research institutions, and such partnerships have also proven productive in more recent efforts [34, 35]. Furthermore, the benefits of such partnerships would not be one-sided. Researchers would become better versed in organizational objectives and challenges, enabling them to conceive applied research questions that meet multiple stakeholder goals. Academic expertise could also help bolster the informatics and analytic services HIEs need to support population health activities.

Third, in regard to the findings about data availability, we propose the idea of a nationwide research database composed of HIE information, similar to AHRQ’s Healthcare Cost and Utilization Project (HCUP) State Inpatient Databases. HCUP is a widely successful and important data source for influential health services and policy research [36]. This recommendation is obviously a long-term goal, especially given the federated nature of HIEs today, but sufficient parallels exist for HCUP to serve as a model. For example, HCUP did not begin with all 50 states, but participation in the State Inpatient Databases has grown and evolved over time. Currently, many HIE organizations are still too immature to be reasonably expected to participate in such an effort. However, with the increasing demand to support population health, many HIE organizations are developing analytic and reporting databases. Moreover, such a project could leverage the nation’s growing cadre of robust exchanges and provide guidance to less developed HIEs. An aggregated HIE database consisting of standardized and harmonized data elements, such as we propose here, also portends management-related challenges. HIE organizations do not follow the clearly delineated geographical organization of the HCUP State Inpatient Databases (or even the National Inpatient Sample), raising the question of the level at which data should be organized and managed. Adding to this issue, the State Inpatient Databases are not free; they come at a cost to both AHRQ and researchers. Questions also remain about who would host the proposed database. Fortunately, forums exist [e.g., the Strategic Health Information Exchange Collaborative (SHIEC)] in which to take up these key questions and convene potential partners in the process of developing such a database.

Lastly, current funding practices are not aligned with current health policy to produce optimal science. Given the strategic importance placed on HIE in improving the health of the nation [6], comparatively few sources of federal support are available to researchers and evaluators to independently assess whether that strategy is working, or how it can be improved. The apparently supportive environment does not stand up to close scrutiny. Surveying the federal agencies with the most interest in HIE and health information technology reveals limited resources for quality research: AHRQ faces political jeopardy with alarming frequency, the National Library of Medicine is among the smallest of the National Institutes of Health, and the ONC’s grants are predominantly geared towards implementation, workforce training, or standards development (not outcomes research). Even within AHRQ, the HIT-specific portfolio of funding opportunities is limited to a single mechanism (R21). Such constraints are not easily overcome. However, some opportunities for better alignment exist. For example, the longitudinal, population-level electronic patient records included in the Patient-Centered Outcomes Research Institute’s (PCORI) Clinical Data Research Networks (CDRNs) and Patient-Powered Research Networks (PPRNs) are, in effect, specialized HIE efforts. PCORI could explicitly encourage participation from HIE organizations (indeed, some HIE organizations already participate in CDRNs). Such an act would increase the types and nature of data available to CDRNs/PPRNs and include HIE organizations in the activities of a better-funded organization dedicated to outcomes research.

Additionally, any future state or federal funding to technology vendors or HIE organizations needs to emphasize public accountability. HIE repositories are populated with patient information captured by federally subsidized EHRs, and many of the dollars paid by HIE organizations to vendors came from public funding mechanisms. As a condition of public funding for health information technology, vendor contracts could be required to include provisions that data be made accessible in a timely, useful, and complete manner. Asserting such public accountability over all those supported by public funding is critically important as the locus of HIE activity shifts from public, non-profit organizations to private, enterprise HIE efforts [30]. As has long been the case, there are concerns about privacy and regulatory issues, as well as competition within health care markets; these issues have been discussed previously in the EHR literature and remain unresolved [37, 38].

The policy environment influencing EHR adoption and the HIE landscape continues to evolve: the Centers for Medicare and Medicaid Services recently announced that the Meaningful Use program for eligible providers will be subsumed into the Merit-Based Incentive Payment System in order to align quality improvement and alternative payment models with EHR adoption, use, and impact goals [39]. This policy may help to enable exchange across a wider range of provider types, including those previously ineligible for Meaningful Use incentives [40].

Limitations

We conducted a qualitative study of researchers involved in HIE evaluation and research. Although we sought regional and HIE diversity among participants, our sample did not include all researchers involved in HIE work. As such, findings from this study do not reflect the experiences of all HIE researchers and evaluators.

Conclusion

HIE research and evaluation involves considerable time, effort, and coordination among various stakeholders. Moving forward, absent better standardization, funding, and partnerships, HIE research will likely continue to focus on single HIE organizations and unrefined metrics, leaving the field of health services research to offer underwhelming comment and analysis on the impact of HIE. Alternatively, the evolution of approaches to HIE may present opportunities, given that some models (e.g., Community HIEs) are more inclined towards research activities. On the one hand, this could produce robust evaluations of the impact of HIE. On the other hand, evaluation focused on one approach to HIE forgoes comparison with alternative approaches (e.g., Enterprise HIE).

Our intent with this study was to identify the challenges faced by researchers in order to help HIE organizations better design their programs to facilitate evaluation activities. The topics highlighted in this investigation apply to all approaches to HIE, and addressing them could benefit health care research as a whole. Policymakers can clarify the privacy concerns that inhibit data access, set timelines sufficient to observe effects, and allocate funding sufficient to offset data collection costs. Finally, this study highlights important areas requiring further research, such as developing methodologies to properly model the effect of HIE activity on patient outcomes.