Greg Hunt's plan to reduce hospital admissions won't work if he can't measure successes and failures
- Written by Joan Henderson, Senior Research Fellow (Honorary), University of Sydney
The controversial issue of hospital funding will be up for discussion again today as state health ministers meet at the COAG Health Council. Earlier this week, The Australian newspaper reported the federal health minister, Greg Hunt, might consider a ten-year funding deal with the states, rather than the normal five-year agreement. But this would depend on states agreeing to some of his proposals to reduce unnecessary spending and improve outcomes.
Read more: Remind me again, what’s the problem with hospital funding?
These proposals are questionable. Hunt’s plan is reportedly to pay GPs for preventing chronically ill patients being hospitalised and to fine hospitals for re-admissions that could have been avoided. Deciding who gets the carrots and who gets the sticks is a brave endeavour. Even with the best evidence, attributing an “avoidable” hospitalisation to care provided by either GPs or hospitals overlooks patient co-operation.
Will GPs be paid for advising patients to reduce drinking, quit smoking and eat more healthily even if the patient ignores them and becomes yet another heart attack admission to hospital? Will the hospital’s legal expenses be paid when a patient who should be re-admitted isn’t because of scolding accountants?
And, most importantly, it’s unclear who will make these determinations and how “better outcomes” can be measured. This is because it is impossible at the moment to measure the outcomes of health care in Australia.
Information all over the place
That’s not to say we don’t have data. Data exist for care provided by hospitals, GPs, specialists and allied health professionals, but they sit in separate patient information systems, clinical registries, or both. To measure outcomes, all this care must be compiled and assessed together, using reliable data.
In the hospital system, each state and territory is responsible for data collection. Clinical coders are employed to work with national minimum data sets and standard classifications. But coders can only work with the information doctors and nurses provide, and the limitations of missing hospital data are well documented. Hospital collections are funded by governments and costs are included in annual budgets.
Read more: Proposed health data report misses many of the marks
When it comes to general practice, there is no mandatory routine data collection. Medicare has information about attendance patterns, visit frequency and GP service items, but no details about the content of these visits – such as what conditions were managed or how each was managed.
Practices operate in silos, keeping their own records about their own patients. As with hospitals, patients may receive care from different practices, creating multiple records for the same individual in multiple facilities.
Given 87% of Australians visited a GP at least once in 2015-16, it’s a fair question why there is no publicly funded, routine data collection. The Bettering the Evaluation and Care of Health (BEACH) program actively collected nationally representative data from GPs for 18 years. But the program lost funding in 2016 and data collection ceased, although the BEACH data are still current and available.
Electronic health records
Collecting data from GPs’ electronic health records seems a practical solution. It’s timely, cost-effective and, with data-extraction tools available, should be reasonably simple. The National Prescribing Service (NPS) is using this method in its MedicineInsight project, to produce data for quality improvement within practices and aggregated data to inform government policy.
However, producing valid, reliable data from these records is anything but simple. Only about 71% of GPs have completely paperless patient records. The rest use a mix of electronic and paper records (25%) or paper records only (4%), which influences how representative the data may be.
Unlike research projects with clear participant denominators, the number of patients will differ depending on which day a data extraction is performed and the definition used to identify current patients.
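The denominator problem can be sketched in a few lines. This is a hypothetical illustration with invented data, not MedicineInsight code: the same practice database yields a different patient count depending on the extraction date and on how a “current” patient is defined.

```python
# Hypothetical sketch: the patient denominator shifts with the definition
# of a "current" patient. All data here are invented for illustration.
from datetime import date

# Each tuple: (patient_id, date of most recent visit)
visits = [
    (1, date(2018, 3, 1)),
    (2, date(2017, 11, 20)),
    (3, date(2016, 5, 2)),
    (4, date(2018, 1, 15)),
]

def current_patients(extraction_day, window_days):
    """Count patients seen within `window_days` of the extraction date."""
    return sum(1 for _, last in visits
               if (extraction_day - last).days <= window_days)

today = date(2018, 4, 1)
print(current_patients(today, 365))  # "seen in the past year": 3 patients
print(current_patients(today, 730))  # "seen in the past two years": 4 patients
```

Neither answer is wrong; they are simply answers to different questions, which is exactly why unregulated extractions are hard to compare.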
There’s no regulation of GPs’ electronic health records. GPs use about eight different software products, but there are no nationally agreed and implemented standards for these. They have different data structures, terminology and classification systems (or none) and different data elements, labels and definitions.
Read more: Money given to GPs from ending the Medicare rebate freeze should target reform
There’s no standardised minimum data set to specify what data should be recorded at every patient encounter. There are no data links between conditions and the management actions taken.
Links are crucial for managing outcomes. For instance, how can you assess care provision for diabetes if the care and condition aren’t linked in the records?
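The linkage problem can be made concrete with a toy example. The field names and data below are invented: in an unlinked record, a condition and a prescription coexist with nothing tying them together; in a linked record, each management action carries the condition it addresses.

```python
# Hypothetical sketch of linked vs unlinked records. All data invented.

# Unlinked: the problems and prescriptions sit in separate lists, with
# no indication of which prescription treats which problem.
unlinked = {
    "problems": ["diabetes", "hypertension"],
    "prescriptions": ["metformin", "perindopril"],
}

# Linked: each management action records the condition it addresses.
linked = [
    {"problem": "diabetes", "action": "prescription", "item": "metformin"},
    {"problem": "hypertension", "action": "prescription", "item": "perindopril"},
]

def actions_for(records, condition):
    """Return the management items recorded against one condition."""
    return [r["item"] for r in records if r["problem"] == condition]

print(actions_for(linked, "diabetes"))  # ['metformin']
# With `unlinked`, the same question has no reliable answer: was the
# metformin prescribed for the diabetes, or for something else entirely?
```

Only the linked structure lets anyone assess the care provided for a specific condition, which is the point made above about diabetes.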
The problem of missing data
As with hospital collections, missing data are a problem. Extraction tools cannot extract what isn’t in the record. The absence of some data elements is easy to identify, such as a blank “age” field. But if a diagnosis, medication or test order is not entered, there’s no way to tell it’s missing.
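The asymmetry between detectable and undetectable gaps can be shown in a few lines. This is a hypothetical sketch with invented fields: a blank field can be flagged programmatically, but an entry that was never made leaves no trace to flag.

```python
# Hypothetical sketch: blank fields are detectable; absent entries are not.
# Field names and data are invented for illustration.
record = {
    "age": None,              # a blank field: clearly missing
    "diagnoses": ["asthma"],  # was a second diagnosis never entered?
}

# A blank field is easy to flag with a simple check.
missing_fields = [k for k, v in record.items() if v is None]
print(missing_fields)  # ['age']

# But no check can reveal an unrecorded diagnosis: the list ["asthma"]
# looks identical whether the patient has one condition or three that
# were never typed in.
```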
Test results are also easy to miss. While there is a standard messaging language for health systems, its use isn’t mandatory. Many practices receive results by email or on paper and scan and attach them, rather than having them directly populate the appropriate fields in the record.
The few published studies from MedicineInsight acknowledge the limitations of data completeness and accuracy in the electronic health records. The frustration the researchers must be experiencing is justified, given the many years of calls to introduce standards to resolve these problems.
The true measurement of outcomes needs a system-wide approach, starting with a person-based health record that includes standardised data from all health providers. We are a long way from having reliable evidence to support the carrot-and-stick decisions being proposed.
Joan Henderson was a member of the BEACH research team from 1999 until the program ceased in July 2016.