Health care quality refers to the level of value of any health care resource, as determined by some measurement. The goal of health care is to provide medical resources of high quality to all who need them. Researchers use many different measures to assess health care quality, including counts of a therapy’s reduction or lessening of diseases identified by medical diagnosis, a decrease in the number of risk factors people have after preventive care, or a survey of health indicators in a population accessing certain kinds of care.
The quality of the health care given by a health professional can be judged by its outcome, by the technical performance of the care, and by interpersonal relationships.
“Outcome” is a change in patients’ health, such as reduction in pain, relapses, or death rates. Large differences in outcomes can be measured for individual medical providers, and smaller differences can be measured by studying large groups, such as low- and high-volume doctors.
“Technical performance” is the extent to which a health professional conformed to the best practices established by medical guidelines. The presumption is that providers following medical guidelines are giving the best care and offer the most hope of a good outcome. Technical performance is judged from a quality perspective without regard to the actual outcome – so, for example, if a physician gives care according to the guidelines but a patient’s health does not improve, then by this measure the quality of the “technical performance” is still high.
Patient Satisfaction in Healthcare
Since the 1980s, interest in the measurement of patients’ satisfaction with their healthcare experiences has increased, following reports that high patient satisfaction is associated with better health outcomes. This has not been universally accepted, however, and the debate over using patient satisfaction ratings as a quality-of-care marker continues.
In March 2012, a study published in Archives of Internal Medicine made a controversial contribution to the debate. Joshua J. Fenton, MD, MPH, and colleagues at the University of California, Davis, reported the results of their analysis of data from more than 50,000 adult patients indicating that the most satisfied patients (highest patient satisfaction quartile relative to the lowest quartile) were 12% more likely to be admitted to the hospital and had both total healthcare expenditures and prescription drug expenditures that were 9% higher. Most perplexing to many readers at the time, these patients were also 26% more likely to die.
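The quartile comparison at the heart of the study can be illustrated with a minimal sketch. This uses synthetic data and hypothetical column names, and shows only the unadjusted stratification step, not the confounder-adjusted models Fenton and colleagues actually used:

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(0)

# Synthetic cohort: a continuous satisfaction score and a binary outcome
# per patient (illustrative values, not the study's data).
df = pd.DataFrame({
    "satisfaction": rng.normal(50, 10, 10_000),
    "died": rng.random(10_000) < 0.02,
})

# Stratify patients into satisfaction quartiles, as in the study design.
df["quartile"] = pd.qcut(df["satisfaction"], 4, labels=[1, 2, 3, 4])

# Compare the highest quartile with the lowest on the outcome.
rates = df.groupby("quartile", observed=True)["died"].mean()
relative_risk = rates[4] / rates[1]
print(f"Unadjusted relative risk (Q4 vs Q1): {relative_risk:.2f}")
```

With random synthetic data the relative risk hovers around 1.0; the study’s reported excess mortality emerged only after adjustment for sociodemographics, health status, and prior utilization.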
Among the strengths of the study were its nationally representative sample and adjustment for potential confounders, such as sociodemographic characteristics, insurance status, availability of a usual source of care, chronic disease burden, health status, and first-year healthcare utilization and expenditures. These adjustments were the basis for some immediate and later criticisms of the study, which cited other studies that found higher patient satisfaction associated with favorable outcomes, including lower inpatient mortality rates. In reply, Dr. Fenton and his coauthors pointed out that when they excluded the sickest 5% of patients who reported high satisfaction from their analysis, the association between higher satisfaction and mortality grew even stronger.
Since publication, Dr. Fenton’s findings, particularly the mortality association, have been cited many times in the medical and lay press as evidence in support of the view that patient satisfaction is not associated with quality-of-care outcomes, including a recent article reprinted by Medscape that attracted many comments in agreement. The study also continues to garner criticism from supporters of the opposite view.
Two years after the study was published, Dr. Fenton spoke with Linda Brookes, for Medscape, about its findings and how they have been (mis)interpreted, along with his views on measurements of patient satisfaction.
Patients’ perceptions about health services seem to have been largely ignored by health care providers in developing countries. That such perceptions, especially about service quality, might shape confidence and subsequent behaviors with regard to the choice and use of the available health care facilities is reflected in the fact that many patients avoid the system or use it only as a last resort. Those who can afford it seek help in other countries, while preventive care and early detection simply fall by the wayside. Patients’ voices must begin to play a greater role in the design of health care service delivery processes in developing countries.
Measuring and reporting on patient satisfaction with health care has become a major industry. The number of MEDLINE articles featuring “patient satisfaction” as a key word has increased more than 10-fold over the past two decades, from 761 in the period 1975 through 1979 to 8,505 in 1993 through 1997. Patient satisfaction measures have been incorporated into reports of hospital and health plan quality,1,2 and armies of consultants make a good living selling software packages to health care providers eager to assess their customers’ reactions by telephone, fax, and modem.
If patient satisfaction is to take its place alongside morbidity, mortality, and functional status, several critical measurement issues must be addressed. First, scale developers and end-users need to be clear about what they are measuring. “Patient satisfaction” is not a unitary concept but rather a distillation of perceptions and values. Perceptions are patients’ beliefs about occurrences. They reflect what happened. Values are the weights patients apply to those occurrences. They reflect the degree to which patients consider specific occurrences to be desirable, expected, or necessary.
Most contemporary measures of patient satisfaction employ hybrid questions that assess perceptions and values simultaneously. An example is, “How satisfied were you with the amount of time the doctor spent with you today? (extremely, . . . not at all)” In responding, patients must first estimate the amount of time they spent with the doctor, compare it with an internal standard, and then provide an overall judgment. Such hybrid questions have the virtue of linguistic economy but make it difficult to distinguish perceptions from values. Given these semantic vagaries, a patient who receives poor care but has low standards may report the same satisfaction as a patient who receives good care but whose standards are unreasonably high. In the ambulatory instrument developed by the Picker Institute (Boston, Mass.), patients are not asked about “satisfaction with communication” but rather, “Did the provider explain what to do if problems or symptoms continued, got worse, or came back?” Responses to questions of this type are not readily summed or averaged, and Cronbach alpha coefficients for the data are often low. Nevertheless, what is lost in scalability is gained in interpretability. If I were told that my patients’ adjusted satisfaction score was a full standard deviation below the mean for all practitioners at my clinic, I’d be upset, but I wouldn’t know what to do about it, and I probably wouldn’t change how I practiced. On the other hand, if I learned that 40% of my patients didn’t know what to do if their symptoms returned, I might give my approach to providing follow-up instructions some scrutiny.
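The Cronbach alpha mentioned above is the standard internal-consistency statistic for a multi-item scale. A minimal sketch, using made-up Likert responses rather than any real survey data, shows the computation:

```python
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """Cronbach's alpha for an (n_respondents, n_items) score matrix.

    alpha = k/(k-1) * (1 - sum of item variances / variance of total score)
    """
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)        # variance of each item
    total_var = items.sum(axis=1).var(ddof=1)    # variance of the summed scale
    return k / (k - 1) * (1 - item_vars.sum() / total_var)

# Five hypothetical respondents answering three 1-5 Likert satisfaction items.
scores = np.array([
    [4, 5, 4],
    [2, 2, 3],
    [5, 5, 5],
    [3, 4, 3],
    [1, 2, 2],
])
print(f"alpha = {cronbach_alpha(scores):.2f}")  # → alpha = 0.96
```

Alpha is high only when items vary together; the report-style Picker questions (did X happen: yes/no) ask about distinct events, so they correlate weakly and alpha tends to be low even when each item is individually informative.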
Despite the advantages of disaggregating patient satisfaction into its component parts, most research studies have treated satisfaction as a “black box” that predicts certain outcomes (e.g., plan disenrollment) and is in turn predicted by certain antecedents (e.g., practice size). Opening the black box can reveal new relationships.4 In this issue of the Journal, Zemencuk et al. report on a survey of 652 patients and 105 physicians in four primary care sites in Michigan and Ontario.5 The survey asked separately about patients’ desires (what they wanted) and expectations (what they thought would occur over the near term). Although there were no cross-national differences in patient desires, American respondents were significantly more likely than their Canadian counterparts to say they “expected” mammography; Pap, prostate-specific antigen, and cholesterol testing; and breast and rectal examinations. It remains for larger, more generalizable studies to explore whether these differences reflect general cultural factors or priming by experience. However, by creating definitional boundaries between patient desires and patient expectations, the authors discovered patterns in the data that would otherwise have been lost.

It is becoming increasingly important in the healthcare setting to treat patients as consumers and measure their satisfaction with the medical services rendered. As such, patient satisfaction should be considered an important output of a country’s healthcare system, broadly reflecting the stage of its development. In particular, this work analysed the impacts of personal relationships, promptness, and tangibility on patient satisfaction. The findings imply that all three factors significantly affect patient satisfaction, with personal relationships having the strongest impact. Such results suggest that healthcare providers should encourage their doctors to devote more time to their patients and show genuine concern for patients’ problems if they wish to improve their patients’ overall satisfaction with the delivered services.

• Donabedian, A (23 September 1988). “The quality of care. How can it be assessed?”. JAMA: the Journal of the American Medical Association 260 (12): 1743–8. doi:10.1001/jama.1988.03410120089033. PMID 3045356.
• Lau, Rick. “The role of surgeon volume on patient outcome in total knee arthroplasty: a systematic review of the literature”. BMC Musculoskelet Disord 20: 1290–8. PMID 3534547.
• Neumayer, LA (1992). “Proficiency of surgeons in inguinal hernia repair: effect of experience and age”. Ann Surg. 18 Suppl 1: 27–30. PMID 1357742.
• Birkmeyer, JD (27 November 2003). “Surgeon volume and operative mortality in the United States”. N Engl J Med 349 (22): 2117–27. doi:10.1056/nejmsa035205. PMID 14645640.
• “Doctors Do Better when They Do Procedures Often”. Retrieved 12 December 2014.
• Brook, R. H.; McGlynn, E. A.; Cleary, P. D. (1996). “Measuring Quality of Care”. New England Journal of Medicine 335 (13): 966–970. doi:10.1056/NEJM199609263351311. PMID 8782507.
• Chassin, M. R. (1998). “The Urgent Need to Improve Health Care Quality: Institute of Medicine National Roundtable on Health Care Quality”. JAMA: the Journal of the American Medical Association 280 (11): 1000–1005. doi:10.1001/jama.280.11.1000.
