- Retrospective studies are designed to analyse pre-existing data, and are subject to numerous biases as a result
- Retrospective studies may be based on chart reviews (data collection from the medical records of patients)
- Types of retrospective studies include:
- case series
- retrospective cohort studies (current or historical cohorts)
- case-control studies
STATISTICAL ANALYSIS USED IN RETROSPECTIVE STUDIES
- Unadjusted, univariate, ‘simple’ or ‘raw’ analysis
- Compare outcomes directly between the treatment and control groups
- Used if the treatment and control groups were selected by a chance mechanism
- Stratified analysis
- Divide all patients into subgroups according to a risk factor, then perform comparison within these subgroups
- Used if only one key confounding variable exists
- Matched pair analysis
- Find pairs of patients who have specific characteristics in common but received different treatments; compare outcomes only within these pairs
- Used if only a few confounders exist and if the size of one of the comparison groups is much larger than the other
- Multivariate analysis
- More than one confounder is controlled for simultaneously; if a large number of confounders needs to be adjusted for, computer software and statistical advice are necessary
- Used if sample size is large
- No statistical analysis
- Simple description of data
- Used if the sample size is small and the other options have failed
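The difference between an unadjusted and a stratified analysis can be sketched with a toy calculation. The Mantel-Haenszel method is a standard way to pool odds ratios across strata of a single confounder; the counts below are entirely hypothetical and chosen only to show how the crude and stratified estimates can diverge when a confounder is ignored:

```python
# Hypothetical sketch: crude vs Mantel-Haenszel (stratified) odds ratio.
# All counts are invented for illustration; each stratum is one level of a
# single key confounder (e.g. disease severity), as in the stratified
# analysis described above.

def odds_ratio(a, b, c, d):
    """Odds ratio for a 2x2 table: a = exposed cases, b = exposed controls,
    c = unexposed cases, d = unexposed controls."""
    return (a * d) / (b * c)

def mantel_haenszel_or(strata):
    """Mantel-Haenszel summary odds ratio across a list of 2x2 strata."""
    num = sum(a * d / (a + b + c + d) for a, b, c, d in strata)
    den = sum(b * c / (a + b + c + d) for a, b, c, d in strata)
    return num / den

# Each stratum is (a, b, c, d) for one level of the confounder
strata = [
    (10, 20, 5, 40),   # low-severity subgroup (hypothetical counts)
    (30, 10, 15, 10),  # high-severity subgroup (hypothetical counts)
]

# The crude analysis pools everything, ignoring the confounder
pooled = tuple(sum(s[i] for s in strata) for i in range(4))
print(f"Crude OR:           {odds_ratio(*pooled):.2f}")
print(f"Mantel-Haenszel OR: {mantel_haenszel_or(strata):.2f}")
```

Here the crude estimate overstates the association because severity is unevenly distributed between the comparison groups; stratifying removes that distortion.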
ADVANTAGES OF RETROSPECTIVE STUDIES
- quicker, cheaper and easier than prospective cohort studies
- can address rare diseases and identify potential risk factors (e.g. case-control studies)
- not prone to loss to follow-up
- may be used as the initial study generating hypotheses to be studied further by larger, more expensive prospective studies
DISADVANTAGES OF RETROSPECTIVE STUDIES
- inferior level of evidence compared with prospective studies
- controls are often recruited by convenience sampling, and are thus not representative of the general population and prone to selection bias
- prone to recall bias or misclassification bias
- subject to confounding (other risk factors may be present that were not measured)
- cannot determine causation, only association
- some key statistics cannot be measured (e.g. incidence and relative risk cannot be calculated from case-control studies)
- temporal relationships are often difficult to assess
- retrospective cohort studies need large sample sizes if outcomes are rare
SOURCES OF ERROR IN CHART REVIEWS AND THEIR SOLUTIONS
From Kaji et al (2014) and Gilbert et al (1996):
- Chart review inappropriate for study question
- establish whether necessary information is available in the charts
- establish if there are sufficient charts to perform the analysis with adequate precision
- perform a sample size calculation
- Investigator conflict of interest or bias
- Declare any conflict of interest
- Provide evidence of institutional review board approval
- Submit the data collection form, as well as the coding rules and definitions, as an online appendix
- Patient sample is non-representative
- Select or exclude cases using explicit protocols and well-described criteria
- Ensure all available charts have an equal chance of selection
- Provide a flow diagram showing how the study sample was derived from the source population
- Needed variables are not in the records
- Define the predictor and outcome variables to be collected a priori
- Develop a coding manual and publish as an online appendix
- Chart abstraction is not systematic (misclassification bias)
- Use standardized abstraction forms to guide data collection
- Provide precise definitions of variables
- Pilot test the abstraction form
- Presence of missing or conflicting data
- Ensure uniform handling of data that is conflicting, ambiguous, missing, or unknown
- Perform a sensitivity analysis if needed
- Abstractors biased or not blinded
- Blind chart reviewers to the etiologic relation being studied or the hypotheses being tested. If groups of patients are to be compared, the abstractor should be blinded to the patient’s group assignment
- Describe how blinding was maintained in the article
- Abstractors not sufficiently trained
- Train chart abstractors to perform their jobs.
- Describe the qualifications and training of the chart abstractors.
- Ideally, train abstractors before the study starts, using a set of “practice” medical records.
- Ensure uniform training, especially in multi-center studies
- Abstractors not sufficiently monitored
- Monitor the performance of the chart abstractors
- Hold periodic meetings with chart abstractors and study coordinators to resolve disputes and review coding rules.
- Chart abstraction unreliable
- A second reviewer should re-abstract a sample of charts, blinded to the information obtained by the first reviewer.
- Report a kappa-statistic, intraclass coefficient, or other measure of agreement to assess inter-rater reliability of the data
- Provide justification for the criteria for each variable
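The kappa statistic mentioned above corrects raw agreement between two abstractors for the agreement expected by chance. A minimal sketch, using invented binary codes (e.g. "complication present" yes/no) for a hypothetical sample of re-abstracted charts:

```python
# Hypothetical sketch: Cohen's kappa for inter-rater agreement between two
# chart abstractors. The ratings below are invented for illustration.

def cohens_kappa(rater1, rater2):
    """Cohen's kappa: chance-corrected agreement between two raters
    coding the same items."""
    n = len(rater1)
    categories = set(rater1) | set(rater2)
    # Observed proportion of items coded identically
    observed = sum(a == b for a, b in zip(rater1, rater2)) / n
    # Agreement expected by chance, from each rater's marginal frequencies
    expected = sum(
        (rater1.count(c) / n) * (rater2.count(c) / n) for c in categories
    )
    return (observed - expected) / (1 - expected)

abstractor_a = [1, 1, 0, 1, 0, 0, 1, 1, 0, 1]  # first abstractor's codes
abstractor_b = [1, 0, 0, 1, 0, 1, 1, 1, 0, 1]  # second abstractor's codes
print(f"kappa = {cohens_kappa(abstractor_a, abstractor_b):.2f}")
```

Kappa ranges from 1 (perfect agreement) through 0 (chance-level agreement) to negative values (worse than chance); reporting it, rather than raw percent agreement, is what the recommendation above asks for.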
SOURCES OF ERROR FROM THE USE OF ELECTRONIC MEDICAL RECORDS
Potential biases introduced from:
- use of boilerplates (a unit of writing that can be reused over and over without change)
- items copied and pasted
- default tick boxes
- delays in time stamps relative to actual care
References and Links
- CCC — Case-control studies
- Gilbert EH, Lowenstein SR, Koziol-McLain J, Barta DC, Steiner J. Chart reviews in emergency medicine research: Where are the methods? Ann Emerg Med. 1996 Mar;27(3):305-8. PMID: 8599488.
- Kaji AH, Schriger D, Green S. Looking through the retrospectoscope: reducing bias in emergency medicine chart review studies. Ann Emerg Med. 2014 Sep;64(3):292-8. PMID: 24746846.
- Sauerland S, Lefering R, Neugebauer EA. Retrospective clinical studies in surgery: potentials and pitfalls. J Hand Surg Br. 2002 Apr;27(2):117-21. PMID: 12027483.
- Worster A, Bledsoe RD, Cleve P, Fernandes CM, Upadhye S, Eva K. Reassessing the methods of medical record review studies in emergency medicine research. Ann Emerg Med. 2005 Apr;45(4):448-51. PMID: 15795729.
Chris is an Intensivist and ECMO specialist at the Alfred ICU in Melbourne. He is also the Innovation Lead for the Australian Centre for Health Innovation at Alfred Health, a Clinical Adjunct Associate Professor at Monash University, and the Chair of the Australian and New Zealand Intensive Care Society (ANZICS) Education Committee. He is a co-founder of the Australia and New Zealand Clinician Educator Network (ANZCEN) and is the Lead for the ANZCEN Clinician Educator Incubator programme. He is on the Board of Directors for the Intensive Care Foundation and is a First Part Examiner for the College of Intensive Care Medicine. He is an internationally recognised Clinician Educator with a passion for helping clinicians learn and for improving the clinical performance of individuals and collectives.
After finishing his medical degree at the University of Auckland, he continued post-graduate training in New Zealand as well as Australia’s Northern Territory, Perth and Melbourne. He has completed fellowship training in both intensive care medicine and emergency medicine, as well as post-graduate training in biochemistry, clinical toxicology, clinical epidemiology, and health professional education.
He is actively involved in using translational simulation to improve patient care and the design of processes and systems at Alfred Health. He coordinates the Alfred ICU’s education and simulation programmes and runs the unit’s education website, INTENSIVE. He created the ‘Critically Ill Airway’ course and teaches on numerous courses around the world. He is one of the founders of the FOAM movement (Free Open-Access Medical education) and is co-creator of LITFL.com, the RAGE podcast, the Resuscitology course, and the SMACC conference.
His one great achievement is being the father of two amazing children.
On Twitter, he is @precordialthump.