Transmitting Temporal Understanding in Residency Training

By Eran Bellin, MD

Time Time Time

See what’s become of me

— Paul Simon, “A Hazy Shade of Winter”

Medical residents are survivors, spending more than a decade running a punishing mind gauntlet. The SATs, MCATs, and Boards Parts 1 and 2 are time-pressured, multiple-choice exams evaluating functionally defined competencies. What comes out of this mental meat grinder are people trained to spasmodically and rapidly associate one of four responses with singular rightness.

There is a pedagogical cost to this process—a cost manifest in discomfort with uncertainty and an expectation that rightness is clear and instantly recognizable to those who are worthy. The elect are known by their success. The notion of iterative revelation of approximate truth through ongoing observation, experimentation, and evaluation is not philosophically reinforced.

Now, in residency, we do want to encourage technical proficiency in specific clinical skills and patterns of diagnostic thought, but interestingly the Accreditation Council for Graduate Medical Education has stated a clear goal: to nurture in the trainee the ability to think creatively about the totality of care delivered through the health care delivery system. The two standards, “Practice-Based Learning and Improvement” and “Systems-Based Practice,” explicitly enjoin program directors to engage their residents in ongoing observation of patient populations to assess the adequacy of clinical interventions and evaluate the need for modification.

Among the experiences we make available at Montefiore to our residents is Clinical Looking Glass training. Clinical Looking Glass is a user-friendly interactive software application sitting atop information from our electronic medical record. It permits clinicians to create cohorts of patients identified by fixed demographic characteristics as well as by clinical events temporally related to each other. These patients are then followed to user-defined outcomes. Analyses allow residents to evaluate the overall quality of care, identify those patients in need of remediation, and ask research questions in a de-identified mode that protects patient privacy.

The tool makes it easy to apply sophisticated temporal criteria referencing other events singularly or in referential chains. As an example: “Find all the patients who…

  • had a myocardial infarction in 2010
  • were started on clopidogrel within 30 days of discharge
  • were started on a proton pump inhibitor within 30 days of starting clopidogrel.”

The cohort would then be followed from the prescription of the proton pump inhibitor forward to some outcome, such as mortality. The comparison group would include patients with a myocardial infarction who were subsequently started on clopidogrel but never treated with a proton pump inhibitor. For them, the start date for elapsed-time outcome evaluation would be the date clopidogrel was started.
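
To make the temporal chaining concrete, here is a minimal sketch of the same logic in Python with pandas. Clinical Looking Glass expresses these criteria through its own interface rather than code, so the tables and column names below (mi_discharges, rx, and so on) are hypothetical and purely illustrative.

```python
# Illustrative sketch only: table and column names are hypothetical,
# and dates are assumed to be parsed as datetime64 columns.
import pandas as pd

def build_cohorts(mi_discharges: pd.DataFrame, rx: pd.DataFrame):
    """mi_discharges: patient_id, discharge_date; rx: patient_id, drug, start_date."""
    # Step 1: myocardial infarction discharges in 2010
    mi_2010 = mi_discharges[mi_discharges["discharge_date"].dt.year == 2010]

    # Step 2: clopidogrel started within 30 days post-discharge
    clopi = rx[rx["drug"] == "clopidogrel"]
    m = mi_2010.merge(clopi, on="patient_id")
    days_to_clopi = (m["start_date"] - m["discharge_date"]).dt.days
    on_clopi = (m[(days_to_clopi >= 0) & (days_to_clopi <= 30)]
                .rename(columns={"start_date": "clopi_date"}))

    # Step 3: proton pump inhibitor started within 30 days of clopidogrel
    ppi = rx[rx["drug"].isin(["omeprazole", "pantoprazole", "esomeprazole"])]
    p = on_clopi.merge(ppi, on="patient_id", suffixes=("", "_ppi"))
    days_to_ppi = (p["start_date"] - p["clopi_date"]).dt.days
    exposed = p[(days_to_ppi >= 0) & (days_to_ppi <= 30)].copy()
    exposed["followup_start"] = exposed["start_date"]        # follow from PPI start

    # Comparison group: MI plus clopidogrel, never treated with a PPI
    comparison = on_clopi[~on_clopi["patient_id"].isin(ppi["patient_id"])].copy()
    comparison["followup_start"] = comparison["clopi_date"]  # follow from clopidogrel

    # A real analysis would also collapse to one qualifying event per patient.
    return exposed, comparison
```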

With these two cohorts, we could compare the effect of the proton pump inhibitor on subsequent mortality. Does the PPI interfere with the hepatic activation of clopidogrel, compromising its protective effect? Leaving aside for a moment the potential biases, the notion that we can empower residents with the ability to transmute our medical records into cohorts of patients, to ask and answer clinical questions written upon our local population, is remarkable. In minutes, trained residents can replicate published studies.

“With great power comes great responsibility,” as the maxim often attributed to Voltaire goes. At Montefiore, we spend a good deal of time teaching residents and medical students to understand temporality—how a question must account for notions of precedence and blackout periods to properly qualify the patients in the desired cohort.

A common error will help us understand precedence.

We ask residents to compare patients seen in 2013, with and without diabetes, who suffered a heart attack, and to follow those patients from the myocardial infarction to determine the relative risk of readmission over the next year.

Some mistakenly build a cohort with two logical requirements: presence of heart attack in 2013 and presence of diabetes in 2013, setting the start date from which follow-up is to begin at the date of heart attack.

Why is this wrong?

Well, you want to make sure that the presence of diabetes precedes the date of the heart attack, so that what you are evaluating is indeed a diabetic with a heart attack and not someone who has a heart attack and later develops diabetes. The fact that both the myocardial infarction and the diabetes were acknowledged in 2013 does not establish the required temporal precedence of diabetes. Clinical Looking Glass’s powerful temporal operators allow you to specify that the heart attack was preceded by a diagnosis of diabetes.
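
For readers who think in code, a hedged sketch of the precedence requirement, again in pandas with hypothetical tables (mi and dm, each carrying a patient_id and an event_date), might look like the following. The point is simply that the join must compare dates, not merely require both diagnoses in 2013.

```python
# Illustrative sketch only: mi and dm are hypothetical event tables,
# each with patient_id and a datetime64 event_date column.
import pandas as pd

def diabetic_mi_cohort(mi: pd.DataFrame, dm: pd.DataFrame) -> pd.DataFrame:
    """MI patients in 2013 whose diabetes diagnosis precedes the MI."""
    mi_2013 = mi[mi["event_date"].dt.year == 2013]

    # The common error: also requiring a diabetes diagnosis in 2013 says
    # nothing about which event came first.
    # wrong = mi_2013[mi_2013["patient_id"].isin(
    #     dm[dm["event_date"].dt.year == 2013]["patient_id"])]

    # The correct logic: join the two tables and keep only rows where
    # the diabetes date strictly precedes the infarction date.
    merged = mi_2013.merge(dm, on="patient_id", suffixes=("_mi", "_dm"))
    preceded = merged[merged["event_date_dm"] < merged["event_date_mi"]]

    # Follow-up for readmission starts at the date of the heart attack.
    cohort = preceded.rename(columns={"event_date_mi": "followup_start"})
    return cohort[["patient_id", "followup_start"]].drop_duplicates()
```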

Another example will define the need for a blackout period.

When asked to find diabetics who had a timely repeat test of hemoglobin A1c, residents will sometimes look for diabetics and then look for the existence of a hemoglobin A1c within a year of the diabetes diagnosis.

Why is this wrong?

Well, suppose you identified the patient as a diabetic from a hemoglobin A1c and required a repeat test within a year. The repeat test could have occurred the next day or the next week, too soon after the first test to give any meaningful information about the impact of therapy. You really need to wait a period of time—a blackout period during which no repeat study is considered relevant until adequate time has elapsed for a test to be meaningful.
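
A blackout period is easy to express once the index test is pinned down. The sketch below assumes a hypothetical a1c table of patient_id and test_date and an illustrative 90-day blackout; the specific window is an assumption for the example, not a Clinical Looking Glass default.

```python
# Illustrative sketch only: a1c is a hypothetical table of patient_id and
# datetime64 test_date; the 90-day blackout is an assumed, not prescribed, value.
import pandas as pd

BLACKOUT_DAYS = 90   # repeats sooner than this are too early to be meaningful
WINDOW_DAYS = 365    # the repeat must still fall within a year of the index test

def has_timely_repeat_a1c(a1c: pd.DataFrame) -> pd.Series:
    """Per patient: did a repeat A1c land after the blackout but within a year?"""
    # Index test = each patient's earliest A1c
    first = (a1c.groupby("patient_id")["test_date"].min()
             .rename("index_date").reset_index())
    tests = a1c.merge(first, on="patient_id")
    days_since_index = (tests["test_date"] - tests["index_date"]).dt.days

    # A repeat the next day or next week (inside the blackout) says nothing
    # about the impact of therapy; only tests past the blackout count.
    valid = (days_since_index >= BLACKOUT_DAYS) & (days_since_index <= WINDOW_DAYS)
    return tests.assign(valid_repeat=valid).groupby("patient_id")["valid_repeat"].any()
```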

These logical errors are easily corrected; the residents develop a facility for understanding how to model time properly to ask and answer the real question of interest.

Equally important is that residents are empowered to turn their curiosity into answerable questions. From a values perspective, we are modeling longitudinal care responsibility. Beyond the transactional singleton therapeutic encounter, we are in a meaningful long-term relationship with our patients. We can and should review their progress as a group, identify those who have not achieved mutually sought clinical objectives, and reach out to them to remediate—evidence of our notion of accountable care.

Eran Bellin, MD, is a Professor of Clinical Epidemiology and Population Health and Medicine at the Albert Einstein College of Medicine and Vice President of Clinical IT Research and Development at Montefiore Information Technology. He will be releasing his book, Riddles in Accountable Healthcare, in March 2015. He can be reached at eranbellin@google.com.
