This Education Corner concludes the three-part series on the Healthcare Educators’ Triple Aim. In “Outcomes: Healthcare Education’s First Aim,” we learned about the first of three “aims” of healthcare education: the identification of real-world, behavioral outcomes. In the follow-up AEIS workshop, we learned how to apply this to generating measurable outcomes. In “Objectives: Healthcare Education’s Second Aim” and in the subsequent AEIS workshop, I provided a similar treatment of the generation of measurable five-component objectives. In this concluding article, I will address the issue of writing good assessment items, which is where the full power of the first two aims is realized.
The familiar student lament that “they always teach us things they don’t test, and test us on things they don’t teach” arises from the failure to complete the first two aims. If we have not specified the learning outcome, our assessment items, while potentially good, are more likely to measure the wrong kind of learning. Likewise, if we have not written a five-component objective, or SLOAT (the acronym formed from the first letter of each of the five components), we are more likely to choose the wrong means or mode of assessment. This is even more critical when we consider that we may not always be the instructor responsible for the objective or the assessment, as when a course we teach is inherited by another faculty member down the line. To see how this plays out, imagine that we have inherited the following objective for a course:
“Use population health data to identify health issues facing a community.”
Which of the following would be an appropriate assessment item for this?
The answer is, of course, that all of them could be appropriate, depending on the intended level of learning and performance. The objective, as written, is not specific enough to guide our assessment. Note that there might be any number of other objectives for which each of these assessments might be appropriate. The point is not whether these assessments are “right” but whether they measure the intended learning outcome. Here is an example of a SLOAT for the same objective:
Given (1) a population health data set and access to biostatistics software, the learner will (2) generate (3) a report by (4) writing that (5) highlights the most pressing health issues for that population and provides the rationale for whether intervention is warranted or not.
The first component, the situation, specifies what learners will have in front of them when they demonstrate the competency (learning outcome) we have in mind. The fact that the student is given a data set and biostatistical software suggests that both should be used during the assessment.
This is further supported by the second component, the learned capability verb, or LCV. Because each learning outcome is associated with its own LCV, we know in this case that the LCV “generate” indicates that this objective is about “problem solving” (an intellectual skill). Therefore, learners should be “generating” a solution to a problem (rather than telling us about a solution in a short-answer response or identifying a solution provided to them via a multiple-choice question, for example).
The third component (the object) goes hand in hand with the LCV (e.g., generate a report) and tells us that the solution to the problem should be in the form of a report. However, a report could take a number of forms. Should that report be verbal to a group of community leaders, written for a government agency, or published as an article in a journal?
The fourth component tells us the answer: “by writing.” Now we know the report should be in written form, which further limits the variety of assessment items we might generate. Finally, the fifth component (the tools, constraints, and conditions of the performance) tells us at what level and in what manner the assessment should be evaluated or judged. In this case, we see that the written student report must address the most pressing issues and include a rationale for whether an intervention is warranted or not.
Five-component objectives make it nearly impossible to write inappropriate assessment items. This in turn is what allows us to analyze our curriculum to find out what is working and what is not. Performance on assessment items that measure our outcomes tells us whether the instruction intended to teach that objective needs revision. Without the Triple Aim, assessment data may not reflect intended learning, and the data therefore become almost useless in revising and improving curricula.
The Healthcare Educators’ Triple Aim removes 40% of the error involved in “traditional” assessment and puts us on the right path for curriculum design and evaluation.
Of course, there is a lot more to writing assessments, including such things as creating good distractor answers in multiple-choice questions, developing rubrics for assessing work products or real-world performance, and determining how many items are needed for mastery. I will focus on these as well as other related skills in the AEIS Workshop on Assessment at noon on Wednesday, February 8. For more information on attending this workshop in person, participating live from another campus, requesting a workshop on your campus, or to set up a consultation with Education Resources for teaching or education scholarship assistance, contact Shae Samuelson at (701) 777-6150.
Richard Van Eck, PhD
Associate Dean for Teaching and Learning
Founding Dr. David and Lola Rognlie Monson Endowed Professor in Medical Education