The last two years of pandemic teaching, and in some cases experiences even before that, have led many Duke faculty to reconsider their assessment practices and policies with the intent of being more flexible, more equitable, and more focused on assessment for learning. Thanks to Della Chambless, Shai Ginsburg, Amanda Starling Gould, and Bridgette Hard for sharing their experiences here; feel free to send us your examples.
Della Chambless, Lecturing Fellow, Romance Studies
My colleagues and I have redesigned important components of our lower-level Italian courses as part of a Carry the Innovation Forward grant we received in 2021-22. One of the things we’re working to refine is the assessment structure of the courses, implementing Integrated Performance Assessment (IPA). The IPA is an approach designed for language classes, but the premise is relevant for other disciplines as well. The key is that IPAs are assessments designed to align tightly with standards-based, performance-focused learning objectives, which are integrated into the course themes and unit topics. Student performance on the assessments is measured using scoring rubrics that rate performance as “exceeds expectations,” “meets expectations,” or “does not meet expectations.”
In the case of Elementary Italian 2, one of the unit themes is housing. At the end of the unit, students are evaluated on their mastery (as appropriate for this level of proficiency) of each of three standard modes of language communication: Interpretive, Interpersonal, and Presentational. Examples of the tasks are given in the table below.
| Task | Description |
| --- | --- |
| Interpretive | Students watch a video and read an article on how to effectively conduct a housing and roommate search. They answer questions which are designed to evaluate their comprehension of the aural and written input. |
| Interpersonal | Students simulate an interview in which one is looking for a place to live and the other has a room to offer. They discuss their habits and personal living styles, in addition to specific aspects of the accommodations, in order to determine if they are a good match. |
| Presentational | In groups, students develop a video blog in which they imagine they want to rent a room in their apartment to an Italian exchange student. The blog consists of a tour of their living space and an introduction to themselves: routines, interests, and habits. |
Our use of the IPA as an end-of-unit assessment is closely tied to an asynchronous module in which students meet weekly in small groups and engage in collaborative activities involving the three modes of communication outlined above. In this sense the IPA is to be viewed as the culmination of a sequence of largely collaborative learning tasks, based on meaningful authentic interactions in the second language. We believe that by systematically weaving both practice and assessment into authentic activities we can create a more comfortable, less stressful and more engaging learning environment for our students.
I think faculty in other disciplines could apply an assessment structure analogous to the IPA by clearly defining categories of “performance” or skills demonstration in their course, with associated learning objectives and assessment tasks (assignments), and by developing appropriate standards of performance (rubrics) for each of the assignments.
Shai Ginsburg, Associate Professor, Asian and Middle Eastern Studies
In “Games and Culture” (ISS 188), we designed the entire course as a game. Students have multiple paths toward achieving their goals: our assignments focus on analysis, research, design, or creativity. Students can thus show they have mastered course materials using a variety of media and skills. For each assignment, we provide quick feedback, so if students discover they did not do as well as they had expected, they can try the same assignment again or follow a path that requires other skills. The quick feedback also tells students where they stand grade-wise, allowing them to project their final grade and plan their workload ahead of time.
We use an automated feedback and grade-projection system, GradeCraft, so running the class, providing feedback, and projecting grades does not take too much of our time. Most importantly, we make it almost impossible for students to earn a perfect score on any single assignment, but students realize very quickly that they don’t need a perfect score to get an A for the class: they just need to show they have mastered the required skills and materials well enough. This leads them to feel much more secure in trying their hand at new things. And it makes them much happier, as our evaluations show.
Amanda Starling Gould, Senior Program Coordinator, Franklin Humanities Institute
Over many years of designing and redesigning the undergraduate course “Learning to Fail” (I&E 252, co-taught with Dr. Aaron Dinin from Fall 2018 to Spring 2020), I’ve developed a form of collaborative assessment that productively involves students in their own performance evaluation. Following lessons on how we learn (and, in this class, how we learn to fail), on how what we grade reflects what we value, and on the difference between ‘students’ and ‘learners’, students work both individually and collectively to design their own assessments, rubrics, and measures of success.
The rubrics they create for their projects are rigorous, astute, and infused with their own learning goals. The exercise of crafting their own rubric also informs their group project-building and guides their teamwork. That I then use these rubrics to grade their projects is only a minor outcome of the exercise: what they gain along the way, in setting individual and collective team goals and in defining success for the work they’ll do together, is a set of critical skills for innovators, entrepreneurs, and creatives of all sorts. At the end of the term, students provide their own honest assessment, based on a guided prompt I’ve adapted from Dr. Jesse Stommel, and grade themselves on their own participation. This year, inspired by Dr. Max Liboiron, I am also formalizing our practice of giving ‘collaborator shoutouts’ so that students can name classmates who’ve enhanced their learning experience.
Why collaborative grading? Because traditional grading might not actually serve us, and because learning to fail requires an environment in which we can thoughtfully experiment without the fear of grades holding us back; it requires space for us to be learners. And because students say things like “I was able to express my true opinions and thoughts without the fear of getting a ‘bad grade,’ and this made my experience completing the assignments much more enjoyable and beneficial.” And that’s what we’re here for, yes? Since moving toward and refining this system, I’ve seen students thrive more fully than they did when the course was based on points, policies, and an arbitrary notion of ‘standard’ performance. And I’ll not go back.
Bridgette Hard, Associate Professor of the Practice, Psychology and Neuroscience
In Psychology 101 (Introduction to Psychology), it is normal for students to struggle a bit on tests given the breadth of material in the class, so we offer a second-chance opportunity for students to learn the material and earn a better grade. Students who demonstrate a good-faith effort to learn by showing up regularly to class get to treat the final, cumulative exam as “optional”: they can skip it, or they can study the material again and use their final exam grade to replace a lower midterm grade. Most students improve their grade this way and reap the learning benefits of revisiting the material at the end of the term.
These are just a few examples of the thoughtful ways Duke and DKU faculty are implementing flexible assessment practices in their courses. If you would like to share an idea or practice from your course to be highlighted in a future DLI blog post, contact us.