Boix Mansilla et al. (2009) propose a rubric for interdisciplinary writing. In doing so they note that much progress has been made in terms of ‘how’ to assess, but that ‘what’ to assess remains problematic. Their rubric lays out four dimensions (purposefulness, disciplinary grounding, integration and critical awareness) and rates students at four levels (naive, novice, apprentice and master). However, despite being ‘theoretically grounded’ and ‘empirically tested’, the rubric seems as subject to problems with reliability and validity as other rubrics. Like any rubric, it is only as good as the ‘what’ it tests, how relevant that ‘what’ is to the task and desired outcomes, and how reliably it produces the same grades across different assessors and at different times.
The authors had four assessors moderate and assess repeatedly to refine the rubric. This makes it the best product that the four professionals in question could come up with, but four is a small sample on which to base claims about reliability. Validity was tested by comparing the results of students doing interdisciplinary studies with those studying a single discipline. The interdisciplinary students fared better, suggesting that the rubric was appropriately measuring at least some of what its authors intended. This still leaves unanswered the question (perhaps the unanswerable question) of whether the rubric effectively measures students’ competence and effectively sorts them into various levels of achievement.
Rubrics are an excellent idea in many ways, but even with the best of intentions and with expert design and input, they remain flawed and cannot lay strong claim to high levels of reliability and validity. To dispense with them, though, would be to throw the baby out with the bathwater. The question, then, is how we can best leverage them.
Boix Mansilla, V., Duraisingh, E. D., Wolfe, C. R., & Haynes, C. (2009). Targeted Assessment Rubric: An Empirically Grounded Rubric for Interdisciplinary Writing. The Journal of Higher Education, 80(3), 334–353.