Fundamental Assessment Principles …

As his title suggests, McMillan sets out to elucidate the basic principles of assessment. He highlights the role of professional judgment: ‘whether the judgment occurs in constructing test questions, scoring essays, creating rubrics, grading participation, combining scores, or interpreting standardized test scores, the essence of the process is making professional interpretations and decisions’ (p. 2). It is often assumed that because teachers are professionals they can write and mark tests. Indeed they can, but the validity and reliability of the results may be severely compromised if teachers aren’t aware of the possible hazards.

The role of professional judgment is a fascinating one. Consider just three areas: test questions, rubrics and the weighting of the various components. Test questions often receive limited scrutiny: at times they may not be a close approximation of what the students have learnt, or they may target a trifling component of it. Rubrics are a problem too. They may not optimally represent what should be measured, and again there may be little scrutiny of how a particular rubric was created. A further issue is the weighting allocated to individual questions or sections of a test paper; some components are often weighted more heavily than others, and this may not reflect the relative importance of the various parts of a course. In these cases teachers are doing their best, but may not have sufficient knowledge to be making the judgments they make.
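
To make the weighting point concrete, here is a minimal sketch (my own illustration, not drawn from McMillan): the same raw section scores can yield quite different final grades depending on how the sections are weighted. The section names, scores and weights below are invented for the example.

```python
# Hypothetical illustration: identical raw scores, different weighting decisions.

def weighted_total(scores, weights):
    """Combine section scores (each out of 100) using the given weights."""
    assert abs(sum(weights.values()) - 1.0) < 1e-9, "weights should sum to 1"
    return sum(scores[section] * weights[section] for section in scores)

# One student's raw scores on three sections of a test.
scores = {"multiple_choice": 85, "short_answer": 70, "essay": 55}

# Two plausible weighting schemes for the same paper.
weights_a = {"multiple_choice": 0.5, "short_answer": 0.3, "essay": 0.2}  # favours selected response
weights_b = {"multiple_choice": 0.2, "short_answer": 0.3, "essay": 0.5}  # favours constructed response

print(weighted_total(scores, weights_a))  # 74.5
print(weighted_total(scores, weights_b))  # 65.5
```

A nine-point swing produced by the weighting alone is exactly the kind of judgment teachers make routinely, often without much scrutiny.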

McMillan also talks about the tensions involved in assessment. In particular, he highlights the opposition between constructed-response assessments and selected-response assessments; classic examples would be a written essay and a set of multiple-choice questions. Clearly the latter is easier to mark and can be marked more consistently, while the former is more difficult to mark and consistency will vary. The former can more readily lay claim to validity, while the latter can more readily lay claim to reliability. So which approach should a test-writer opt for?

Perhaps the most interesting point he raises is one which is very easy to understand but isn’t talked about much: ‘assessment contains error’ (p. 3). Indeed it does. If we have anything less than 100 per cent validity and 100 per cent reliability, then we have error. No test measures up to 100 per cent in both areas, so every test contains error. McMillan notes that ‘typically, error is underestimated’ (p. 3).
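
Classical test theory offers one standard way to put a number on this: the standard error of measurement, SD × √(1 − reliability), which only falls to zero when reliability is perfect. The figures in the short sketch below are invented purely for illustration, not taken from McMillan.

```python
import math

# Standard error of measurement from classical test theory:
# SEM = SD * sqrt(1 - reliability). All numbers here are invented for illustration.
def standard_error_of_measurement(sd: float, reliability: float) -> float:
    """Rough estimate of how far observed scores scatter around 'true' scores."""
    return sd * math.sqrt(1 - reliability)

sd = 10.0           # standard deviation of the test scores
reliability = 0.90  # a reliability coefficient most teachers would be happy with

print(round(standard_error_of_measurement(sd, reliability), 2))  # 3.16
```

Even with a reliability of .90, an observed score of 70 could plausibly reflect a true score roughly anywhere between 64 and 76 (about ±2 SEM), which is precisely the sort of error McMillan suggests is underestimated.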

Another principle he covers is that ‘good assessments use multiple methods’. So, in the case of the tension between constructed-response and selected-response assessments, a fairer approach might be to include a bit of each.

Finally, back in 2000 McMillan was already warning that, despite the upsides, there is potential danger in the use of technology in assessment. In particular, he points to companies developing tests with insufficient evidence of reliability and validity, and with little thought given to weighting, error and averaging.

================

REFERENCES
McMillan, J. H. (2000). Fundamental assessment principles for teachers and school administrators. Practical Assessment, Research & Evaluation, 7(8), 2-8. Retrieved July 6 from http://PAREonline.net/getvn.asp?v=7&n=8
