Stephen Huggett: Multiple choice exams in undergraduate mathematics

S. Huggett, Multiple choice exams in undergraduate mathematics, The De Morgan Journal, 2 no. 1 (2012), 127-132.

From the Introduction:

In addition to a rigorous practical test called the general flying test, candidates for a private pilot’s licence have to pass written exams in subjects including meteorology, navigation, aircraft, and communications. These written exams are multiple choice, which seems appropriate. The trainee pilots are acquiring skills supported by background knowledge in breadth not depth, and this can be tested by asking them to choose the right option from a limited list under a time constraint. It is not necessary, of course, for pilots to understand the underlying theoretical concepts.

In contrast, students of mathematics are certainly expected to understand underlying theoretical concepts. To a certain extent, this understanding can also be tested using multiple choice exams. Clearly, mathematicians need skills too, of which one of the most important is the ability to perform calculations accurately. This can also be tested using multiple choice exams.

Given that no one method of assessment is good for all of the understanding and skills expected of a student, one should use a variety of different assessment methods in a degree programme, including things such as vivas, projects, and conventional written exams. There is no claim here that multiple choice exams can do everything!

Read the rest of the paper.

Using Adaptive Comparative Judgement to Assess Mathematics

Ian Jones & Lara Alcock
Loughborough University

Adaptive Comparative Judgement (ACJ) is a method for assessing evidence of student learning that offers an alternative to marking (Pollitt, 2012). It requires no mark schemes, no item scoring and no aggregation of scores into a final grade. Instead, experts are presented with pairs of student work and asked to decide, based on the evidence before them, who has demonstrated the greatest mathematical proficiency. The outcomes of many such pairings are then used to construct a scaled rank order of students from least to most proficient.

ACJ is based on a well-established psychophysical principle, called the Law of Comparative Judgement (Thurstone, 1927), which states that people are far more reliable when comparing one thing with another than when making absolute judgements. The reliability of comparative judgements means “subjective” expertise can be put at the heart of assessment while achieving the sound psychometrics normally associated with “objective” mark schemes.

Until recently, comparative judgement was not viable for educational assessment because it was tedious and inefficient: producing a complete rank order of $$n$$ scripts requires $$\frac{n^2-n}{2}$$ judgements. However, the development of an adaptive algorithm that intelligently pairs scripts as judgements come in has reduced the number required to around $$6n$$.
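To illustrate the scale of this saving, here is a minimal Python sketch (the function names are ours, not from the paper) comparing the full round-robin count with the approximate $$6n$$ figure quoted for the adaptive scheme:

```python
def full_pairwise_judgements(n: int) -> int:
    """Every script compared with every other: (n^2 - n) / 2."""
    return (n * n - n) // 2


def adaptive_judgements(n: int) -> int:
    """Approximate cost of the adaptive scheme, around 6n (Pollitt, 2012)."""
    return 6 * n


# For a cohort of 100 scripts, the full round-robin needs 4950 judgements,
# while the adaptive scheme needs only about 600.
for n in (10, 100, 500):
    print(n, full_pairwise_judgements(n), adaptive_judgements(n))
```

The quadratic term quickly dominates: at 500 scripts the round-robin requires 124,750 judgements against roughly 3,000 for the adaptive approach.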

David Wells: Response to the paper “What should be the context of an adequate specialist undergraduate education in mathematics?”, by Ronnie Brown and Tim Porter

D. Wells, Response to the paper “What should be the context of an adequate specialist undergraduate education in mathematics?”, by Ronnie Brown and Tim Porter, The De Morgan Journal 2 no. 1 (2012), 85-98. A link to the paper by Brown and Porter is also available.

Further comments are welcome — all posts on this blog are open to comments.