# A response to Aaron Sloman

A response to: Aaron Sloman, Is education research a form of alchemy?

David Wells

Aaron Sloman, after a very distinguished fifty-year career, is currently the still-active Honorary Professor of Artificial Intelligence and Cognitive Science at the University of Birmingham. His conclusions in this article, as he admits,

“are not the theories of a specialist researcher in education or educational technology”

but are strictly from his own AICS perspective, plus his experience of teaching programming and AI and related subjects to undergraduate students.

My perspective is essentially that of a primary and secondary mathematics teacher who has long taken an interest in educational research, so I was immediately suspicious of his title, which he attempts to justify like this:

# Gender differences in mathematics anxiety

A new study published today in BioMed Central’s open access journal Behavioral and Brain Functions [Devine A, Fawcett K, Szűcs D, Dowker A (2012), Gender differences in mathematics anxiety and the relation to mathematics performance while controlling for test anxiety. Behavioral and Brain Functions (in press; not yet online at the time of writing).] reports that a number of school-age children suffer from mathematics anxiety and, although both genders’ performance is likely to be affected as a result, girls’ maths performance is more likely to suffer than boys’.

# Aaron Sloman: Is education research a form of alchemy?

This piece by Aaron Sloman in ALT News Online may be of interest:

First three paragraphs:

Alchemists did masses of data collection, seeking correlations. In the process they learnt a great many useful facts – but lacked deep explanations. Searching for correlations can produce results of limited significance when studying processes with an underlying basis of mechanisms with astronomical generative power. But this correlation-seeking approach characterises much educational research.

Accelerated progress in chemistry came from developing a deep explanatory theory about the hidden structure of matter and the processes such structure could support (atoms, subatomic particles, valence, constraints on chemical reactions, etc.). Thus deep research requires (among other things) the ability to invent powerful explanatory mechanisms, often referring to unobservables.

My experience of researchers in education, psychology, social science and similar fields is that the vast majority of the ones I have encountered have had no experience of building, testing, and debugging, deep explanatory models of any working system. So their education does not equip them for a scientific study of education, a process that depends crucially on the operations of the most sophisticated information processing engines on the planet, many important features of which are still unknown. [Read more...]

# Distribution of abilities

From the Abstract of the paper The best and the rest: revisiting the norm of normality of individual performance, by Ernest O’Boyle Jr. and Herman Aguinis:

We revisit a long-held assumption in human resource management, organizational behavior, and industrial and organizational psychology that individual performance follows a Gaussian (normal) distribution. We conducted 5 studies involving 198 samples including 633,263 researchers, entertainers, politicians, and amateur and professional athletes. Results are remarkably consistent across industries, types of jobs, types of performance measures, and time frames and indicate that individual performance is not normally distributed—instead, it follows a Paretian (power law) distribution. Assuming normality of individual performance can lead to misspecified theories and misleading practices. Thus, our results have implications for all theories and applications that directly or indirectly address the performance of individual workers including performance measurement and management, utility analysis in preemployment testing and training and development, personnel selection, leadership, and the prediction of performance, among others.

I am fully occupied with marking exams just now and am not able to look into the paper carefully.

Some thoughts, though.

The general thesis of non-normality is fine, but it is a bit like claiming that the Earth orbits the Sun: true, but hardly news.

Calibrated measures, such as IQ and many psychological tests, are almost by definition normally distributed in populations (more precisely, in the population for which they were developed). There are underlying assumptions, of course, but the point is that such measures are tailored to be normal. Similarly, exam performance may be calibrated to look normal by an appropriate choice of questions and allocation of marks.
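To illustrate the calibration point: a rank-based inverse normal transform makes any set of distinct scores look normal by construction, whatever the shape of the raw distribution. A minimal sketch in Python with NumPy and SciPy (the lognormal "raw scores" are purely illustrative, not real test data):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
raw = rng.lognormal(mean=0.0, sigma=1.0, size=10_000)  # heavily skewed "raw scores"

# Rank-based inverse normal transform: replace each raw score by the
# standard normal quantile of its rank. The calibrated scores are then
# approximately standard normal by construction.
ranks = stats.rankdata(raw)
calibrated = stats.norm.ppf((ranks - 0.5) / len(raw))

print(stats.skew(raw))         # strongly positive
print(stats.skew(calibrated))  # essentially zero
```

The transform preserves the ordering of the scores but discards the shape of the original distribution entirely, which is exactly why observed normality of calibrated measures tells us nothing about the underlying performance distribution.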

Looking at Study 1, for example, it is absurd to talk about a normal distribution. Firstly, before even looking at the data we know that the values are non-negative and generally small. The methodology of selecting “leading” journals makes the numbers smaller still. I should think more before criticising, but such a selective procedure invalidates the basic assumptions behind a normal approximation.

The histograms shown towards the end of Study 1 make this clear, and should have been put at the beginning of the study. The mean and the standard deviation are almost meaningless for this kind of data (counts, with the mass concentrated at low values). If anything, the histograms suggest starting with an exponential or Gamma distribution and discarding the normal outright. Pareto is fine as well.
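A hedged sketch of the comparison (Python with SciPy; the simulated "counts" are purely hypothetical stand-ins for skewed non-negative data, not the Study 1 numbers): fit each candidate family by maximum likelihood and compare total log-likelihoods, where higher means a better fit.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
# Hypothetical stand-in for publication-count-like data:
# non-negative, concentrated at small values, long right tail.
counts = rng.gamma(shape=2.0, scale=5.0, size=2000)

# floc=0 pins the support of the non-negative families to [0, inf).
ll = {
    "normal": stats.norm.logpdf(counts, *stats.norm.fit(counts)).sum(),
    "exponential": stats.expon.logpdf(counts, *stats.expon.fit(counts, floc=0)).sum(),
    "gamma": stats.gamma.logpdf(counts, *stats.gamma.fit(counts, floc=0)).sum(),
}
for name, value in sorted(ll.items(), key=lambda kv: -kv[1]):
    print(f"{name:12s} log-likelihood = {value:.1f}")
```

On data of this shape the normal comes last, which is the point: beating the normal is a very low bar.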

Also, using a $$\chi^2$$ test to evaluate the quality of the fit is primitive.

By the way, the fact that a Pareto distribution fits better than the normal does not show that it is any good: any of the distributions mentioned above will fit better than the normal.

As it happens, I recently marked a second-year assignment for Practical Statistics in which students do this kind of thing (fitting exponential distributions and evaluating the fit). They would not have got good marks for using a $$\chi^2$$ test: QQ-plots and Kolmogorov–Smirnov-type tests are much better.

Students certainly do not get good marks if they simply fit a distribution and do not evaluate the quality of the fit.
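The kind of check meant here can be sketched as follows (Python with SciPy; the sample is simulated, not assignment data): fit a distribution by maximum likelihood, then evaluate the fit with a Kolmogorov–Smirnov statistic rather than $$\chi^2$$.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
sample = rng.exponential(scale=3.0, size=500)

# Fit an exponential by maximum likelihood (floc=0 pins the origin),
# then test the fit. Caveat: because the scale was estimated from the
# same sample, the standard KS p-value is conservative (a Lilliefors-
# style correction would be more accurate), but the statistic is still
# a far better summary of fit than chi-square over arbitrary bins.
loc, scale = stats.expon.fit(sample, floc=0)
ks_expon = stats.kstest(sample, stats.expon(loc=loc, scale=scale).cdf)

# For comparison, force a normal fit onto the same data:
mu, sigma = stats.norm.fit(sample)
ks_norm = stats.kstest(sample, stats.norm(mu, sigma).cdf)

print(f"exponential: D = {ks_expon.statistic:.3f}, p = {ks_expon.pvalue:.3f}")
print(f"normal:      D = {ks_norm.statistic:.3f}, p = {ks_norm.pvalue:.3f}")

# A QQ-plot gives the same message graphically, e.g.
# stats.probplot(sample, dist=stats.expon, sparams=(loc, scale), plot=ax)
```

The KS statistic is the maximum distance between the empirical and fitted cumulative distribution functions, so unlike $$\chi^2$$ it does not depend on an arbitrary choice of bins.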

# Imaging study reveals differences in brain function for children with math anxiety

Scientists at the Stanford University School of Medicine have shown for the first time how brain function differs in people who have math anxiety from those who don’t.
A series of scans conducted while second- and third-grade students did addition and subtraction revealed that those who feel panicky about doing math had increased activity in brain regions associated with fear, which caused decreased activity in parts of the brain involved in problem-solving.

The paper itself: Christina B. Young, Sarah S. Wu, and Vinod Menon, The Neurodevelopmental Basis of Math Anxiety. Psychological Science OnlineFirst, published on March 20, 2012 as doi:10.1177/0956797611429134. A pdf file is available here.

# Symposium on Mathematical Practice and Cognition II

Extended deadline: submissions due Friday 2nd March, 2012

This is a sequel to the Symposium on Mathematical Practice and Cognition held at the AISB 2010 convention at de Montfort University, Leicester, UK. The Symposium on Mathematical Practice and Cognition II (http://homepages.inf.ed.ac.uk/apease/aisb12/home.html) will be one of several forming the AISB/IACAP World Congress 2012 (http://events.cs.bham.ac.uk/turing12/), in honour of Alan Turing.