Practice at “Guesstimating” Can Speed Up Math Ability

A person’s math ability can range from simple arithmetic to calculus and abstract set theory. But there’s one math skill we all share: A primitive ability to estimate and compare quantities without counting, like when choosing a checkout line at the grocery store. Practicing this kind of estimating may actually improve our ability to do the kinds of symbolic math we learn in school, according to new research published in Psychological Science, a journal of the Association for Psychological Science.
Previous studies have suggested a connection between the approximate number system, involved in estimating, and mathematical ability. Psychological scientists Elizabeth Brannon and Joonkoo Park of Duke University devised a series of experiments to test this association.
The researchers enrolled 26 adult volunteers and had them complete 10 training sessions designed to hone their approximate number skills. In each of these training sessions, the participants practiced adding and subtracting large quantities of dots without counting them.
They were briefly shown two arrays of 9 to 36 dots on a computer screen and then asked whether a third set of dots was larger or smaller than the sum of the first two sets, or whether it matched the sum.
“It’s not about counting, it’s about rough estimates,” explains Park, a postdoctoral researcher at Duke.
As participants improved at the game, the automated sessions became more difficult, bringing the quantities they had to judge closer together.
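The paper does not publish its training code, but the adaptive logic described above is easy to sketch. The following Python snippet is a rough illustration only; the function names, the step size, and the simple one-up/one-down rule are my own assumptions, not the study's actual staircase procedure (the real task also included trials where the third quantity matched the sum).

import random

def make_trial(ratio, lo=9, hi=36):
    """One approximate-arithmetic trial: two addend arrays and a comparison array.

    The comparison quantity differs from the true sum by the given ratio, so a
    smaller ratio means the two quantities are closer and harder to tell apart.
    """
    a, b = random.randint(lo, hi), random.randint(lo, hi)
    true_sum = a + b
    sign = random.choice([-1, 1])
    comparison = max(1, round(true_sum * (1 + sign * ratio)))
    correct = "larger" if comparison > true_sum else "smaller"
    return a, b, comparison, correct

def adapt(ratio, was_correct, step=0.05, floor=0.10, ceiling=0.50):
    """Tighten the ratio after a correct response, relax it after an error."""
    return max(floor, ratio - step) if was_correct else min(ceiling, ratio + step)

# A toy session with an always-correct responder, just to show the difficulty
# schedule tightening toward the floor over successive trials.
ratio = 0.50
for _ in range(10):
    a, b, comparison, correct = make_trial(ratio)
    ratio = adapt(ratio, was_correct=True)
print(f"Ratio after 10 correct trials: {ratio:.2f}")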
Before the first training session and after the last one, their symbolic math ability was tested with a set of two- and three-digit addition and subtraction problems, sort of like a third-grader’s homework. They solved as many of these problems as they could in 10 minutes. Another group of control participants took the math tests without the approximate number training.
Those who had received the 10 training sessions on approximate arithmetic showed more improvement in their math test scores compared to the control group.
In a second set of experiments, participants were divided into three groups to rule out the possibility that some sort of placebo effect in the first experiment had made the approximate-arithmetic group perform better. One group added and subtracted quantities as before, a second performed a repetitive and fast-paced rank-ordering task with Arabic digits, and the third answered multiple-choice questions that tapped their general knowledge (e.g., "which city is the capital of France?").
Again, the people who were given the approximate arithmetic training showed significantly more improvement in the math test compared to either control group.
“We are conducting additional studies to try and figure out what’s driving the effect, and we are particularly excited about the possibility that games designed to hone approximate number sense in preschoolers might facilitate math learning,” Park said.
Park and Brannon can’t yet isolate the mechanism behind their effect, but the research does suggest that there is an important causal link between approximate number sense and symbolic math ability.
“We think this might be the seeds — the building blocks — of mathematical thinking,” Brannon said.

###
Press release available on the APS website.
This research was supported by a James McDonnell Scholar Award, a grant from the Eunice Kennedy Shriver National Institute of Child Health and Human Development, and a Duke Fundamental and Translational Neuroscience Postdoctoral Fellowship.
For more information about this study, please contact: Joonkoo Park at joonkoo.park@duke.edu.
The APS journal Psychological Science is the highest ranked empirical journal in psychology. For a copy of the article “Training the Approximate Number System Improves Math Proficiency” and access to other Psychological Science research findings, please contact Anna Mikulak at 202-293-9300 or amikulak@psychologicalscience.org.

A response to Aaron Sloman

A response to: Aaron Sloman, Is education research a form of alchemy?

David Wells

Aaron Sloman, after a very distinguished fifty-year career, is currently the still-active Honorary Professor of Artificial Intelligence and Cognitive Science at the University of Birmingham. His conclusions in this article, as he admits,

“are not the theories of a specialist researcher in education or educational technology”

but are drawn strictly from his own AI and Cognitive Science perspective, plus his experience of teaching programming, AI, and related subjects to undergraduate students.

My perspective is essentially that of a primary and secondary mathematics teacher who has long taken an interest in educational research, so I was immediately suspicious of his title, which he attempts to justify like this:

Continue reading

Gender differences in mathematics anxiety

A new study published today in BioMed Central’s open access journal Behavioral and Brain Functions [Devine A, Fawcett K, Szűcs D, Dowker A (2012), Gender differences in mathematics anxiety and the relation to mathematics performance while controlling for test anxiety. Behavioral and Brain Functions (in press; not yet online at the time of writing)] reports that many school-age children suffer from mathematics anxiety and that, although both genders’ performance is likely to be affected as a result, girls’ maths performance is more likely to suffer than boys’.

Continue reading

Aaron Sloman: Is education research a form of alchemy?

This piece by Aaron Sloman in ALT News Online may be of interest:

First three paragraphs:

Alchemists did masses of data collection, seeking correlations. In the process they learnt a great many useful facts – but lacked deep explanations. Searching for correlations can produce results of limited significance when studying processes with an underlying basis of mechanisms with astronomical generative power. But this correlation-seeking approach characterises much educational research.

Accelerated progress in chemistry came from developing a deep explanatory theory about the hidden structure of matter and the processes such structure could support (atoms, subatomic particles, valence, constraints on chemical reactions, etc.). Thus deep research requires (among other things) the ability to invent powerful explanatory mechanisms, often referring to unobservables.

My experience of researchers in education, psychology, social science and similar fields is that the vast majority of the ones I have encountered have had no experience of building, testing, and debugging, deep explanatory models of any working system. So their education does not equip them for a scientific study of education, a process that depends crucially on the operations of the most sophisticated information processing engines on the planet, many important features of which are still unknown.

Distribution of abilities

From the abstract of the paper The best and the rest: revisiting the norm of normality of individual performance by Ernest O’Boyle Jr. and Herman Aguinis:

We revisit a long-held assumption in human resource management, organizational behavior, and industrial and organizational psychology that individual performance follows a Gaussian (normal) distribution. We conducted 5 studies involving 198 samples including 633,263 researchers, entertainers, politicians, and amateur and professional athletes. Results are remarkably consistent across industries, types of jobs, types of performance measures, and time frames and indicate that individual performance is not normally distributed—instead, it follows a Paretian (power law) distribution. Assuming normality of individual performance can lead to misspecified theories and misleading practices. Thus, our results have implications for all theories and applications that directly or indirectly address the performance of individual workers including performance measurement and management, utility analysis in preemployment testing and training and development, personnel selection, leadership, and the prediction of performance, among others.
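For readers who want the contrast in symbols (these are the standard textbook forms, not the paper's own parameterisation): a normal distribution with mean \(\mu\) and standard deviation \(\sigma\) has density

\[
 f_{\mathrm{N}}(x) = \frac{1}{\sigma\sqrt{2\pi}} \exp\!\left(-\frac{(x-\mu)^2}{2\sigma^2}\right),
\]

while a Pareto distribution with scale \(x_m > 0\) and shape \(\alpha > 0\) has density

\[
 f_{\mathrm{P}}(x) = \frac{\alpha\, x_m^{\alpha}}{x^{\alpha+1}}, \qquad x \ge x_m.
\]

The Gaussian tail decays like \(e^{-x^2/2\sigma^2}\), whereas the Pareto tail decays only polynomially, which is why a Paretian model puts far more weight on exceptional performers.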

 

I am tied up marking exams just now and am not able to look into the paper carefully.

Some thoughts, though.

The general thesis of non-normality is fine, but it is a bit like claiming that the Earth orbits the Sun: true, but hardly news.

Calibrated measures, such as IQ and many psychological tests, are almost by definition normally distributed, at least in the population for which they are developed. There are underlying assumptions, of course, but the point is that they are tailored to be so. Likewise, exam performance may be calibrated to look normal by an appropriate choice of questions and allocation of marks.

Looking at Study 1, for example, it is ridiculous to talk about a normal distribution at all. Firstly, before even looking at the data we know that the values are non-negative and generally small. The methodology of selecting “leading” journals makes the numbers even smaller. I should think more before criticising, but such a selective procedure invalidates the basic assumptions underlying a normal approximation.

The histograms shown towards the end of Study 1 make this clear; they should have been presented at the beginning of the study. The mean and the standard deviation are almost meaningless for this kind of data (counts, concentrated at low values). If anything, the histograms suggest starting with an exponential or Gamma distribution and discarding the normal outright. Pareto is fine as well.

Also, using \(\chi^2\) to evaluate the quality of the fit is primitive.

By the way, the fact that Pareto is better than normal does not show that it is any good. Any of the distributions mentioned above will be better than normal.

As it happens, I recently marked a second-year Assignment for Practical Statistics in which students do this kind of thing (fitting exponential distributions and evaluating the fit). They would not have got good marks for using a \(\chi^2\) test; QQ-plots and Kolmogorov–Smirnov type tests are much better.

Students certainly do not get good marks if they simply fit a distribution and do not evaluate the quality of the fit.
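To make the last point concrete, here is a short Python sketch of the kind of check I have in mind. The data are simulated (skewed, non-negative, count-like), not the paper's, and the particular distributions, seed, and variable names are my own choices; the point is simply that fitting a candidate distribution and then assessing the fit with a Kolmogorov–Smirnov statistic and a QQ-plot takes only a few lines.

import numpy as np
from scipy import stats
import matplotlib.pyplot as plt

# Simulated stand-in for count-like performance data (not the paper's data):
# non-negative, heavily skewed, most values small.
rng = np.random.default_rng(0)
data = rng.pareto(a=2.0, size=500) + 1.0

# Fit candidate distributions by maximum likelihood.
expon_params = stats.expon.fit(data)      # (loc, scale)
gamma_params = stats.gamma.fit(data)      # (shape, loc, scale)

# Kolmogorov-Smirnov statistic of each fitted model against the sample.
# Note: the p-values are optimistic here because the parameters were
# estimated from the same data; the statistic is still useful for comparison.
print("Exponential:", stats.kstest(data, "expon", args=expon_params))
print("Gamma:      ", stats.kstest(data, "gamma", args=gamma_params))

# QQ-plot against the fitted exponential: systematic curvature away from the
# reference line is an immediate visual sign of a poor fit.
stats.probplot(data, dist=stats.expon, sparams=expon_params, plot=plt)
plt.title("QQ-plot against fitted exponential")
plt.show()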

Imaging study reveals differences in brain function for children with math anxiety

Scientists at the Stanford University School of Medicine have shown for the first time how brain function differs in people who have math anxiety from those who don’t.
A series of scans conducted while second- and third-grade students did addition and subtraction revealed that those who feel panicky about doing math had increased activity in brain regions associated with fear, which caused decreased activity in parts of the brain involved in problem-solving.

The paper itself: Christina B. Young, Sarah S. Wu, and Vinod Menon. The Neurodevelopmental Basis of Math Anxiety. Psychological Science OnlineFirst, published on March 20, 2012 as doi:10.1177/0956797611429134. A PDF file is available here.

Symposium on Mathematical Practice and Cognition II

Extended deadline: submissions due Friday 2nd March, 2012

This is a sequel to the Symposium on Mathematical Practice and Cognition held at the AISB 2010 convention at de Montfort University, Leicester, UK. The Symposium on Mathematical Practice and Cognition II (http://homepages.inf.ed.ac.uk/apease/aisb12/home.html) will be one of several forming the AISB/IACAP World Congress 2012 (http://events.cs.bham.ac.uk/turing12/), in honour of Alan Turing.

Continue reading