Learning Preferences

Learning preferences are one of my pet peeves. Originally introduced as a classroom management tool (vary your presentation to help maintain interest), they have become one of the foundations of modern education. Everyone has a preferred learning style; I don’t argue with this. But what does a preference have to do with ability? As a cognitive scientist, I have never been able to find a scientific basis for the claim that matching teaching to a learning preference makes learning easier.

The evidence that has been gathered clearly shows that there is no scientific justification for the concept and, in fact, demonstrates that it can be detrimental to learning.

Combine this with the mindset research done by Dweck, and you can see that learning styles can be extremely harmful to a person’s education. Dweck found that a person’s beliefs have a massive influence on their motivation. For example, if a person believes that they are poor at mathematics, they will put in minimal effort to improve their ability, because, after all, they are poor at math. It doesn’t matter what the reality is; if they believe it, they won’t try.

Now, label a child as a kinaesthetic learner, and (if they believe it) they will no longer put in effort to learn material presented in a visual or auditory modality. They are a kinaesthetic learner, and they can’t learn any other way. It doesn’t matter what the reality is, only what the child believes. This applies to labels in general (think: types of intelligence), and not just to learning styles.



Double Marking

I was reading about double marking in a book on assessment (Bloxham & Boyd) and found myself wondering why evidence is ignored when recommending best practice. Double marking is extremely expensive, and there is no evidence that it accomplishes anything other than reassuring both staff and students that the system is fair (Hand & Clewes, 2000).

So, what are the arguments against double marking?

1) Cost. Double marking doubles marking loads. In an era of mass education and diminishing budgets, can we afford to double mark?

2) Regression to the mean. We already have problems with markers gravitating towards some average mark when they evaluate student work. Double marking makes this worse: because disagreements between markers are typically resolved by averaging or negotiating towards the middle, markers tend to avoid both very high marks for brilliant work and very low marks for extremely poor work.

3) Marking criteria problems. There are well documented problems in the use of marking criteria, with the usual tagline of “we must be more explicit” when determining what the criteria actually mean. However, I have not seen anyone actually discuss the limits that human cognition places on using a marking criteria matrix. Human working memory can hold roughly five to seven items. Most marking criteria have at least five explicit dimensions, with a number of levels of performance attached to each dimension. If I were to produce a simple matrix with five dimensions and only five levels of performance (very slim given most markers want a range of 100 marks), I would have 25 cells to keep in mind while reading and evaluating a given student’s work. It is no wonder researchers keep finding very low validity and reliability when they look into the application of marking criteria.

4) Personal bias. Although this is related to marking criteria, it is large enough to be considered a problem in its own right. How important is the overall story being told in the essay, report, dissertation, etc.? How important are the individual sections? How important is the grammar? The punctuation? The formatting? Different markers value different aspects of the work to different degrees (Read, Francis & Robson, 2005). Am I wrong for valuing critical thinking more than the look and format of a piece of work? Is my colleague wrong for focusing on the style of presentation? All of these things are important, but to think that any one of us has the superhuman ability to evaluate all of them to the same degree is foolish.

Double marking is an expensive security blanket that obscures the marking process and encourages mediocrity.

Read, B., Francis, B., & Robson, J. (2005). Gender, ‘bias’, assessment and feedback: Analysing the written assessment of undergraduate history essays. Assessment and Evaluation in Higher Education, 30(3), 243–262.