Cognitive Development and Higher Education

Cognitive development across the lifespan throws up an interesting problem for us here in Higher Education.

There is fairly widespread agreement that Piaget got his developmental stages pretty close to the mark as he described how people develop from infancy through to adulthood. Although there is some argument about the details, and some adjustments have been made here and there, the basic premise has stood the test of time.

The quandary faced by the higher education community lies in the final stage of cognitive development proposed by Piaget: the formal operational thinking stage that emerges in adolescence. As a normally developing child moves through childhood, they will reach a cognitive developmental milestone, acquire whatever skills are attached to that stage of thinking, and move on.

As an example, one of the early childhood stages is called egocentrism. Simply put, in this stage (which finishes at about age four), a child thinks that everyone sees and experiences the world the same way that they do. If a child in this stage is viewing a scene and asks you about something they are seeing, they cannot conceive that you might not see exactly what they do, regardless of where you are. However, once a child passes through the stage, that way of thinking never returns. I doubt very much that you have experienced this recently, because once the stage is passed, it is simply the way you think.

This fairly linear developmental pattern holds true for virtually every cognitive developmental stage that we go through. However, it is not true of the final, formal operational thinking stage. Although the ability to think in a formal operational way emerges during adolescence, thinking in this way requires teaching and practice. This is the only stage of cognitive development that works this way. All of the other stages we simply acquire, but the formal operational stage only bestows on us the ability to think that way, not the thinking itself.

Why is this a quandary for higher education? Because the higher part of higher education refers to exactly the kind of thinking that has to be developed for the expression of formal operational thought. It doesn't just happen; it has to be taught and practised. We tend to call this thinking critical thinking and expect that our students arrive with this ability in place, ready to be fully expressed during their higher education. When that doesn't happen, we are filled with disappointment and blame the secondary school system or the students themselves for not being prepared.

The research demonstrates that only a few (about 10%) of the adult population are ever fully equipped with formal operational thinking skills – whether or not they have received any higher education. Between 30% and 40% of the population completely lack the ability to engage in this type of thought. The remaining 50% to 60% have some formal operational thinking skills, ranging from barely demonstrating that they have any to usually, but not always, using them.

Given that we are now educating about 40% (or more) of the general population, how can it be that only about 10% are able to consistently use formal operational thinking skills to solve problems and analyze information? Because our model of “sit down, shut up, face the front, memorize, and regurgitate”, used in 90% (or more) of higher education classrooms, neither teaches nor requires the use of formal operational thinking skills.

The skills I’m talking about would include some of the following:

  • a desire to seek, patience to doubt, fondness to meditate, slowness to assert, readiness to consider, carefulness to dispose and set in order; and hatred for every kind of imposture (Bacon, 1605)

  • the intellectually disciplined process of actively and skillfully conceptualizing, applying, analyzing, synthesizing, and/or evaluating information gathered from, or generated by, observation, experience, reflection, reasoning, or communication, as a guide to belief and action (Paul, 1987)

  • self-guided, self-disciplined thinking which attempts to reason at the highest level of quality in a fair-minded way (Elder)

  • the mental processes, strategies, and representations people use to solve problems, make decisions, and learn new concepts (Sternberg, 1986, p. 3)

  • the propensity and skill to engage in an activity with reflective skepticism (McPeck, 1981, p. 8)

  • reflective and reasonable thinking that is focused on deciding what to believe or do (Ennis, 1985, p. 45)

  • thinking that is goal-directed and purposive, “thinking aimed at forming a judgment,” where the thinking itself meets standards of adequacy and accuracy (Bailin et al., 1999b, p. 287)

  • judging in a reflective way what to do or what to believe (Facione, 2000, p. 61)

  • skillful, responsible thinking that facilitates good judgment because it 1) relies upon criteria, 2) is self-correcting, and 3) is sensitive to context (Lipman, 1988, p. 39)

  • the use of those cognitive skills or strategies that increase the probability of a desirable outcome (Halpern, 1998, p. 450)

  • seeing both sides of an issue, being open to new evidence that disconfirms your ideas, reasoning dispassionately, demanding that claims be backed by evidence, deducing and inferring conclusions from available facts, solving problems, and so forth (Willingham, 2007, p. 8)

  • purposeful, self-regulatory judgment which results in interpretation, analysis, evaluation, and inference, as well as explanation of the evidential, conceptual, methodological, criteriological, or contextual considerations upon which that judgment is based (Facione, 1990, p. 3)

I have written extensively about the state of higher education today, but our failure to deliver on our historical core purpose beggars belief. We can do better than this.


How could we take something as natural and wonderful as learning and turn it into education?


Thought and Power

In July, I wrote about learning thresholds, and how we could use technology to define and attain learning thresholds. I was reading Diane Halpern’s “Thought and Knowledge” yesterday, and came across a passage that explains my thinking about learning thresholds, memorization, and critical thinking. It reads:

…thought is powerful only when it can utilize a large and accurate base of knowledge (page 5).

The preceding part of that line is also important in the context of learning in today’s world: “…knowledge is powerful only when it is applied appropriately”.

Never before has the world had a greater need of people who can use critical thinking skills, and never before have we had a greater paucity of critical thinking skills – when compared to the total number of ‘educated’ individuals.

The large and accurate base of knowledge has taken precedence over everything else. And, as the sheer amount of information continues to increase to amounts truly unimaginable at a human scale, the obsession with the large and accurate knowledge base threatens to overwhelm us with multitudes of memorizers who have no concept of thinking.

My problem is not with ‘a’ large and accurate knowledge base, but with ‘the’ large and accurate knowledge base. Those charged with the preparation of the next generation of thinkers have often spent years, if not decades, accumulating and conceptualizing their sliver of the world, and are rightly called experts in their fields. However, that expertise in no way prepares them to teach novices how to think. And the current state of affairs in higher education doesn’t allow subject experts to become learning experts. These experts, with their more and more information, more and more classes, programmes, degree schemes, and areas of study, focus so intently on their field of study (as they are rewarded to do) that they have no conception of what the problem is. Except that they know that their students are not becoming the experts they think they are training them to be.

Nowhere is this better illustrated than in the recent findings that over 95% of university leaders thought their graduates were well prepared for the world of work, while only about 10% of business leaders agreed (see below). We are so out of touch with reality that we are rapidly losing the credibility we are banking on to carry us through the disruptive innovation digitization has landed us in.

We can, and need to, do better.

I found the evidence – and here it is:

“…(in) Inside Higher Ed‘s 2014 survey of chief academic officers, ninety-six percent said they were doing a good job – but… in a new survey by Gallup measuring how business leaders and the American public view the state and value of higher education, just 14 percent of Americans – and only 11 percent of business leaders – strongly agreed that graduates have the necessary skills and competencies to succeed in the workplace.”

Evidence & HE

One of the real challenges I face in trying to convince people that there are better ways to approach education is an attitude towards evidence that I don’t understand. I was talking to one educator about the evidence from psychology about how to motivate students to engage in their academic studies. Her response puzzled me, but it is something I have heard before and since. She said: “That’s all right if you believe in that kind of stuff.” When I asked about the stuff she was referring to, she said she meant research, because (according to her) we all know researchers can find any outcome that fits their agenda.

Needless to say, that was the most extreme example of the dismissal of evidence, but certainly not a rare one.

In my research methods class, when I used to talk to the first year students about rational thinking and evidence, I used an audience response system to poll the students about various aspects of their understanding. One of the questions I used to ask was:

Should the major decisions in our society be based on (a) solid evidence gathered using the best research methods available, or (b) feelings, beliefs and just “knowing” when something should be a certain way?

As it was during a lecture on rational decision making, of course I would get 98% responding with “a” as the appropriate response.

I then showed the following slide.

[Slide 23]

While showing this slide, I explained to the students that a placebo-controlled randomised study is about as good as it gets in the clinical scientific world, and that the homeopathic society was saying that the best science couldn’t measure the effects of homeopathic medicine. I then repeated the question:

Should the major decisions in our society be based on (a) solid evidence gathered using the best research methods available, or (b) feelings, beliefs and just “knowing” when something should be a certain way?

To my surprise (the first year I did this), those responding with “a” dropped to about 55%. These are students who enrolled in university to obtain a BSc in psychology from one of the top five psychological research departments in the UK. Suddenly, there was something they wanted to believe in, and the idea of using science to answer a question wasn’t that important to them.

I have always hoped that by the time the students graduated with their degrees, they would once again put science and evidence back into a premier place for answering questions about the world. And yet, I have my doubts.

The Right Answer

Roger Schank wrote something last week that I think is worth looking at:

Math and science are meant to teach thinking (or so it is said). They could actually teach thinking of course, but when the scientific questions are given to you, and the right answers are taught to you, science ceases to be about observation, experimentation, hypothesis creation, and reasoning from evidence, and becomes memorization to get good scores on multiple choice tests.

Does constantly coming up with the right answer mean that we don’t learn to think? I can expect individuals who are uneducated to undervalue the power of rational thinking and the scientific method; evidence, to the uninitiated, is nothing better than opinion. But an education should, at its very core, be about thinking, rational thinking, and the critical evaluation of evidence. If a person has been trained to understand the process and rigour that accompany the proper application of the scientific method, and the strength of properly obtained evidence, how can scientific findings be something that you can simply dismiss as though they were nothing more than opinions?

Scientific discovery has laid the foundation for much of what we enjoy in the world today. However, conservative influences in society, just as in the past, use whatever power is at their disposal to ensure that science only supports the worldview that is already established. Delivering well-educated, thinking individuals is what is needed to counterbalance the antiscientific influence that has arisen in recent years. Unfortunately, ‘well educated’ has come to mean great memorisation.

I would suggest that our obsession with content and getting the right answer has meant that rational thinking has become an optional extra in HE.

 

How have we made something as exhilarating as learning as oppressive as education?

Behaviourism and Learning

I know that a number of psychologists will tell you that behaviourism and learning are just two names for the same thing, and they might be right in some way, but the learning I am talking about isn’t based on rats running a maze, but on students learning higher thinking skills. After all, higher thinking skills are what the higher stands for in Higher Education. Rats running a maze, however, is where the behavioural principles I want to focus on come from. And for those who thought there was a cognitive revolution in psychology, behaviourism is still very much alive and kicking (as it should be – even though I am a party to the revolution). The principles are still as valid today as they were when they were articulated 50 years ago.

So what does behaviourism have to say about learning (as in education)? Actually, it can tell us quite a bit. I have been reading and writing about assessment this past year, and it astounds me that assessment methods, for all that we talk about innovation, are still very much based on tradition. Given how much we know about how we learn and master a skill, why are assessments still all done, basically, the same way?

Let me explain. Behaviourist interventions are the best interventions we have for changing behaviours (I’m not talking about a philosophy here, but about what the evidence says). The basic principle for a behaviourist intervention is as follows: identify a behaviour that needs changing, isolate that behaviour, take a baseline measure of the behaviour, introduce the intervention, and then measure the effect of the intervention. It can be a bit more complicated than that, but not much. You could think of education and learning as changing thinking and behaviours. After all, that is essentially what we are trying to do. Looked at through a behavioural prism, we don’t do a very good job of changing behaviours and thinking. It isn’t that the students (our subjects) aren’t willing, it is just that they struggle to figure out what we want.
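To make the shape of that cycle concrete, here is a minimal sketch in Python. The behaviour, the scores, and the function name are invented purely for illustration; this is not a description of any particular intervention, only of the structure of one.

```python
from statistics import mean

# A minimal sketch of the cycle described above: isolate one behaviour,
# take a baseline measure, introduce the intervention, measure the effect.
# All numbers below are hypothetical.

def intervention_effect(baseline_scores, post_intervention_scores):
    """Compare the baseline measure of one isolated behaviour with the
    measure taken after the intervention has been introduced."""
    return mean(post_intervention_scores) - mean(baseline_scores)

# 1. Identify and isolate a single behaviour (e.g. citing an acceptable source).
# 2. Take a baseline measure of that behaviour.
baseline_scores = [2, 3, 2, 4, 3]   # hypothetical counts per week before the intervention
# 3. Introduce the intervention (feedback, practice, reinforcement).
# 4. Measure the behaviour again and look at the change.
post_scores = [5, 6, 6, 7, 6]       # hypothetical counts per week afterwards

print(f"Change in the isolated behaviour: {intervention_effect(baseline_scores, post_scores):+.1f}")
```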

What I mean by that can be understood if you think about a student submitting an essay for grading. You might say that innovative assessment means an essay is only one tool from a large toolbox, but the point I am making applies to most of what is available and used. In an essay, we expect that the student will have a well-structured argument, use strong sources, show evidence of critical thinking, throw in a bit of originality, write in an easy-to-read style, use proper spelling, punctuation and grammar, and get the content right. That means we are evaluating (at least) nine dimensions on a single piece of work. We expect the student to produce a multidimensionally superb piece of work that we then take in for marking. We, in as little time as possible (given that there are 183 sitting on our desks due back next Tuesday), become an expert judge (a future blog on this concept) of a hypercomplex problem, providing a simultaneous evaluation of the multifaceted dimensions and awarding an appropriate level of credit for the work. And then we wonder about the lack of reliability between markers.

So much for isolating something that we need to change, introducing an intervention, and then measuring the outcome of the intervention. Why can’t we design a higher level learning environment that isolates the skill or content being targeted, and then introduces an intervention that continues until a mastery level of achievement is reached, before introducing something more complicated?
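As a rough illustration of what such a design might look like, here is a short sketch in Python. The targeted skills, the mastery threshold, and the practise-and-assess step are hypothetical stand-ins; the point is only the structure: one isolated target at a time, with the intervention continuing until the mastery criterion is met.

```python
import random

# Hypothetical, isolated targets, introduced one at a time from simplest to most complex.
TARGETS = ["use an acceptable source", "structure an argument", "integrate the evidence"]
MASTERY_THRESHOLD = 0.8          # assumed criterion: 80% success before moving on

def practise_and_assess(ability):
    """Stand-in for one cycle of practice, feedback, and reassessment;
    returns an updated ability estimate for the isolated target."""
    return min(1.0, ability + random.uniform(0.05, 0.15))

for target in TARGETS:
    ability = 0.4                # assumed starting level for each new target
    cycles = 0
    while ability < MASTERY_THRESHOLD:      # the intervention continues until mastery
        ability = practise_and_assess(ability)
        cycles += 1
    print(f"Mastered '{target}' after {cycles} practice cycles")
```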

In my Science of Learning module, I focus the students on providing evidence from an acceptable source for their blog entries and incorporating that evidence into a well-structured argument – two higher thinking skills that the students practise over and over (at least 14 times) during the semester. One of the criticisms I faced the first time I taught this way was that final year undergraduates shouldn’t all be able to get top marks in a class; there needs to be more spread, recognising their varying abilities. However, I asked, in our examination board, if the students meet the criteria that I set, and meet them well, shouldn’t they receive full credit for having done so? The exam board agreed with me. They also agreed that the learning outcomes (providing evidence from an acceptable source and incorporating that evidence into a well-structured argument) were more than appropriate for a final year class. Just because the bulk of the students master them shouldn’t make their accomplishment any less.

I think it is a shame that this kind of learning is so rare in education. It doesn’t have to be.

The Tyranny of Content

Stuff – that’s what we teach – stuff. Stuff and more stuff.

We live in the information age, where information (and good quality information at that) is widely and freely available to more and more of us. Certainly the availability of information to university students is at unprecedented levels. And yet, our teaching models are still largely based on presenting information.

I have colleagues who complain about how full their modules are, how they can’t do anything other than straight lecture or they won’t cover the material, and how they feel overwhelmed by new material that needs to be incorporated because there is just so much old material that the students need to know. I call this the tyranny of content.

In HE, we decide the content. As professionals and experts in our respective fields, we decide what is important and what is not. We put together a syllabus, we assemble the learning objectives and declare the learning outcomes. We decide the content. If someone else is doing that for us, we become trainers rather than professionals.

Given the amount of information that is available to teach, we can never hope to design a programme that would cover it all. So why are we trying so hard to do just that?

We need to remember that the ‘H’ in Higher Education stands for higher order skills (critical thinking, critical analysis, synthesis etc.). How can cramming more stuff into a syllabus help students gain higher order thinking skills? I know that there are a few enlightened lecturers out there who focus on the higher order skills, but most of the faculty at HE institutions are focused on stuff.

There are simple reasons why this has happened. Stuff is easier to teach and assess. Stuff is what the students demand – they want a drip feed of facts that they can spew forth on demand. Stuff easily fits into the measurable metrics administrators demand. Stuff rules! We are slaves to the tyranny of content.

Higher order skills are incredibly difficult to teach (or can be). They are very hard to assess when the paucity of lower order skills gets in the way of evaluating what the students are trying to say. Students find higher order skills meaningless and boring (or think they do). As my last post indicated, we are shaped by the systemic expectations and reinforcers that surround us – and these rarely include higher order skills.

When someone succeeds in teaching and assessing higher order skills, we gaze on in admiration, awe-struck by their accomplishment, and envious of their success – shaking our collective heads and wishing we could do something like that before turning back to our stuff.

In every field of knowledge there is expertise. An expert knows a lot of stuff, but more importantly, they approach problems both within and outside their area of expertise in a different way than a novice would. It is how they look at the problem, how they approach the problem, and how they organise the problem that confirms to you that they are an expert in some related field. This critical examination of the problem, the organisation of the information, and the careful approach to addressing the problem is what we really need our students to learn. It is how we study the problem, not the problem itself, that is important, and yet we focus, almost exclusively, on the information around the problem, the right answer, the solution.

We need to somehow rethink what we do in HE so that we can shake off the tyranny of content. Our students have access to as much information as we have, and yet we often insist on repackaging and presenting it in our own image. We need to refocus on the higher part of higher education.

Double Marking

I was reading about double marking in a book on assessment (Bloxham & Boyd) and found myself wondering why evidence is ignored when recommending best practice. Double marking is extremely expensive, and there is no evidence that it accomplishes anything other than reassuring both staff and students that the system is fair (Hand & Clewes, 2000).

So, what are the arguments against double marking?

1) Cost. Double marking doubles marking loads. In an era of mass education and diminishing budgets, can we afford to double mark?

2) Regression to the mean. We already have problems with markers gravitating to some average mark when they evaluate student work. Double marking means that markers tend to avoid both very high marks for brilliant work and very low marks for extremely poor work.

3) Marking criteria problems. There are well documented problems in the use of marking criteria, with the usual tagline of “we must be more explicit” when determining what the criteria actually mean. However, I have not seen anyone actually discuss the limits on human cognition in using a marking criteria matrix. Human working memory can hold between five and seven items. Most marking criteria have at least five explicit dimensions, with a number of levels of performance attached to each dimension. If I were to produce a simple matrix with five dimensions and only five levels of performance (very slim given that most markers want a range of 100 marks), I would have 25 cells in a matrix that I should keep in mind while reading and evaluating a given student’s work (see the sketch after this list). It is no wonder researchers keep finding very low validity and reliability when they look into the application of marking criteria.

4) Personal bias. Although this is related to marking criteria, it is large enough to be considered a problem in its own right. How important is the overall story that is being told in the essay, report, dissertation etc.? How important are the individual sections? How important is the grammar? punctuation? formatting? Different markers value different aspects of the work to different degrees (Read, Francis & Robson, 2005). Am I wrong for valuing critical thinking more than the look and format of a piece of work? Is my colleague wrong for focusing on the style of presentation? All of these things are important, but to think that any one of us has the superhuman ability to evaluate all of them to the same degree is foolish.
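To put rough numbers on the working-memory point in (3), here is a small Python sketch. The dimension names and level counts are the hypothetical ones from the example above, and the five-to-seven-item figure is the one cited in the text.

```python
# The arithmetic behind point (3): even a modest criteria matrix exceeds
# what a marker can plausibly hold in working memory at once.

dimensions = ["argument", "sources", "critical thinking", "style", "content"]  # five assumed dimensions
levels_per_dimension = 5                 # five assumed levels of performance per dimension

matrix_cells = len(dimensions) * levels_per_dimension
working_memory_items = (5, 7)            # roughly five to seven items, as cited above

print(f"Cells in the criteria matrix: {matrix_cells}")
print(f"Items a marker can hold in mind: about {working_memory_items[0]}-{working_memory_items[1]}")
print(f"Shortfall: roughly {matrix_cells - working_memory_items[1]} cells beyond working memory")
```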

Double marking is an expensive security blanket that obscures marking and encourages mediocrity.

Read, B., Francis, B., & Robson, J. (2005). Gender, ‘Bias’, Assessment and Feedback: Analysing the Written Assessment of Undergraduate History Essays. Assessment and Evaluation in Higher Education, 30(3), 243-262.