Thought and Power

September 23, 2014

In July, I wrote about learning thresholds, and how we could use technology to define and attain them. Yesterday, reading Diane Halpern's "Thought and Knowledge", I came across a passage that captures my thinking about learning thresholds, memorization, and critical thinking. It reads:

…thought is powerful only when it can utilize a large and accurate base of knowledge (page 5).

The preceding part of that line is equally important in the context of learning in today's world: "…knowledge is powerful only when it is applied appropriately".

Never before has the world had a greater need for people who can think critically, and never before have we had a greater paucity of critical thinking skills relative to the total number of 'educated' individuals.

The large and accurate base of knowledge has taken precedence over everything else. And, as the sheer amount of information continues to grow to amounts truly unimaginable at a human scale, the obsession with the large and accurate knowledge base threatens to overwhelm us with multitudes of memorizers who have no concept of thinking.

My problem is not with 'a' large and accurate knowledge base, but with 'the' large and accurate knowledge base. Those charged with preparing the next generation of thinkers have often spent years, if not decades, accumulating and conceptualizing their sliver of the world, and are rightly called experts in their fields. However, that expertise in no way prepares them to teach novices how to think. And the current state of affairs in higher education doesn't allow subject experts to become learning experts. These experts, rewarded for piling on more information through more classes, programmes, degree schemes, and areas of study, focus so intently on their own field that they have no conception of what the problem is. Except that they know their students are not becoming the experts they think they are training them to be.

Nowhere is this better illustrated than in the recent findings that over 95% of university leaders thought their graduates were well prepared for the world of work, while only about 10% of business leaders agreed (see below). We are so out of touch with reality that we are rapidly losing the credibility we are banking on to carry us through the disruptive innovation digitization has landed us in.

We can, and need to, do better.

I found the evidence – and here it is:

“…(in) Inside Higher Ed‘s 2014 survey of chief academic officers, ninety-six percent said they were doing a good job – but… in a new survey by Gallup measuring how business leaders and the American public view the state and value of higher education, just 14 percent of Americans – and only 11 percent of business leaders – strongly agreed that graduates have the necessary skills and competencies to succeed in the workplace.”

Metacognitive Monitoring

September 22, 2014

Metacognitive monitoring is the ability of people to discriminate between what they know and what they don't know. Monitoring is considered the foundational metacognitive skill, with the other components of metacognition (e.g. metacognitive awareness or metacognitive strategies) built on that basic question: do you know what you know and what you don't know?

What we have found, across numerous studies, is that participants are normally very poor at judging whether they know something or not. In studies done in my lab, incoming university students often performed at levels just above chance when discriminating between whether they knew the information or were guessing. In other words, their metacognitive monitoring skills were very poor (not surprising, given the memorise-and-regurgitate nature of the standardised testing at the core of education today).

Based on the work around the "feeling of knowing" (FOK), we reversed the FOK paradigm and asked people to indicate their confidence in their answers after they had already answered (normal FOK asks subjects to predict how well they will recall information in the future). The question we asked was simply: how sure are you that the answer you just gave is correct? We took their confidence judgments, married them to their actual performance, and were able to devise an accurate measure of their metacognitive monitoring ability – something we have called a metacognitive index (MI).
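
To make the idea concrete, here is a minimal sketch in Python of how a monitoring score of this kind could be computed. The actual MI formula isn't given here, so this assumes a simple discrimination measure: the proportion of trials on which a person's confidence judgment agrees with whether their answer was actually correct.

```python
# Hypothetical monitoring score: the proportion of trials where confidence
# (1 = "sure", 0 = "guessing") matches correctness (1 = right, 0 = wrong).
# This is an illustrative stand-in for the MI, not the published measure.

def metacognitive_index(confidences, correct):
    """Return a score in [0, 1]; around 0.5 means chance-level monitoring."""
    if len(confidences) != len(correct):
        raise ValueError("mismatched trial counts")
    agreements = sum(c == k for c, k in zip(confidences, correct))
    return agreements / len(confidences)

# Eight trials: sure-and-right or guessing-and-wrong both count as agreement.
conf = [1, 1, 0, 1, 0, 0, 1, 0]
corr = [1, 0, 0, 1, 1, 0, 1, 0]
print(metacognitive_index(conf, corr))  # 6 of 8 trials agree -> 0.75
```

A perfect monitor (confident exactly when correct) would score 1.0, while someone whose confidence is unrelated to their accuracy would hover near 0.5.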

Although many of the subjects started out with very low MI scores (often only slightly better than chance), we wanted to know if practice with the paradigm would improve them. Through the use of reinforcement schedules to adapt their behaviour, we managed to dramatically improve their MI scores after just a few weeks (30 minutes per week for six weeks). In follow-up studies, we found that this made a significant difference in their studies, as you would expect given that metacognitive monitoring is the foundational metacognitive skill.

Armed with these results, we built an app for mobile devices (Cognaware) that both measures and develops a user's metacognitive monitoring ability. My hope is for large numbers of people to engage with the app and improve their metacognitive monitoring ability. Research has shown that irrational decision making is linked to metacognition, not intelligence – affecting business and politics in a big way. The phenomenon of politicians making the same promises every few years (and then breaking them) persists because of the poor metacognitive monitoring ability of the general population. If we could somehow improve individuals' monitoring discrimination ability (MI), I believe (based on research and expertise) that it would positively affect students, business people, and democracy as a form of government (and we certainly need the last one).

Help me spread the word about Cognaware by trying out the app, sharing your experience, and talking to people about what it can do for them.

Cognaware is based on years of research exploring ways to accurately measure and develop metacognitive monitoring. Help me make a difference in others' lives.

Question Base

July 14, 2014

The first requirement for the system I've been putting together as a thought experiment to accredit memorisation (see my three previous posts for some background) is an infinite set of well-tagged questions.

I think this is the easiest part of the system to put in place. We are all aware of the success of crowdsourcing as a way to provide content (think Wikipedia). So why don't we put together an open-source question base?

Since this learning system is simply about fluency of recall, all we need are questions about stuff. And lots of them.

It isn't simply about the questions, though. In order to make this a memorisation/learning environment, the questions have to be tagged – well tagged. This is necessary so that users can focus on their own learning desires.

The kind of tagging that would make this system useful has three varieties of tags: content domain, source, and event.

The content domain tags are the most obvious. Libraries have spent centuries (literally) organising knowledge into content domains. There are wonderful hierarchical systems that allow users to find learning resources (books, articles, papers, websites, posts, pictures, videos – and who knows what else) within a specific content domain. We haven't been all that great at tagging these resources, but there's no reason we can't start. Within the new question base, an easy-to-use content domain tagging system is a must.

The second set of tags has to do with sources. Knowledge is found somewhere, and if questions can be tagged with a specific source, that makes them all the more powerful. Specific books, journal articles, or web articles (think Wikipedia) would allow users (both learners and contributors) to specify exactly where the information that needs to be memorised to a fluent level comes from. Teachers (face-to-face or virtual) could then specify both content domain and source, along with the required level of proficiency, for an event (discussion, seminar series, etc.) so that learners can participate fluently.

Finally, event tags could be included so that learners could prepare themselves for the kind of events specified above. They could even be specified for traditional assessment events (mid-term or final exams).
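
As a sketch of what such tagged records might look like in practice (the field names and sample data below are purely illustrative, not a proposed standard), a question carrying the three tag varieties could be represented and filtered like this:

```python
# Illustrative question record with the three tag varieties from this post:
# content domain, source, and event. A real question base would need a far
# richer schema; this just shows the filtering idea.
from dataclasses import dataclass, field

@dataclass
class Question:
    prompt: str
    answer: str
    domains: list = field(default_factory=list)   # library-style hierarchy
    sources: list = field(default_factory=list)   # book, article, URL
    events: list = field(default_factory=list)    # seminar, mid-term, etc.

def select(questions, domain=None, source=None, event=None):
    """Return the questions matching every tag filter that was supplied."""
    return [q for q in questions
            if (domain is None or domain in q.domains)
            and (source is None or source in q.sources)
            and (event is None or event in q.events)]

bank = [
    Question("What is a neuron's resting potential?", "about -70 mV",
             domains=["science/biology/neuroscience"],
             sources=["Intro Neuroscience, ch. 2"],
             events=["mid-term"]),
    Question("Who wrote Hamlet?", "Shakespeare",
             domains=["humanities/literature"]),
]

print(len(select(bank, event="mid-term")))  # 1
```

A learner preparing for a specific seminar or exam would then simply pull every question whose event tag matches, at the required proficiency level.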

Properly tagged, an infinite number of questions embedded in a threshold learning system could provide learners and educators with an invaluable tool for the foundational learning we call memorisation.

Accreditation and Learning

July 11, 2014

We are entering a brave new world of learning, and – by extension – education (I hope). However, there is one aspect that has remained elusive, at least to me. That is the recognised accreditation of the learning that is taking place.

A number of pundits scoff at the very idea of accreditation as something that belongs to the age of big, centralised institutions, while those big institutions claim that accreditation is what will legitimise them in the brave new world. Others have proposed a loose form of accreditation such as badges – a recognised symbol, but what does it mean?

I am concerned about accreditation. What I worry about is the recognition of earned authority. On the internet, anyone can set themselves up as an authority about anything, and they do. Fringe groups, radicalisation, pseudoscience, conspiracy theories – all of these (and more) rely on expertise and authority figures to drive them forward. With the big institutions controlling the recognition of learning, these activities have remained on the fringes, unrecognised as mainstream or legitimate. As new forms of learning and knowledge exploration have arisen, so have the activities of these groups. If there is no ubiquitously recognised method of legitimately recognising and accrediting learning, it will be increasingly difficult for novices to differentiate between authentic authoritative sources and a self-proclaimed authority with no foundation.

I'm not talking about subject matter – that is a whole different discussion. What I'm talking about is recognising the authority of an expert, in any field, and providing me with a reason to trust that person's judgment within the area of their expertise.

This has been the motivation for my last two posts: we, as a learning community, have to come up with a universally accepted way to recognise and accredit learning. Being interested in a topic isn't enough to be authenticated. We have to, somehow, be able to display credentials that are both recognised and trusted by society at large. Knowing that someone has been awarded a PhD from a recognised university provides us with expectations about that person, simply by virtue of the degree.

Using a system like the one I wrote about in my post on learning thresholds from earlier this week would be a beginning.

Next week, I'll start outlining what we would need to put into place to realise this fairly simple concept that would allow us to accredit the fundamentally important memorisation component of learning.


Learning Thresholds

July 9, 2014

In my last post, I wrote about memorisation as a foundational component of learning. What I am going to write about today is a system to measure memorisation more accurately than the one currently in use.

Currently, a test setter (teacher, institution, etc.) determines the content domain that a test is designed to cover, writes questions that sample material from that domain, and then determines how much of the domain has been learned (memorised) by how many of the sampled questions have been answered correctly. One of the flaws in this system is that, if the test taker misses any of the questions, they are deemed to have missed the part of the content domain those questions were designed to cover. It is an all-or-nothing proposition that is supposed to accurately reflect the amount of material a person has learned.

An alternative that I would like to propose is based on psychophysical measurement.

Psychophysical measurement is the mapping of physical stimuli (e.g. light) onto psychological experience (e.g. detecting light). Because biological sensory receptors vary in their sensitivity from minute to minute, a clever way to establish a threshold for detecting physical stimuli was devised in the late 1800s by a group of very clever scientists. They acknowledged that the strength of a psychological response didn't map directly onto the actual state of the physical world. In other words, while the absence of light elicited no biological response, very weak levels of light didn't elicit a response either. Increasing the strength of the physical light signal eventually elicits a biological response; however, repeating the procedure doesn't elicit the response at the same level of physical stimulus every time (there is some variability), and working backward (decreasing the light until it is no longer detected) leads to a different level of sensitivity.

In order to describe accurately what is happening, psychologists in the area devised a stepping procedure in which the light is increased and decreased in an unpredictable manner, and the value of physical light that the person correctly detects, say, 50% of the time becomes the detection threshold for that person. This doesn't mean that there is no detection below that level, nor perfect detection above it, but it is a number used to describe the level at which the person detects light. The same methodology is used for other physical phenomena such as sound, pressure, and heat.

Using the same philosophy, we could measure the level at which a person 'knows' (has memorised) a body of knowledge. If there were an infinite number of questions, all properly tagged with the level of knowledge (difficulty) required to answer them, a smart testing instrument could feed questions to a person, increasing or decreasing the difficulty level until the person consistently answered, say, 60% of the questions correctly. This difficulty level would then accurately describe the "learning threshold" for that person in that particular content domain, at that particular point in time.
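
A toy illustration of the idea, with a simple up/down staircase over question difficulty and a simulated answerer standing in for a real learner. The step size, difficulty bounds, and answer model are all assumptions made for the sketch, not part of any real testing instrument:

```python
# A staircase over question difficulty, in the spirit of the psychophysical
# stepping procedure: raise the difficulty after a correct answer, lower it
# after an error, and estimate the threshold from where the procedure settles.
import random

def learning_threshold(p_correct_at, trials=400, start=5.0, step=0.25, seed=0):
    """p_correct_at(d) gives the chance of correctly answering a question of
    difficulty d (0-10). Returns the average difficulty over the second half
    of the run, once the staircase has converged."""
    rng = random.Random(seed)
    d = start
    visited = []
    for _ in range(trials):
        correct = rng.random() < p_correct_at(d)
        d = d + step if correct else d - step
        d = max(0.0, min(10.0, d))              # keep within the 0-10 scale
        visited.append(d)
    tail = visited[len(visited) // 2:]          # discard the warm-up half
    return sum(tail) / len(tail)

# Simulated learner who is reliable below difficulty 6 and fails above it;
# the staircase should settle just under 6.
model = lambda d: 1.0 if d < 6 else 0.0
print(round(learning_threshold(model), 1))
```

A real instrument would use a probabilistic answer model and target a specific success rate (e.g. a weighted staircase converging on 60% correct), but the convergence logic is the same.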

That type of system could measure the 'learned' (memorised) material accurately, and the results would be comparable between teachers and institutions. This type of testing could be part of everyday education, instead of a single-point-in-time examination that returns a static measurement often used to define an individual and pigeonhole them.

Just a thought.


Categories: Education, Learning, Teaching

Memorisation as Learning

July 4, 2014

I have, in the past, made fun of learning being treated as equivalent to memorisation. I still hold to that when the entire definition of learning is encompassed by memorisation. However, memorisation, and being able to fluently recall things, is a foundational component of learning, and it cannot be ignored. In my post on learning requirements, I suggested that there are two components of learning (I actually think there are three, but one of them is beyond the remit of a formal learning environment). The first component is obtaining information. Within that requirement lies the fluent (accurate and quick) recall of the information that you know, which requires memorisation.

Technology and learning research have provided us with tools and techniques that can assist with this fundamental component of learning. SAFMEDS cards, the testing effect, spaced learning, mastery-based learning: all of these provide methods for learners to reach fluency with information, and most of these proven techniques are ignored by educators. Instead, educators favour reading out summaries of information (PowerPoint slides) and requiring a recall test sometime later.
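
To show how simple one of these techniques is to mechanise: spaced learning is often implemented with something like a Leitner box, where correctly answered cards graduate to less frequent review and misses go back to the start. The box intervals below are illustrative choices, not a prescribed schedule:

```python
# A minimal Leitner-box sketch: each box is reviewed at a longer interval,
# so well-known material surfaces rarely and shaky material surfaces often.
INTERVALS = [1, 2, 4, 8]  # days between reviews for boxes 0..3 (assumed values)

def review(card_box, was_correct):
    """Return the card's new box after one review: promote on a correct
    answer (capped at the last box), demote to box 0 on a miss."""
    if was_correct:
        return min(card_box + 1, len(INTERVALS) - 1)
    return 0

# A card answered right, right, wrong, right ends up back in box 1,
# due for review again in 2 days.
box = 0
for outcome in [True, True, False, True]:
    box = review(box, outcome)
print(box, INTERVALS[box])  # 1 2
```

The same few lines, wrapped around any question source, already give a learner the spacing and testing effects that a read-and-reread routine never does.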

Wouldn't it be wonderful if there were a tech firm that built a learning tool incorporating many of these techniques and tools as a matter of course, instead of building a slicker way to present PowerPoint slides with a voiceover reading them and expounding a bit?



Wilful Blindness

June 6, 2014

I was in a meeting recently about teaching and, as usual, I ended up chanting my line about what the evidence says about teaching this particular subject (statistics). One of the other lecturers said something that I have heard too many times: "I don't care what the evidence says, I already know how I want to do it". A few months earlier, talking to someone about proposed changes to the procedures and penalties for dealing with plagiarism, I asked the same question and received the same response. It seems to happen every few months: as I ask whether the evidence has been considered, I am told that, essentially, it doesn't matter what the evidence says, this is the way we have decided to do it.

And these are only the examples in which the disregarding of evidence was made explicit; often there is no explicit statement, simply a quiet dismissal of the evidence.

The most recent incident made me think about our attitudes toward evidence in general. I work in a research-intensive department (ranked 50th in the world for research), and regularly rub shoulders with highly regarded researchers. What I began to wonder is: if some of these same people can so quickly and easily dismiss evidence about teaching and learning, how do they react to evidence that does not fully support their theoretical stance in their particular area of expertise? Do they simply dismiss evidence there as well?

I can’t help but have my faith in their scientific objectivity shaken by these interactions that take place year after year.

Categories: Teaching
