The first requirement for the system I’ve been putting together as a thought experiment, one that would accredit memorisation (see my three previous posts for background), is an effectively infinite set of well-tagged questions.
I think this is the easiest part of the system to put in place. We are all aware of the success of crowdsourcing as a way to provide content (think Wikipedia). So why don’t we put together an open-source question base?
Since this learning system is simply about fluency of recall, all we need are questions about stuff. And lots of them.
It isn’t simply about the questions, though. In order to make this a memorisation/learning environment, the questions have to be tagged – well tagged – so that users can focus on their own learning desires.
The kind of tagging that would make this system useful comes in three varieties: content domain, source, and event.
The content domain tags are the most obvious. Libraries have spent centuries (literally) organising knowledge into content domains. There are wonderful hierarchical systems that allow users to find learning resources (books, articles, papers, websites, posts, pictures, videos – and who knows what else) within a specific content domain. We haven’t been all that great at tagging these resources, but there’s no reason we can’t start. Within the new question base, an easy-to-use content domain tagging system is a must.
The second set of tags has to do with sources. Knowledge is found somewhere, and if questions can be tagged with a specific source, that makes them all the more powerful. Tagging with specific books, journal articles, or web articles (think Wikipedia) would allow users (both learners and contributors) to specify exactly where the information that needs to be memorised to a fluent level comes from. Teachers (face-to-face or virtual) could then specify the content domain, the source, and the required level of proficiency for an event (a discussion, a seminar series, etc.) in which the learner needs to participate fluently.
Finally, event tags could be included so that learners could prepare themselves for the kinds of events specified above. Events could even include traditional assessments (mid-term or final exams).
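As a sketch only, a tagged question record in such a question base might look like the following. The field names and example values here are my own invention for illustration, not a proposed standard:

```python
from dataclasses import dataclass, field

# Illustrative sketch of a question record carrying the three tag
# varieties: content domain, source, and event. Names are hypothetical.
@dataclass
class TaggedQuestion:
    prompt: str
    answer: str
    content_domain: list          # hierarchical path, broadest term first
    source: str                   # where the knowledge comes from
    events: list = field(default_factory=list)  # events to prepare for

q = TaggedQuestion(
    prompt="In what year was the Battle of Hastings fought?",
    answer="1066",
    content_domain=["History", "Medieval Europe", "Norman Conquest"],
    source="Wikipedia: Battle of Hastings",
    events=["HIST101 mid-term exam"],
)
```

A hierarchical list for the content domain mirrors the library-style classification described above, and the separate source and event fields let a teacher filter on all three tag varieties at once.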
Properly tagged, an effectively infinite set of questions embedded in a threshold learning system could provide learners and educators with an invaluable tool for the foundational learning we call memorisation.
We are entering a brave new world of learning, and – by extension – education (I hope). However, one aspect has remained elusive, at least to me: the recognised accreditation of the learning that is taking place.
A number of pundits scoff at the very idea of accreditation as something that belongs to the age of big, centralised institutions, while the big institutions claim that accreditation is exactly what will legitimise them in the brave new world. Others have proposed a loose form of accreditation such as badges – a recognised symbol, but what does it mean?
I am concerned about accreditation. What I worry about is the recognition of earned authority. On the internet, anyone can set themselves up as an authority on anything, and they do. Fringe groups, radicalisation, pseudoscience, conspiracy theories – all of these (and more) rely on claimed expertise and authority figures to drive them forward. With the big institutions controlling the recognition of learning, these activities have remained on the fringes and are not recognised as mainstream or legitimate. As new forms of learning and knowledge exploration have arisen, so have the activities of these groups. If there is no widely recognised method of legitimately accrediting learning, it will be increasingly difficult for novices to differentiate between authentic authoritative sources and a self-proclaimed authority with no foundation.
I’m not talking about subject matter – that is a whole different discussion. What I’m talking about is recognising the authority of an expert, in any field, and being given a reason to trust that person’s judgement within the area of their expertise.
This has been the motivation for my last two posts: we, as a learning community, have to come up with a universally accepted way to recognise and accredit learning. Being interested in a topic isn’t enough to be authenticated. We have to, somehow, be able to display credentials that are both recognised and trusted by society at large. Knowing that someone has been awarded a PhD from a recognised university gives us a set of expectations about that person, simply because we know they have a PhD.
Using a system like the one I wrote about in my post on learning thresholds from earlier this week would be a beginning.
Next week, I’ll start outlining what we would need to put into place to realise this fairly simple concept that would allow us to accredit the fundamentally important memorisation component of learning.
In my last post, I wrote about memorisation as a foundational component of learning. What I am going to write about today is a system to more accurately measure memorisation than the one that is currently used.
Currently, a test setter (a teacher, an institution, etc.) determines the content domain that a test is designed to cover, writes questions that sample material from that domain, and then determines how much of the content domain has been learned (memorised) by how many of the sampled questions have been answered correctly. One of the flaws in this system is that if the test taker misses any of the questions, they are deemed to have missed the part of the content domain those questions were designed to cover. It is an all-or-nothing proposition that is supposed to accurately reflect the amount of material a person has learned.
An alternative that I would like to propose is based on psychophysical measurement.
Psychophysical measurement is the mapping of physical stimuli (e.g. light) onto psychological experience (e.g. detecting light). Because biological sensory receptors vary in their sensitivity from minute to minute, a clever way to establish a threshold for detecting physical stimuli was devised in the late 1800s by a group of very clever scientists. These scientists acknowledged that the strength of a psychological response doesn’t directly map onto the actual state of the physical world. In other words, although the absence of light elicited no biological response, very weak levels of light didn’t elicit a response either. Increasing the strength of the physical light signal eventually elicits a biological response; however, doing this over and over doesn’t result in the response being elicited at the same level of physical stimulus every time (there is some variability), and working backward (decreasing the light until it is no longer detected) leads to a different level of sensitivity.
In order to describe accurately what is happening, psychologists in the area devised a stepping procedure in which the light is increased and decreased in an unpredictable manner, and the level of physical light that the person correctly detects, say, 50% of the time becomes the detection threshold for that person. This doesn’t mean that there is no detection below that level, nor that there is perfect detection above it, but it is a number used to describe the level at which the person detects light. The same methodology is used for other physical phenomena, such as sound, pressure, and heat.
Using the same philosophy, we could measure the level at which a person ‘knows’ (has memorised) a body of knowledge. If there were an effectively infinite number of questions, all properly tagged with the level of knowledge (difficulty) required to answer them, a smart testing instrument could feed questions to a person, increasing or decreasing the difficulty level until the person consistently answered, say, 60% of the questions correctly. This difficulty level would then accurately describe the “learning threshold” for that person in that particular content domain, at that particular point in time.
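The stepping procedure described above can be sketched in code. What follows is a minimal illustration, not a finished psychometric design: it uses a simple up/down staircase that raises question difficulty after a correct answer and lowers it after an incorrect one, estimating the threshold from the difficulty levels at the reversal points. (A plain 1-up/1-down staircase converges near the 50% point; targeting a level like 60% would need a weighted step rule, so the step sizes, the reversal count, and the simulated learner are all illustrative assumptions.)

```python
import random

def learning_threshold(answer_correctly, start=50, step=5, reversals_needed=8):
    """Simple up/down staircase over question difficulty (0-100).

    answer_correctly(difficulty) -> bool reports whether the learner
    answered a question of that difficulty correctly. Difficulty steps
    up after a correct answer and down after an incorrect one; the
    threshold estimate is the mean difficulty at the reversal points.
    """
    difficulty = start
    last_direction = None
    reversals = []
    while len(reversals) < reversals_needed:
        direction = +1 if answer_correctly(difficulty) else -1
        if last_direction is not None and direction != last_direction:
            reversals.append(difficulty)   # the run changed direction here
        last_direction = direction
        difficulty = min(100, max(0, difficulty + direction * step))
    return sum(reversals) / len(reversals)

# Hypothetical learner who reliably answers questions at or below
# difficulty 70 and fails above it, with a little noise.
def simulated_learner(difficulty):
    return random.random() < (0.95 if difficulty <= 70 else 0.05)

random.seed(1)
estimate = learning_threshold(simulated_learner)
print(estimate)  # lands near the simulated learner's boundary of 70
```

Because the difficulty keeps moving up and down around the learner’s boundary, the procedure never labels any single missed question as a missed piece of the content domain, which is exactly the flaw in the all-or-nothing sampling approach described earlier.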
That type of system could measure the ‘learned’ (memorised) material accurately, and the results would be comparable between teachers and institutions. This type of testing could be a part of everyday education instead of a single point-in-time examination that returns a static measurement, one that is often used to define an individual and pigeonhole them.
Just a thought.
I have, in the past, made fun of learning being treated as equivalent to memorisation. I still hold to that when the entire definition of learning is encompassed by memorisation. However, memorisation, being able to fluently recall things, is a foundational component of learning, and it cannot be ignored. In my post on learning requirements, I suggested that there are two components of learning (I actually think there are three, but one of them is beyond the remit of a formal learning environment). The first component is obtaining information. Within that requirement lies the fluent (accurate and quick) recall of the information that you know, which requires memorisation.
Technology and learning research have provided us with tools and techniques that can assist in this fundamental component of learning. SAFMEDS cards, the testing effect, spaced learning, mastery-based learning: all of these provide methods for learners to reach fluency with information, and most of these proven techniques are ignored by educators. Instead, educators favour reading out summaries of information (PowerPoint slides) and requiring a recall test sometime later.
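To make one of these techniques concrete, here is a minimal sketch of spaced learning using a Leitner-style box system: correctly answered cards move to a higher box and come up for review less often, while missed cards drop back to box 1 for frequent practice. The box intervals below are arbitrary choices for illustration, not a prescription:

```python
# Minimal Leitner-box sketch of spaced learning. Cards that are answered
# correctly climb to higher boxes with longer review intervals; missed
# cards fall back to box 1. The interval values are illustrative only.
BOX_INTERVALS_DAYS = {1: 1, 2: 3, 3: 7, 4: 14, 5: 30}

def review(card, answered_correctly, today):
    """Update a card dict ({'box': int, 'due': int day number}) after a review."""
    if answered_correctly:
        card["box"] = min(card["box"] + 1, 5)   # promote, capped at box 5
    else:
        card["box"] = 1                         # demote to frequent review
    card["due"] = today + BOX_INTERVALS_DAYS[card["box"]]
    return card

card = {"box": 1, "due": 0}
review(card, answered_correctly=True, today=0)    # box 2, due day 3
review(card, answered_correctly=True, today=3)    # box 3, due day 10
review(card, answered_correctly=False, today=10)  # back to box 1, due day 11
```

The design choice that matters here is the asymmetry: success stretches the interval gradually, while a single failure resets it, which is what keeps weak items in frequent rotation until recall becomes fluent.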
Wouldn’t it be wonderful if there were a tech firm that built a learning tool incorporating many of these techniques and tools as a matter of course, instead of building a slicker way to present PowerPoint slides with a voiceover reading them and expounding a bit?
I was in a meeting recently about teaching and, as usual, I ended up chanting my line about what the evidence says about teaching this particular subject (statistics). One of the other lecturers said something that I have heard too many times: “I don’t care what the evidence says, I already know how I want to do it”. A few months earlier, I was talking to someone about proposed changes to the procedures and penalties for dealing with plagiarism, asked the same question, and received the same response. It seems to happen every few months: as I ask whether the evidence has been considered, I am told that, essentially, it doesn’t matter what the evidence says; this is the way we have decided to do it.
These are only the examples I can think of where the disregarding of evidence has been made explicit; often there is no explicit statement, simply a quiet disregarding of the evidence.
The most recent incident made me think about our attitudes toward evidence in general. I work in a research-intensive department (ranked 50th in the world for research), and regularly rub shoulders with highly regarded researchers. What I began to wonder is: if some of these same people can so quickly and easily dismiss evidence about teaching and learning, how do they react to evidence that does not fully support their theoretical stance in their particular area of expertise? Do they simply dismiss evidence there as well?
I can’t help but have my faith in their scientific objectivity shaken by these interactions that take place year after year.
Do we really believe in the principles alluded to here in this post? I do!
Originally posted on ETAD Café Canadien:
I’ve been silent on the issues surrounding the attack on academic freedom and tenure at the University of Saskatchewan last week. It resulted in a number of posts, concerns, media scrums and the like, and ultimately in this open letter to the Chair of our Board of Governors.
The Board of Governors is meeting now… as I write. 20 minutes before the meeting we received notice that the Vice President, Academic and Provost had resigned. Here’s the notice released to the campus community from the President.
Members of the campus community,
Brett Fairbairn, provost and vice-president academic, tendered his resignation to me earlier today, and I have accepted it.
In his letter of resignation, Brett said, “My motive for offering my resignation is my genuine interest in the well-being of the University of Saskatchewan. I have been a long-time member of our university community including being a…
Some background into my research into metacognitive monitoring. More to follow.
Originally posted on Scholarship of Learning:
Metamemory is considered a foundational component of metacognition, and can be thought of as knowing what you know.
Metacognitive regulation, one of the core aspects of metacognition, is being able to select appropriate problem solving strategies, being aware of different approaches and abilities, and being able to evaluate the success of the approach used. Metacognitive monitoring is one of the core aspects of metacognitive regulation, and refers to one’s awareness of comprehension and task performance.
Clearly, metamemory and metacognitive monitoring are closely related, and have been referred to by some researchers as the foundation of metacognitive awareness. If a person is aware of what they know (metamemory) or comprehend (metacognitive monitoring), and can evaluate the use of that knowledge or comprehension (metacognitive monitoring) in carrying out a task, they will be able to demonstrate improvement on that task the next time they are faced with it (or a…