In July, I wrote about learning thresholds, and how we could use technology to define and attain learning thresholds. I was reading Diane Halpern’s “Thought and Knowledge” yesterday, and came across a passage that explains my thinking about learning thresholds, memorization, and critical thinking. It reads:
…thought is powerful only when it can utilize a large and accurate base of knowledge (page 5).
The preceding part of that line is also important in the context of learning in today’s world: “…knowledge is powerful only when it is applied appropriately”.
Never before has the world had a greater need of people who can use critical thinking skills, and never before have we had a greater paucity of critical thinking skills – when compared to the total number of ‘educated’ individuals.
The large and accurate base of knowledge has taken precedence over everything else. And, as the sheer amount of information continues to increase to amounts truly unimaginable at a human scale, the obsession with the large and accurate knowledge base threatens to overwhelm us with multitudes of memorisers who have no concept of thinking.
My problem is not with ‘a’ large and accurate knowledge base, but with ‘the’ large and accurate knowledge base. Those charged with the preparation of the next generation of thinkers have often spent years, if not decades, accumulating and conceptualizing their sliver of the world, and are rightly called experts in their fields. However, that expertise in no way prepares them to teach novices how to think. And the current state of affairs in higher education doesn’t allow for subject experts to become learning experts. These experts, who pile on more and more information with more and more classes, programmes, degree schemes, and areas of study, focus so intently on their field of study (as they are rewarded to do) that they have no conception of what the problem is – except that they know their students are not becoming the experts they think they are training them to be.
Nowhere is this better illustrated than in the recent findings that over 95% of university leaders thought their graduates were well prepared for the world of work, while barely 10% of business leaders agreed (see below). We are so out of touch with reality that we are rapidly losing the credibility we are banking on to carry us through the disruptive innovation digitization has landed us in.
We can, and need to do better.
I found the evidence – and here it is:
“…(in) Inside Higher Ed‘s 2014 survey of chief academic officers, ninety-six percent said they were doing a good job – but… in a new survey by Gallup measuring how business leaders and the American public view the state and value of higher education, just 14 percent of Americans – and only 11 percent of business leaders – strongly agreed that graduates have the necessary skills and competencies to succeed in the workplace.”
The first requirement for the system I’ve been putting together as a thought experiment that would accredit memorisation (see my three previous posts for some background) would be an infinite set of well tagged questions.
I think this is the easiest part of the system to put in place. We are all aware of the success of crowdsourcing as a way to provide content (think Wikipedia). So why don’t we put together an open-source question base?
Since this learning system is simply about fluency of recall, all we need are questions about stuff. And lots of them.
It isn’t simply about the questions. In order to make this a memorisation/learning environment, the questions have to be tagged – well tagged. This is necessary so that users can focus on their own learning desires.
The kind of tagging that would make this system useful has three varieties of tags: content domain, source, and event.
The content domain tags are the most obvious. Libraries have spent centuries (literally) organising knowledge into content domains. There are wonderful hierarchical systems that allow users to find learning resources (books, articles, papers, websites, posts, pictures, videos – and who knows what else) within a specific content domain. We haven’t been all that great at tagging these resources, but there’s no reason we can’t start. Within the new question base, an easy-to-use content domain tagging system is a must.
The second set of tags has to do with sources. Knowledge is found somewhere, and if questions can be tagged with a specific source, that makes them all the more powerful. Tags for specific books, journal articles, or web articles (think Wikipedia) would allow users (both learners and contributors) to specify exactly where the information that needs to be memorised to a fluent level comes from. Teachers (face to face or virtual) could then specify both content domain and source, along with the required level of proficiency, for an event (a discussion, seminar series etc.), so that the learner is able to participate fluently.
Finally, event tags could be included so that learners could prepare themselves for the kind of events specified above. They could even be specified for traditional assessment events (mid-term or final exams).
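As a minimal sketch of how such a question base might be modelled (the names and structure here are purely illustrative, not part of any existing system), each question could carry the three varieties of tags, and a simple filter would let a teacher or learner narrow the pool:

```python
from dataclasses import dataclass, field

@dataclass
class Question:
    text: str
    answer: str
    domains: set = field(default_factory=set)  # content-domain tags, e.g. {"psychology"}
    sources: set = field(default_factory=set)  # source tags, e.g. {"Halpern, Thought and Knowledge"}
    events: set = field(default_factory=set)   # event tags, e.g. {"week-3 seminar"}

def select(questions, domain=None, source=None, event=None):
    """Return the questions that match every tag filter supplied."""
    return [q for q in questions
            if (domain is None or domain in q.domains)
            and (source is None or source in q.sources)
            and (event is None or event in q.events)]
```

A teacher could then tag an upcoming event with a content domain and a source, and a call like `select(question_base, domain="psychology", event="week-3 seminar")` would hand learners exactly the pool of questions to drill before the event.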
Properly tagged, an infinite number of questions embedded in a threshold learning system could provide learners and educators with an invaluable tool for the foundational learning we call memorisation.
In my last post, I wrote about memorisation as a foundational component of learning. What I am going to write about today is a system to more accurately measure memorisation than the one that is currently used.
Currently, a test setter (teacher, institution etc.) determines the content domain that a test is designed to cover, writes questions that sample material from that domain, and then determines how much of the content domain has been learned (memorised) by how many of the questions in the sample have been answered correctly. One of the flaws in the system is that, if the test taker misses any of the questions, they are deemed to have missed the part of the content domain those questions were designed to cover. It is an all-or-nothing proposition that is supposed to accurately reflect the amount of material a person has learned.
An alternative that I would like to propose is based on psychophysical measurement.
Psychophysical measurement is the mapping of physical stimuli (e.g. light) onto a psychological experience (e.g. detecting light). Because biological sensory receptors vary in their sensitivity from minute to minute, a clever way to establish a threshold for detecting physical stimuli was devised in the late 1800s by a group of very clever scientists. These scientists acknowledged that the strength of a psychological response doesn’t map directly onto the actual state of the physical world. In other words, not only does the absence of light fail to elicit a biological response, very weak levels of light don’t elicit a response either. Increasing the strength of the physical light signal eventually elicits a biological response; however, repeating the procedure doesn’t elicit the response at the same level of physical stimulus every time (there is some variability), and working backward (decreasing the light until it is no longer detected) leads to a different level of sensitivity.
In order to describe accurately what is happening, psychologists in the area devised a stepping procedure in which the light is increased and decreased in an unpredictable manner, and the value of physical light that the person correctly detects, say, 50% of the time becomes the detection threshold for that person. This doesn’t mean that there is no detection below that level, nor that there is perfect detection above it, but it is a number used to describe the level at which the person detects light. The same methodology is used for other physical phenomena such as sound, pressure, and heat.
Using the same philosophy, we could measure the level at which a person ‘knows’ (has memorised) a body of knowledge. If there were an infinite number of questions, all properly tagged with the level of knowledge (difficulty) required to answer them, a smart testing instrument could feed questions to a person, increasing or decreasing the difficulty level until the person consistently answered, say, 60% of the questions correctly. This difficulty level would then accurately describe the “learning threshold” for that person in that particular content domain, at that particular point in time.
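Carried over to question difficulty, the stepping procedure can be sketched in a few lines. This is an illustrative simulation only: the simulated learner and the 60% target come from the discussion above, and the weighted up/down step sizes are one standard way of making a staircase settle at a chosen percentage correct.

```python
import math
import random

def staircase_threshold(prob_correct, target=0.6, trials=400,
                        step=0.02, start=0.5):
    """Weighted up-down staircase: raise the difficulty after a correct
    answer, lower it after a miss. The steps are weighted so the
    difficulty settles where the learner answers `target` of the
    questions correctly - that settling point is the learning threshold."""
    step_up = step * (1 - target)   # applied after a correct answer
    step_down = step * target       # applied after a miss
    d = start
    history = []
    for _ in range(trials):
        correct = random.random() < prob_correct(d)
        d = min(max(d + (step_up if correct else -step_down), 0.0), 1.0)
        history.append(d)
    tail = history[trials // 2:]    # discard the early settling-in trials
    return sum(tail) / len(tail)

# Simulated learner: the chance of a correct answer falls off with
# difficulty, crossing 50% at difficulty 0.7 (an arbitrary choice).
learner = lambda d: 1 / (1 + math.exp((d - 0.7) * 10))
threshold = staircase_threshold(learner)  # hovers near the 60%-correct level
```

At equilibrium the expected step is zero (p · step_up = (1 − p) · step_down), which holds exactly when p equals the target proportion correct, so the reported number describes the learner’s threshold rather than an all-or-nothing score.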
That type of system could measure the ‘learned’ (memorised) material accurately, and would be comparable between teachers and institutions. This type of testing could be a part of everyday education instead of a single point in time examination that returns a static measurement that is often used to define an individual and pigeonhole them.
Just a thought.
Information abundance means that learners have unprecedented access to information. This, coupled with what we know about student engagement in academic study, means that we might want to rethink the way we approach assessments. I believe that social media (SM) tools provide us with unique opportunities to assess in ways that weren’t even possible a few years ago. Using SM tools can provide opportunities not available with traditional assessment tools.
Regular readers will know that I am a huge fan of blogging as a form of assessment using SM. Some of the advantages (from my perspective) of SM blogging include public exposure and the ability to comment. Other advantages, which I think are relatively minor (from my perspective as a teacher) but are important to students, are: solid platforms, 24/7 access, universal availability, and simple, quasi-familiar interfaces. I think that the advantages the students focus on are important to their ability to do the work; however, the advantages I focus on are real advantages for learning.
Although a few students (and a great many teachers) fear public exposure when it comes to assessment, I believe that it is of great benefit in the learning process. One of the hallmarks of authentic assessment is that the assessment is a closer reflection of the type of activity that is expected in real life. Seldom is serious writing done for the purpose of having a single individual read it, and then having it disappear. Yet that is how most traditional writing assessments are done in HE today, with few exceptions. When students’ work is put up for public display, several things happen: they take more care in their work; they begin to produce work that will impress their peers, friends and family (you wouldn’t believe how many of my students invite their parents to participate in their learning this way); they look at each other’s work as models of good practice (how often does that happen with traditional assessments?); they monitor each other’s work for unfair practice (with serious repercussions); and their work is available for the wider community to engage with.
My students tell me that after a few weeks of producing their weekly blog posts, I begin to disappear from their thinking when they are writing. They begin to write for their audience, in which I become a minor player. They write to convince those who will be making comments on their work, not for me, who will be grading the work. They present coherent arguments, backed by evidence and clear thinking, that allow them to get across the points that they want to make.
They also tend to invite others, not involved in the class, to read their work. They will interlink their various SM tools so that when they publish a post, it goes out to their friends and family on Facebook, Google+, and Twitter. When I see them in class and ask about a mother’s (or friend’s, or cousin’s) comment on their post, they invariably blush and say, “That’s my mother (or cousin or whatever) – I don’t know how she got on there.” Well, I do! The students have invited them in. In many cases, this is just an extension of them bringing home the pictures they drew in grade 2, looking for some measure of praise. This is great. Anyone who is going to write something that their mother is going to read is going to make sure she has a reason to say ‘well done!’. Why wouldn’t we, as educators, want to take advantage of this?
They have to read each other’s work in order to write (required) comments every week. As a part of the model I use, I write a short paragraph each week about what I have noticed in their collective writing (keeping them broadly within the parameters I set at the first of the class), and also point out the blogs that particularly impressed me. At the first of the semester, I tell them I will do this, and let them know, through a series of very unsubtle hints, that the posts I mention are the ones that got high marks that week. Over the course of the first three or four weeks, the spread of marks gets narrower and narrower as the students use the posts I mention as templates for their own writing. Our students are bright, and they want to do well. When I show them what I mean by doing well, they begin to seriously imitate the best. When they come and talk to me about how they can improve, I ask them if they have read the posts I have listed. When they tell me yes, I ask them if they notice a difference between what they are producing and what I have pointed out as being good work. They tell me yes, and then ask me what it is about the posts that makes them better. I can (quite honestly) say to them that the really good posts make me go WOW!, and that is what they need to do. When they ask how that is done, I reply (again, quite honestly) “I have no idea, it just does”. They agree that those posts made them go wow as well, and then go away and try to make that happen in their own work – and it often does. “We are seeing peer-based learning networks where students are learning as much from each other as they are from their mentors and tutors” (John Seely Brown).
I have had a single case of plagiarism in the five years I have taught this way. The students identified it (the student was using other students’ work and passing it off as his own), and were incensed that this would happen in their class. A delegation of students actually came to me demanding that the offending student be made to stand in front of the class and publicly apologise for what he had done. I told them not to expect that to happen any time soon. It was near the end of the semester, and the student didn’t even come back to the class again. Although this incident has had a few of my colleagues argue that we should protect students from this type of treatment, I tend to disagree. Although not a fan of Ayn Rand, I have to agree with her when she said “We can ignore reality, but we cannot ignore the consequences of ignoring reality.” I think that, too often, we try to protect our students from reality when that is exactly what they need to experience.
Finally, the public exposure opens them up to the likes of you and me – professionals in the field who are always looking for good, interesting ideas presented in a well-thought-out format. In the past, my students have received favourable comments from around the world. One of my students received a scholarship to do a Master’s degree at a prestigious university based on the blogs she wrote for my class. Someone commented on her work and asked if she would like to collaborate with their research group, and when she explained that she was an undergraduate who was finishing up her degree that year, they asked if she would consider continuing her studies with them, all expenses paid. Unsolicited and unasked for, but welcome and appreciated. Not something you would get from having written an essay that only a single lecturer ever looks at (unless the work is double marked).
The requirement to comment on each other’s work is the other great learning outcome of using some SM tools for assessments.
I require my students to write five comments a week, bringing in fresh evidence each time to support the arguments they are making. This requires a significant amount of reading and thinking, and this is the one requirement that the students ask me to reduce every year. They are happy to write a blog post weekly, but feel that requiring them to make five comments makes for a heavy workload. I say, that’s what you’re here for.
Writing blog posts each week means that the students study a particular principle to a depth that I can be satisfied with in a senior undergraduate class. Having them comment on five of their peers’ posts means that they have to move out of their comfort zone, and engage with material that they otherwise wouldn’t. This satisfies me, as their teacher, that they have covered some breadth in the class.
However, I think the most powerful aspect of comments is the discussions and debates they spark. “It is better to debate a question without settling it than to settle a question without debating it” (Joseph Joubert). I couldn’t agree more with that statement. They write, think and discuss matters in a lively, civil and scholarly manner. Everything I could hope for from my students.
As a learning tool, I have to say that blogging is one of the best. And social media blogging is far more powerful than blogging behind a firewall. In higher education, we deal with adults. We should be providing them with authentic experiences, and treating them like fully responsible adults. Helping them grow and develop in the real world is, for me, one of the most wonderful aspects of the work I do. I wish there were others who shared my excitement.
I think that there are two vital requirements in order for us to learn something: obtaining information, and understanding information. Both elements are necessary. Something can’t be learned if new information is not available, and learning is not real if understanding is not reached. Given that both elements are necessary, various aspects of HE can be examined to determine if learning is happening. I will focus on one element in this post.
Anyone who has read my blog in the past will know that I don’t think highly of lecturing as a form of teaching. Lecturing is about transmitting information in a verbal format. It is the presentation of information, in a (hopefully) clear and logical fashion to a learner. Although many lecturers claim otherwise, this is what lecturing is – the transmission of information. I have no argument with the idea that there are good and poor examples of lecturing. I have been exposed to both. I have listened to well presented and interesting lectures, and I have listened to abysmal lectures that shouldn’t have been delivered. However, regardless of the quality of the lecture itself, a lecture is basically about transmitting information. That is all the lecturer has control over – the delivery or transmission of the content or information. The understanding component is what the learner has to do.
Many lecturers tell me that in their lectures they deliver understanding – but they are simply wrong. Understanding is an active process that involves thinking and incorporating information into an already constructed worldview. This is an internalised process that is individualised by a learner. It can be supported and fostered by a good teacher, but requires active engagement by a learner before it can take place.
A lecture, by its very nature, is one-way communication in which the learner is the passive recipient of the information that the lecturer is transmitting. The very design of a lecture theatre focuses attention on the lecturer, and discourages discussion between learners, or between learners and the lecturer. Most lecturers would like to have interaction and discussion during a lecture, but the failure rate for this aspiration is high (maybe failure is too harsh, but few of us would get above a C-).
Given that we live in (or are mostly in) the age of information abundance, why do we see ourselves as vital to the process of transmitting information to learners? Why is lecturing synonymous with teaching in HE? Even within the lecture theatres that we use to transmit information, wifi means that the information we are transmitting is in the very air around our learners – and I am frequently asked whether wifi access can be blocked in a lecture so that the students will pay attention to the lecturer. Given that we live in a world of mass higher education, and that our availability to our learners is seriously limited by time and numbers, why do we insist on spending what little time we have on information transmission? This aspect of learning can be better accomplished in a number of ways, including reading, listening to recorded information, or viewing video – none of which take up our precious time and energy, which could be devoted instead to fostering understanding.
I know that fostering understanding is a much more difficult task than transmitting information, and I think that is part of the reason why we don’t devote ourselves to that aspect of learning. We have done our jobs if we transmit. It is the learner’s job to incorporate and understand the information we have transmitted. I have spoken, and it is up to my students to learn.
We can do better than that. With all the flak thrown at the Khan Academy, the idea of a flipped classroom where the teacher’s role is to foster understanding has got to be better than the current situation, where HE teachers see their primary function as information transmitters.
Just as the old photocopy notice “To copy is not to learn” is applicable to students, I think that the notice “To speak is not to teach” is just as true.
Images from https://allthingslearning.wordpress.com/2013/01/
Some of this post is using recycled material from earlier posts: It has been re-written for a different audience, but I thought it was good for this audience as well.
Bligh (1972, page 4) tells us that “In politics, lectures are called speeches. In churches they are called sermons. Call them what you like; what they are in fact are more or less continuous expositions by a speaker who wants an audience to learn something.”
Academics love them, and swear by their effectiveness (based on a case study with an N of 1 – themselves). Without a doubt, there are good lecturers and poor lecturers. Nothing riles the passion of an academic more than an attack on their favourite pastime – in the UK we’re even called lecturers!
Students love them because they are both expected and easy. As brave teachers move away from lectures as the primary form of teaching, students rise up in anger, demanding that the lecturer do their job and tell them what to memorise (this is not a joke, but has actually happened in the recent past).
Administrators love lectures because in an hour or so, you can tick the box on hundreds of hours of contact, calling them effective learning experiences.
But, how does lecturing stand up to scrutiny as effective learning experiences?
In 1972, Donald Bligh wrote a comprehensive review of the research evidence on teaching in HE – curiously, the book title was What’s the Use of Lectures? In this review, he looked at over 700 studies that demonstrated the ineffectiveness of lecturing as a learning event.
Bligh looked at several areas, and reviewed the literature looking at how effective a lecture is at achieving particular educational goals. Here is what he found:
The number of studies found, by educational goal:

| Educational Goal | Lecture Less Effective | No Difference | Lecture More Effective |
| --- | --- | --- | --- |
| The Lecture as a Method of Acquiring Information | 27 | 57 | 20 |
| The Lecture as a Method of Promoting Thought | 12 | 17 | 0 |
| The Lecture as a Method of Teaching Values Associated with the Subject Matter | 28 | 24 | 7 |
| The Lecture as a Method of Inspiring Interest in a Subject | 16 | 11 | 4 |
| The Lecture as a Method of Promoting Personal and Social Adjustment | 14 | 8 | 4 |
| The Lecture as a Method of Teaching Behavioural Skills | 27 | 30 | 7 |
As Graham Gibbs recently wrote in the Times Higher:
More than 700 studies (referring to the studies Bligh reviewed) have confirmed that lectures are less effective than a wide range of methods for achieving almost every educational goal you can think of. Even for the straightforward objective of transmitting factual information, they are no better than a host of alternatives, including private reading. Moreover, lectures inspire students less than other methods, and lead to less study afterwards.
For some educational goals, no alternative has ever been discovered that is less effective than lecturing, including, in some cases, no teaching at all. Studies of the quality of student attention, the comprehensiveness of student notes and the level of intellectual engagement during lectures all point to the inescapable conclusion that they are not a rational choice of teaching method in most circumstances.
A more recent review by Hughes and Mighty (2010) reinforced Bligh’s damning indictment of lecturing as a learning event, written over 40 years earlier. The recent article in The Atlantic by Corrigan looks at the debate about lecturing and says of those defending and supporting lecturing:
In some ways these apologia accentuate the dividing line in the lecturing debate. They praise various aspects of lecturing, while criticizing alternative methods. These rhetorical moves reinforce the idea of a two-sided debate, lecturing vs. not lecturing. Their skirting of the research on the subject puts them on the less convincing side, in my view.
As academics, we need to decide whether we base our working practices on gut feelings and the love of where we have come from, or look at what we do with a rational view of the effectiveness of our work.
As I read the reflections of my students who had just completed my Science of Education module in the autumn, I actually had tears in my eyes as I read of their frustrations with the missed learning opportunities they had experienced (and paid good money for). They were lamenting the time wasted sitting passively through lecture after lecture, believing that they were engaged in an effective learning activity, only to find out, in my class, that lecturing is such a poor method of learning (they find this out themselves; I never actually tell them – it is part of their self-directed learning experience in the Science of Education).
I believe that we can, and should, do better than this. It is really up to us.