Lecturing – is there no other way?

As I immerse myself in the murky waters of trying to see over the horizon (rather ungainly mixed metaphors here) to the HE of tomorrow, I am amazed at the singularity of thought, with lectures as some kind of pivot point. What is so special about a lecture that it needs to be preserved at all costs? It is as though no one can imagine learning without lectures. I don’t get it.

Don’t get me wrong. I lecture and I am good at it. I have won numerous awards for my lecturing, and my students heap accolades on me for the entertaining and engaging manner I have honed over the years. But, and this is a big one, I don’t believe that my standing up and delivering a good, entertaining, information-packed 50 minutes is central to my students’ learning. Even if my students insist otherwise, I don’t believe that the lecture itself is that effective as a learning opportunity.

I have a background in cognitive psychology (the world calls me Dr.): how we perceive, process, categorise, store, integrate, and retrieve information. For many years, because of my position in the University, I went to numerous educational seminars and conferences with a view to improving learning opportunities for students. Over and over, I was subjected to the world of learning styles – you know, the visual learner, the auditory learner, etc. etc. etc. I could never get my head around where this would fit in my understanding of how we process information, but somehow thought that educationalists were privy to some other body of research that I was unaware of. I can’t tell you how relieved I was when I heard an eminent cognitive psychologist (in a different venue) trash the whole idea of learning styles. As a result, my world took a sharp turn, and I changed my field of study.

I now study how cognitive psychology impacts formal education – it doesn’t. The education world ignores what we know (see Carl Wieman’s comments) about how information is processed, because it doesn’t fit their model of what it means to teach. This is especially true in HE, where the principal method of teaching centres on lecturing.

In the 1970s, researchers found that only 35% of information presented orally (under ideal conditions) could be recalled after a five-minute delay. We have found that learners typically recall less than 10% of information presented orally (in the form of a pseudo-lecture) after a one-week delay. With such poor memory for information presented verbally, why is that the teaching method we insist on keeping?

On the Learner Weblog, there is a post about the proliferation of lectures. It is noted there that YouTube is increasing the number of videos available by over 13 hours every minute. A search of YouTube lists almost 100,000 hits for university lectures. How many more ways do we need to be told that 1 + 1 is 2?

Although Chris Lloyd talks about the need for fewer lecturers in 60 years, he still finds himself in a world where lecturing holds centre stage in HE; it is just that in his world, centre stage will be held by the cream of the lecturing crop.

In the age of information abundance, we don’t need a million more lectures. We have enough. Lecturing is an inefficient method of transmitting information that needs to be retained. At best, it can be entertaining, but as far as efficiency goes, it is really poor. Just because this is the way it has always been done doesn’t mean that this is the best way to do it. Can’t anyone think of a different way to foster learning (my own definition of teaching), or is the bulk of the HE world going to spend its energy proliferating lectures?

From Text to Audio and Back Again: Providing Students with Good Feedback

Here is a short report I published in 2009 on the use of voice recognition software to provide feedback to students on their work. It worked, and it has been adopted by a number of lecturers in my institution.

In recent years, a number of educational innovators have been experimenting with providing students with feedback on their work using audio podcasts. A number of problems with this approach have been identified. By using voice recognition software to turn speech into text, the speed and ease of providing high-quality audio feedback can be combined with the advantages of printed text, giving students the best of both worlds.

Since the National Student Survey began four years ago, feedback on assessment has been one of the lowest-scoring categories (Attwood, 2009). One of the hoped-for answers to the problem has been the use of audio podcasts to provide students with feedback on both their formative and summative work. A number of high-profile research projects into the use of audio podcasting to provide students with feedback (e.g. Sounds Good; Audio Supported Enhanced Learning) have been undertaken in recent years.

Generally, the evaluation of providing audio feedback to students has been positive (Ice, Curtis, Phillips, & Wells, 2007; Merry & Orsmond, 2008; Rotheram, 2009). The students have enjoyed a more in-depth experience and appreciate the detailed explanations the audio recordings provide. In addition, they feel that the audio recordings are more personal, and are evidence of their tutor’s engagement with the process. Some of the challenges students identified included a reluctance to listen to feedback in public places (even with earphones), a reluctance to mix feedback recordings with music on personal players, referencing particular parts of a paper in the feedback, and returning to specific points made in the feedback.

Staff members involved in the initial work have also responded positively to the experience (Ice et al., 2007; King, McGugan & Bunyan, 2008; Rotheram, 2009). They report that most of the initial users intended to continue with the method. However, there were some real limitations identified by the lecturers (Merry & Orsmond, 2008). The biggest problems surrounded the administration of the files (loading, naming, and distributing to students). Although the reports were generally positive, there is real doubt about the scalability of the method. Rotheram reported that one of the lecturers who decided not to continue using audio recordings didn’t feel it was practical with 80 students. Because of the administrative difficulties, the users did not report any time savings associated with the use of audio files.

Although the advantages of using audio files are worth pursuing, the problems, especially when dealing with large groups of students, are very real (King, McGugan & Bunyan, 2008; Rotheram, 2009). However, most of the problems are specific to the medium used. Although much easier to use than they once were, sound recordings are not the same as text files and paper.

Nevertheless, there are significant advantages to using audio for student feedback. The time savings are significant (in the simple recording of feedback – see Merry & Orsmond, 2008), the depth and scope of the feedback are greater, and the feedback can be explained and understood, as opposed to illegible scrawling in the margins (Ice et al., 2007; King et al., 2008; Rotheram, 2009). The problems appear to be limited to the manipulation and administration of audio files, not to the audio recordings themselves.

My recent work in combining printed text feedback with audio recordings has demonstrated that the advantages of both types of feedback can be realised. Voice recognition software enables real-time translation of speech into text, allowing a lecturer to provide feedback in the same manner as making an audio recording, taking advantage of the speed, depth and scope, and explanatory power of the recording. At the same time, it produces a text file that can be distributed to the students in the same way a recording might be or, more simply, printed and stapled to the students’ work.

Method of Use

For this pilot, an Apple Power Mac (dual 2.8 GHz quad-core, OS 10.5) running MacSpeech Dictate 1.4 voice recognition (speech-to-text) software was used. Training the voice recognition software took about ten minutes to reach an acceptable degree of accuracy.

While reading the students’ work, comments were dictated, referring specifically to passages within the text. When the paper was finished, a summary of my thoughts on its strengths and weaknesses was provided, along with a grade for the work.

Thirty-three students from two modules had feedback provided in this manner. For the 17 First Year students’ work, the process was timed in order to provide a comparison. The First Year scripts took, on average, 12.5 minutes to provide the feedback, explanations, and grade, and then print and staple the feedback to the back of the students’ work. The text averaged about 1.5 pages of single-spaced text, broken up into about 30 one-to-three-line paragraphs. Marking these First Year reports had previously taken between 15 and 20 minutes, giving an average time saving of about five minutes each. Similar time savings were estimated for the 16 Second Year scripts.

Evaluation

The 33 students who were a part of the initial trial were asked to comment on the feedback they received with their work (handwritten text in the body of the work, or printed text at the end). Fifteen of the students responded, with six commenting that the type of feedback they received didn’t matter to them; four reporting that they preferred comments interspersed within the text on the paper; and five reporting that they liked the depth and quantity of the feedback printed and stapled to the back of their work. Two students commented on the programme’s misinterpretation of the dictated audio, with some of the resulting comments being fairly humorous.

Overall, there were no strong feelings expressed either way (somewhat disappointing). It is assumed that the 18 students who didn’t respond had no preference for the form of their feedback (or didn’t look at their feedback). The overall interpretation is that the method is effective, but not one that has elicited any strong feelings either for or against from the students.


Discussion

Using speech-to-text software to provide feedback on students’ work was successful in this pilot. Students did not express strong preferences for the use of the system, but they did not express any strong dislike for the system either. However, significant time savings were realised by the staff member using it. In addition, the feedback was both quantitatively and qualitatively enriched.

From the results of this study, audio-to-text feedback on student work takes advantage of some of the numerous benefits outlined in previous studies on providing audio feedback. From the students’ perspective, this includes the depth and quantity of the feedback. From the lecturers’ perspective, the positive experiences include providing more depth to the students’ feedback and being able to fully explain how a comment might improve a piece of work, while enjoying considerable time savings.

There is a real loss in audio-to-text feedback where the personalised nature of audio feedback is concerned. The nuances and warmth of voice that students reported enjoying are not available in the textual translation (Ice et al., 2007; Rotheram, 2009).

The drawbacks that have been identified with the provision of audio recordings are almost all associated with the administration of the sound files. The simplicity of the audio-to-text translation eliminates these administrative problems.

This simplicity is also scalable, whereas the advantages found with the use of audio files are not enough to offset the higher administrative burden if the system is scaled up. Providing audio recordings for 20 students may be worthwhile, but as the number of students increases, the cost becomes too high. This is not the case with audio-to-text translation: because of the time savings, the value of the system increases as the number of students grows. Saving an average of five minutes per script when providing detailed feedback for 20 students is dwarfed by the time savings if similar gains are realised when providing feedback for 100 or more students.
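To put rough numbers on that scaling claim, here is a minimal back-of-the-envelope sketch in Python. It is not part of the original report; the only input is the roughly five-minute saving per script observed in the pilot, which I am assuming holds at larger class sizes.

```python
# Illustrative only: how a ~5-minute saving per script (the figure observed
# in the pilot) accumulates as class sizes grow.
MINUTES_SAVED_PER_SCRIPT = 5

for class_size in (20, 50, 100, 200):
    minutes = class_size * MINUTES_SAVED_PER_SCRIPT
    print(f"{class_size:>3} students: {minutes:>4} minutes (~{minutes / 60:.1f} hours) saved per assignment")
```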

Using audio-to-text translation to provide detailed feedback on students’ work has the potential to change the nature of marking. Taking advantage of most of the benefits of audio recordings, while minimising the drawbacks associated with their administration, means that lecturers can enjoy the best of both worlds in their marking.

References

Attwood, R. (2009, March 5). Institutions hear consumers when students speak. Times Higher Education, 1886, 10.

Ice, P., Curtis, R., Phillips, P., & Wells, J. (2007). Using asynchronous audio feedback to enhance teaching presence and students’ sense of community. Journal of Asynchronous Learning Networks, 11(2), 3 – 25.

King, D., McGugan, S., & Bunyan, N. (2008). Does it make a difference? Replacing text with audio feedback. Practice and Evidence of the Scholarship of Teaching and Learning in Higher Education, 3(2), 145 – 163.

Merry, S., & Orsmond, P. (2008). Students’ attitudes and usage of academic feedback provided via audio files. Bioscience Education e-Journal, 11(3). doi:10.3108/beej.11.3

Rotheram, B. (2009). Sounds Good: Quicker, better assessment using audio feedback: Final Report.

Teaching with Podcasts – A Great Success Story

Teaching labs for statistics classes is one of those labour-intensive, not-much-fun teaching jobs. In Bangor, we enjoy 300+ students a year studying psychology, and teaching them to use SPSS to analyse data is one of those difficult and thankless tasks no one really wants.

For many years, we divided our numbers into groups of about 50, and then repeat-taught six 90-minute sessions a week for about eight weeks each semester, over three semesters. Even with an average of three postgrads helping out in each lab session, the results were never very positive – the most able were bored, the less able were lost, and the middle felt pushed, but kind of got it.

My colleague, Mike Beverley, was responsible for teaching the labs across both the first and second years while I taught the first year classes. The topics covered ranged from simple data entry to complex ANOVAs and factor analyses. Students were set assignments, analysed data, and turned in regular work. The experience was not very enjoyable for either the students or the teachers. Our satisfaction ratings from the students were low when it came to the labs (“Burn every copy of SPSS on the planet!” was typical). We needed a new model.

We initially decided to podcast Mike’s first session each week, so that he could at least ensure the presentation to the students was consistent and he didn’t have to repeat himself endlessly. Podcasting was new then (spring of 2005), so we were just trying things out. At first we recorded audio podcasts, but after a few weeks we began recording screen captures for the students. By chance, we stumbled onto a model that really worked. In the labs, we would usually provide about five minutes of instruction, and then let the students work at it for a few minutes before introducing something more. As a result, all of our podcasts were about five minutes long, demonstrating how to carry out a procedure with a voiceover. The initial fumbling about was successful enough that we decided to prepare podcasts over the summer to cover every topic we taught for use the following year.

In the autumn of 2005, we scheduled the labs, employed the postgrads, and demonstrated the podcasts to the new, incoming students. To our surprise, no one ever came to another scheduled lab. I fib here – there were about six students who insisted on coming every week. After about three weeks, we rolled the six into a single session, and let the rest of the students know that we were only going to be in the lab for that single 90-minute session. If they had questions, they needed to come along then.

The students learned SPSS – better than they had in previous years. Their feedback in the module evaluations was uniformly positive about learning SPSS that way, and we changed the way we did things (we still use the same basic model six years later).

Cost Savings

The savings from this have been great for the Department. Teaching the traditional labs went something like this:

  • 9 hours/week per instructor
  • 4 instructors = 36 hours/week
  • 8 weeks of teaching across 3 semesters
  • 864 hours/year of instructional time
  • Stats support surgeries for 3 hours/week across 15 weeks, involving 2 or 3 people
  • Approximately 75 hours/year
  • Total of about 940 hours/year teaching and supporting stats

Using podcasts for instruction, the cost went something like this:

  • 22 weeks with 1 hour of support available per day (1 person currently)
  • 110 hours
  • Savings of 830 hours

Podcast Development

  • Estimate about 3 – 4 hours per lesson
  • 2 or 3 podcasts per lesson
  • 23 lessons
  • 70 podcasts in total
  • About 100 hours in total to make the initial podcasts

Bottom Line

Massive savings overall (830 hours, less development time), and the podcasts are reusable. The instructor was happier (for a time), and the students were happier. The students have the podcasts available throughout their entire undergraduate programme, so they can refer to them anytime. They have control over their learning, and use the labs when they choose to do so. A real win – win solution!
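For anyone who wants to check that arithmetic, here is a quick Python sketch reproducing the figures from the lists above. The one assumption I have added is five support days per week in the podcast model, inferred from the stated total of 110 hours across 22 weeks; everything else comes straight from the bullet points.

```python
# Illustrative only: reproducing the cost-savings arithmetic above.

# Traditional labs
lab_hours = 9 * 4 * (8 * 3)    # 9 hrs/week x 4 instructors x 24 teaching weeks = 864
surgery_hours = 75             # stated estimate for the stats support surgeries
traditional_total = lab_hours + surgery_hours           # ~940 hours/year

# Podcast model
support_hours = 22 * 5 * 1     # 22 weeks x 5 days/week (assumed) x 1 hour/day = 110
annual_saving = traditional_total - support_hours       # ~830 hours/year

development_hours = 100        # one-off cost of recording the initial podcasts
first_year_saving = annual_saving - development_hours   # ~730 hours in year one

print(lab_hours, traditional_total, support_hours, annual_saving, first_year_saving)
# 864 939 110 829 729 -- the post rounds the middle figures to 940 and 830
```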

The single biggest problem has been the updates of the programme (SPSS). This has meant that we have re-recorded the podcasts twice in six years – not too bad an investment given the long-term benefits.

Success Secrets

I think there were a few things we did (and continue to do) right when we made the teaching podcasts. They were:

  • Have an expert in Stats & SPSS teaching make the podcasts (at least the first time)
  • Keep the podcasts short (5 – 10 minutes)
  • Don’t obsess about quality
  • Be prepared to release updates quickly if there is a need for clarification

In the area of teaching and learning, given the promises of efficiency and performance that technology has held out over the years, it is nice to see something that really works – along with some quantifiable evidence to illustrate just how well.

We started this back in 2005. I have presented it at a couple of conferences, but never put it anywhere people could refer to it, so here it is.

Weekly Tests

In one of my earlier posts, I wrote about using tests as a learning instrument – the testing effect. When I still used lectures as a means of teaching statistics and research methods, I used weekly open-book tests as a way to get students to learn what information their books contained. It worked.

Open-book examinations or tests are defined as testing situations where students can use textbooks, notes, journals, and reference materials during a test (Eilertsen & Valdermo, 2000). A number of studies have identified numerous benefits in the use of open-book tests (cf. Francis, 1982; Theophilides & Dionysiou, 1996). Some of the advantages include: a reduction in examination tension and stress (Brockbank, 1968; Feldhusen, 1961; Gaudry & Spielberger, 1971; Gupta, 1975; Jehu, Picton & Futcher, 1970; Michaels & Kieren, 1973; Tussing, 1951); a greater learning effect (Michaels & Kieren, 1973); a reduction in rote memorisation (Bacon, 1969; Betteridge, 1971); a reduction in cheating (Feldhusen, 1961; Gupta, 1975; Tussing, 1951); more constructive preparation, depending on the way open-book tests are structured (Feldhusen, 1961); and the promotion of active learning during the testing process (Feldhusen, 1961). There are some disadvantages to open-book testing, which include: students wasting time during a test looking up information (Bacon, 1969; Jehu et al., 1970); and a reduction in the amount of factual information learned from the study material (Kalish, 1958; Tanner, 1970). Within the research, there are also findings indicating no difference between open-book and traditional tests: there is no difference in attainment between the two types of test (Brockbank, 1968; Feldhusen, 1961; Kalish, 1958; Jehu et al., 1970), and examination preparation methods do not differ between the two, depending on the way open-book tests are structured (Feldhusen, 1961).

One of the benefits of open-book tests is that reduced anxiety increases confidence. Although increased confidence does not necessarily lead to better performance (Kalish, 1958; Krarup, Naeraa, & Olsen, 1974), as Pan & Tang (2004) observed, anything that increases students’ confidence in their ability will effectively reduce statistical anxiety (a real problem in the area).

Increasing student engagement with the learning resources provided is another benefit of open-book tests. As students experience difficulty with a subject, they begin a disengagement process and do not try to learn it. Phillips (1995) found that the use of open-book tests linked to specified textbooks and textbook chapters is effective in increasing student engagement. He observed that through open-book testing, students read and studied their assigned textbook, and the students’ grades increased overall. Improving student engagement with the subject, through engagement with the learning resources provided, is a good way to both reduce anxiety and improve student performance.

Open-book tests linked to specific learning resources reduce stress among students, increase students’ confidence, and increase student engagement with learning resources.

Multiple Tests

Frequent classroom testing is one of the methods that can be used to improve both student satisfaction and student performance (cf. Bangert-Drowns, Kulik, & Kulik, 1991; Glover, 1989; Roediger & Karpicke, 2006). Although normally viewed as a necessary evil for assessing student performance, tests can also be used as learning instruments. Tests are usually used infrequently in a class, often only once or twice in a semester (Roediger & Karpicke, 2006). When students know that a test is to occur, they will revise and study material in preparation (Bangert-Drowns et al., 1991; Leeming, 2002). The frequency of classroom testing has a direct relationship with the frequency of student revision; more frequent testing leads to more frequent revision.

In a meta-analysis of studies exploring the effects of frequent classroom testing, Bangert-Drowns et al. (1991) found that student performance increases as the number of tests increases, although the effect diminishes: the gains in achievement become progressively smaller with each additional test.

The effect on student performance is not solely due to the increase in the amount of revision students engage in because of testing, but testing itself has a direct effect on learning. If a student has successfully recalled material for a test, they have a greater chance of remembering it in the future than if they had not been tested. This is called the testing effect and was studied as early as 1917 by Gates. Glover (1989) observed that although the testing phenomenon has been studied extensively in cognitive psychology, there has been very little interest in it from the educational establishment. Indeed, the current mantra in educational settings is a call to reduce the assessment load on students to a minimum, which would preclude the use of tests as instruments of learning.

Roediger and Karpicke (2006) found that students who were repeatedly tested on material as a part of the learning process had better long-term retention of the information than students who were given repeated opportunities to study the material before testing. They set up an experiment in which students were given a short passage to study for later testing. In one group, the students had four study sessions, followed by a test. In the second group, the students had a single study session and then three testing sessions, followed by a final testing session (five sessions for each group). In the fifth session, when both groups were tested, the students who were given repeated study sessions performed much better than the students who had repeated test sessions as a part of their learning process (83% vs. 71%). However, when tested on the same material one week later, the students who had repeated testing sessions outperformed the students who had repeated study sessions (61% vs. 40%). Clearly, testing during learning enhances retention.

In addition to increasing student performance, Bangert-Drowns et al. (1991) found that overall student satisfaction with a class improves with frequent classroom testing. The effect size for the increase is large, with an overall increase of 0.59 standard deviations in student satisfaction for the studies that measured and reported on it.

Clearly, the use of multiple testing in a statistics class has the potential to both improve student performance and increase student satisfaction.

Distributed Practice

Another benefit of multiple testing sessions spread across the semester is a more distributed model of learning. The number and spacing of tests during an academic semester determine how distributed the learning process will be. For more than a century, the advantage of distributed learning has been repeatedly demonstrated in memory research (cf. Dempster, 1996). When learning under massed practice and distributed practice conditions is compared, there is a marked difference in performance. Short-term measures of performance (taken immediately after practice) favour massed practice. However, long-term recall or retention is much better when distributed practice sessions, rather than massed practice sessions, are used in learning.

Methods

The performance measure used to judge the success of the innovation was a closed-book, two-hour final examination that was identical to the examination that had been used in the previous year. Any difference in performance between the two years (309 students in the non-open-book year and 287 in the open-book year) would be attributed to the new assessment method. In addition, to gauge student satisfaction, a question asking specifically about the weekly, open-book tests was included on the module evaluation form the students normally complete.

The measure used was student performance in a closed-book, multiple-choice examination. During the first academic year, the students took a 50-item mid-term examination and a 40-item final examination. In the second academic year, these two examinations were combined to form an 87-item final examination: two questions from the mid-term and one question from the final examination had been eliminated from the first year’s results following standard item analysis, and so were not included in the second examination.

In addition, comments from the student evaluations from the module specifically relating to the weekly, open-book tests were analysed to provide a measure of student satisfaction for the weekly tests.

Results

Student Performance

The performance for each student was expressed as a percentage score for the purposes of the analysis. The students were ranked according to their performance and then divided into smaller cohorts to examine the differences (using a between groups t-test) across the range of abilities.

Student performance across two years, with the second cohort being required to take weekly open-book tests prior to the closed-book final. The percentage scores represent student performance on the closed-book final examination.

Rank              2005     2006     % Increase   SD       t-test Results
Top 10%           75.6%    82.4%    6.8%         4.0%     t(57) = 6.57, p < .001
Top Quartile      70.6%    76.3%    5.7%         5.8%     t(145) = 6.37, p < .001
Middle 50%        56.0%    60.3%    4.3%         5.1%     t(297) = 7.48, p < .001
Bottom Quartile   40.9%    43.2%    2.3%         5.7%     t(148) = 2.62, p = .009
Bottom 10%        36.6%    37.4%    0.8%         3.6%     t(59) = 0.74, p = .47
Overall           55.8%    60.0%    4.2%         12.9%    t(594) = 4.16, p < .001
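For readers who want to run this kind of comparison on their own data, here is a minimal sketch of the between-groups t-test described above, using SciPy. The scores below are randomly generated placeholders that only loosely mimic the overall means and spread in the table; nothing here uses the original 2005 or 2006 cohort data, so the printed statistic is purely illustrative.

```python
# Illustrative only: an independent-samples (between-groups) t-test comparing
# two cohorts' final-examination percentage scores.
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
scores_2005 = rng.normal(loc=55.8, scale=12.9, size=309)  # synthetic placeholder cohort
scores_2006 = rng.normal(loc=60.0, scale=12.9, size=287)  # synthetic placeholder cohort

t, p = stats.ttest_ind(scores_2006, scores_2005)
df = len(scores_2005) + len(scores_2006) - 2  # 594, matching the Overall row above
print(f"t({df}) = {t:.2f}, p = {p:.3g}")

# The same call can be applied to any sub-group (e.g. the top 10% of each
# cohort after ranking) to reproduce the row-by-row comparisons.
```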

The results from the student satisfaction questions showed overwhelming support for the weekly open-book tests:

Responses to the end-of-semester module evaluation are tabulated, along with some of the phrases used to describe the weekly, open-book tests.

Rating      Example Adjectives                       Number
Excellent   Brilliant, Excellent, Very Good          70
Good        Good, Liked them                         63
Neutral     Made me read                             5
Negative    Too early in the morning, Hated them     2

Introducing weekly open-book tests accomplished several positive outcomes for our students. The measurable outcomes were higher grades, with a disproportionate benefit accruing to higher-performing students, and greater overall student satisfaction. Although we have not measured it, because of the cognitive benefits of distributed practice we hope the learning is long-term and will stay with the students throughout their studies. Again, we have not directly measured statistical anxiety, but we feel that the innovation has reduced statistical anxiety among our students and fostered greater engagement with the subject than in the past. These results suggest that broad adoption of the model would greatly benefit students studying statistics.

I used this system for about five years before moving to a different method of teaching and assessing statistics and research methods. I think it worked out well for both the students and me as their instructor.

References

Bacon, F. (1969). Open-book examinations. Education and Training, 9, 363.

Bangert-Drowns, R. L., Kulik, J. A., & Kulik, C. C. (1991). Effects of frequent classroom testing. Journal of Educational Research, 85(2), 89 – 99.

Betteridge, D. (1971). Open-book exams. Education in Chemistry, 8 (2), 68 – 69.

Brockbank, P. (1968). Examining Exams. Times Literary Supplement, 25th July.

Dempster, F. N. (1996). Distributing and managing the conditions of encoding and practice. In E. L. Bjork & R. A. Bjork (Eds.), Memory: Vol. 10. Handbook of Perception and Cognition. (pp. 317 – 344). New York: Academic Press.

Eilertsen, T., & Valdermo, O. (2000). Open-book assessment: A contribution to improved learning? Studies in Educational Evaluation, 26(2), 91 – 103.

Feldhusen, J. F. (1961). An evaluation of college students’ reactions to open-book examinations. Educational and Psychological Measurement, 21, 637 – 646.

Francis, J. (1982). A case for open-book examinations. Educational Review, 34 (1), 13-26.

Gates, A. I. (1917). Recitation as a factor in memorizing. Archives of Psychology, 6 (40).

Gaudry, E. & Spielberger, C. D. (1971). Anxiety and Examining Procedures. New York: Wiley & Sons.

Glover, J. A. (1989). The “testing” phenomenon: Not gone but nearly forgotten. Journal of Educational Psychology, 81 (3), 392 – 399.

Gupta, A. K. (1975). Open-book examinations in India – some reflections. In A. K. Gupta (Ed.), Examination Reform: Directions, Research and Implications. New Delhi: Sterling.

Jehu, D., Picton, C. J., & Futcher, S. (1970). The use of notes in examinations. British Journal of Educational Psychology, 40, 335 – 337.

Kalish, R. A. (1958). An experimental evaluation of the open examination. Journal of Educational Psychology, 40, 200 – 204.

Krarup, N., Naeraa, N., & Olsen, C. (1974). Open-book tests in a university course. Higher Education, 3, 157 – 164.

Leeming, F. C. (2002). The exam-a-day procedure improves performance in psychology classes. Teaching of Psychology, 29, 210 – 212.

Michaels, S. & Kieren, T. R. (1973). Investigation of open-book and closed-book examinations in mathematics. Alberta Journal of Educational Research, 19, 202 – 207.

Pan, W. & Tang, M. (2004). Examining the effectiveness of innovative instructional methods on reducing statistics anxiety for graduate students in the social sciences. Journal of Instructional Psychology, 31, 149-159.

Phillips, G. (1995). Using open book tests to encourage textbook reading in college. Journal of Reading, 38 (6), 484.

Roediger, H. L., III, & Karpicke, J. D. (2006). Test-enhanced learning: Taking memory tests improves long-term retention. Psychological Science, 17(3), 249 – 255.

Tanner, L. (1970). Performance in open-book tests. Journal of Geological Education, 18, 166 – 167.

Theophilides, C., & Dionysiou, O. (1996). The major functions of the open-book examination at the university level: A factor analytic study. Studies in Educational Evaluation, 22(2), 157 – 170.

Tussing, L. (1951). A consideration of the open-book examination. Educational and Psychological Measurement, 2, 597 – 602.