This is a satisfyingly thick book with short chapters that allow one to dip in and out without losing track. The title is rather cool, and I can imagine producing it with a flourish in the staff room to impress colleagues. Sadly though, in my experience most will not be particularly interested. Teachers seem most excited by professional discourse that addresses the affective side of education – anything that speaks of making students feel safer, more connected and more loved will create a buzz of interest – and it goes without saying that this will raise student achievement. This predominance of heart over mind ignores more intellectual aspects, such as the applications of cognitive science or educational psychology, which only rarely find their way into consideration, despite their significant effect on student progress and well-being. It is these latter features that Paul Kirschner and Carl Hendrick address in their book.
How Learning Happens selects 28 key works of research in educational psychology from 1960 to 2013. Each work gets a chapter in which the reader is told why they should read the article, given a description of the research and its implications for teaching practice, and offered a useful bullet-pointed ‘takeaway’ summary at the end. There are nice introductions to Sweller’s cognitive load theory, Geary’s biologically primary and secondary knowledge, Paivio’s dual coding, Rosenshine’s principles of instruction and Black and Wiliam’s assessment for learning, along with a range of perhaps less commonly known works such as Rothkopf’s Concept of Mathemagenic Activities (no, not about maths, but activities that promote learning!) and Bandura’s work on self-efficacy. Each chapter is supported by key references with QR codes, which provide a convenient way of accessing supporting material. The book is rounded out with a final chapter entitled ‘The Ten Deadly Sins of Education’, which addresses some of the common myths in education such as Ken Robinson’s ‘schools kill creativity.’
For a beginning teacher, or for experienced teachers wanting to broaden their knowledge by absorbing some of the science around learning, this is an excellent resource. Not every work discussed in the book appealed to me – a few struck me more as arguments from imagination, in which the researchers had perhaps mapped out an educational thought space without much evidence, but that’s psychology for you – it wasn’t so long ago that major theories in psychology were built on studies where n=5.
A minor quibble for me came in a grey breakout box (p 128) explaining standard deviation and effect size. The authors present effect size (d) as a way of comparing the relative impact of different teaching approaches determined through research, much as Hattie does in his rankings of educational interventions (link). In fact they make a point of mentioning Hattie’s hinge point of 0.4 as a cut-off below which ‘you could just as well have done nothing.’ This is quite erroneous. For one, effect sizes resulting from research into different types of educational interventions cannot be meaningfully combined (as Hattie does in meta-meta-analyses) or easily compared. Many factors, such as the quality of experimental design, the sample size and even the age of the students (given that the spread of academic achievement data tends to increase with age), will affect the effect size (more info here). And teachers need to know this if they are to grapple with the findings of research.
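The point about spread is easy to see from the definition itself: Cohen’s d is the raw mean difference divided by the pooled standard deviation, so the very same raw improvement yields a large d in a narrow-spread sample and a small d in a wide-spread one. A minimal sketch (the data values are invented purely for illustration):

```python
import math

def cohens_d(group_a, group_b):
    """Cohen's d: difference in means divided by the pooled standard deviation."""
    n_a, n_b = len(group_a), len(group_b)
    mean_a = sum(group_a) / n_a
    mean_b = sum(group_b) / n_b
    # Sample variances (n - 1 denominator)
    var_a = sum((x - mean_a) ** 2 for x in group_a) / (n_a - 1)
    var_b = sum((x - mean_b) ** 2 for x in group_b) / (n_b - 1)
    pooled_sd = math.sqrt(((n_a - 1) * var_a + (n_b - 1) * var_b) / (n_a + n_b - 2))
    return (mean_a - mean_b) / pooled_sd

# The same 5-point raw gain, applied to groups with different spreads:
tight = [60, 62, 64, 66, 68]   # narrow spread, e.g. a single age band
wide = [50, 58, 66, 74, 82]    # wide spread, e.g. a broad age range
gain = 5

print(cohens_d([x + gain for x in tight], tight))  # ~1.58, comfortably "above the hinge"
print(cohens_d([x + gain for x in wide], wide))    # ~0.40, barely at the hinge point
```

An identical intervention effect lands on either side of Hattie’s 0.4 hinge depending solely on the spread of the sample, which is exactly why such cut-offs cannot be applied across studies.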
While reading the book I excitedly tweeted a few paragraphs, and although there were likes and retweets, some interesting cautionary comments were made by Christian Bokhove, an education professor at the University of Southampton. The first related to a Danish study (link) referenced in the chapter on direct instruction (p182). The study identified a range of features typical of ‘learner-centred education’, looked at data on academic achievement, and concluded that learner-centred education had a negative impact, particularly on students of low socio-economic status. To a fan of direct instruction like me this was gold, but Christian pointed out the limitations of the study (link) – a reminder of the dangers of taking studies cited in support of claims at face value.
A second paragraph I tweeted concerned the authors listing ‘motivation leads to learning’ as the eighth of their Ten Deadly Sins of Education (p302). They observe how often improving motivation and engagement is seen as one of the keys to improving outcomes for students, and claim that research shows there is neither a causal nor a reciprocal relationship between motivation and learning – it always runs one way, from learning to motivation. This clearly resonated with many on edutwitter, as it did with me. So often we hear arguments along the lines of ‘if only we make it more interesting they will engage and learn better’ or ‘we just have to get the students more motivated.’ The focus is rarely on how we can teach more effectively, give the student a taste of success, and then reap the natural flow of motivation that results. Nevertheless, it was pointed out by Christian and others that Kirschner and Hendrick have taken quite a narrow slice of the research on student motivation to support their point, and that there is a reasonable amount of evidence to support a reciprocal relationship between motivation and learning.
I recommend this book highly. It really is a ‘one-stop shop’ for someone wanting to dive into evidence-informed education for the first time, and it provides many starting points for digging deeper into the science of learning.