The challenges of deciding ‘what works’ in teaching and learning
-Lynn Vos (May 2021)
I recently asked my students two questions about their online learning experiences: what worked and what didn’t work for them across all of their classes in the past year. Perhaps not surprisingly, despite the small sample size, things that worked for some students did not work for others. Some disliked pre-recorded lessons; others found them enjoyable and useful. Some loved their online lectures and a very structured class; others felt there should be less lecturing and more activities. Some wanted more face-to-face online classes; others liked the model they had, with half of the scheduled class time online face-to-face and the other half structured asynchronous learning. Some felt all of their lecturers had tried their best and were very appreciative. A small majority would prefer a blended model in the post-COVID world – some classes online, some in the classroom. A few wanted to continue studying online for all of their learning; others could not wait to get back to the classroom full time. Some raised issues that also come up in classroom-based learning – instructors should reply to emails more often and help those who are struggling. Only Zoom breakout rooms were mentioned as a positive software-based learning tool; no other software was noted in the responses.
In the great global experiment that is online learning during the COVID-19 pandemic, conversations about what does and does not work to engage and help students learn are happening at a pace and with a passion not experienced before. But, as with the great majority of educational conversations over the years, these too carry a heavy element of dogma from proponents of particular methods and approaches – none of which research has demonstrated to be the ‘answer’ to engaging and educating all students. Among these are:
“Lectures don’t work – you can’t lecture online!” “Flipped classrooms are the only answer.” “You must keep students engaged with lots of activities and only short bursts of lecturing.” “Active learning is best.” “The future of learning is all technology.”
It is very likely, however, that if we ask a much larger sample of students what they like and don’t like, what works for them and what does not, we will still find what I found. Every method and approach – when done conscientiously by the teacher – has its proponents and detractors amongst the student body. No single method works for all students. Rather, as I will discuss in my next post, what works is a particular set of characteristics of expert teachers (Hattie, 2012), whether they are teaching online or off.
Education has always been subject to the ‘next big thing’, whether it be authentic assessment, discovery learning, flipped classrooms, clickers, whole-person learning, simulations, mastery learning and many, many more. I have written about and advocated for some myself!
This is not to say that these ideas and approaches have no value. When they were originally researched and tested for efficacy, the results were positive in most cases. The problems arise, however, in at least three ways. First, the original conception of a teaching idea is generally stretched, adapted, misunderstood and/or replaced with other ideas altogether over time, such that proponents are not always talking about the same thing. Take authentic assessment, for example – a set of principles meant to help structure meaningful experiences in the classroom: read the literature and you will find dozens of different definitions and explanations amongst its various advocates. Another example is experiential learning. Today the term is often used to mean active learning or work experience, when it was originally a theory of learning introduced by John Dewey and further refined by David Kolb as the experiential learning cycle: concrete experience, reflective observation, abstract conceptualization and active experimentation.
Second, those who advocate for a single method, such as problem-based learning, make the mistake of putting their own enthusiasm and expertise in place of the art and science of good teaching practice – and it is the latter that actually holds the kernels of what is most valuable to student learning.
Third, research by John Hattie (2009) shows that almost every approach or factor that has been studied has a non-zero, positive effect on student learning. More on this in a moment. Before I consider Hattie’s conclusions in more detail, it is important to note that to argue that a particular approach or method is the ‘key’ to student learning is to undervalue and misrepresent the educational process and how students actually learn. We need to remember that:
1. Different approaches have their value in different circumstances.
2. No single approach is the answer to the complex matter of learning and educating.
3. Students are human beings who are adapted by nature to learning in all kinds of ways and in all kinds of circumstances.
4. Good teaching is about finding the best – and multiple – ways to engage students’ faculties and predispositions, while continually reflecting on what works well and what doesn’t; it is not about particular approaches.
Perhaps the most useful works I have ever come across on what contributes to student learning are those by John Hattie – Visible Learning (2009)[1] and Visible Learning for Teachers (2012)[2].
One of the main conclusions from his synthesis of “more than 800 meta-analyses of 50,000 research articles, about 150,000 effect sizes and about 240 million students” (Hattie, 2012: 2) – plus the additional 100 meta-analyses he completed after his first book – is that almost every teaching approach or intervention that has been studied can make a difference to student learning. Of the hundreds of teaching interventions studied – including simulations, e-learning, study skills training, activity-based methods, student control over learning, meta-cognition training, flipped classrooms, formative feedback and cooperative learning – some had much greater effect sizes than others: formative feedback, for example, had an effect size of 0.88, while student control over learning had an effect size of only 0.04.
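For readers unfamiliar with the metric: an effect size in this literature is typically Cohen’s d – the difference between the treatment-group and control-group means, divided by a pooled standard deviation. Here is a minimal sketch in Python; the scores are made-up numbers purely for illustration, not anything from Hattie’s data:

```python
import statistics

def cohens_d(treatment, control):
    """Cohen's d: standardized mean difference between two groups."""
    n1, n2 = len(treatment), len(control)
    m1, m2 = statistics.mean(treatment), statistics.mean(control)
    v1, v2 = statistics.variance(treatment), statistics.variance(control)
    # Pooled standard deviation across both groups
    pooled_sd = (((n1 - 1) * v1 + (n2 - 1) * v2) / (n1 + n2 - 2)) ** 0.5
    return (m1 - m2) / pooled_sd

# Hypothetical test scores: a class taught with and without formative feedback
with_feedback = [72, 78, 81, 69, 75, 84, 77]
without_feedback = [65, 70, 74, 62, 68, 73, 66]
print(round(cohens_d(with_feedback, without_feedback), 2))  # ~1.74 on these toy numbers
```

A meta-analysis averages many such d values across studies of the same intervention, which is what produces figures like the 0.88 for formative feedback quoted above.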
Various student characteristics – including prior achievement, pre-term birth weight, family structure, gender, motivation and student expectations/self-reported grades – also had effects of varying size, with students’ expectations of the grades they are likely to get having the greatest effect size of all at 1.44 (and gender the lowest at 0.12).
The point? If zero is the baseline for measuring the effect of any intervention or student characteristic on learning, then almost every approach will have some effect on learning – and thus every researcher and every proponent of the 150 different approaches that made up Hattie’s synthesis of the 900+ meta-analyses can claim to be making a positive difference. The only factors that demonstrated small, often minute, negative effect sizes were summer vacation (-0.02), welfare policies (-0.12), retention (-0.13), television (-0.18) and mobility (-0.34).
Thus, Hattie concludes:
‘When teachers claim that they are having a positive effect on achievement, or when it is claimed that a policy improves achievement, it is a trivial claim, because virtually everything works: the bar for deciding ‘what works’ in teaching and learning is so often, inappropriately set at zero’ (p.2.).
He then explains the dangers of using zero as the baseline:
‘Setting the bar at zero means that we do not need any changes in our system! We need only more of what we already have – more money, more resources, more teachers per student, more… But this approach, I would suggest, is the wrong answer… Setting the bar at an effect size of d=0.0 is so low as to be dangerous. We need to be more discriminating’ (p.3).
Hattie (2012) notes that if we are going to have some confidence in the learning effect of an intervention, we need to show an improvement in student learning of at least the average gain found across all the research. His synthesis found that the average effect size across all interventions was 0.4 – what he calls the ‘hinge point’. Anything less should be viewed as not effective and noted as such when discussed. This is illuminating when we consider that studies on the ‘quality of teaching’ show an effect size of only 0.48! Peer influences are greater at 0.58. Teaching strategies do somewhat better at 0.68, classroom discussion at 0.82 and teacher credibility at 0.90. (We should also ponder that the greatest impact on student learning comes from students’ self-reported grades – that is, their personal expectations that they will learn or have learned (1.44)!)
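To make the hinge-point idea concrete, here is a small Python sketch that sorts the handful of effect sizes quoted in this post against Hattie’s d = 0.4 benchmark (the dictionary holds only the values cited here, not his full tables):

```python
# Effect sizes quoted in this post (Hattie, 2009; 2012)
effect_sizes = {
    "self-reported grades / student expectations": 1.44,
    "teacher credibility": 0.90,
    "formative feedback": 0.88,
    "classroom discussion": 0.82,
    "teaching strategies": 0.68,
    "peer influences": 0.58,
    "quality of teaching": 0.48,
    "gender": 0.12,
    "student control over learning": 0.04,
    "television": -0.18,
}

HINGE_POINT = 0.4  # the average effect size across all interventions in Hattie's synthesis

for factor, d in sorted(effect_sizes.items(), key=lambda kv: -kv[1]):
    verdict = "above the hinge" if d >= HINGE_POINT else "below the hinge"
    print(f"{factor:45s} d = {d:+.2f}  ({verdict})")
```

Seen this way, the question is never whether an approach beats zero – almost everything does – but whether it clears the average.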
So, the next time you hear someone advocate for a particular approach, you may wish to ask them what the research showed in terms of effect size. This may not make you very popular, of course, given that adherence to particular approaches can sometimes be like adherence to a political party, religion, or type of diet or health regime – held strongly and not really open for discussion.
In my next article, I will discuss what Hattie suggests actually works to improve student learning. He identifies five major dimensions of excellent or expert teachers and demonstrates that, together, they are far more powerful than any single approach or method in helping students to learn – whether online or off.
[1] Hattie, J.A.C. (2009). Visible Learning: A Synthesis of Over 800 Meta-Analyses Relating to Achievement. London: Routledge.
[2] Hattie, J.A.C. (2012). Visible Learning for Teachers: Maximizing Impact on Learning. London: Routledge.