
Quality in HE – what really matters? A review of Gibbs (2010)

– Lynn Vos (2017)

This blog post reviews Dimensions of Quality by Graham Gibbs (2010), a report on what really matters for educational quality and how it should be measured.


In September 2012, students in England began paying up to £9,000 per year in undergraduate fees. In addition, universities were now required to report a range of Key Information Set (KIS) statistics about their programmes, including NSS scores; learning, teaching and assessment methods; and graduate employment outcomes (DLHE). Since then, recruitment has remained stable for most institutions, but a new metrics regime has arrived: the Teaching Excellence Framework (TEF). The TEF has been nothing if not controversial, and since its first iteration changes have already been made to the metrics against which universities are measured.

One of the biggest debates around the TEF, the NSS and related metrics is that they are only proxies for quality in higher education, and perhaps not particularly good ones. Of course, quality is a slippery term, meaning different things in different contexts. However, Graham Gibbs (2010), in his report for the Higher Education Academy (HEA), “Dimensions of Quality”, has provided some pretty clear evidence of what constitutes educational quality and educational gain, and the report should be required reading for everyone in HE. This review highlights some of the findings from his report, as well as comments Gibbs made during a 2012 keynote speech at the Association of Business Schools (ABS) conference in Manchester.

Ill-informed measures of quality

Gibbs has been frank in noting that UK higher education has many ill-informed measures of quality, whether they come from the NSS, the QAA, student-staff ratios or graduate employment. For one thing, we tend to measure single variables such as the educational product (final grades on a degree programme, for instance) when what we should also be measuring is educational gain. Educational gain is the difference between performance on a particular measure before and after a student’s experience of higher education, and over the past three years HEFCE has been carrying out pilot projects on what it is and how it can be measured.[1] To Gibbs and the other researchers whose work he reviews, “‘quality’ means whatever improves learning gains, or improves variables known to predict gains, such as student engagement, or improves educational practices that are known to predict engagement, such as ‘close contact’ with teachers” (keynote, ABS, Manchester, 2012).

For convincing arguments about whether an institution, or the system in general, creates educational gain, it is necessary to assess a range of quality dimensions and to undertake multivariate analysis to identify what combination of educational processes leads to or detracts from it. Other countries, such as the United States and Australia, have undertaken such analyses, but we have not done so in the UK, partly because we are often measuring the wrong things, but also because so many different agencies are involved in gathering this data, which to date has never been fully collated.[2] Gibbs argues that what we need are valid indicators of quality, comparable across institutions; only then will we be able to make accurate statements about the quality of our university programmes. His 2010 report identifies those indicators and clears up myths about what does and does not measure the quality of a university education.

What does and does not contribute to educational quality?

Gibbs uses the 3P typology developed by Biggs (1993), who conceived of education as a complex system in which quality and output are affected by the interaction of three groups of variables: presage, process and product. In the report, Gibbs explores which of these factors, alone or in combination, are actually good measures of educational quality and educational gain.

Presage variables exist within an institution. These include available funding, the quality of academic staff, staff-student ratios, library resources, research performance, and the reputation that enables an institution to have highly selective entry requirements. According to Gibbs, research into these variables, alone and in combination, has shown that they do NOT explain much of the variation between universities in relation to educational gain. Only one is a good predictor of the grades (educational product) that students will achieve while at university, and that is the reputation that allows an institution to select only the top-performing students. One of the best predictors of how well students will perform at university is the grades they entered with (see Hattie, 2012). But Gibbs cautions us not to draw wider conclusions about grades and educational quality: “while league tables in the UK invariably include A-level point scores as an indicator of educational quality…. they tell us almost nothing about the quality of the educational process within institutions or the degree of student engagement with their studies” (Gibbs, 2010, p. 18).

What about staff-student ratios (SSRs)? Most of us assume that the lower the SSR, the higher the quality of education a student will receive because, among other things, we assume that the student will have more individual contact with a teacher who can provide more feedback, more quickly. But once again Gibbs warns us not to draw these conclusions without more evidence. First, research by Terenzini and Pascarella (1994) has demonstrated that once student entry grades have been taken into account,[3] SSRs do not predict educational gain. Second, how SSRs are reported varies widely from institution to institution, and they do not take into account who is actually doing the teaching. In some institutions, the top academics may be so engaged with research activities that they rarely deliver a lecture or run a seminar. In these circumstances, students take more of their classes from research students or part-time tutors than from full-time, experienced academics with top reputations in their field. As one parent told me recently, her daughter attended a red-brick university to study management, but most of her classes were delivered by PhD students (who likely had little if any training in how to teach and assess students).

Product variables are the outcomes of higher education and include degree classification, retention and employability. These are the factors most often discussed and measured as evidence of institutional performance in the UK. But performance and educational gain are very different concepts. Degree classification, retention and employability are all highly unreliable measures of quality and engagement. Gibbs notes:

“Our measures of employment and employability are not very meaningful in the UK. This issue concerns the difference between expertise for efficiency, which is what employers recruiting graduates normally demand, and adaptable expertise, that enables an individual to operate effectively in unpredictable new situations (a characteristic of the UK jobs market) (Schwartz et al., 2005). It takes very different kinds of educational process to develop these two forms of expertise. There is a lack of evidence about the long-term consequences for graduate employment of either narrowly focused vocational education or education that emphasises efficiency in generic ‘employability skills’, rather than emphasising the higher order intellectual capabilities involved in adaptable expertise.” (Gibbs, 2010)

Process variables are most closely related to teaching and learning in an institution “and include class size, class contact hours, independent study hours and total hours, the quality of teaching, the effects of the research environment, the level of intellectual challenge and student engagement, formative assessment and feedback, reputation, peer quality ratings and quality enhancement processes” (Gibbs, 2010, p. 19). Here we begin to see what really matters and what the strong predictors of educational gain are. In his keynote speech at the ABS conference, Gibbs discussed these variables and their importance:

“The process variables that best predict gains are not to do with the facilities themselves, or to do with student satisfaction with these facilities, but concern a small range of fairly well-understood pedagogical practices that engender student engagement. Class size, the level of student effort and engagement, who undertakes the teaching, and the quantity and quality of feedback to students on their work are all valid process indicators. There is sufficient evidence to be concerned about all four of these indicators in the UK.” (Keynote, ABS conference, Manchester, 2012)

If we really want to improve the student experience as well as improve outcomes, Gibbs argues, we need to consider only a small range of educational factors that have proven their value time and time again in educational research – the seven principles of good practice in undergraduate education set out by Chickering and Gamson (1987). Good practice:

• Encourages student-faculty contact;

• Encourages cooperation among students;

• Encourages active learning;

• Gives prompt feedback;

• Emphasizes time on task;

• Communicates high expectations;

• Respects diverse talents and ways of learning.

Unfortunately, Gibbs notes, one of the measures that we put so much energy and resources into in the UK – the National Student Survey (NSS) – has told us almost nothing about whether universities are enhancing student learning or engagement through the encouragement of these practices. The revised 2017 NSS, with its questions around learning opportunities, is a step in the right direction, but I feel that at least two of these questions are rather opaque from a student perspective because they use language well understood by academics but not necessarily by students filling out surveys.

Other indicators of quality

Gibbs debunks some other myths of educational quality. For example, it is not the number of contact hours that matters so much as how much time students apply to their studies and the quality of that time. ‘Time on task’ is one of the most important indicators of how much students learn. Students in the UK have the lowest number of learning hours in Europe, and Gibbs argues that “we make the lowest demands on our students….students would have to do nine years in HE if they were to meet the Bologna standards” (p. 23). (Bologna suggests 4,500 study hours for a three-year undergraduate programme; some UK programmes provide only 1,500 – roughly 500 hours a year, at which rate it would indeed take nine years to accumulate the 4,500 hours Bologna expects.) He notes that while actual numbers of contact hours are not a good indication of quality (he gives the example of The Open University, which has achieved top NSS scores while having the lowest class contact hours in the UK), students need to be taught how to make use of all the extra time they have outside of class. He also points out that the Open University example doesn’t mean:

“that you can cut class contact hours from an existing unchanged pedagogy without making any difference to student learning, or that increasing hours will make no difference. …. Very little class contact may result in a lack of clarity about what students should be studying, a lack of a conceptual framework within which subsequent study can be framed, a lack of engagement with the subject, [and] a lack of oral feedback on their understanding…” (p. 22)

It depends on what role the class contact is performing. What matters is the quantity and quality of engagement generated by the particular uses to which class contact is put. More important than class contact hours is the total number of hours that students put in, both in and out of class, and whether they are engaging in deep rather than surface learning during those hours.

It is possible to assess whether deep learning is going on within the classroom and during study hours. Deep learning is more likely to occur when students “experience good, [early] feedback on assignments, and when they have a clear sense of the goals of the course and the standards that are intended to be achieved”. Feedback given early in a course, through formative assessment in particular, and provided almost immediately after the work is done leads to better performance on subsequent assessment, deeper reflection on the part of the student, and better outcomes. We may assume that a three-week turnaround time on student assignments is an improvement, but Gibbs says such a long delay makes feedback almost useless. We need to find ways to make feedback as immediate as possible, regular, and as early in the course as we can.

In his review of the many studies of what contributes to educational gain, Gibbs draws other conclusions:

  • Engagement is more effective if we place high intellectual demands on our students and challenge them;

  • Not all innovation is wonderful – in one study it was shown that too much diverse innovation in assessment was ‘messing up’ students;

  • REF (Research Excellence Framework) scores are NOT good indicators of student performance. There is little or no relationship between measures of the quality or quantity of teachers’ research and measures of the quality of their teaching (for a review of 58 studies of the evidence, see Hattie and Marsh, 1996);

  • However, the “research environment” in a department is important at postgraduate level, particularly for dissertation students;

  • Good learning resources improve student outcomes. The key is not simply to have more and more resources but to do more with what you have;

  • “Teachers who have teaching qualifications (normally a Postgraduate Certificate in Higher Education, or something similar)…[are]…rated more highly by their students than teachers who have no such qualification” (Nasr et al., 1996, in Gibbs, 2010, p. 26);

  • Teaching quality matters;

  • “A survey of international students who have experienced both a UK higher education institution and another EU higher education institution (Brennan et al., 2009) found that such students are less likely to rate them as ‘more demanding’, a finding that does not justify the lower number of hours involved.” (Gibbs, 2010, p. 24)

Gibbs provides numerous references to research studies that have allowed him and others to draw the conclusions noted above. Worth mentioning again, he argues that in the UK we tend to look at the relationship between one pair of variables at a time, such as the relationship between SSRs and student outcomes. As his report makes very clear, however, these relationships are often “confounded with related variables” (Gibbs, 2010, p. 43) and therefore few relationships between two variables can be interpreted with any degree of confidence. He argues that in the US, “there have been far more, larger and more complex, multivariate analyses that take account of a whole raft of variables at the same time and which, as a consequence, are able to tease out those variables that are confounded with others and those that are not” (Gibbs, 2010, p. 43). We need to do this in the UK.
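To make the confounding point concrete, here is a minimal illustrative sketch in Python, using simulated data (a toy example of my own, not Gibbs’s analysis or any real dataset). Selective entry is assumed to drive both low SSRs and good outcomes, so a simple bivariate regression makes SSRs look predictive, while a multivariate model that also includes entry grades shows the SSR effect collapsing towards zero:

    # Illustrative sketch only: simulated data, not Gibbs's analysis.
    import numpy as np
    import statsmodels.api as sm

    rng = np.random.default_rng(0)
    n = 2000

    # Assumed data-generating process: selective institutions recruit
    # students with higher entry grades AND run lower staff-student
    # ratios (SSRs), but only entry grades drive final outcomes.
    entry_grades = rng.normal(0.0, 1.0, n)
    ssr = 18.0 - 2.0 * entry_grades + rng.normal(0.0, 2.0, n)
    outcome = 60.0 + 5.0 * entry_grades + rng.normal(0.0, 5.0, n)

    # Bivariate view: the SSR appears to "predict" outcomes.
    bivariate = sm.OLS(outcome, sm.add_constant(ssr)).fit()
    print("SSR coefficient, bivariate:", round(bivariate.params[1], 2))

    # Multivariate view: once entry grades are controlled for, the SSR
    # coefficient collapses towards zero - the bivariate relationship
    # was confounded with selectivity all along.
    X = sm.add_constant(np.column_stack([ssr, entry_grades]))
    multivariate = sm.OLS(outcome, X).fit()
    print("SSR coefficient, controlling for entry grades:",
          round(multivariate.params[1], 2))

The numbers here are invented; the point is Gibbs’s: bivariate relationships in higher education data frequently travel together with selectivity, funding and other variables, and only multivariate analysis of the kind he describes can separate the variables that matter from those that merely accompany them.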

One of the most thought-provoking sections in Gibbs’s work concerns aspects of quality that appear to contribute significantly to educational gain but which are more difficult to measure: university departments with high-quality management, made up mainly of full-time members of staff, who work well together, share ideas regularly, have healthy ‘communities of practice’, and exhibit values such as ‘liking young people’. All are important to improving student outcomes and increasing the quality of students’ education.

There is so much of value and importance in Gibbs’s work and, as noted above, it should be required reading. For me, the qualities and values expressed in the last paragraph are perhaps the most poignant. In this age of competition, cut-backs, pressure for research output, and league tables, can we not begin a movement that challenges our managers to build the kinds of departments described above? Some of us have worked in departments like that and seen the enormous benefits they provide both to students and to the intellectual and personal development of staff. We have also watched many of them wane and disappear under the conflicting pressures and confusion that characterise much of our higher education environment today.

In my travels around the country when I was with the HEA, however, I saw evidence of these kinds of departments. Let the examples of these departments be the ones that lead us back to a set of values and processes that make higher education the powerful agent for change and societal benefit that it can be.

You can find Gibbs’ full publication on the HEA website here: https://www.heacademy.ac.uk/system/files/dimensions_of_quality.pdf

References

Biggs, J.B. (1993). From theory to practice: a cognitive systems approach. Higher Education Research and Development, 12(1): 73–85.

Chickering, A.W., & Gamson, Z.F. (1987). Seven principles for good practice in undergraduate education. AAHE Bulletin, 39(7): 3–7.

Hattie, J. (2012). Visible learning for teaching: Maximising impact on learning. Abingdon, Oxon: Routledge.

Hattie, J., & Marsh, H.W. (1996). The relationship between research and teaching: a meta- analysis. Review of Educational Research, 66(4): 507–542.

Nasr, A., Gillett, M., & Booth, E. (1996). Lecturers’ teaching qualifications and their teaching performance. Research and Development in Higher Education, 18: 576–581.

Pascarella, E.T., & Terenzini, P.T. (2005). How college affects students: a third decade of research, Volume 2. San Francisco: Jossey-Bass.

Terenzini, P.T., & Pascarella, E.T. (1994). Living myths: undergraduate education in America. Change, 26(1): 28–32.

[1] Initially, learning gain was going to be a new measure built into the TEF; however, it does not appear that this will happen in the near future (March 2018).

[2] This may change with the new Office for Students, which takes over the work of HEFCE and the Office for Fair Access and will administer the TEF.

[3] Pascarella and Terenzini (2005) have demonstrated that close contact with teachers is a good predictor of educational outcomes.


