Welcome, folks from UHI. I hope you find this interesting. Your e-learning portal is here. Good luck with it all.
Over recent years I’ve been employed in a position to help improve the quality of learning and teaching (and e-learning) at a university. If all goes according to plan, I may well hold a related position for the next few years, at least. This post is intended to identify, and provide some early insights into, what I think is the biggest flaw in my own practice, and in the practice at most universities, when it comes to learning and teaching and to attempts to improve it.
The adjective “biggest” is not intended to indicate certainty. There may be bigger flaws; there are certainly other flaws. But at this point in time, given my current thinking, this is what I think is the biggest flaw. The term “flaw” could also be replaced by “hurdle” and/or “barrier”.
The biggest flaw?
Over the last couple of years, as I’ve had a less than positive experience, I’ve become increasingly convinced that the biggest flaw in individual and organisational attempts to improve learning and teaching is quite simply that there is no widely accepted measure of what is good or bad learning and teaching. There are two main problems with the approaches that are used:
- They don’t work.
- There is no wide acceptance of their value.
This absence of an effective measure leads to what I’ve talked about in a recent post – task corruption and the observation that task corruption occurs most frequently with tasks where it is difficult to define or measure the quality of service. Learning and teaching within a university, for me at least and especially when applied to institutions that I’m familiar with, suffers from just this flaw.
Most, if not all, of the problems, debates, struggles and political fire-storms around learning and teaching within universities can be traced back to this uncertainty about what constitutes quality learning and teaching.
They don’t work
At this point in time I am pretty certain that the following methods don’t work (at least not by themselves, and probably not even when complemented by other methods):
- Student results – given the realities of university learning and teaching, I don’t believe (a belief backed by the published research of others) that these are a good indication of student learning. They are certainly not suitable for comparing offerings of courses, especially offerings taught by different staff or across disciplines.
- Level 1 “smile sheets” – i.e. the majority of what passes for learning and teaching “evaluation” at universities in Australia: surveys of students at the end of courses or programs asking how they felt. This is broken.
Absence of wide acceptance
Now, there may be methods of measuring the quality of learning and teaching that do work. You may know of some; feel free to share them. But the point is that, given the complexity and diversity inherent in the organisational practice of learning and teaching within higher education, there is no method that is broadly accepted.
The absence of this broad acceptance, and of the subsequent widespread, disciplined use, voids any validity the evaluation method may have. Unless senior management, middle management, coal-face practitioners and all other stakeholders see the value of the measure, it doesn’t matter whether it works.
This lack of acceptance is not unexpected, as teaching is a wicked design problem – a point made by the quote from Prof Richard Elmore illustrated by the attached photo. Many of the defining characteristics of wicked design problems make it very difficult to get wide acceptance of a solution. For example, from the Wikipedia page:
- There is no definitive formulation of a wicked problem.
i.e. everyone will have their own understanding of the problem, which implies their own beliefs about what the solution should be; and
- There is no immediate and no ultimate test of a solution to a wicked problem.
“No ultimate test of a solution” makes it somewhat hard to evaluate and measure.
Impacts on improving learning and teaching?
In the absence of any accepted measure of quality learning and teaching, I can’t see how you can implement improvements to learning and teaching within a university in any meaningful way. If you can’t measure it, and can’t get broad acceptance of the measure’s value, then whatever you do is likely not to be accepted and will eventually be replaced.
Over the 19 years I’ve been involved with learning and teaching at universities I’ve seen the negative ramifications of this again and again. Some examples include:
- Resting on their laurels (a foundation built of sand).
I’ve heard any number of academics proudly claim that they are brilliant teachers or that their courses are fantastic, only to take those courses as a student, hear from other students, or take over the courses, and discover the reality. In the absence of any effective and accepted measures of teaching quality it’s possible to defend any practice, including doing nothing to improve.
- Fearing change and reverting to past practice.
People hate change. When there are differing measures of value or outcome, it’s possible to ignore something good, especially when it is different. I’ve seen courses re-designed by talented teachers or instructional designers get thrown in the bin.
- Task corruption.
In some cases the “good design” hasn’t been trashed; it has been “corrupted” – as in task corruption. For example, in an approach based on reflective journals, the questions are modified so they no longer encourage deeper reflection (such questions are easier to come up with and easier to mark), and the steps necessary to support and encourage students to reflect are dropped. The reflective journal is still “there”, but its use has been corrupted.
Disclaimer and request for insights and case studies
Perhaps all of the above is due to the limitations of my experience and knowledge. If you know better, please feel free to share.
This is not new, so why talk about it? Well, it is a problem that will have to be addressed in some way. This post is an attempt to think about the problem, identify its outcomes, and start me thinking about how – or whether – it can be solved.