In some reading for the thesis today I came across the concept of McNamara’s fallacy. I hadn’t heard this before. This is somewhat surprising as it points out another common problem with some of the more simplistic approaches to improving learning and teaching that are going around at the moment. It’s also likely to be a problem with any simplistic implementation of academic analytics.
What is it?
The quote I saw describes McNamara’s fallacy as
The first step is to measure whatever can be easily measured. This is ok as far as it goes. The second step is to disregard that which can’t be easily measured or to give it an arbitrary quantitative value. This is artificial and misleading. The third step is to presume that what can’t be measured easily really isn’t important. This is blindness. The fourth step is to say that what can’t be easily measured really doesn’t exist. This is suicide.
The Wikipedia page on the McNamara fallacy describes it in terms of Robert McNamara – the US Secretary of Defense from 1961 through 1968 – putting the USA’s failure in Vietnam down to a focus on quantifying success through simple indicators such as enemy body count, while at the same time ignoring other, more important factors. Factors that were more difficult to measure.
The PhD thesis in which I saw the above quote ascribes it to Yankelovich (1972), a sociologist. Wikipedia ascribes it to Charles Handy’s “The Empty Raincoat”. Perhaps this indicates that the quote is from McNamara himself, just presented in different places.
Within higher education it is easy to see “pass rates” as an example of McNamara’s fallacy. Much of the quality assurance within higher education institutions is focused on checking the number of students who do (or don’t) pass a course. If the failure rate for a course isn’t too high, everything is okay. It is much easier to measure this than the quality of the student learning experience, the learning theory which informs the course design, or the impact the experience has on the student, now and into the future. This sort of unquestioning application of McNamara’s fallacy sometimes makes me think we’re losing the learning and teaching “war” within universities.
What are the more important, more difficult to measure indicators that provide a better and deeper insight into the quality of learning and teaching?
Analytics and engagement
Student engagement is one of the buzzwords on the rise in recent years; it’s been presented as one of the ways/measures to improve student learning. After all, if students are more engaged, obviously they must have a better learning experience. Engagement has become an indication of institutional teaching quality. Col did a project last year in which he looked more closely at engagement; the write-up of that project gives a good introduction to student engagement. It includes the following quote
Most of the research into measuring student engagement prior to the widespread adoption of online, or web based classes, has concentrated on the simple measure of attendance (Douglas & Alemanne, 2007). While class attendance is a crude measure, in that it is only ever indicative of participation and does not necessarily consider the quality of the participation, it has nevertheless been found to be an important variable in determining student success (Douglas, 2008)
Sounds a bit like a case of McNamara’s fallacy to me. A point Col makes when he says “it could be said that class attendance is used as a metric for engagement, simply because it is one of the few indicators of engagement that are visible”.
With the move to the LMS, it was always going to happen that academic analytics would be used to develop measures of student engagement (and other indicators). Indeed, that’s the aim of Col’s project. However, I do think that academic analytics is going to run the danger of McNamara’s fallacy. We become so focused on what we can measure easily that we miss the more important stuff that we can’t.
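To make the danger concrete, here is a minimal, purely illustrative sketch of the kind of “easy” engagement metric an LMS-based analytics tool might start from: counting clicks per student. The log data and function names here are hypothetical, not from any real LMS or from Col’s project; the point is that the count treats all activity as equal, erasing exactly the qualitative differences that matter.

```python
from collections import Counter

# Hypothetical LMS click-stream log: (student_id, action) pairs.
# In a real system these would come from the LMS activity tables.
log = [
    ("s1", "login"), ("s1", "view_page"), ("s1", "login"),
    ("s2", "login"),
    ("s3", "login"), ("s3", "post_forum"), ("s3", "reply_forum"),
]

def naive_engagement(log):
    """Count raw actions per student -- the easy-to-measure metric.

    This captures activity volume only. It says nothing about the
    quality of participation (McNamara's first and second steps:
    measure what is easy, disregard the rest).
    """
    return Counter(student for student, _ in log)

counts = naive_engagement(log)
# s1 and s3 both show 3 recorded actions, but s3's include forum
# contributions -- a qualitative difference the raw count erases.
```

The metric is trivially computable, which is precisely why it is tempting; whether it says anything about learning is another question entirely.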