My PhD essentially argues that most institutional approaches to e-learning within higher education (i.e. the adoption and long-term use of an LMS) have some significant flaws. The thesis describes one attempt to formulate an approach that is better. (Aside: I will not claim that the approach is the best; in fact, I'll argue that the notion of there being "one best way" to support e-learning within a university is false.) The idea of "better" raises an interesting and important question: "How do you measure success with institutional use of an LMS?" How do you know if one approach is better than another?

These questions are important for other reasons. For example, my current institution is implementing Moodle as its new LMS. During the selection and implementation of Moodle there have been all sorts of claims about its impact on learning and teaching. During this implementation process, management have also been making all sorts of decisions about how Moodle should be used and supported (many of which I disagree with strongly). How will we know if those claims are fulfilled? How will we know if those plans have worked? How will we know if we have to try something different? In the absence of any decent information about how the institutional use of the LMS is going, how can an organisation and its management make informed decisions?

This question is of increasing interest to me for a variety of reasons, but the main one is the PhD. I have to argue in the PhD and resulting publications that the approach described in my thesis is in some way better than other approaches. Other reasons include the work Col and Ken are doing on the indicators project and obviously my beliefs about what the institution is doing. Arguably, it’s within the responsibilities of my current role to engage in some thinking about this.

This post, and potentially a sequence of posts after, is an attempt to start thinking about this question. To flag an interest and start sharing thoughts.

At the moment, I plan to engage with the following bits of literature:

  • Malikowski et al and related CMS literature.
    See the references section below for more information. But there is an existing collection of literature specific to the usage of course management systems.
  • Information systems success literature.
    My original discipline of information systems has, not surprisingly, a large collection of literature on how to evaluate the success of information systems. Some colleagues and I have used bits of this literature in some publications (see references).
  • Broader education and general evaluation literature.
    The previous two bodies of literature tend to focus on “system use” as the main indicator of success. There is a lot of literature around the evaluation of learning and teaching, including some arising from work done at CQU. This will need to be looked at.

Any suggestions for other places to look? Other sources of inspiration?

Why the focus on use?

Two of the three areas of literature mentioned above draw heavily on the level of use of a system to judge its success. Obviously, this is not the only measure of success, and it may not even be the best one, though the notion of "best" is very subjective and depends on purpose.

The advantage of use as a measure is that its collection can, to a large extent, be automated. It is relatively easy to generate information about levels of "success" that is, at the very least, better than having nothing.
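As a rough illustration of how such use-based indicators might be automated, the sketch below computes feature-adoption rates from activity logs. The log format, feature names, and course codes here are all invented for the example, not drawn from Moodle or any particular LMS.

```python
from collections import defaultdict

# Hypothetical activity records as (course_id, feature) pairs, loosely
# modelled on the kind of event log an LMS might keep. Illustrative only.
events = [
    ("COURSE-A", "content"), ("COURSE-A", "quiz"),
    ("COURSE-A", "content"), ("COURSE-B", "content"),
    ("COURSE-B", "forum"), ("COURSE-C", "content"),
]

def feature_adoption(events, total_courses):
    """Fraction of courses in which each feature was used at least once."""
    courses_using = defaultdict(set)
    for course, feature in events:
        courses_using[feature].add(course)
    return {f: len(c) / total_courses for f, c in courses_using.items()}

# With 4 courses on the system, "content" was used in 3 of them.
rates = feature_adoption(events, total_courses=4)
print(rates)  # {'content': 0.75, 'quiz': 0.25, 'forum': 0.25}
```

The point is only that a crude indicator like this falls out of data the system already collects; whether adoption rates say anything about learning is exactly the question the broader evaluation literature raises.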

At the moment, most universities have nothing to guide their decision making. Changing this is going to be difficult, but not because of the technology: providing the information is reasonably straightforward. Changing the mindset and processes of an institution so that these results are actually taken into account when making decisions is the hard part.

Choosing a simple first step, recognising its limitations, and then hopefully adding better measures as time progresses is a much more effective and efficient approach. It enables learning to occur during the process and also means that, if priorities or the context change, you lose less because you haven't invested the same level of resources.

In line with this, the combination of Col's and Ken's work on the indicators project and my work on the PhD provides us with an opportunity to compare two different systems/approaches within the same university. This sounds like a good chance to leverage existing work into new opportunities and to develop some propositions about what works around the use of an LMS and what doesn't.

Lastly, there are some good references suggesting that looking at the use of these systems is a good first step. For example, Coates et al (2005) suggest that it is the uptake and use of features, rather than their provision, that really determines their educational value.


References

Behrens, S., Jamieson, K., Jones, D., & Cranston, M. (2005). Predicting system success using the Technology Acceptance Model: A case study. Paper presented at the Australasian Conference on Information Systems'2005, Sydney.

Coates, H., James, R., & Baldwin, G. (2005). A Critical Examination of the Effects of Learning Management Systems on University Teaching and Learning. Tertiary Education and Management, 11(1), 19-36.

Jones, D., Cranston, M., Behrens, S., & Jamieson, K. (2005). What makes ICT implementation successful: A case study of online assignment submission. Paper presented at the ODLAA’2005, Adelaide.

Malikowski, S., Thompson, M., & Theis, J. (2006). External factors associated with adopting a CMS in resident college courses. Internet and Higher Education, 9(3), 163-174.

Malikowski, S., Thompson, M., & Theis, J. (2007). A model for research into course management systems: bridging technology and learning theory. Journal of Educational Computing Research, 36(2), 149-173.

Malikowski, S. (2008). Factors related to breadth of use in course management systems. Internet and Higher Education, 11(2), 81-86.