PhD Update #19 – Falling just a little short

I’ve fallen a bit short of what I wanted to achieve this week; however, overall I’m feeling pretty good about progress, particularly because some of the initial evaluation results point to the “Webfuse way” having some quantitative benefits. Also, if the work from the last week gets the okay from the esteemed supervisor, it should make completing chapter 5 pretty straightforward.

What I’ve done

In the last update I said I would get a draft of chapter 4 complete and off to the supervisor.

Well, that didn’t happen but I’m just about there. Here’s a breakdown of progress on chapter 4 so far:

  • Introduction (4.1) and problem definition (4.2) – done.
    The e-learning@CQU post this week was the last part of section 4.2.
  • Intervention (4.3) – done
    This is where I spent most of this week and it’s covered in these posts: an early section on why build a system and design guidelines, and three posts on the design and implementation of Webfuse (1, 2 and 3).
  • Evaluation (4.4) – basically done.
    I’ve spent the last couple of days thinking about, preparing and doing some initial evaluation of the use of Webfuse from 1996 through 1999. Explained somewhat in these posts: early thinking, some more thinking and some early results, and results of the evaluation for 2006-2009.

    This is taking a bit longer because I’m essentially establishing the evaluation process that I’ll use in both chapters 4 and 5. I’m also having to grab the necessary archives so I can perform the evaluations.

    The structure of this section is essentially complete, as is the content. I’m waiting to see if I can get some data for a couple of years.

  • Reflection and learning (4.5) — nothing done yet.

What I’ll do next week

The overall aim remains pretty much the same as for last week:

  • Get a draft of chapter 4 off to the supervisor.
    This means completing the rough outline of section 4.4 – the evaluation – and leaving some holes for the data I don’t have access to at the moment. The main task will be converting existing descriptions of the ISDT from the Walls et al model to the Gregor and Jones one.
  • Get re-started on the remaining components of the Ps Framework – chapter 2.

Evaluation of Webfuse course site feature usage: 2006 through 2009

In a recent post I messily wrote about the start of the process of evaluating the use of Webfuse for my thesis. This post takes the ideas/process from that post and applies it to the course websites produced by Webfuse from 2006 through 2009. The data here covers a similar time frame to the work being done by Col and Ken on their indicators project.


The basic idea is to use the categories proposed by Malikowski et al (2007) as a way to examine the level of feature usage within the Webfuse course sites from 2006 through 2009 (or as much of 2009 as has been completed). The following diagram is adapted from Malikowski et al (2007); it summarises their five categories and also gives an indication of the level of feature usage they found in their survey of the LMS/VLE literature.

[Figure: Malikowski flow chart, adapted from Malikowski et al (2007)]

Webfuse is the web-based e-learning system that is the basis for my PhD; some description of it is available here. Most of the course sites hosted on Webfuse are not password protected – you can see the latest list here.

Two of the Malikowski et al (2007) categories are excluded in the following set of findings because:

  1. All Webfuse course sites transmit content.
    The creation of a Webfuse course site automatically includes any and all course content that is in a fairly standard, accessible digital format.
  2. Webfuse doesn’t support CBI.
    Webfuse doesn’t pretend to offer any form of computer-based instruction of the type required by Malikowski et al (2007).


Category                  2006    2007    2008    2009*
Number of course sites     304     262     229     178
Class interactions       67.1%   95.4%   98.3%   97.8%
Evaluating students      38.1%   58.7%   65.1%   61.2%
Evaluating course        96.7%   97.7%   53.3%   38.2%

(*: The 2009 academic year is currently halfway through the second of three terms; the 2009 numbers will increase.)
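The per-year percentages above can be produced with a straightforward tally over the course site archives. The sketch below is illustrative only: the site records and category names are made-up assumptions, and the actual detection of features within a Webfuse site archive is assumed to happen elsewhere.

```python
# Sketch: tallying Malikowski-style category usage per year.
# The site records below are illustrative, not real Webfuse data.

from collections import defaultdict

# Each record: (year, set of feature categories detected in the site)
sites = [
    (2006, {"class_interactions", "evaluating_course"}),
    (2006, {"class_interactions", "evaluating_students", "evaluating_course"}),
    (2007, {"class_interactions"}),
]

totals = defaultdict(int)                       # sites per year
usage = defaultdict(lambda: defaultdict(int))   # year -> category -> count

for year, features in sites:
    totals[year] += 1
    for feature in features:
        usage[year][feature] += 1

def percent(year, category):
    """Percentage of a year's course sites using a given category."""
    return 100.0 * usage[year][category] / totals[year]

print(f"2006 class interactions: {percent(2006, 'class_interactions'):.1f}%")
print(f"2006 evaluating students: {percent(2006, 'evaluating_students'):.1f}%")
```

The real work, of course, is in deciding which files or scripts within a site archive count as evidence of each category; the tally itself is trivial once that classification is done.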

Questions and observations

A miscellaneous collection of questions and observations arising from the above:

  • Webfuse has produced a set of results very different from the norm.
    According to Malikowski et al’s (2007) literature survey, interactions and student evaluations are moderately used (in the range of 20-40% of courses), and evaluation of the course is rarely used (i.e. hardly ever used or mentioned). The following highlights some of the differences.

    In the results above, the level of interactions is approaching 100%.

    At times, evaluating the course (rarely used in the literature) is approaching 100% usage.

    Evaluating students is at levels twice those reported in Malikowski et al (2007).

  • In 2007 through 2009, what are the less than 5% of course sites without interactions doing?
  • The level of communication (approaching 100%) is higher than that reported for Blackboard (46% in 2006 and up to 61% for T1, 2009).
  • The level of student evaluation (around 60%) is more than double, sometimes triple, that for Blackboard (no more than 20%).
  • There is a strong need for this framework to include some measure of the level of usage of each feature.
    The almost 100% adoption, at times, of interactions and evaluating the course is almost certain to hide something very troubling. Yes, the course site provides that functionality. However, how much has it actually been used? There’s a need to establish some measure of how much each feature has been used to provide a more useful insight into what is going on.
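One simple way to move beyond binary adoption would be an activity-per-student measure. The sketch below is a hypothetical illustration, not part of the actual evaluation: the metric (forum posts per enrolled student) and the banding cut-offs are arbitrary assumptions, chosen only to show the shape of such a measure.

```python
# Sketch of one possible usage-depth measure: instead of a binary
# "does the site have a discussion forum?", look at activity per student.
# The metric and the band cut-offs here are illustrative assumptions.

def usage_depth(posts: int, students: int) -> float:
    """Average forum posts per enrolled student."""
    return posts / students if students else 0.0

def classify(depth: float) -> str:
    """Crude banding of usage intensity (cut-offs are arbitrary)."""
    if depth == 0:
        return "unused"
    if depth < 1:
        return "token"
    return "active"

print(classify(usage_depth(0, 120)))    # forum present but never used
print(classify(usage_depth(45, 120)))   # under one post per student
print(classify(usage_depth(600, 120)))  # clearly in use
```

A measure like this would separate the "feature exists in the site" cases from the "feature is actually part of the course" cases that the adoption percentages above conflate.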

You can lead a horse to water…

The approach to course sites embedded in Webfuse was that there was a default course site structure. That structure would be created and filled in with information automatically, giving academics an almost complete course site to modify. Over time, course sites were increasingly just copied from one term to the next, edited and used by students.

Even with an almost automated process minimising the work required of academics, I feel there was a fair bit of limited or inappropriate use. Yes, there may be a class mailing list or discussion forum, but how often did the academics use it? I believe/feel (something that needs to be tested) that a lot of academics simply didn’t engage with these features. Either they didn’t have the time or they didn’t have the inclination. For many, I think it was a case, for a number of reasons, of minimising workload. Perhaps it was also a lack of knowledge.

Problems with minimum standards

As mentioned previously, my current institution is adopting a set of minimum standards for course websites: a specification of the components that all course websites must have, as a minimum. I’m not a fan of the idea, and the findings here further encourage those negative thoughts.

The Webfuse default course sites were essentially a set of minimum standards that were automatically created for the staff member. Even created automatically, I feel that large parts of these course sites were not supported/used by academic staff.

What do you think is going to happen with a set of minimum standards that the academics actually have to implement themselves? With the move to Moodle, it appears that staff will have to do the work to construct the course sites, i.e. to implement the minimum standards set by someone else.

Is that going to increase the quality of learning and teaching?


Malikowski, S., M. Thompson, et al. (2007). “A model for research into course management systems: bridging technology and learning theory.” Journal of Educational Computing Research 36(2): 149-173.

How the LMS – as enterprise system – warps the practice of L&T

At the start of an early day of working on the PhD I am feeling particularly old. Dealing with a teenager at home may also have contributed to it. So, I’m in a particularly curmudgeonly frame of mind, i.e. grumpy old b*stard. Please keep that in mind when reading the following; it will likely come across as much more cynical/negative than it is meant.

One of my colleagues, Nona Muldoon, has just published a couple of blog posts on the idea of the 2minuteMoodle. The posts are an introduction and a “what is it, how to do it”.

The idea is a good one and something I think would benefit students, if staff engaged with it appropriately. However, I do have a couple of concerns, which I outline briefly below.

My main concern is how this suggestion seems to encapsulate what I see as perhaps the biggest problem with the institutional use of enterprise information systems like an LMS – even if it’s an open source LMS/VLE. That’s the main crux of this post.

Minor points

First, the minor points about the idea.

Everything old is new again

The basic idea is very similar to the idea of weekly summaries that I’ve seen academics use at the host institution for at least 10 years. Traditionally these were sent out by the course coordinator at either the start/end of a week laying out a short “map” of the week gone/to come. Usually sent out to the class mailing list or discussion forum. There is even a great deal of correlation between the regular topics most coordinators used and the questions in the 2minuteMoodle approach.

In fact, I’ve seen situations where a new coordinator will take the weekly summaries prepared by a more experienced coordinator from the previous offering and use them as a foundation. i.e. the new coordinator will edit the old summaries to update them for the specifics of the current offering or the knowledge/beliefs of the new coordinator and post them again.

I think this is a fundamentally good idea, and there should be more of it.

Why the focus on one technology?

One of the major differences is the technology. It used to be done with simple text-based email. The new approach makes use of, basically, a podcast. As mentioned in the posts, technology has moved on and made it much simpler for both staff and students to use audio, but I wonder if it’s worth the extra effort.

There will still, in this day and age, be students who can’t afford or don’t have access to online audio (though I imagine a small percentage). I know there will be staff who will have significant difficulty using audio to create the summaries.

Which makes me wonder: why the focus on one particular implementation technology? Why not have the explanation of the idea mention/allow staff to use other, potentially simpler media? The introduction does mention podcasts or VoiceThread – why not mention a simple text message?

How enterprise systems warp institutional practice

Enterprise systems, like institutional LMSes, are usually fairly substantial investments. They cost the organisation a lot and are implemented using fairly traditional teleological approaches. A main feature of that is that everything the organisation does must be aimed, or at least be seen to be aimed, at the enterprise system.

This is one of the reasons the immediate response to detecting shadow systems is to seek their elimination. Shadow systems aren’t part of the enterprise system and consequently must be eliminated. The same goes for any practice or process that doesn’t fit within the confines of the enterprise system. Given the inherent inflexibility of enterprise systems, this is particularly troubling as it limits innovation and change, and reinforces the “design for replacement” practice of most institutional IT systems and the problem of stable systems drag.

But there is another problem with this need to show alignment with the enterprise system. A whole range of practices, policies, arguments and projects, some of them questionable, seek to encourage acceptance by labelling themselves with the enterprise system, even when they have little or no connection with it. The idea is that the cost, and consequently the importance, of the enterprise system provides a level of attention and respect that can help other ideas along.

Consequently, the institution by small steps over time finds its practice, at least in name, catering for the enterprise system rather than for what is good or valuable for the institution and its participants.

From one perspective, Nona’s 2minuteMoodle idea is an example of that. In a nutshell, I might summarise 2minuteMoodle as:

  • Drawing on ideas of scaffolding a series of “questions” is arrived at.
  • Each week, course coordinators create audio recordings of themselves using those questions as a framework to provide scaffolding/motivation for the students.
  • The audio recordings are distributed by RSS/podcast and included in the LMS.

From that description, why is the idea called “2minuteMoodle”? What role does Moodle play in this idea? Isn’t the core of it the theoretical ideas from the scaffolding literature and perhaps the use of voice? Isn’t the podcast the main means of distribution? A podcast means that a truly digital native student could access the recordings without ever going to the LMS.

Isn’t the LMS a transitory presence? In the 10 years of “enterprise” e-learning, CQU has had three institutional LMSes – WebCT, Blackboard and now Moodle – plus a home-grown one. LMSes come and go; the basic idea of scaffolding remains, regardless of the LMS.

So, why call this idea 2minuteMoodle?

My response is that this is an example of how enterprise systems warp the practice of organisations. At the worst, it’s an example of how the focus is increasingly moving away from what is good L&T and what we know about it and moving towards how do we feed the institutional enterprise system. At the best, it’s an example of how the limited attention resources at an institution are being consumed by the implementation and migration to an enterprise system so that good ideas have to take on the “badge of importance” provided by the enterprise system in order to garner some attention.