Assembling the heterogeneous elements for (digital) learning

Month: March 2010

Moodle curriculum mapping – Step 2

This is the second exploration of an idea for enhancing Moodle to enable curriculum mapping. It carries on from the first step and is part of a broader project.

The aim today is to:

  • Create a CSV file of Moodle outcomes for a couple of programs.
    Mostly to get a feel for the outcomes that accrediting bodies are after and to test out this “uploading” of outcomes. Also to get some insight into how the “scales” might work.
  • “Map” a course or two with those outcomes.
    The aim is to get a feel for how difficult doing this actually is and how well it works. Perhaps get some insights into ways it could be made easier/more effective.
  • Start identifying the database structures where that information is placed.
    This is a precursor to starting to develop extensions to Moodle that will draw on this information. It helps identify where the information is, what is there, and what might be possible in terms of development.

Am going to be updating this post throughout today (30 March, 2010)

Moodle outcomes CSV file testing

Moodle allows you to upload outcomes via a CSV file. The format is a six-field CSV file:

  • outcome_name – full name
  • outcome_shortname – short name
  • outcome_description
  • scale_name – name of scale
  • scale_items – comma-separated list of scale items
  • scale_description


Participation;participation;;Participation scale;"Little or no participation, Satisfactory participation, Full participation";
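For reference, a record in this format can be assembled with a few lines of PHP. This is just a sketch: the field order is taken from the list above and the semicolon delimiter and quoting of scale_items from the example line.

```php
<?php
// Sketch: build one record of a Moodle outcomes import file in the
// six-field, semicolon-separated format described above.
function outcome_csv_line($name, $shortname, $description,
                          $scale_name, $scale_items, $scale_description) {
    // scale_items contains commas, so wrap it in double quotes
    $items = '"' . implode(', ', $scale_items) . '"';
    return implode(';', array($name, $shortname, $description,
                              $scale_name, $items, $scale_description));
}

echo outcome_csv_line('Participation', 'participation', '',
    'Participation scale',
    array('Little or no participation', 'Satisfactory participation',
          'Full participation'),
    ''), "\n";
```

Running this reproduces the Participation example line above.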

Each outcome in Moodle is associated with a scale. The scale is typically used to mark student performance against the outcome. For curriculum mapping, I believe the scale can be used to measure how well the course/activity/resource meets the outcome/attribute etc.

The task now is to create a useful CSV outcomes file for my purposes. There are a few choices of which outcomes to use.

Am thinking I’ll start with the institutional graduate attributes – mostly for political reasons – and then do one of the disciplinary bodies outcomes for a bit more learning.

Graduate attributes

Not 100% certain this represents the current state of the institution’s graduate attributes, but it’s good for an experiment. The institution is apparently introducing graduate attributes progressively during 2010 with all undergraduate programs done from Jan 2011 and all other programs from 2012.

The institution has 8 graduate attributes:

  • Communication
  • Problem solving
  • Critical thinking
  • Information literacy
  • Team work
  • Information technology competence
  • Cross cultural competence
  • Ethical practice

As it stands, I’ve been unable to find any description of these. However, a document describing the project has developed some “levels of achievement” for the attributes and offered descriptions of those levels using learning outcomes and the revised Bloom’s taxonomy.

The three levels are: introductory, intermediate and graduate. Each of the outcomes/levels are associated with learning domains from the revised Bloom’s taxonomy.

Note: my aim here is to identify what has been done and work out how it can be translated into Moodle’s outcomes CSV file. Not to judge what’s been done.

The CSV file

The first version of a CSV file for the attributes is done and successfully imported. Will reflect more on this after lunch.

The Moodle help documentation suggests that the format is as listed above, with outcome_description and scale_description optional. That means those values can be left empty, but every field (and its semicolon delimiter) must still be present on each line. Getting the format exactly right was an interesting exercise in trial and error.

The first two lines of the file are

Communication;comm;"Described here";"CQU Graduate Attributes (Communication)";"Introductory – Use appropriate language to describe/explain discipline-specific fundamentals/knowledge/ideas (C2), Intermediate – Select and apply an appropriate level/style/means of communication (C3), Graduate – Formulate and communicate views to develop an academic argument in a specific discipline (A4)";

What it looks like

When trying to map an activity/resource in Moodle, you use the "edit" facility for that activity/resource, and part of the resulting page looks like the following.

[Screenshot: Moodle outcomes]

Some comments on this image:

  • Duplicate outcomes suggest outcome management isn't great.
    You can see three outcomes for Communication. This is due to the problems associated with importing the CSV file – two failed attempts, followed by a successful one – and subsequent difficulties in finding out how to delete the older versions of the outcome… Ah, you have to go to "edit outcomes".
  • Not enough information.
    While it wouldn’t be a problem eventually, the problem I’m currently facing is that I’m not familiar enough with the outcomes to understand what they mean. I want some additional pointers in the interface – even just the normal Moodle help link (a little question mark). This absence is somewhat related to the next point.
  • Can’t use the scale here and now.
    For curriculum mapping, I want to select the scale here and now. The idea is to specify to what level this activity/resource meets the outcome. This highlights the difference in purpose between the outcomes in Moodle (focused on measuring individual student performance) and what the outcomes would be used in many forms of curriculum mapping (mapping how well a course covers outcomes). For Moodle outcomes the scale starts to apply in the gradebook, i.e. when you’re marking the individual student. Not in the activity/resource.

    Graduate attributes could be used for both approaches, map the course and also track student progress.

  • The need for groupings of outcomes.
    The first outcome, “David’s first outcome”, is from some earlier testing. But it does highlight an additional requirement: the ability to separate (and perhaps map) between different groupings of outcomes, e.g. CQU’s graduate attributes, course learning outcomes and perhaps discipline accrediting body learning outcomes.
  • The Moodle workflow is somewhat limited.
    With outcomes, as with other aspects of Moodle, the “workflow” – the sequence of screens you go through as you perform a task – leaves a bit to be desired. It’s not often clear where to go, or as you finish how best to proceed.

Other outcomes

Am now looking at the accreditation requirements for psychology and public relations to understand what is there and what implications that might have for this idea.

In terms of public relations it appears to be a combination of course outcomes, university graduate attributes and some specific “criteria/areas” specified by the program.

In psychology, there’s an odd mixture of discipline-specific “graduate attributes”, each with its own set of criteria, and a collection of “skills” to “map” assessment against.

Where’s the data?

Seems the outcomes stuff might be stored in three tables:

  • grade_outcomes: id, courseid, shortname, fullname, scaleid, description, timecreated, timemodified, usermodified
    Obviously the table the CSV import modifies.
  • grade_outcomes_courses: id, courseid, outcomeid
    Links a course with an outcome in the previous table.
  • grade_outcomes_history: id, action, oldid, source, timemodified, loggeduser, courseid, shortname, fullname, scaleid, description
    Not sure on this one.
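To make the link table concrete, here is a sketch of how grade_outcomes_courses joins a course to rows in grade_outcomes. The tables are stubbed as PHP arrays (the field names come from the list above; the data and course id are hypothetical), rather than a live database query:

```php
<?php
// Stub versions of two of the tables, using the field names listed above.
$grade_outcomes = array(
    1 => array('id' => 1, 'shortname' => 'comm', 'fullname' => 'Communication'),
    2 => array('id' => 2, 'shortname' => 'crit', 'fullname' => 'Critical thinking'),
);
$grade_outcomes_courses = array(
    array('id' => 1, 'courseid' => 42, 'outcomeid' => 1),
    array('id' => 2, 'courseid' => 42, 'outcomeid' => 2),
    array('id' => 3, 'courseid' => 99, 'outcomeid' => 1),
);

// Roughly: SELECT o.fullname FROM grade_outcomes o
//          JOIN grade_outcomes_courses oc ON oc.outcomeid = o.id
//          WHERE oc.courseid = :courseid
function outcomes_for_course($courseid, $links, $outcomes) {
    $names = array();
    foreach ($links as $link) {
        if ($link['courseid'] == $courseid) {
            $names[] = $outcomes[$link['outcomeid']]['fullname'];
        }
    }
    return $names;
}

print_r(outcomes_for_course(42, $grade_outcomes_courses, $grade_outcomes));
```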

So, one question is where does the mapping against a particular activity/resource get put?

What about code?

moodle/lib/grade/grade_outcome.php defines a class grade_outcome that is meant to handle it all, including database manipulation.

Moodle, Oracle, blobs and MS-Word – problem and solution?

This documents a problem, and hopefully an initial solution, around the combination of technologies that is Moodle, an Oracle database, and content that is copy and pasted from Word.

Origins of the problem

The problem first arose at my current institution’s installation of Moodle, which uses Oracle (11g, I believe) as the database. It first became apparent through BIM, an application I’ve written, but was also occasionally occurring with the Moodle wiki.

BIM allows students to create an individual blog on their choice of blog engine – mostly – and then provides management services to staff. Part of that is keeping a copy of all of the students’ blog posts within the Moodle database.

The problem was that some student posts weren’t being inserted into the database. The inserts were failing with the following error:

ORA-01704: string literal too long

These same posts work fine on my installation of Moodle – using MySQL – so it appeared to be an Oracle problem. The Oracle error message is related to the strange requirements Oracle has when you want to insert long strings of text. On “good” databases you just do an insert statement, like anything else. However, on Oracle, if you are trying to insert a long string into a field that is a BLOB or CLOB, you have to use a different process: an insert statement putting in a special empty value, and then another step to populate it.

Gotta love a wonderfully designed and consistent enterprise database.

The question is, what is causing this problem?

The problem

After much diagnosis it appears that “special characters”, created by students copying and pasting content from MS Word into their blog, are at the core of the problem. When inserting long posts into the database, Moodle normally checks the length of the post; if it is greater than 4000, Moodle will jump through the special (and silly) hoops that Oracle requires.

However, for the problem posts, when Moodle checks the length of the post it is less than 4000. And the posts do have fewer than 4000 characters. However, when Moodle tries the normal insert process into Oracle, we get the above error message.

It appears that the problem is being caused by the presence of “special characters” from Word. These appear to be “tricking” Oracle into thinking that these posts are greater than 4000 characters.
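One plausible explanation (my assumption, not something confirmed above) is a bytes-versus-characters mismatch: Oracle’s 4000 limit applies to bytes, while a character-based length check reports fewer, and each Word smart quote is three bytes in UTF-8. A quick PHP illustration, assuming the mbstring extension is available:

```php
<?php
// A post made entirely of right single smart quotes:
// each one is a single character but three bytes in UTF-8.
$post = str_repeat("\xE2\x80\x99", 1500);

echo mb_strlen($post, 'UTF-8'), " characters\n"; // 1500 -- under 4000
echo strlen($post), " bytes\n";                  // 4500 -- over Oracle's 4000 limit
```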

The solution

The solution appears to be to clean up the posts before inserting them into the database.

The Moodle discussion forum uses exactly the same process for inserting messages into the database. It doesn’t appear to have this same problem, as the HTML editor in Moodle appears to do a reasonable job of cleaning up the “special characters”. Though this might not be 100% successful.

In a perfect world, I want to put some PHP code into BIM (at first, and perhaps into Moodle later) that does this cleaning. The obvious question is whether Moodle supports this already and, if not, why not.

Existing Moodle support

Moodle has optional support for HTMLPurifier; however, I’m not sure this is an exact match for this purpose. To be clear, the problem here isn’t cleaning up the HTML generated by Word. It’s the special characters for quotes, dashes etc. that Word uses. In some cases these are actual “special” characters, but for some reason they are also appearing in the text as numeric entities like &#8217; for a single quote. I realise by that description I’m revealing that I haven’t bothered to dig too far into this, yet.

In addition, my main interest is solving this problem in BIM for the short term. So something that needs to be changed at the Moodle level is of little interest.


There’s a bit of suggested PHP around that is essentially what I’m looking for. It does pick up some of the problems, but not all. In particular, it doesn’t deal with the “&#” entity issue.

Combining a few different bits and pieces brings me to this code
// Word's special characters, written as their raw UTF-8 byte
// sequences, plus the numeric entities that also appear in the text.
$badchr = array(
    "\xE2\x80\x9C", // left side double smart quote
    "\xE2\x80\x9D", // right side double smart quote
    "\xE2\x80\x98", // left side single smart quote
    "\xE2\x80\x99", // right side single smart quote
    "\xE2\x80\xA6", // ellipsis
    "\xE2\x80\x94", // em dash
    "\xE2\x80\x93", // en dash

    '&#8217;', // single quote entity
    '&#8211;'  // dash entity
);

$goodchr = array('"', '"', "'", "'", "...", "-", "-",
    "'", '-');

$post = str_replace($badchr, $goodchr, $post);

It seems to work with a couple of ad hoc posts. Need to test it more completely.


The aim here will be to write a test harness that attempts to insert all of the posts made by students so far into the Oracle database. The idea/hope is that this should capture all of the problem posts and give some confidence that the above code is catching all of the problems.

The testing code is written and running. At first run it comes up with five blogs that have additional problems.

So, I’ve started doing a character-by-character examination of the posts to find the “funny” characters. I’m then adding these to the “cleaning” process. (Yes, I know this is kludgy.)
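That examination can be scripted. A sketch of the sort of scan I mean (the function name is mine), reporting the position and hex value of every byte outside plain ASCII:

```php
<?php
// Scan a post byte by byte and report anything outside plain ASCII --
// the "funny" characters that need adding to the cleaning arrays.
function find_funny_chars($post) {
    $found = array();
    for ($i = 0; $i < strlen($post); $i++) {
        $byte = ord($post[$i]);
        if ($byte > 127) {
            $found[$i] = sprintf('0x%02X', $byte);
        }
    }
    return $found;
}

print_r(find_funny_chars("caf\xC3\xA9")); // reports the two bytes of the e-acute
```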

By the time I’d “fixed” the second of the 5 posts, the subsequent posts were working. So, let’s run the lot again.

Fixed. Had more “special chars” to tweak.

From theory to intervention: Mapping theoretically derived behavioural determinants to behaviour change techniques

The following draws on principles/theory from psychology to guide thinking about how to incorporate “data” from “academic analytics” into an LMS in a way that encourages and enables academic staff to improve their learning and teaching. It’s based on some of the ideas that underpin similar approaches that have been used for students, such as this Moodle dashboard and the Signals work at Purdue University.

The following gives some background to this approach, summarises a paper from the psychology literature around behaviour modification and then explains one idea for a “signals” like application for academic staff. Some of this thinking is also informing the “Moodle curriculum mapping” project.

Very interested in pointers to similar work, suggestions for improvement or expressions of interest from folk.


I have a growing interest in how insights from psychology, especially around behaviour change can inform the design of e-learning and other aspects of the teaching environment at universities in a way to encourage and enable improvement.
Important: I did not say “require”, I said “encourage”. Too much of what passes in universities at the moment takes the “require” approach with obvious negative consequences.

This is where my current interest in “nudging” – the design of good choice architecture – and behaviour modification is coming from. The basic aim is to redesign the environment within which teaching occurs in a way that encourages and enables improvement in teaching practice, rather than discourages it.

To aid in this work, I’ve been lucky enough to become friends with a psychologist who has some similar interests. We’re currently talking about different possibilities, informed by our different backgrounds. As part of that he’s pointing me to bits of the psychological literature that offer some insight. This is an attempt to summarise/reflect on one such paper (Michie et al, 2008).

Theory to intervention

It appears that the basic aim of the paper is to

  • Develop methods to clarify the list of behaviour change techniques.
  • Identify links between the behaviour change techniques and behavioural determinants.

First, a comparison of two attempts at simplifying the key behavioural determinants for change – the following table. My understanding is that some values of these determinants would encourage behaviour change, and others would not.

Key Determinants of Behaviour Change (from Fishbein et al., 2001; Michie et al., 2004)

Fishbein et al                | Michie et al
Self-standards                | Social/professional role and identity
Skills                        | Skills
Self-efficacy                 | Beliefs about capabilities
Anticipated outcomes/attitude | Beliefs about consequences
Intention                     | Motivation and goals
–                             | Memory, attention and decision processes
Environmental constraints     | Environmental context and resources
Norms                         | Social influences
–                             | Action planning

It is interesting to see how well the categories in this table resonate with the limits I was planning to talk about in this presentation. i.e. it really seems to me, at the moment, that much of the environment within universities around teaching and learning is designed in a way that reduces the chance of these determinants leaning towards behaviour change.

Mapping techniques to determinants

They use a group of experts in a consensus process for linking behaviour change techniques with determinants of behaviour. The “Their mapping” section below gives a summary of the consensus links. The smaller headings are the determinants of behaviour from the above table, the bullet points are the behaviour change techniques.

Now, I haven’t gone looking for more detail on the techniques. The following is going to be based solely on my assumptions about what those techniques might entail – and hence it will be limited. However, this should be sufficient for the goal of identifying changes in the LMS environment that might encourage change in behaviour around teaching.

First, let’s identify some of the prevalent techniques, i.e. those that are mentioned more than once and which might be useful/easy within teaching.

Prevalent techniques

Social encouragement, pressure and support

The technique “Social processes of encouragement, pressure, support” is linked to 4 of the 11 determinants: social/professional role and identity, beliefs about capabilities, motivation and goals, and social influences. I find this interesting as it can be suggested that most teaching is a lone and invisible act, especially in an LMS, where what’s going on in other courses is invisible. Making what happens more visible might enable this sort of social process.

There’s also some potential connection with “Information regarding behaviour of others” which is mentioned in 3 of 11.

Monitoring and self-monitoring

These get mentioned as linked to 4 of the 11 determinants. Again, most LMS don’t appear to give good overall information about what a teacher is doing in a way that would enable monitoring/self-monitoring.

Related to this is “goal/target specified”, part of monitoring.

There’s more to do here, but let’s get onto a suggestion.

One suggestion

There’s a basic model/process embedded here, something along the lines of:

  • Take a knowledge of what is “good” teaching and learning
    For example, Fresen (2007) argues that the level of interaction, facilitation or simply participation by academic staff is a critical success factor for e-learning. There’s a bunch more literature that backs this up. And our own research/analysis has backed this up. Courses with high staff participation show much higher student participation and a clearer correlation between student participation and grade (i.e. more student participation, the higher the grade).
  • Identify a negative/insight into the behavioural determinants that affect academic staff around this issue.
    There are a couple. First, it’s not uncommon for staff to have an information-distribution conception of teaching, i.e. they see their job as disseminating information, not talking, communicating or participating. Associated with this is that most staff have no idea what other staff are doing within their course sites. They don’t know how often other staff are contributing to the discussion forum or visiting the course site.
  • Draw on a behavioural technique or two to design an intervention in the LMS that can encourage a behaviour change. i.e. that addresses the negative in the determinants.
    In terms of increasing staff participation you might embed into the LMS a graph like the following. Embed it in such a way that the graph is the first thing an academic sees when they log in – perhaps on part of the screen.

    [Image: example staff posts feedback graph]

    This graph shows, for a single (hypothetical) staff member, the number of replies they have made in the discussion forums of the three courses they have taught. The number of replies is shown per term; in reality it might be shown by week of term, as the term progresses.

    This part can hit the “monitoring”, “self-monitoring” and “feedback” techniques.

    The extra, straight line represents the average number of replies made by staff in all courses in the LMS. Or alternatively, all courses in a program/degree into which the staff member teaches. (Realistically, the average would probably change from term to term).

    This aspect hits the “social processes of encouragement, pressure, support” and “modelling/demonstration of behaviour by others” techniques. By showing what other people are doing it starts to create a social norm – one that might encourage the academic, if they are below the average, to increase their level of replies.

    But the point is not to stop here. Showing a graph like this is simple using business intelligence tools and is only a small part of the change necessary.

    It’s now necessary to hit techniques such as “graded task, starting with easy tasks”, “Increasing skills: problem-solving, decision-making, goal-setting”, “Planning, implementation”, “Prompts, triggers, cues”. It’s not enough to show that there is a problem, you have to help the academic with how to address the problem.

    In this case, there might be links associated with this graph that show advice on how to increase replies or staff participation (e.g. advice to post a summary of the week’s happenings in a course each week, or some other specific, context appropriate advice). Or it might also provide links to further, more detailed information to shed more light on this problem. For example, it might link to SNAPP to show disconnections.

    But it’s even more than this. If you wanted to hit the “Environmental changes (e.g. objects to facilitate behaviour)” technique you may want to go further than simply showing techniques. You may want this “showing of techniques” to occur within a broader community where people could comment on whether or not a technique worked. It would be useful if the tool helped automate/scaffold the performance of the task, i.e. moved up the abstraction layer from the basic LMS functionality. Or perhaps the tool and associated process could track and create “before and afters”, i.e. when someone tries a technique, store the graph before it is applied and then capture it at some time after.
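As a sketch of the data behind such a graph (all course names and numbers are hypothetical), the step that decides when to prompt the academic with advice might look like:

```php
<?php
// Hypothetical per-term reply counts for one staff member, alongside
// the LMS-wide average for the same terms.
$staff_replies = array('T1 2009' => 12, 'T2 2009' => 5, 'T1 2010' => 20);
$lms_average   = array('T1 2009' => 15, 'T2 2009' => 14, 'T1 2010' => 16);

// Return the terms where the staff member is below the LMS average --
// the terms where the graph might prompt the "how to improve" advice.
function terms_below_average($staff, $average) {
    $below = array();
    foreach ($staff as $term => $replies) {
        if ($replies < $average[$term]) {
            $below[] = $term;
        }
    }
    return $below;
}

print_r(terms_below_average($staff_replies, $lms_average));
```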

It’s fairly easy to see how the waterfall visualisation developed by David Wiley and his group could be used this way.


Their mapping

Social/professional role and identity

  • Social processes of encouragement, pressure, support


Knowledge

  • Information regarding behaviour by others


Skills

  • Goal/target specified: behaviour or outcome
  • Monitoring
  • Self-monitoring
  • Rewards; incentives (inc. self-evaluation)
  • Graded task, starting with easy tasks
  • Increasing skills: problem-solving, decision-making, goal-setting
  • Rehearsal of relevant skills
  • Modelling/demonstration of behaviour by others
  • Homework
  • Perform behaviour in different settings

Beliefs about capabilities

  • Self-monitoring
  • Graded task, starting with easy tasks
  • Increasing skills: problem-solving, decision-making, goal-setting
  • Coping skills
  • Rehearsal of relevant skills
  • Social processes of encouragement, pressure and support
  • Feedback
  • Self talk
  • Motivational interviewing

Beliefs about consequences

  • Self-monitoring
  • Persuasive communication
  • Information regarding behaviour, outcome
  • Feedback

Motivation and goals

  • Goal/target specified: behaviour or outcome
  • Contract
  • Rewards; incentives (inc. self-evaluation )
  • Graded task, starting with easy tasks.
  • Increasing skills: problem-solving, decision-making, goal-setting
  • Social processes of encouragement, pressure, support
  • Persuasive communication
  • Information regarding behaviour, outcome
  • Motivational interviewing

Memory, attention, decision processes

  • Self-monitoring
  • Planning, implementation
  • Prompts, triggers, cues

Environmental context and resources

  • Environmental changes (e.g. objects to facilitate behaviour)

Social influences

  • Social processes of encouragement, pressure, support
  • Modelling/demonstration of behaviour by others


Emotion

  • Stress management
  • Coping skills

Action planning

  • Goal/target specified: behaviour or outcome
  • Contract
  • Planning, implementation
  • Prompts, triggers, cues
  • Use of imagery


Fresen, J. (2007). “A taxonomy of factors to promote quality web-supported learning.” International Journal on E-Learning 6(3): 351-362.

Michie, S., M. Johnston, et al. (2008). “From theory to intervention: Mapping theoretically derived behavioural determinants to behaviour change techniques.” Applied Psychology: An International Review 57(4): 660-680.

Limits in developing innovative pedagogy with Moodle: The story of BIM

The following is a presentation abstract that I’ve submitted to MoodleMoot AU’2010. It’s based on some ideas and thoughts expressed here previously. In particular, the main focus of the paper will be showing that the reason most university e-learning/teaching is not that great is not really the fault of the academics. Instead it will argue that there is a range of limits within the higher education environment that are mostly to blame.

Regardless of whether it gets accepted at MoodleMoot, I’ll be presenting it at CQU sometime in early July and will post the video etc. here then.


Open source Learning Management Systems (LMS) such as Moodle are widely recognised as addressing a number of the limitations of proprietary, commercial LMS. Just as those commercial LMS addressed some of the limitations of the “Fred-in-the-shed” era of early web-based e-learning. Early web-based e-learning addressed some problems with text-based Internet e-learning, which addressed limitations of text-based computer-mediated communications…and so it goes on. Rather than being “without limits” this presentation will suggest that e-learning with Moodle, as currently practised, has a number of limits and that progress can be made through the recognition, understanding and removal of those limits.

The presentation will argue and illustrate that these limits place significant barriers in the way of encouraging widespread, simple improvements in learning and teaching. Let alone the barriers these limits create for the development of true pedagogical innovation. The presentation will explain how these limits are not solely, or even primarily, due to the characteristics of Moodle. It will outline how the majority, but not all, of these limits arise from the nature and characteristics of the broader social context, institutions, purpose, processes and people involved with e-learning. It will show how a number of these limitations have been known about and generally ignored for decades, to the detriment of the quality of learning and teaching. The presentation will also seek to identify a variety of approaches or ways of thinking that may help transform the practice of e-learning with Moodle into something that truly is “without limits”.

The presentation’s argument, the identified limitations and the potential solutions all arise from, and will be illustrated by, the experience of developing BIM (BAM into Moodle). BIM is a Moodle module released in early 2010, currently being used at CQUniversity and under consideration by the University of Canberra. BIM allows teaching staff to manage, mark and comment (privately) on individual student blogs that are hosted on the students’ choice of external blog provider. BIM is based on BAM (Blog Aggregation Management), a similar tool integrated into CQUniversity’s home-grown “LMS”. Since 2006, BAM/BIM has been used in 30+ course offerings, by 70+ staff, and with 3000+ students making 20000+ blog posts.

The design of BAM/BIM is intended to remove an inherent limit that underpins the design of all LMS. That is, as an integrated system, the LMS must provide all functionality. It appears that this is a limit that the design of Moodle 2.0 seems focused on removing. However, this presentation will suggest that this is only one of many, often fundamental, limits surrounding the use of Moodle, and that these limits need to be recognised, understood and addressed. The suggestion will be that it is only by doing this that we can aid in the development of truly innovative pedagogy.

The suffocating straightjackets of liberating ideas

Doing some reading and came across the following quote, which I had to store for further use. It is quoted in Chua (1986) and is ascribed to Berlin (1962, p. 19).

The history of thought and culture is, as Hegel showed with great brilliance, a changing pattern of great liberating ideas which inevitably turn into suffocating straightjackets, and so stimulate their own destruction by new emancipatory, and at the same time, enslaving conceptions. The first step to understanding of men is the bringing to consciousness of the model or models that dominate and penetrate their thought and action. Like all attempts to make men aware of the categories in which they think, it is a difficult and sometimes painful activity, likely to produce deeply disquieting results. The second task is to analyse the model itself, and this commits the analyst to accepting or modifying or rejecting it and in the last case, to providing a more adequate one in its stead.

Chua (1986) uses it as an intro to alternative views of research perspectives. But it applies to so much.

For me the most obvious application, the one I’m dealing with day to day, is the practice of e-learning within universities. Post-thesis I think I need to figure out how to effectively engage more in this two stage process. I think the Ps Framework provides one small part of a tool to help this process, need to figure out what needs to be wrapped around it to encourage both steps to happen.


Berlin, I. (1962). Does political theory still exist? In P. Laslett and W. G. Runciman (Eds.), Philosophy, Politics and Society. Basil Blackwell: 1-33.

Chua, W. F. (1986). “Radical developments in accounting thought.” The Accounting Review 61(4): 601-632.

First step in "Moodle curriculum mapping"

This is perhaps the first concrete step in a project that aims to look at how the act of curriculum mapping can be embedded into what is increasingly the most common task and tool used by academics. That is, how can an LMS (like Moodle) be used/modified to make curriculum mapping a part of what academics do, both in terms of maintaining the mapping but, more importantly, using the mapping in interesting and useful ways.

As outlined in a previous post it appears that Moodle (the institutional LMS at my current institution) already has functions that offer some basic level of support for curriculum mapping. However, they are mostly used/intended for tracking student outcomes/performance. This post documents an initial foray into using these functions to implement some form of curriculum mapping. The plan is:

  • Use existing functions to map a course or two and find out how that works and how it might be made better.
  • Use the data from the mapping to generate some applications that use it.

It turned out, due to having to fight other fires, that today’s work was limited. Only small progress.

The courses and the people

I’m working with three courses: two from in and around public relations and one from psychology. More detail on these later.

The set up

The plan is to perform this project on a copy of Moodle running on my laptop, i.e. separate from any systems people rely upon, which allows me the freedom to code. I’ll be taking backups of the live course sites for the above courses, restoring them on my laptop’s Moodle and mapping the courses.

My first problem was restoring the backups. I had an old version of libxml, which meant the restore process in Moodle wasn’t handling the HTML code all that well. So, a new install of XAMPP and Moodle – some wasted time. Really didn’t like the draconian password policy that is now the default in the version of Moodle I’m using. More passwords to write down. 😉

Getting outcomes up and going

I’d had outcomes working on the old version of Moodle. My next barrier was getting outcomes to appear on the new. It wasn’t happening simply and I was running out of time, so it sat for a bit. Here’s what I’ve done to get it working:

  • As the Moodle administrator, turn on outcomes under “General Settings”
    Just typing “outcome” in the Moodle administrator’s block was the quickest way to find it.
  • Create some outcomes
    Either in the Admin box under grades or inside an individual course.
  • Possibly add site wide outcomes to the course.
    Outcomes option in course modify box.

Having completed those tasks, the theory is that every time you edit an activity or resource you will have an option to view and select appropriate outcomes.


An outcome has the following data associated with it:

  • Full and short name.
  • Standard outcome – is it available site wide.
  • Scale – which existing scale to use with the outcome.
  • Description – textual description

Outcomes can be imported using a CSV file. This could be useful: you could create a set of outcomes for a particular discipline in a CSV file and make them available for anyone to use. Folk at other institutions could import them and have a consistent set of outcomes.

Also, you may not want all discipline outcomes to be available site wide. It could annoy the mathematicians if they kept seeing outcomes from psychology etc. Having outcomes in a CSV would allow them to be imported at the course level. But maybe not…
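To get a feel for generating such files, here is a minimal Python sketch that writes an outcomes file in the six-field, semicolon-delimited format shown earlier in this post. The header row and field order follow that example; the sample “Communication” graduate attribute and its coverage scale are purely illustrative, so compare the output against a file exported from your own Moodle before trusting the exact format:

```python
import csv
import io

# Field order as described earlier in the post: name, shortname, description,
# scale name, scale items (comma-separated), scale description.
FIELDS = ["outcome_name", "outcome_shortname", "outcome_description",
          "scale_name", "scale_items", "scale_description"]

# Hypothetical graduate attribute -- illustrative data only.
outcomes = [
    {
        "outcome_name": "Communication",
        "outcome_shortname": "comm",
        "outcome_description": "Communicates effectively in professional contexts",
        "scale_name": "Coverage scale",
        "scale_items": "Not covered, Partially covered, Fully covered",
        "scale_description": "How well the activity addresses the attribute",
    },
]

def write_outcomes(fh, rows):
    """Write outcome rows using semicolons, matching the example in the post."""
    writer = csv.DictWriter(fh, fieldnames=FIELDS, delimiter=";")
    writer.writeheader()
    writer.writerows(rows)

buf = io.StringIO()
write_outcomes(buf, outcomes)
print(buf.getvalue())
```

Because the delimiter is a semicolon, the comma-separated scale items survive as a single field without any quoting gymnastics.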

Checking when outcomes appear

Interested in seeing if the outcomes appear for all activities/resources. Doing a quick test with a couple of courses and reporting where it works. It works for

  • Forums
  • Resource
    • Web page
    • Link to file or web page

Doesn’t work for

  • Labels
    Means of inserting text/HTML into the topics. Used by some to specify readings. Might want to have outcomes on these.
  • Summary


As I was doing the above test, a few thoughts arose:

  • What outcomes would you have for a course synopsis?
    For some resources/activities they are too global, too high level to specify a list of outcomes/attributes etc. What do you do with these?

    Given that one of the aims might be to highlight “coverage”, there are some things you wouldn’t allocate anything to.

  • Why wouldn’t you have outcomes associated with labels?
  • The obvious question which has been bugging me for a while – not all activities/resources for a course are likely to be in the course site. Any curriculum mapping based on the LMS site is not going to be complete unless there is some change in practice on the part of the academics. Not a straightforward thing to do.
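On the related aim of identifying where this mapping data lives: my reading of the Moodle 1.9 schema suggests outcomes are stored in mdl_grade_outcomes and attached to activities via the outcomeid column of mdl_grade_items, but treat those table and column names as assumptions to verify against a real install. The sketch below mocks that structure in SQLite just to show the shape of the “what has been mapped in this course?” query a curriculum mapping report might use:

```python
import sqlite3

# Mock of the (assumed) Moodle 1.9 tables involved in outcome mapping.
# Table/column names are guesses to be checked against a real database.
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE mdl_grade_outcomes (
    id INTEGER PRIMARY KEY, shortname TEXT, fullname TEXT);
CREATE TABLE mdl_grade_items (
    id INTEGER PRIMARY KEY, courseid INTEGER,
    itemname TEXT, itemmodule TEXT, outcomeid INTEGER);
INSERT INTO mdl_grade_outcomes VALUES (1, 'comm', 'Communication');
INSERT INTO mdl_grade_items VALUES (10, 2, 'Week 1 discussion', 'forum', 1);
INSERT INTO mdl_grade_items VALUES (11, 2, 'Essay', 'assignment', NULL);
""")

# Which activities in a given course have been mapped to which outcomes?
rows = conn.execute("""
    SELECT o.fullname, i.itemmodule, i.itemname
      FROM mdl_grade_items i
      JOIN mdl_grade_outcomes o ON o.id = i.outcomeid
     WHERE i.courseid = ?
""", (2,)).fetchall()

for fullname, module, name in rows:
    print(f"{fullname}: {module} '{name}'")
```

The unmapped essay drops out of the join, which is exactly the sort of gap a coverage report would want to surface.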

Why is University/LMS e-learning so ugly?

Yesterday, Derek Moore tweeted

University webs require eye candy + brain fare. Puget Sound’s site does both with colour palate & info architecture

In a later tweet he also pointed out that even Puget Sound’s instance of Moodle looked good. I agreed.

This resonated strongly with me because a few colleagues and I have recently been talking about how most e-learning within universities and LMSs is ugly: depressing corporate undesign seeking to achieve quality through consistency and instead sinking to the lowest common denominator. Sorry, I’m starting to mix two of my bêtes noires:

  1. Most LMS/University e-learning is ugly.
  2. Most of it is based on the assumption that everything must be the same.

Let’s just focus on #1.

I’m using ugly/pretty in the following in the broadest possible sense. Pretty, at its extreme end, is something that resonates positively in the soul as you’re using it effectively to achieve something useful. It helps you achieve the goal, but you feel good while you’re doing it, even when you fail and even without knowing why. There’s a thesis or three in this particular topic alone – so I won’t have captured it.

Why might it be ugly? An absence of skill?

Let me be the first to admit that the majority of e-learning that I’ve been responsible for is ugly. This design (used in 100s of course sites) is mostly mine, but has thankfully been improved (as much as possible) by other folk. At best you might call it functional. But it doesn’t excite the eyes or resonate. And sadly, it’s probably all downhill from there as you go further back in history.

Even my most recent contribution – BIM – is ugly. If you wish to inflict more pain on your aesthetic sensibility look at this video. BIM rears its ugly head from about 1 minute 22 seconds in.

In my case, these are ugly because of an absence of skill. I’m not a graphic designer, I don’t have training in visual principles. At best I pick up a bit, mostly from what I steal, and then proceed to destroy those principles through my own ineptitude.

But what about organisations? What about the LMS projects like Moodle?

Why might it be ugly? Trying to be too many things to too many?

An LMS is traditionally intended to be a single, integrated system that provides all the functionality required for institutional e-learning. It is trying to be a jack of all trades. To make something so all encompassing look good in its entirety is very difficult. For me, part of looking good is responding to the specifics of a situation in an appropriate way.

It’s also not much use being pretty if you don’t do anything. At some level the developers of an LMS have to focus on making it easy to get the LMS to do things, and that will limit the ability to make it look pretty. The complexity of LMS development places limits on making it look pretty.

At some level, the complexity required to implement a system as complex as an LMS also reduces the field of designers who can effectively work with it to improve the design of the system.

But what about organisations adopting the LMS, why don’t they have the people to make it look good?

Why might it be ugly? Politics?

The rise of marketing and the “importance of brand” brings with it the idea of everything looking the same. It brings out the “look and feel” police: those folk responsible for ensuring that all visual representations of the organisation capture the brand in accepted ways.

In many ways this is an even worse example of “trying to be too many things”, as the “brand” must capture a full range of print, online and other media, which can be a bridge too far for many. The complexity kills the ability for the brand to capture and make complete use of the specific media. Worse, often the “brand police” don’t really understand the media and thus can’t see the benefits of the media that could be used to improve the brand.

The brand and the brand police create inertia around the appearance of e-learning. They help enshrine the ugliness.

Then we get into the realm of politics and irrationality. It is no longer about aesthetic arguments (difficult at the best of times); it becomes about who plays the game the best, who has the best connection to leadership, who has the established inertia, who can spin the best line.

The call to arms

I think there is some significant value in making e-learning look “pretty”. I think there’s some interesting work to be done in testing that claim and finding out how you make LMS and university e-learning “pretty”.

Some questions for you:

  • Is there already, or can we set up, a gallery of “pretty” LMS/institutional e-learning?
    Perhaps something for Moodle (my immediate interest) but other examples would be fun.
  • What bodies of literature can inform this aim?
    Surely some folk have already done stuff in this area.
  • What might be some interesting ways forward i.e. specific projects to get started?

Limits in developing innovative pedagogy with Moodle: The story of BIM

The following is the extended presentation abstract I plan to submit to MoodleMoot AU 2010. The idea was to submit a paper, but time has run out. The recent blog posts (starting with this one) about the story of BIM provide some of the early reflection that will form the basis of the presentation. The challenges mentioned in those posts will be abstracted somewhat to generate a series of limitations.

In part, the approach I am taking with this presentation is to respond to the Pollyannas who complain about me being too negative and never seeing the positives. It has always been my argument that what I do is not to ignore the positives; recognising and reusing what’s worked forms an important part of my information systems design theory for e-learning. For example, as far back as 1999 I have three publications (1, 2, 3) where recognising and reusing the positives is a key feature. It has been my argument that the Pollyannas are so busy focusing on the positives because they don’t want to recognise and engage with what doesn’t work. It’s a case of “don’t mention the war”, the SNAFU principle, confirmation bias and pattern entrainment, defensive routines and the lack of a willingness to question the practices on which one’s self-esteem is built. For me, it is only through recognising, understanding and addressing the limits that you can encourage innovative learning and teaching. You have to recognise and respond to the context.


Open source Learning Management Systems (LMS) such as Moodle are widely recognised as addressing a number of the limitations of proprietary, commercial LMS. Just as those commercial LMS addressed some of the limitations of the “Fred-in-the-shed” era of early web-based e-learning. Early web-based e-learning addressed some problems with text-based Internet e-learning, which addressed limitations of text-based computer-mediated communications…and so it goes on. Rather than being “without limits” this presentation will suggest that e-learning with Moodle, as currently practised, has a number of limits and that progress can be made through the recognition, understanding and removal of those limits.

The presentation will argue and illustrate that these limits place significant barriers in the way of encouraging widespread, simple improvements in learning and teaching. Let alone the barriers these limits create for the development of true pedagogical innovation. The presentation will explain how these limits are not solely, or even primarily, due to the characteristics of Moodle. It will outline how the majority, but not all, of these limits arise from the nature and characteristics of the broader social context, institutions, purpose, processes and people involved with e-learning. It will show how a number of these limitations have been known about and generally ignored for decades, to the detriment of the quality of learning and teaching. The presentation will also seek to identify a variety of approaches or ways of thinking that may help transform the practice of e-learning with Moodle into something that truly is “without limits”.

The presentation’s argument, the identified limitations and the potential solutions all arise from, and will be illustrated by drawing on, the experience of developing BIM (BAM into Moodle). BIM is a Moodle module released in early 2010, currently being used at CQUniversity and under consideration by the University of Canberra. BIM allows teaching staff to manage, mark and comment (privately) on individual student blogs that are hosted on the students’ choice of external blog provider. BIM is based on BAM (Blog Aggregation Management), a similar tool integrated into CQUniversity’s home-grown “LMS”. Since 2006, BAM/BIM has been used in 30+ course offerings, by 70+ staff, and with 3000+ students making 20000+ blog posts.

The design of BAM/BIM is intended to remove an inherent limit that underpins the design of all LMS. That is, as an integrated system, the LMS must provide all functionality. It appears that this is a limit that the design of Moodle 2.0 seems focused on removing. However, this presentation will suggest that this is only one of many, often fundamental, limits surrounding the use of Moodle, and that these limits need to be recognised, understood and addressed. The suggestion will be that it is only by doing this that we can aid in the development of truly innovative pedagogy.

Research Method – Overview

The following is the first part of chapter 3 of my thesis. The aim of this part is to explain the broad view of research that informs the work. The second part will give more specific details about the specific method used. Over the next week I’m re-reading this chapter; when the fixes are done, I will upload a completed version.

Update: The latest version of the complete chapter is available from this page


This thesis aims to answer the “how” question associated with the design, development and evolution of information systems to support e-learning in universities. It seeks to achieve this by using an iterative action research process (Cole, Purao et al. 2005) to formulate an information systems design theory (ISDT) (Walls, Widmeyer et al. 1992; Walls, Widmeyer et al. 2004; Gregor and Jones 2007). This chapter aims to situate, explain and justify the nature of the research method adopted in this work. It starts by examining the question of research paradigm and its connection with theory (Section 3.2). In particular, it seeks to explain why the choice of paradigm is seen as secondary to deciding the type of theory to be produced, in terms of selecting a research method. The chapter then uses four questions about a body of knowledge identified by Gregor (2006) to describe the particular perspectives that inform the research method to formulate the ISDT developed in this thesis (Section 3.3).

The formulation of an ISDT is one example of design research (Simon 1996; Hevner, March et al. 2004). At the start of this work, design research was not a dominant research methodology within the field of information systems (Lee 2000). There was a reluctance to accept the importance of this type of knowledge within information systems (Gregor 2002) and to this day there remain diverse opinions and on-going evolutionary understanding about the nature, place and process associated with design research and design theory (Baskerville 2008; Kuechler and Vaishnavi 2008). Consequently, the thinking underlying this thesis, and the content and structure of this chapter, has undergone a number of iterations as understanding has improved throughout the entire research process. For example, initial descriptions of this work (Jones and Gregor 2004; Jones and Gregor 2006) used the structure of an ISDT presented by Walls, Widmeyer and El Sawy (1992). This thesis uses the improved specification of an ISDT presented by Gregor and Jones (2007), an improvement that arose, in part, from work associated with this thesis. For these reasons, this chapter may delve into greater detail about these issues than is traditional.

Paradigms and theory

It seems traditional at this point to describe the research paradigm that has informed this work, based on the assumption that the paradigm embodies a world view that provides the fundamental assumptions to guide the research project and its selection of method. This section takes a slightly different approach.

This section argues that the question of research paradigm is of secondary importance to matching the research question to the type of theory that best fits, and subsequently to the most appropriate research methodology or paradigm. It argues that the aim of research is the generation and evaluation of knowledge (Section 3.2.1) and that this knowledge is typically expressed as different types of theory (Section 3.2.2). Lastly, the section seeks to connect this view with similar views of research paradigms (Section 3.2.3).

What is research?

The sixth edition of the OECD’s (2002) Frascati Manual defines research and experimental development as

creative work undertaken on a systematic basis in order to increase the stock of knowledge, including knowledge of man, culture and society, and the use of this stock of knowledge to devise new applications

Vaishnavi and Kuechler (2004) define research as “an activity that contributes to the understanding of a phenomenon”. Research, in its most conceptual sense, is nothing more than the search for understanding (Hirschheim 1992). Research is systematic, self-critical inquiry founded in a stable, systematic and sustained curiosity, driven by a desire to understand, and subjected to public criticism and, where appropriate, empirical tests (Stenhouse 1981).

Based on these perspectives it appears that a major aim of research is to generate and evaluate knowledge. Various perspectives exist on the nature of that knowledge: its purpose, validity, novelty, utility etc. Returning to the OECD (2002), it defines research and development to cover three activities:

  1. basic research;
    Experimental or theoretical work, without practical application in view, that aims to acquire new knowledge of the foundations of phenomena and observable facts.
  2. applied research; and
    Original investigation aimed at acquiring new knowledge primarily for a specific practical aim or objective.
  3. experimental development.
    Systematic work based on existing knowledge that is directed towards producing new, or improving existing, processes, systems or services.

Even with these differences, a major aim of research appears to be to make a contribution to knowledge. If this is the case, then how is that knowledge represented? In creating and validating knowledge, scientists rely on the clear and succinct statement of theory, theory that embodies statements of the knowledge that has been developed (Venable 2006). Developing theory is what separates academic researchers from practitioners and consultants (Gregor 2006).

The role of theory and method

If an aim of research is to make a contribution to knowledge, should theory be used to represent that knowledge? Theory should be a primary output of research (Venable 2006). Theory development is a central activity in organisational research (Eisenhardt 1989). There is value in theory because it is practical. The practicality of good theory arises because it advances knowledge in a scientific discipline and guides research towards crucial questions (van de Ven 1989). Theories are practical as they enable knowledge to be accumulated in a systematic manner and this knowledge to be used to inform practice (Gregor 2006).

While there is recognition of the importance of theory, questions remain about what it is. There has been a long-running search for the meaning of “theory” (Baskerville 2008). DiMaggio (1995) identifies at least three views of what theory should be and suggests that each has some validity and limitations. There is and has been disagreement about whether a model and a theory are different, whether or not a typology is a theory, and other questions about theory (Sutton and Staw 1995). Many researchers within information systems use the word theory but fail to give any explicit definition (Gregor 2006). This lack of consensus about what theory is may explain why it is difficult to develop strong theory in the behavioural sciences (Sutton and Staw 1995).

Types of theory

Part of the confusion around theory has been about its purpose, and about whether or not there are different types of theory. Within the information systems field there have been several different approaches to identifying types of theory. Iivari (1983) described three levels of theorising: conceptual, descriptive and prescriptive. A number of authors (Nunamaker, Chen et al. 1991; Walls, Widmeyer et al. 1992; Kuechler and Vaishnavi 2008) have used the distinction between kernel and design theories. Taking a broad view of theory, Gregor (2006) identified five inter-related categories of theory based on the primary type of question at the foundation of a research project. These five categories and their questions of interest are summarised in Table 3.1.

Table 3.1 – Gregor’s Taxonomy of Theory Types in Information Systems Research (adapted from Gregor 2006)
  I. Analysis – Says “what is”.
    The theory does not extend beyond analysis and description. No causal relationships among phenomena are specified and no predictions are made.
  II. Explanation – Says “what is”, “how”, “why”, “when”, “where”.
    The theory provides explanations but does not aim to predict with any precision. There are no testable propositions.
  III. Prediction – Says “what is” and “what will be”.
    The theory provides predictions and has testable propositions but does not have well-developed justificatory causal explanations.
  IV. Explanation and prediction (EP) – Says “what is”, “how”, “why”, “when”, “where” and “what will be”.
    Provides predictions and has both testable propositions and causal explanations.
  V. Design and action – Says “how to do something”.
    The theory gives explicit prescriptions (e.g., methods, techniques, principles of form and function) for constructing an artifact.

The taxonomy presented in Table 3.1 is based on little prior work and there exist opportunities for further work and improvement (Gregor 2006). There also remains some disagreement about the designation of Theory Type V as design theory (Venable 2006). However, it does seem to provide a foundation on which to build sound, cumulative, integrated and practical bodies of theory within the information systems discipline (Gregor 2006).

Relationship between theory and method

Gregor (2006) suggests that research begins with a problem to be solved or a question of interest. The type of theory that is to be developed or tested depends on the nature of this problem and the questions the research wishes to address (Gregor 2006). This connection is made on the basis of the primary goals of theory (Gregor 2006). Assuming this image of the research process, it seems logical that the next step is the selection of the research methods or paradigms most appropriate to develop or test the selected theory type. This is not to suggest that there is a one-to-one correspondence between a particular theory type and a particular method or paradigm. Gregor (2006) argues that none of the theory types necessitates a specific method; however, proponents of specific paradigms do favour certain types of theory over others. While there is no necessary correspondence between theory types and methods/paradigms, it is suggested that certain methods/paradigms are better suited to certain types of theory, research problems and researchers.

Recognising different types of theory makes it possible to see the differences as complementary and consequently enable integration into a larger whole (Gregor 2006). It is possible for research to make a contribution to more than one type of theory. Baskerville (2008) argues that there is clearly more to design research than design theory alone. Kuechler and Vaishnavi (2008) show how a design research project is contributing to both design theory (Gregor’s Type V) and kernel theory (Gregor’s other types). The possibility for a research project to make contributions to different types of theory suggests that a research project may draw upon several different methods or paradigms.

The role of research paradigms

Having briefly summarised the perspective on research, theory and method in previous sections, this section makes some connections between this perspective and the views on research paradigms expressed by Mingers (2001) and the pragmatic view of science/paradigm (Goles and Hirschheim 2000).

Research methodology attempts to approximate a compatible collection of assumptions and goals which underlie methods, the actual methods, and the way the results of performing those methods are interpreted and evaluated (Reich 1995). The assumptions or beliefs about the world, how it works and how it may be understood have been termed a paradigm (Kuhn 1996; Guba 1999). Numerous authors have sought to identify and describe different research paradigms. Lincoln and Guba (2000) identify five major paradigms: positivism, postpositivism, critical theory, constructivism and participatory action. Within the information systems discipline, Orlikowski and Baroudi (1991) identify three broad research paradigms: positivist, interpretive and critical. In connection with the rise of design research, numerous authors (Nunamaker, Chen et al. 1991; March and Smith 1995; Hevner, March et al. 2004) have suggested that it is possible to identify two broad research paradigms within information systems: descriptive and prescriptive research, where descriptive research is traditional research and prescriptive research is design research. There are some who take issue with seeing design research as a separate paradigm (McKay and Marshall 2007).

Just as there are differing views on the number and labels of different research paradigms, there are differences on how to describe them. Guba and Lincoln (1994) describe the beliefs encompassed by a paradigm through three interconnected questions: ontology, epistemology and methodology. Mingers (2001) describes a paradigm as being a general set of philosophical assumptions covering ontology, epistemology, ethics or axiology, and methodology. Goles and Hirschheim (2000) use ontology, epistemology and axiology.

Mingers (2001) describes three perspectives on paradigms. These are:

  • isolationism;
    Views paradigms as based on contradictory assumptions which makes them mutually exclusive and consequently a researcher should follow a single paradigm.
  • complementarist; and
    Paradigms are seen as more or less suited to particular problems and selection is based on a process of choice.
  • multi-method.
    Paradigms are seen to focus on different aspects of reality and can be combined to provide a richer understanding of the problem.

Mingers’ (2001) multi-method perspective seems to fit well with a research project seeking to address a research problem through making contributions to different types of theory (as described in Section 3.2.2). Such a perspective suggests that the question of whether a researcher is positivist, interpretivist or critical is of secondary importance to the question of fit between problem, theories and methods.

Such a perspective seems to have connections with the pragmatist perspective of research described by Goles and Hirschheim (2000). Pragmatists consider the research question as more important than the method used or the worldview meant to underpin the method (Tashakkori and Teddlie 1998). Table 3.2 compares four important paradigms, including pragmatism. It has been suggested that pragmatism draws on a philosophical basis of pluralism to undercut the traditional dichotomous battle between conflicting paradigms (Goles and Hirschheim 2000). It facilitates the construction of connections and interplay between conflicting paradigms (Wicks and Freeman 1998).

If a paradigm must be chosen, then pragmatism seems the best fit. This research puts the question of “how to design and support information systems for e-learning within universities” at its focus. The type(s) of theories, the methods to be used and their appropriateness should flow from and align with that question. The following section provides an explanation of this alignment and describes the choices made for this work.

Table 3.2 – Comparison of four important paradigms used in the social and behavioural sciences (adapted from Tashakkori and Teddlie 1998)

(columns: Positivism | Postpositivism | Pragmatism | Constructivism)

Methods: Quantitative | Primarily quantitative | Quantitative + qualitative | Qualitative
Logic: Deductive | Primarily deductive | Deductive + inductive | Inductive
Epistemology: Objective point of view; knower and known are a dualism | Modified dualism; findings probably objectively “true” | Both objective and subjective points of view | Subjective point of view; knower and known are inseparable
Axiology: Inquiry is value-free | Inquiry involves values, but they may be controlled | Values play a large role in interpreting results | Inquiry is value-bound
Ontology: Naive realism | Critical or transcendental realism | Accept external reality; choose explanations that best produce desired outcomes | Relativism
Causal linkages: Real causes temporally precedent to or simultaneous with effects | Some lawful, reasonably stable relationships among social phenomena, which may be known imperfectly; causes are identifiable in a probabilistic sense that changes over time | There may be causal relationships, but we will never be able to pin them down | All entities simultaneously shaping each other; it is impossible to distinguish causes from effects


Baskerville, R. (2008). "What design science is not." European Journal of Information Systems 17(5): 441-443.

Cole, R., S. Purao, et al. (2005). Being proactive: Where action research meets design research. Twenty-Sixth International Conference on Information Systems: 325-336.

DiMaggio, P. (1995). "Comments on "What theory is not"." Administrative Science Quarterly 40(3): 391-397.

Eisenhardt, K. (1989). "Building theories from case study research." The Academy of Management Review 14(4): 532-550.

Goles, T. and R. Hirschheim (2000). "The paradigm is dead, the paradigm is dead…long live the paradigm: the legacy of Burrell and Morgan." Omega 28: 249-268.

Gregor, S. (2002). "Design Theory in Information Systems." Australian Journal of Information Systems: 14-22.

Gregor, S. (2006). "The nature of theory in information systems." MIS Quarterly 30(3): 611-642.

Gregor, S. and D. Jones (2007). "The anatomy of a design theory." Journal of the Association for Information Systems 8(5): 312-335.

Hevner, A., S. March, et al. (2004). "Design science in information systems research." MIS Quarterly 28(1): 75-105.

Hirschheim, R. A. (1992). Information Systems Epistemology: An Historical Perspective. Information Systems Research: Issues, Methods and Practical Guidelines. R. Galliers. London, Blackwell Scientific Publications: 28-60.

Iivari, J. (1983). Contributions to the theoretical foundations of systemeering research and the Picoco model. Oulu, Finland, Institute of Data Processing Science, University of Oulu.

Jones, D. and S. Gregor (2004). An information systems design theory for e-learning. Managing New Wave Information Systems: Enterprise, Government and Society, Proceedings of the 15th Australasian Conference on Information Systems, Hobart, Tasmania.

Jones, D. and S. Gregor (2006). The formulation of an Information Systems Design Theory for E-Learning. First International Conference on Design Science Research in Information Systems and Technology, Claremont, CA.

Kuechler, B. and V. Vaishnavi (2008). "On theory development in design science research: anatomy of a research project." European Journal of Information Systems 17(5): 489-504.

Kuhn, T. S. (1996). The Structure of Scientific Revolutions. Chicago, University of Chicago Press.

Lee, A. S. (2000). "Irreducibly Sociological Dimensions in Research and Publishing." MIS Quarterly 24(4): v-vii.

March, S. T. and G. F. Smith (1995). "Design and Natural Science Research on Information Technology." Decision Support Systems 15: 251-266.

McKay, J. and P. Marshall (2007). Science, Design and Design Science: Seeking Clarity to Move Design Science Research Forward in Information Systems. 18th Australasian Conference on Information Systems, Toowoomba.

Mingers, J. (2001). "Combining IS Research Methods: Towards a Pluralist Methodology." Information Systems Research 12(3): 240-259.

Nunamaker, J. F., M. Chen, et al. (1991). "Systems development in information systems research." Journal of Management Information Systems 7(3): 89-106.

OECD (2002). Frascati Manual: Proposed standard practice for surveys on research and experimental development. Paris, France, Organisation for Economic Co-operation and Development: 254.

Orlikowski, W. and J. Baroudi (1991). "Studying information technology in organizations: Research approaches and assumptions." Information Systems Research 2(1): 1-28.

Reich, Y. (1995). "The study of design research methodology." Transactions of the ASME.

Simon, H. (1996). The sciences of the artificial, MIT Press.

Stenhouse, L. (1981). "What counts as research?" British Journal of Educational Studies 29(2): 103-114.

Sutton, R. and B. Staw (1995). "What theory is not." Administrative Science Quarterly 40(3): 371-384.

Tashakkori, A. and C. Teddlie (1998). Mixed methodology: combining qualitative and quantitative approaches. Thousand Oaks, California, SAGE.

Vaishnavi, V. and B. Kuechleer. (2004, 18 January 2006). "Design Research in Information Systems."   Retrieved 20 April 2004, 2004, from

van de Ven, A. (1989). "Nothing is quite so practical as a good theory." The Academy of Management Review 14(4): 486-489.

Venable, J. (2006). The role of theory and theorising in design science research. First International Conference on Design Science Research in Information Systems and Technology, Claremont, CA.

Walls, J., G. Widmeyer, et al. (2004). "Assessing information system design theory in perspective: How useful was our 1992 initial rendition." Journal of Information Technology, Theory and Application 6(2): 43-58.

Walls, J., G. Widmeyer, et al. (1992). "Building an Information System Design Theory for Vigilant EIS." Information Systems Research 3(1): 36-58.

Wicks, A. and R. E. Freeman (1998). "Organization studies and the new pragmatism: Positivism, anti-positivism and the search for ethics." Organization Science 9(2): 123-140.

Embedding behaviour modification – paper summary

A growing interest of mine is investigating how the design of the environment and information systems that support university learning and teaching can be improved by giving greater consideration to factors that encourage improvement and change. That is, not just building systems that do a task (e.g. manage a discussion forum), but designing a discussion forum that encourages and enables an academic to adopt strategies and tactics that are known to be good. If they choose to.

One aspect of the thinking around this is the idea of behaviour modification. The assumption is that to some extent improving the teaching of academics is about changing their behaviour. The following is a summary of a paper (Nawyn et al, 2006) available here.

The abstract

Ubiquitous computing technologies create new opportunities for preventive healthcare researchers to deploy behavior modification strategies outside of clinical settings. In this paper, we describe how strategies for motivating behavior change might be embedded within usage patterns of a typical electronic device. This interaction model differs substantially from prior approaches to behavioral modification such as CD-ROMs: sensor-enabled technology can drive interventions that are timelier, tailored, subtle, and even fun. To explore these ideas, we developed a prototype system named ViTo. On one level, ViTo functions as a universal remote control for a home entertainment system. The interface of this device, however, is designed in such a way that it may unobtrusively promote a reduction in the user’s television viewing while encouraging an increase in the frequency and quantity of non-sedentary activities. The design of ViTo demonstrates how a variety of behavioral science strategies for motivating behavior change can be carefully woven into the operation of a common consumer electronic device. Results of an exploratory evaluation of a single participant using the system in an instrumented home facility are presented


Tells how a PDA plus additional technology was used to embed behaviour modification strategies aimed at decreasing the amount of television watched. Describes a successful test with a single person.

Has some links/references to strategies and research giving principles for how to guide this type of design.


Sets the scene. Too many Americans watch too much TV, are overweight and don’t get enough exercise. Reducing TV watching should improve health, if it is replaced with activities that aren’t sedentary. But this is difficult because TV watching is addictive, while exercise is seen to have high costs and the initial experience is not so good.

The idea is that “successful behavior modification depends on delivery of motivational strategies at the precise place and time the behavior occurs”. The idea is that “sense-enabled mobile computing technologies” can help achieve this. This work aims to:

  • use technology to disrupt the stimulus-reward cycle of TV watching;
  • decrease the costs of physical activity.

Technology-enabled behavioral modification strategies

Prior work has included knowledge campaigns and clinical interventions – the two most common approaches. Technology used to reduce television viewing has usually been gatekeeper devices that limit access – not likely to be used by adults. There are also exercise-contingent TV activation systems.

More work has aimed at increasing physical activity independent of television. Approaches used include measuring activity and providing open loop feedback, i.e. simple, non-intrusive aids to increase activity. More interactive, just-in-time feedback may help short-term motivation – e.g. video games. There are also technology interventions that mimic a human trainer.

For those not already exercising, small increases in physical activity may be better than intense regimens.

The opportunity: just-in-time interactions

The technological intervention is based on the premise that people respond best to information that is timely, tailored to their situation, often subtle, and easy to process. This intervention uses a PDA intended to replace the television remote control, adding a graphical interface, built-in program listings, access to a media library, integrated activity management, and interactive games.

It tries to determine the goals of the user and suggest alternatives to watching TV in a timely manner. With the addition of wearable acceleration sensors it can also function as a personal trainer.


The aim is to provide a user experience rewarding enough to be used over time.

Grabbing attention without grabbing time

Prior work on behavior change interventions reveals them to be:

  • resource-intensive, requiring extensive support staff;
  • time-intensive, requiring the user to stop everyday activity to focus on relevant tasks.

This is why the remote is seen as a perfect device. It’s part of the normal experience. Doesn’t need separate time to use.

Sustaining the interaction over time

Behavior change needs to be sustained over years to have a meaningful impact. Extended use of a device runs the risk of annoyance, so the authors avoided paternalistic or authoritarian strategies, focusing instead on strategies that promote intrinsic motivation and self-reflection. Elements of fun, reward and novelty are used to induce positive affect rather than feelings of guilt.

Avoiding the pitfall of coercion

There is a temptation to use coercion for motivation. The likelihood that users will tolerate coercive devices for long is questionable.

Avoiding reliance on extrinsic justification

The optimal outcome of any behavioural intervention is change that persists. Heavy reliance on extrinsic justification – rewards or incentives – may create a dependency that hurts persistence once the rewards are removed. There are also problems if the undesirable behaviour – watching TV – is the reward for exercise.

Case study

A low cost remote was produced from consumer hardware, with a laptop provided to manage the media library and a GUI driven by finger input.

Provides puzzles that use the TV for display and physical activity for input.

Behavior modification strategies

Most are derived from basic research on learning and decision-making (suggestibility, goal-setting and operant conditioning). Examples include:

  • value integration – having persuasive strategies embedded within an application that otherwise provides value to the user increases the likelihood of adoption.
  • reduction – reducing the complexity of a task increases the likelihood that it will be performed.
  • convenience – embedding within something used regularly, increases opportunities for delivery of behaviour change strategies.
  • ease of use – easier to use = more likely to be adopted over long term.
  • intrinsic motivation – incorporating elements of challenge, curiosity and control into an activity can sustain interest.
  • suggestion – you can bias people toward a course of action through even very subtle prompts and cues.
  • encouraging incompatible behaviour – encouraging a behaviour that cannot co-occur with the undesired one can be effective.
  • disrupting habitual behaviour – bad habits can be eliminated when the conditions that create them are removed or avoided.
  • goal setting – concrete, achievable goals promote behaviour change by orienting the individual toward a definable outcome.
  • self-monitoring – motivated people can be more effective when able to evaluate progress toward outcome goals.
  • proximal feedback – feedback that occurs during or immediately after an activity has the greatest impact on behaviour change.
  • operant conditioning – increase frequency of desirable behaviour by pairing with rewarding stimuli.
  • shaping – transform an existing behaviour into more desirable one by rewarding successive approximations of the end goal.
  • consistency – draw on the desire of people to have a consistency between what they say and do to help them adhere to stated goals.

Exploratory evaluation

Use it with a single (real life) person to find out what happens.

Done in a specially instrumented apartment, in 3 phases: a baseline with the normal remote, 12 days at home, and 7 days in the lab with the special remote. The participant was not told that this was aimed at changing behaviour around watching TV and physical activity.


Television watching reduced from 133 minutes a day during baseline to 41 minutes during intervention.

Evaluation against the adopted strategies was positive.


The substantial improvement is important. Strategies should be phased in over time. Strategies are initially seen as novel – this curiosity can be used. Not all users will react well.


Nawyn, J., S. Intille, et al. (2006). Embedding behavior modification strategies into a consumer electronic device: A case study. 8th International Conference on Ubiquitous Computing: 297-314.

The story of BIM – the slow expansion of BAM

Episode 3 of the story of BIM

A descent into Limbo

The initial development and use of BAM was in a single course that finished around November, 2006. This was the last course I was to teach. By early 2007 I had applied for and been selected as the new Head of E-Learning and Materials Development at the same institution. While still an academic role, I would no longer be teaching. Instead I would be the supervisor of a group of 10-20 staff who were responsible for various tasks including curriculum design and desktop publishing. It wasn’t until the middle of 2007 that I became responsible for the support of CQU’s official LMS (Blackboard).

2007 was the start of a significant period of uncertainty. By the time I took over e-learning the Director who attracted me to the position had not had her contract renewed. While I was the designer of Webfuse and very critical of the nature and implementation of traditional LMS (like Blackboard), I found myself in the position of being responsible for Blackboard, but not Webfuse. At around the same time a new Director of information technology was appointed and there was a major re-alignment of IT. i.e. All IT should be the responsibility of the IT division. Various other factors contributed to on-going uncertainty about how to achieve anything related to e-learning. Perhaps the biggest was the perception that the institution was in financial peril and subsequent decisions not to renew contracts.

CHALLENGE #13 A context of increasing uncertainty and reducing funding does not contribute well to the development of innovative pedagogy.

In addition, there were significant shortcomings in the existing processes and outcomes underpinning how materials development and e-learning were being done. Not to mention that, after many years’ absence, it was necessary to figure out how curriculum designers could once again be incorporated into the institution. It was thought that this should be the focus of what I was doing, not further development of BAM.

To give some idea of the level of uncertainty: I was employed as the Head of E-learning and Materials Development, but the unit I was in charge of was called the Curriculum Design and Development Unit (CDDU). As stated above, CDDU did not initially include responsibility for e-learning. In addition, CDDU had no official input into the institution’s learning and teaching grants process.

In this context, there were no plans to complete development of BAM. Instead, ad hoc support would be provided to people who wished to use BAM with the understanding that once the situation became more certain decisions could be made about what to do and what not to do. This never really happened.

Subsequent use of BAM, 2006-2009

The following table gives a breakdown of BAM usage by each term. It summarises the number of courses using BAM, the number of staff and students in those courses and the number of blog posts managed by BAM. In summary, during this period 69 staff used BAM to manage 19969 blog posts made by 2789 students.

Usage statistics of BAM: T3, 2006 to T3, 2009
Period      # Courses   # Staff   # Students   # Blog posts
T3, 2006    1           10        138          1009
T1, 2007    1           10        185          1561
T2, 2007    3           5         188          1301
T3, 2007    1           5         83           483
T1, 2008    3           8         277          1977
T2, 2008    5           15        247          1596
T3, 2008    1           9         202          1474
T1, 2009    4           22        596          4434
T2, 2009    5           26        639          3443
T3, 2009    3           11        139          519

During this period, there was essentially no publicity for BAM. The staff who chose to use BAM in their courses fell into the following categories:

  • They were lumbered with BAM because they were employed to run a course that had already been set up to use it.
  • They were guided to the use of BAM by someone working within CDDU.
  • Based on an earlier experience with BAM, they wanted to use it in another course.

The course for which BAM was initially developed was taught by a different person in each of the following 6 offerings. Only on the 7th subsequent offering was it taught by someone who had taught the course while BAM was being used. In practice, this meant that each of the people teaching the course was using assessment designed by the previous person, not assessment they had designed. It was on this 7th offering that the decision was made to drop BAM from the course.

The limited consistency in responsibility for the course makes it difficult to develop a sense of ownership of the course, which can limit full engagement with the intent of an innovative pedagogy. The courses where BAM has been most successful have been those where the coordinator/designer of the course has been instrumental in decision making. “Ownership” of a course and the decisions around its design are increasingly difficult in a context where courses are offered multiple times a year and where, increasingly, teaching staff are casuals. This is especially so with innovative pedagogies that challenge the status quo: it often takes time to become familiar with the innovation and the ramifications of the changes it creates within the context of a course.

CHALLENGE #14 A context where there is limited consistency in responsibility for a course makes it difficult to learn about the problems and evolve an innovative pedagogy.

From the start, an eventual interest of mine in developing BAM was to explore more social applications of blog technology; especially applications that lean more to co-operation than collaboration, as suggested by Stephen Downes. However, once I was no longer responsible for the course, I no longer had the “power” to make that decision about a course. It was up to the coordinators.

CHALLENGE #15 Innovative pedagogy is limited by the conceptions, beliefs, desires and aims of the coordinators of a course.

The hobbled BAM

A common assumption underlying the design of BAM was that in its first use I would be the coordinator, and that I had command line access to the server on which BAM was running – not to mention the expertise to use it. That meant it wasn’t necessary to develop a web interface for the coordinator/management tasks for BAM. Examples of these tasks included: configuring a course to use BAM, releasing marked posts to students, running copy detection over student posts, changing students’ registered blogs and merging BAM results with the institution’s end-of-term results processing system.

Since the limbo period resulted in no changes to BAM, coordinators of the above courses had to make do with BAM as it was. They were unable to perform these tasks themselves; instead they had to ask me to perform them on the command line. This introduced delays.

??What’s the challenge here, there is one, how to phrase it?

Most of the courses using BAM during this stage were being taught across most of the institution’s 9 campuses (spread along the eastern seaboard of Australia) and studied by distance education students. The institution and each of its 9 campuses have an infrastructure and set of processes for offering IT support, ranging from desktop support through to e-learning and beyond. This infrastructure did not recognise BAM as a centrally supported IT system. It could also be argued that few people within this infrastructure knew anything of the system on which BAM was based. This lack of knowledge led to some confusion when teaching staff new to BAM asked their local IT folk about problems they were experiencing.

It is difficult to see how an IT infrastructure spread across multiple campuses, run under different regimes, could fully understand and respond to all applications of innovative blended learning. How many courses does an innovation need to run across before it becomes cost effective for the IT infrastructure to disseminate training about that innovation to its IT support staff? 10, 20, 300, 1000?

CHALLENGE #16 Innovative applications of blended learning often operate at a scale that does not match what is required by an enterprise IT approach to support.

In the case of BAM, this confusion was somewhat increased by the reliance on external blogs and RSS. The majority of staff and students had little familiarity with these technologies, and vanishingly small numbers had actually created a blog prior to this course. What about the net generation? Aren’t students already meant to be familiar with the Internet and social media?

CHALLENGE #17 The vaunted net generation doesn’t quite appear to have arrived.

It’s extra work

All of the staff who have used BAM comment on the extra amount of work it involves. This perception arises from a number of different sources, including:

  • Blogs, BAM and reflective journals are all new to most of these staff and their students. Novelty means more effort to become familiar with them and to find the strategies and tactics to work around the flaws.
  • The design of BAM itself was not always the best and added to the amount of work.
  • Over-reliance on old mindsets or a misunderstanding of the rationale behind the use of BAM.

There appeared to be a correlation between how negative a staff member was about BAM and how strongly they viewed BAM simply as another assignment to be marked. A significant number of the staff using BAM are employed as casual academic staff. This means that they are given a fixed number of hours to lecture, tutor and mark based on the number of students they are responsible for and the subsequent number of tutes. For example, if 25 students fit in a tute and you have 50 students you are paid for 2 tutes per week. Your payment for those 2 tutes includes payment to do whatever marking of assignments you are expected to do.

Courses using BAM almost always included BAM usage as an assignment that the staff had to mark. In the case of COIS20025 this was an extra assignment on top of the normal two assignments. This meant staff had to mark an extra assignment without any extra pay!

CHALLENGE #18: Fixed methods for the calculation of workload, and the subsequent payment of casual teaching staff, limit the flexibility needed for innovative blended learning pedagogy.

Not surprisingly, these staff saw BAM as yet another assignment, and one they weren’t being paid to mark.

On the other hand, the coordinators who pushed hard for BAM saw BAM as an instrument to improve learning and teaching. To address fundamental problems with learning in courses. To make teaching more student centered. To make student thinking more visible in order to allow staff to help students be aware of any limitations before submitting assessment that would be marked.

The coordinators choosing to adopt BAM typically did so from a “visionary” or “quality” perspective. Other staff saw it through a more “pragmatic” perspective and consequently their experience was very different. Many, if not most, students also adopted this more “pragmatic” perspective and saw BAM as yet another assignment to which they applied their value logic: how much is the assignment worth in raw percentage terms, and what do I need to do to get the percentage I need?

It was common for similarly pragmatic students to ask why they needed to do this on a blog. Couldn’t they just do it on paper or in a Word document? Why do I have to learn this new technology? Why all this extra work?

CHALLENGE #19 Innovative pedagogies are perceived in very different ways. The differences in perceptions influence how people engage with them.

Late enrolling students and student signatures – Indicators project?

Just floating another idea for a research project around the Indicators project.

The spark

In reflecting back on the origins of BIM, I generated the following stats about late enrollments

73% of the students in this offering enrolled in the course after term had started. In fact, 21% of the students were enrolled on the last day of week 2 of term. Supposedly the last day students could enroll. A further 9% of students were enrolled after week 2.

This was in 2005. I’ve since had confirmation from another staff member that the problem continues to exist, along with the suggestion that in one course, all of the students who enrolled late failed the course. Correction: I’ve just been corrected; what actually happened was that “all of the students who failed had enrolled late”, i.e. enrolling late – in this course – is a good predictor that you will fail the course.

I’ve just checked the 2010 offering of the 2005 course. 48.5% of students enrolled after the term had started. On the positive side, none of them enrolled after the official last day to enroll. For another course I’m interested in this term, the figure was 52%.
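As a concrete illustration, the calculation behind these percentages is simple. The following sketch uses entirely hypothetical enrolment records and assumed dates (real figures would come from the institutional student records system); it distinguishes "enrolled after term start" from "enrolled after the official cut-off", as the stats above do:

```python
from datetime import date

# Hypothetical enrolment records: (student_id, enrolment_date).
enrolments = [
    ("s1", date(2010, 3, 1)),   # before term
    ("s2", date(2010, 3, 10)),  # after term start, before cut-off
    ("s3", date(2010, 3, 19)),  # on the cut-off (last day of week 2)
    ("s4", date(2010, 3, 25)),  # after the cut-off
]

TERM_START = date(2010, 3, 8)     # assumed term start date
ENROL_CUTOFF = date(2010, 3, 19)  # assumed last official day to enroll

total = len(enrolments)
after_start = sum(1 for _, d in enrolments if d >= TERM_START)
after_cutoff = sum(1 for _, d in enrolments if d > ENROL_CUTOFF)

print(f"{100 * after_start / total:.1f}% enrolled after term start")
print(f"{100 * after_cutoff / total:.1f}% enrolled after the cut-off")
```

With the toy data above, 3 of 4 students enrolled on or after term start and 1 of 4 after the cut-off; the same two counts, run over real records, would reproduce the 48.5% and 52% style figures quoted.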

Student signatures

One of the “patterns” we have talked about is the idea of developing a “student signature” of LMS usage. i.e. a pictorial (or perhaps mathematical) way to represent how an individual student uses the LMS. The idea is that there might be differences in how they use the LMS. Something that might be used as a predictor. i.e. if you see pattern X in week 4 for a student, there might be a strong likelihood that they will do Y.
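To make the idea a little more concrete, here is a minimal sketch of one very simple form of signature – a per-student vector of weekly LMS hit counts – together with a naive way of comparing two signatures. The log data, student names and week count are entirely hypothetical; a real LMS clickstream would carry timestamps, modules and actions, and far richer signatures (and comparisons) are possible.

```python
from collections import defaultdict
from math import sqrt

# Hypothetical clickstream: (student_id, week_of_term) pairs.
hits = [
    ("alice", 1), ("alice", 1), ("alice", 2), ("alice", 4),
    ("bob", 3), ("bob", 4), ("bob", 4), ("bob", 4),
]
WEEKS = 4  # assumed length of term

def signature(student):
    """Weekly hit-count vector for one student."""
    counts = defaultdict(int)
    for sid, week in hits:
        if sid == student:
            counts[week] += 1
    return [counts[w] for w in range(1, WEEKS + 1)]

def distance(a, b):
    """Euclidean distance between two signatures."""
    return sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

alice = signature("alice")  # early-of-term activity: [2, 1, 0, 1]
bob = signature("bob")      # late-starting activity: [0, 0, 1, 3]
print(alice, bob, distance(alice, bob))
```

The point of the distance function is only to show that once usage is reduced to a vector, "does pattern X in week 4 predict Y" becomes a question about clustering or classifying those vectors.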

Amazing the serendipity in the blogosphere. The day I posted this – but after I’d posted it – I saw David Wiley’s post about some of the work they are doing. The waterfall visualisation he shares could form the basis for one type of student signature, and could be interesting in looking for patterns around the topic of late enrolling students. However, as Mark Smithers asks in a comment, defining how you calculate the amount of time a student spends on assessable activities might be difficult. Initially, we might have to rely on site visits as a proxy (though I’m getting uncomfortable with this use of proxies).

What might be done

It would be interesting to:

  • See if late enrolling students have a different signature.
  • See whether they are more likely to achieve a certain outcome.
  • Find out what they are feeling when they start a course late.
  • Find out why they are starting a course late.
  • If this significant late enrollment can’t be stopped, identify strategies that can help these students get going.

The story of BIM – Development of BAM

This is the second episode in the continuing story of BIM. This episode picks up the story late in the first half of 2006. The initial foray into using individual student blogs to encourage reflection and make student learning more visible was bumbling along. It was now time to think about what I’d be teaching the following term.

Introducing COIS20025, Systems Development Overview

Since 1990 I had been teaching within the information technology program at CQU. For various reasons I was no longer happy teaching within that program. One of the reasons is that, through my work in e-learning, I’d realised that there was a lot more to technology systems than technology. You can make a beautiful bit of technology, but if it involves change on the part of the users, they won’t use it the way you intended (if at all). As a result, from the second half of 2006 I was moving into the information systems program. Theoretically, information systems is interested in the combination of software and otherware (i.e. the people, organisations etc.).

My first teaching task within the information systems program was the course COIS20025, Systems Development Overview. It was a Masters level course intended to give an overview of how information systems were developed (somewhat surprisingly – though not really – it tended to focus on the technology and methods and very little on the otherware). Traditionally the course was taught across multiple campuses by 10+ teaching staff to 200+ students. I would be the course coordinator responsible for managing all this, in addition to teaching the students at the Rockhampton campus and those studying by distance education.

Check what early emails I have about this course. And my move to the information systems school. Give the background here on the course.

Thinking about COIS20025

By early 2006 I was starting to look at and think about what I might have to do with COIS20025. History showed that there were some concerns about the course. For example, a previous coordinator had received a “please explain” about why only 1 student out of 384 was getting a HD. The response from the coordinator explained this by drawing on reasons that were widely recognised by those at the coal face: the majority of the students came from a non-English speaking background (NESB), most had little familiarity with technology, COIS20025 was taken in the first term they were in Australia while getting to grips with a foreign culture, and these problems were made worse by the fact that 73% of the students in this offering enrolled in the course after term had started. In fact, 21% of the students were enrolled on the last day of week 2 of term – supposedly the last day students could enroll. A further 9% of students were enrolled after week 2.

Some comments from the previous coordinator

there is nothing in the course itself that makes me believe that it is too difficult for Post-Graduate students to manage…I can not see what else can be done to improve the results in COIS20024 and COIS20025 with the exception of capping the number of students we enrol and having a more stringent entry requirement for English Language Skills.

It could also be observed that the methods used in the course differed little from traditional methods from the glory (and probably mythical) days of higher education, with motivated students.

This particular tension between teaching staff and the management of teaching staff is not new or specific to this course. My first experience of it was in the late 1990s, and I have seen it in numerous courses since. To this day I see no evidence of significant and successful changes to address this problem.

CHALLENGE #6: The increasing complexity of the teaching context within some Australian universities has resulted in some significant tension and differences of opinion between management and teaching staff. Many of these remain unresolved.

While the changing nature of the students and the context suggests a need to change toward more innovative methods that engage the students and address their limited performance, Tutty et al (2008) capture the more common approach:

The solution to the high failure rate was to change the assessment to satisfy the institutional requirements of satisfied students and reasonable pass rates rather than explore an alternative learning and teaching approach – an effective solution in the current higher education environment that encourages the academic to prioritise other areas, such as research.

CHALLENGE #7: The context within universities is not favourable to exploration of innovative, alternative pedagogies.

Reflective journals – the Word document approach

During 2005, in an attempt to encourage student reflection and hopefully a higher level of learning, an assignment was introduced that required students to keep a “reflective” diary. The diary was submitted at the end of term for review by staff. There were two problems observed with this approach: creative writing and plagiarism. Since the diary was not submitted until the end of term, teaching staff felt that many students left the diary until the last minute, turning it into an exercise in creative writing. There were also problems with plagiarism: the second offering of COIS20025 in 2005 had 62 plagiarism incidents.

To address these sorts of problems, students were meant to submit their diaries to teaching staff for checking three times during the term. This practice was never widely implemented due to workload and resource problems. In particular, this was a problem for teaching staff at the international campuses, who were either casual staff employed for a specific number of students under a particular set of assumptions, or full-time staff operating under a high workload.

CHALLENGE #8: Workload pressures limit the adoption of changes in pedagogy.

The problem

Introduce the problem with the reflective journal assignment.

Back in 2006 I had a problem. I was about to take over a post-graduate Masters course in systems development. The course was taught across multiple campuses by 10+ teaching staff and taken by 200+ students. Most of the students were from a non-English speaking background. In order to encourage reflection and higher order learning, the course had one assignment that required the students to keep a reflective journal. Here’s where the problem arises.

Students were told to keep the journal as a Word document and submit it at the end of the term. Consequently, most students did not work on the journal during term, leaving it to the last minute. Teaching staff did not see what was written in the journal and could not respond or comment. The course coordinator – located at a campus some distance from where the majority of the students were located – certainly couldn’t see what was going on. The limited integration between what the student was doing and the marking of the journal probably contributed to significant levels of plagiarism that occurred with the assignment.

How to make it better

Given my interest and experience with blogs, this seemed a perfect fit. In addition, at this stage the course was offered using CQU’s home-grown LMS, Webfuse. The Webfuse assignment submission system was already heavily used in this course; it was part of the way things were done. In addition, I was the main designer of Webfuse and as late as 2004 had been the Webfuse team lead. This meant I could see how easy it would be to add a feature to Webfuse that would allow the use of blogs in a large course to be managed in a more efficient way. This feature was to be known as Blog Aggregation Management – BAM.

CHALLENGE #9 It was only through a unique combination of contexts that the idea for BAM was identified. Remove any one of these conditions and BAM probably would not have happened.
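The core mechanism behind BAM is blog aggregation: students register an externally hosted blog, and the system periodically fetches each blog's feed and groups the posts by student so staff can review and mark them. The following is a minimal sketch of that idea, not BAM's actual code; all names (`parse_rss_items`, `aggregate`, the feed field names) are illustrative, and a real implementation would fetch feeds over HTTP and handle Atom as well as RSS.

```python
# Illustrative sketch of blog aggregation: fetch each student's
# registered feed, parse the posts, and group them by student.
import xml.etree.ElementTree as ET


def parse_rss_items(rss_xml):
    """Extract title/date/link from the <item> elements of a simple RSS 2.0 feed."""
    root = ET.fromstring(rss_xml)
    items = []
    for item in root.iter("item"):
        items.append({
            "title": item.findtext("title", default=""),
            "date": item.findtext("pubDate", default=""),
            "link": item.findtext("link", default=""),
        })
    return items


def aggregate(registrations, fetch):
    """Map each student id to the posts on their registered feed.

    `registrations` maps student id -> feed URL; `fetch` turns a feed
    URL into RSS XML (in practice something like urllib.request).
    """
    return {sid: parse_rss_items(fetch(url)) for sid, url in registrations.items()}
```

The key design point, as noted above, is that the institution never hosts the blogs; it only needs the feed URLs and a parser.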

After 10 years of working on innovative e-learning out of my own interest, I was not going to develop BAM on top of the rest of my workload, especially since there remained large uncertainty about the future of Webfuse (Webfuse was eventually replaced at CQU in 2010). In addition, at this stage my laptop was getting on a bit. I needed a new one, and the university computer replacement policy would have given me something less than nice. I needed some money to top this up.

The previous coordinator of COIS20025 and I collaborated on a proposal to develop BAM. This was put to the faculty management and included the suggestion that I be given $2000 to develop BAM. The proposal was accepted.

CHALLENGE #10 The development of innovative blended learning pedagogy requires additional funding. Innovation costs initially.

The decision to support the development of BAM was, from my perspective, not a logical thing for the organisation to do. I believe it only happened because a small number of the players within faculty management at this point in time, had positive impressions of Webfuse, myself and the idea of BAM. Had this proposal been considered at another point in time, with different people in those positions, I believe the outcome would have been different.

The initial development and use of BAM

By the 8th of May 2006 I was running a “design workshop” for BAM: an hour-long presentation to university staff proposing the idea and describing prototypes of how it might work. The aim was to get the idea out there and get feedback. The video of that presentation can be viewed on Google Video. I need to look through this and the PowerPoint slides to find where I was up to with my thinking.

On the 11th of September, 2006 a follow-up presentation was given to report on the lessons learned so far from the use of BAM in the course. It includes a summary of what had happened as of Week 8 of (a 12-week) term:

  • 258 of 278 students had registered blogs;
  • marking of blog posts had not been done, mostly due to the late arrival of the marking facility within BAM;
  • staff overview of the blogs had been patchy.

A significant component of the use of BAM in COIS20025 for this term was spent in designing the questions students were asked to respond to on their blogs. It wasn’t just the technology. The design had moved beyond just using minute papers into some specific questions intended to make visible the students’ preparation for the assignments. The aim was to enable staff to make comments before assignment submission, reduce plagiarism and attempt to increase higher-order learning.


For various reasons, I do not believe that these aims were achieved on a broad scale. In part, this is because the COIS20025 students were very pragmatic: the majority did what was required to get the mark they wanted with a minimum of effort, rather than engage completely with the intent.

CHALLENGE #11: A significant percentage of students are very pragmatic. They don’t automatically engage with the aims of innovative pedagogy.

Another significant contributor was “change management” issues. Essentially, most staff did not look at student blogs during the term. They were left until marking was done at the end of term. Two main issues created this: the late finish of the interface staff would use to mark the blogs, and workload issues for the staff. While BAM may have reduced the actual work required to mark student journals, the novelty and uncertainty around the system meant this was not how staff perceived the system.

CHALLENGE #12: New or novel pedagogies are always perceived to require more work from participants. This is simply because of the need for participants to become familiar with the novelty. Novelty on top of an overloaded or overly pragmatic set of participants limits engagement.


One of the more engaged members of the teaching staff made this observation in interviews:

Students tend to fall off the wagon…the major reason why the blog was created. So that we can keep track of students working week by week rather than…working the night before the assignment is due…in that respect it seems to have worked well.

The blogs also appeared to confirm Du and Wagner (2005) and provide a useful predictor for learning outcomes for high and low performing students. Of the 9 high performing students in COIS20025, all 9 performed highly on their blog posts. Of the 6 students who did not register a blog (obvious by week 4 of term), 4 failed every course they studied that term and 2 failed two of the three courses they were studying.
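Operationally, the predictor described above amounts to a very simple early-warning rule: by some cutoff week, flag any student who has not registered a blog or has made no posts. A hypothetical sketch (the function name and data shape are mine, not BAM's):

```python
# Illustrative early-warning rule, not BAM code: given a mapping of
# student id -> number of blog posts so far (0 for students who
# registered but never posted, or who never registered at all),
# return the students worth following up.
def at_risk(post_counts, min_posts=1):
    """Return the sorted ids of students with fewer than `min_posts` posts."""
    return sorted(sid for sid, n in post_counts.items() if n < min_posts)
```

For example, `at_risk({"s001": 0, "s002": 5, "s003": 0})` flags `s001` and `s003` for follow-up well before end-of-term marking.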

Technically the system worked well. Students had a blog without the institution needing to provide a blog service. The management system, once in place, enabled marking and tracking of student posts and integrated with institutional management systems. The project was included in the ELI Guide to Blogging (Coghlan, Crawford et al. 2007), released in 2007, which indicated that the marriage of Web 2.0 applications with institutional systems was a compelling part of this project and that it promised to give institutional systems greater efficacy and agility with reduced cost.

More details

More detail about the initial use of BAM in COIS20025 can be found in two publications. One, written in 2006 just after completion, and another written in late 2008 after a period of some reflection.

The story of BIM – Origins – blogs and minute papers

The following is a start of trying to capture the story of BAM and BIM. The aim is to use this as part of a case study for a paper for MoodleMoot’AU 2010.

As well as trying to re-create the story of BIM from various publications, email archives and presentations the aim of the following is also to identify the challenges faced while developing BIM. This will be the first in a sequence of posts as I attempt to re-create the history.

Origins: Blogs and minute papers

I’d first used blogs in teaching around 2004, having been aware of them for a while before that. By the end of 2005 I was back teaching first year programming in courses that were taken by distance education students, traditional on-campus students and international students at capital city, commercial campuses. I was directly responsible for the on-campus and distance education students.

For a long time (going back to the early 90s) I’ve been concerned with the plight of distance education students and the limited support they receive. In an attempt to achieve a couple of aims I wrote a paper titled “Enhancing the learning journey for distance education students in an introductory programming course”. It was intended to try and reinvigorate the REACT process while at the same time giving me the reason to reflect on how I might support the distance education students better and reduce the failure rate in the first year programming courses.

A part of the considerations was the use of blogs, especially in combination with the idea of minute papers (Murphy & Wolff, 2005). The idea was described like this (Jones, 2005):

The author has used minute papers in face-to-face teaching and found them to be useful. It is thought that asking distance education students to blog a minute paper each time they do some study will provide a minimal level of structure and help the coordinator be aware of how each student is progressing.

Implementation plan

I implemented some of the ideas described in Jones (2005), including the use of blogs, in the first term of 2006 in a course called Procedural Programming. Before the start of term I emailed all of the distance education students taking the course, explaining how and why I wanted to use the blogs to support them. The reward to students for blogging minute papers was that I would be responding directly to their posts. This included the offer of a pre-submission check of their assignments, i.e. before they submitted, I would look at their assignment and give them feedback.

There were only 17 or so distance education students, so while this was more work, I didn’t think the load would be that great. I recognised it as additional work, but thought it worthwhile because it would help the students. However, there was no way to get recognition for this extra workload, even though the rationale for doing it was sound.

CHALLENGE #1: Organisational processes and policies, especially workload, don’t easily respond to changes in workload balance.

Checking my email archives for the course, I find the first mention of this blog. Apparently, I set up this blog to serve as an example for the students in this course. Here’s the first post reflecting on the early stages of this process.

Those same email archives reveal 3 students sending me the URLs for their blogs.
One of those blogs is empty, no posts. Another has a couple of posts from the student and some comments from me. I remember this student; we ended up moving discussions to email and those discussions were quite extensive. The last blog had a regular post each week, for most weeks, and also had a comment from me on each one. Is that a 33% success rate? Of the students, one failed (no posts), one got a credit (2 posts) and one got a HD (lots of posts). Is that a correlation between engagement on the blog and final result? It has since been suggested that blogs are a good predictor of student results for high and low achievers.

Another student at this stage commented in an email:

I dont know anyone who uses Blogs, and i have never read or used one myself.

CHALLENGE #2 Blogs are novel to many students. Students are conservative, getting them to try something new is a challenge.

Other comments from the same student:

If you were offering me presubmission checks in return for a weekly minute sheet/blog i would do it. I can see how a student that was lacking confidence would feel more secure posting a message that only the lecturer would read. The only problem that i see with not having a communal learning environment is that if the lecturer cant perform their duties in a timely manner there is no one else that is aware of the students problems. The onus is fairly heavily on the lecturer to perform.

CHALLENGE #3 A design that relies on the academic to perform will create a workload and a single point of failure.
CHALLENGE #4 Students are pragmatic. There needs to be reason for them to do something, especially something new.
CHALLENGE #5 The open nature of blogs can create some fear, especially for those who are less confident (this may/could be linked to what is known about shy students posting to discussion forums i.e. they tend not to)

The same student commented:

The thing that i found that reduced the “transactional distance” was simply having an email/post/message from someone that called me by my name. The feeling that someone knows your name rather than your student number made a more profound difference to my study experience than i would have thought likely.

One of the points to remember is that under this model the distance education students were receiving a different type of support than the on-campus students.

At the moment, it appears that CQU’s course evaluation system has no record of any evaluations for this course. Will have to dig into that.


On the whole, this use of blogs was not a great success in terms of overall numbers. However, it did provide an opportunity to experiment, to test the waters. Obviously this experiment was largely positive because of what came next.


Jones, D. (2005). “Enhancing the learning journey for distance education students in an introductory programming course.” from

Murphy, L. and D. Wolff (2005). “Take a minute to complete the loop: using electronic Classroom Assessment Techniques in computer science labs.” Journal of Computing Sciences in Colleges 21(1): 150-159.

Improving L&T at Universities – The emperor has no clothes

The following comes from a place of frustration. The approaches universities are using to improve learning and teaching don’t work, but they continue to be held in reverence because they have become a purpose proxy. The people within universities charged with improving learning and teaching are no longer focused on, or measured by, improving learning and teaching. They are focused on and measured by the purpose proxy, i.e. how many L&T seminars they have run, how many teaching awards they’ve given out, etc.

It wouldn’t be so bad if there weren’t already fairly significant amounts of research to show that they don’t work.

Common methods

The most common methods adopted within universities to improve learning and teaching (that I’ve seen) are:

  • Require staff (but only new staff) to get formal learning and teaching qualifications.
  • Run special L&T seminars.
  • Hand out L&T grants.
  • Give out teaching awards.
  • Develop new policies, standards etc and expect academics to meet them.
  • Communities of practice.

I don’t think any of them work, in terms of making a noticeable widespread (i.e. approaching 50%) improvement in L&T.


Let’s look at teaching awards: essentially, giving rewards to individual teachers for the quality of their teaching.

Being good corporate citizens, we in universities are applying what the business world has found. According to this post, quoting from this book, there have been three articles in the Harvard Business Review on incentive pay and organisational performance; the common point between all three:

compensating people for only individual performance creates more problems than it solves, so rewards should emphasize organizational, not just individual, performance

So, while teaching awards probably aren’t strictly incentive pay, they are the sector’s attempt to provide some incentive to be a good teacher, and they suffer the same problem.

To make it worse, they don’t work. I could quote on and on from studies of higher education which show that academics still perceive research to be the most rewarded activity. The perception is that teaching is not rewarded. And let’s not forget that the rewards for research are essentially individual: the good researcher gets promoted. Rewards for individual performance, anyone?

As for the other strategies?

  • Formal qualifications.
    If formal teaching qualifications == good teaching, then the education programs at a university should be bursting at the seams with consistent, good quality, innovative teaching. Ahh, no.

    While it is important to improve/change the teacher’s conception of learning and teaching in order for change to happen (Ho, Watkins et al. 2001), there is little evidence to show that pedagogues’ conceptions of teaching will develop with increasing teaching experience or from formal training (Richardson 2005). Pedagogues’ approaches to teaching change slowly, with some change coming only after a sustained training process (Postareff, Lindblom-Ylanne et al. 2007). While pedagogues are likely to adopt teaching approaches that are consistent with their conceptions of teaching, there may be differences between espoused theories and theories in use (Leveson 2004). While pedagogues may hold higher-level views of teaching, other contextual factors may prevent use of those conceptions (Leveson 2004). Environmental, institutional, or other issues may impel pedagogues to teach in a way that is against their preferred approach (Samuelowicz and Bain 2001). While conceptions of teaching influence approaches to teaching, other factors such as institutional influence and the nature of students, curriculum and discipline may also influence teaching approaches (Kember and Kwan 2000). Prosser and Trigwell (1997) found that pedagogues with a student-focused approach were more likely to report that their departments valued teaching, that their class sizes were not too large, and that they had control over what was taught and how it was taught. Other contextual factors that frustrate pedagogues’ intended approaches to teaching may include senior staff with traditional teacher-focused conceptions raising issues about standards and curriculum coverage, and students who induce teachers to adopt a more didactic approach (Richardson 2005).

  • L&T seminars and L&T grants.
    I haven’t summarised the literature yet, but the little I’ve read backs up my experience. The people who apply for L&T grants and attend the seminars are the same people. They are also, generally, the people who don’t need them; they are already doing good work. They should be supported, but these activities are never going to reach the vast majority of academics.
  • New practices, policies etc.
    While academics see the greatest rewards as coming from research, they will game the system. Policies won’t change that.
  • Communities of practice.
    I haven’t seen research on this, but I’ve seen what’s happened in the local context (so the foundation of this particular argument is fairly weak). Initially the CoPs here were based around a particular method or interest (portfolios, active learning, peer learning etc.); that is, examples of Technology I or II and technological gravity. None of these made any demonstrable widespread (approaching 50%) difference.

    The CoP approach has now morphed to a focus on heads of program. Ad hoc feedback indicates a much greater engagement, which is not surprising given that the head of program job is a thankless task and that the incumbents in these positions share a set of common problems. There’ll probably be some good work done here.

    However, it will be limited in its reach – mostly the heads of program – and will face the same problem: you have to get individual academics to change their practice. The heads of program may be in a better position than most, but I think they’ve still got a snowball’s chance if the context is not conducive to change. In addition, this CoP runs the risk of enshrining an “us and them” mentality within the heads of program. Get a group of people with similar problems together and they will share lots of stories about those problems. This will get the group feeling like one, but the more stories they tell, the more they will be led towards negative thoughts about the others: the non-head-of-program, teaching academic. This will start to colour their actions and thoughts, in much the same way it’s already coloured other folk in management roles.

The alternative

People’s beliefs and actions about what is right and what is accepted practice arise from their day-to-day experience. The day-to-day experience of academics within the current learning and teaching context at universities teaches them what is right and what is accepted. It’s not focusing on teaching and learning.

If you change their day to day experience, you can change what is right and what is accepted. Guskey (1986; 2002) suggests it is the experience of successful implementation that changes the attitudes and beliefs of pedagogues. Pedagogues believe change will work because they have seen it work and this experience is what changes their conceptions of teaching and learning (Guskey 2002).

This isn’t about big bang changes, because they are rarely successful and consequently rarely change conceptions. Instead, it is about a system that focuses on the day-to-day experience of the academic and helps make it better; that shows this day-to-day experience is important to the university and consequently should be important to the academic. A continual experience of successful change that makes their life easier and improves the quality of teaching and learning in ways that are consistent with teachers’ conceptions lays the groundwork for on-going change in conceptions.

If universities continue to focus on the wrong time frame, I believe they will continue to fail to make improvements in learning and teaching.

Some of the above strategies may be useful, but only after the system has started to successfully focus on the day-to-day experience of the academic. They’ll never be truly useful (in terms of gaining levels of improvement approaching 50% of all teaching) before that focus occurs. Before then, the above approaches may well help a small percentage of staff, but they ignore the vast majority. And I feel that this is wrong, especially when the small-percentage success is trumpeted as a major achievement while hiding the reality of very, very limited overall change.


Ho, A., D. Watkins, et al. (2001). "The conceptual change approach to improving teaching and learning: An evaluation of a Hong Kong staff development programme." Higher Education 42(2): 143-169.

Kember, D. and K.-P. Kwan (2000). "Lecturers’ approaches to teaching and their relationship to conceptions of good teaching." Instructional Science 28(5): 469-490.

Leveson, L. (2004). "Encouraging better learning through better teaching: a study of approaches to teaching in accounting." Accounting Education 13(4): 529-549.

Postareff, L., S. Lindblom-Ylanne, et al. (2007). "The effect of pedagogical training on teaching in higher education." Teaching and Teacher Education 23(5): 557-571.

Prosser, M. and K. Trigwell (1997). "Relations between perceptions of the teaching environment and approaches to teaching." British Journal of Educational Psychology 67(1): 25-35.

Richardson, J. (2005). "Students’ approaches to learning and teachers’ approaches to teaching in higher education." Educational Psychology 25(6): 673-680.

Samuelowicz, K. and J. Bain (2001). "Revisiting academics’ beliefs about teaching and learning." Higher Education 41(3): 299-325.
