Assembling the heterogeneous elements for (digital) learning

Category: Chapter 4

The need for a third way

One of the themes for this blog is that the majority of current approaches to improving learning and teaching within universities simply don’t work. At least not in terms of enabling improvement in a majority of the learning and teaching at an institution. Recently I finally completed reading the last bits of the book Nudge by Thaler and Sunstein. Chapter 18 is titled “The Real Third Way”. This post explores how that metaphor connects with some of the thinking expressed here.

The real third way

Thaler and Sunstein mention that the “20th century was pervaded by a great deal of artificial talk about the possibility of a ‘Third Way'” in politics. Their proposal is that libertarian paternalism, the topic of the book, represents a real third way. I’m not talking politics but there appears to be the same need to break out of a pointless dichotomy and move onto something more useful.

The characterisations of the two existing ways provided by Thaler and Sunstein are fairly traditional (stereotypical?) extremes of the political spectrum. i.e.:

  1. Liberal/Democrat – “enthusiasm for rigid national requirements and for command-and-control regulation. Having identified serious problems in the private market, Democrats have often insisted on firm mandates, typically eliminating or at least reducing freedom of choice.”
  2. Conservative/Republican – have argued against government intervention and on behalf of a laissez-faire approach, with freedom of choice being a defining principle. They argue that “in light of the sheer diversity of Americans one size cannot possibly fit all”.

Thaler and Sunstein’s third way – libertarian paternalism – is based on two claims:

  1. Choice architecture is pervasive and unavoidable.
    Small features of social situations have a significant impact on the decisions people make. The set of these features – the choice architecture – in any given social situation already exists and is already influencing people toward making good or bad decisions.
  2. Choice architecture can be manipulated while retaining freedom of choice.
    It is possible to make minor changes to the set of features in a social situation such that it encourages people to make “better” decisions, whilst still allowing them to make the “bad” decision, if that’s what they want.

Connections with improving learning and teaching

Early last year I borrowed and slightly modified Biggs’ 3 levels of teaching to identify 3 levels of improving learning and teaching. Obviously there is a numerical connection between these 3 levels and the “3 ways” outlined above. The more I’ve thought about it, the more I realise that the connections are more significant than that, and that the “3rd way” seems to be a useful way to position my beliefs about how to improve learning and teaching within a university. Here goes my first attempt at explicating it.

Expanding upon the 3 levels of improving L&T

The 3 levels I initially introduced can be expanded/morphed into ways or into stages. In terms of stages, I could probably argue that the levels/stages represent a historical evolution of how learning and teaching has been dealt with in universities. Those three stages are:

  1. What the teacher is (i.e. ignore L&T).
    This is the traditional/historical stage that some long-term academics look back on with fond memories, where university management didn’t really get involved with teaching and learning. Individual academics were left to teach the course the way they felt it should be taught. There was little oversight and little need for outside support.

    The quality of the teaching was solely down to the nature of the teacher. If they were a good teacher, good things happened; if bad, bad things. This was the era of selective higher education where, theoretically, only the best and the brightest went to university and most were seen to have the intellectual capability and drive to succeed regardless.

    For a surprising number of universities, especially those in the top rank of universities, this is still primarily how they operate. However, those of us working in “lesser” institutions are now seeing a different situation.

  2. What management does (i.e. blame the teacher).
    Due to the broadly publicised characteristics of globalisation, the knowledge economy, accountability etc. there is now significant pressure upon universities to demonstrate that the teaching at their institutions is of high quality. Actually, this has morphed into proxy measures where the quality of teaching is being measured by ad hoc student memories of their experience (CEQ surveys), how many of the academics have been forced to complete graduate certificates in higher education, what percentage of courses have course websites and how well the institution has filled out forms mapping graduate attributes.

    All of these changes to the practice of teaching and learning are projects that are initiated and “led” by senior university management. The success of the institution is based on how well senior university management have been in completing those projects.

    As each new fad arises within government or the university sector, there is a new set of projects to be completed. Similarly, when a new set of senior management starts within an institution, there is a new set of projects to be completed. In this case, however, the projects aren’t typically all that new. Instead they are simply the opposite of what the last management did, i.e. if L&T support was centralised by the last lot of management, it must now be de-centralised.

    Most academics suffering through this stage would like to move back to the first stage; I think they and their institutions need to move onto the next one.

  3. What the teacher does.
    For me this is where the institution, its systems, processes, etc. are continually being aligned to encourage and enable academics to improve what they are doing. The focus is on what the teacher does. This has strong connections with ideas of distributive leadership, the work of Fullan (2008) and Biggs (2001).

    For me implementing this stage means taking an approach more informed by complex adaptive systems, distributive leadership, libertarian paternalism, emergent/ateleological design and much more. This stage recognises that in many universities stage 1 doesn’t work any longer. Successful teaching now draws on too many people and skills for academics to do it all by themselves (if they ever did). However, that doesn’t mean that the freedom of academics to apply their insights and knowledge should be removed.

So, now I’ve expanded on those, time to connect these three ways with some other triads.

Connections with politics

The following table summarises what I see as the connections with the 3 stages of improving learning and teaching and the work of Thaler and Sunstein (2008).

  1. Conservative/republican == What the teacher is.
    i.e. the laissez-faire approach to teaching and learning. Academics are all too different, no one system or approach to teaching can work for us.
  2. Liberal/democrat == What management does.
    There are big problems with learning and teaching at universities that can only be solved by major projects led by management. Academics can’t be trusted to teach properly, so we need to put in place systems that mandate how they will teach and force them to comply.
  3. Libertarian paternalism == What the teacher does.
    The teaching environment (including the people, systems, processes, policies and everything else) within a university has all sorts of characteristics that influence academics to make good and bad decisions about how they teach. To improve teaching you need to make small and on-going changes to the characteristics of that environment so that the decisions academics are most likely to make will improve the quality of their teaching and learning. A particular focus should be on encouraging and enabling academics to reflect on their practice and take appropriate action.

Approaches to planning

This morning George Siemens pointed to this report (Baser and Morgan, 2008) and made particular mention of the following chart that compares assumptions between two different approaches to planning.

Comparison of assumptions in different approaches to planning (adapted from Baser and Morgan, 2008)
| Aspect | Traditional planning | Complex adaptive systems |
| --- | --- | --- |
| Source of direction | Often top down with inputs from partners | Depends on connections between the system agents |
| Objectives | Clear goals and structures | Emerging goals, plans and structures |
| Diversity | Values consensus | Expects tension and conflict |
| Role of variables | Few variables determine the outcome | Innumerable variables determine outcomes |
| Focus of attention | The whole is equal to the sum of the parts | The whole is different than the sum of the parts |
| Sense of the structure | Hierarchical | Interconnected web |
| Relationships | Important and directive | Determinant and empowering |
| Shadow system | Try to ignore and weaken | Accept; most mental models, legitimacy and motivation for action come out of this source |
| Measures of success | Efficiency and reliability are measures of value | Responsiveness to the environment is the measure of value |
| Paradox | Ignore or choose | Accept and work with paradox, counter-forces and tension |
| View on planning | Individual or system behaviour is knowable, predictable and controllable | Individual and system behaviour is unknowable, unpredictable and uncontrollable |
| Attitude to diversity and conflict | Drive for shared understanding and consensus | Diverse knowledge and particular viewpoints |
| Leadership | Strategy formulator and heroic leader | Facilitative and catalytic |
| Nature of direction | Control and direction from the top | Self-organisation emerging from the bottom |
| Control | Designed up front and then imposed from the centre | Gained through adaptation and self-organisation |
| History | Can be engineered in the present | Path dependent |
| External interventions | Direct | Indirect and helps create the conditions for emergence |
| Vision and planning | Detailed design and prediction. Needs to be explicit, clear and measurable. | A few simple explicit rules and some minimum specifications, leading to a strategy that is complex but implicit |
| Point of intervention | Design for large, integrated interventions | Where opportunities for change present themselves |
| Reaction to uncertainty | Try to control | Work with chaos |
| Effectiveness | Defines success as closing the gap with preferred future | Defines success as fit with the environment |

I was always going to like this table as it encapsulates, extends and improves my long-term thinking about how best to improve learning and teaching within universities. I long ago accepted (Jones, 2000; Jones et al, 2005) that universities are complex adaptive systems and that any attempt to treat them as ordered systems is doomed to failure.

I particularly liked the row on shadow systems as it corresponds with what some colleagues and I (Jones et al, 2004) suggested some time ago.

In terms of connections with the stages of improving learning and teaching,

  1. No planning == What the teacher is.
    i.e. there is no real organisational approach to planning how to improve learning and teaching. It’s all left up to the academic.

    Often “traditional planning” proponents will refer to the complex adaptive systems approach to planning as “no planning”. Or worse, they’ll raise the spectre of no control, no discipline or no governance over the complex adaptive systems planning approach. What they are referring to is actually the no planning stage. A CAS planning approach, done well, needs as much if not more discipline and “governance” as a traditional planning approach, done well.

  2. Traditional planning == What management does.
    University management (at least in Australia) is caught in this trap of trying to manage universities as if they were ordered systems. They are creating strategic plans, management plans, embarking on analysis and then design of large scale projects and measuring success by the completion of those projects, not on what they actually do to the organisation or the quality of learning and teaching.
  3. Complex adaptive systems == What the teacher does.
    The aim is to increase the quantity and quality of the connections between agents within the university. To harness the diversity inherent in a large group of academics to develop truly innovative and appropriate improvements. To be informed by everything in the complex adaptive systems column.

Orders of change

There also seem to be connections to yet another triad, described by Bartunek and Moch (1987) when they take the concept of schemata from cognitive science and apply it to organisational development. Schemata are organising frameworks or frames that are used (without thinking) to make decisions, i.e. you don’t make decisions about events alone; how you interpret them is guided by the schemata you are using. Schemata (Bartunek and Moch, 1987):

  • Help identify entities and specify relationships amongst them.
  • Act as data reduction devices as situations/entities are represented as belonging to a specific type of situation.
  • Guide people to pay attention to some aspects of the situation and to ignore others.
  • Guide how people understand or draw implications from actions or situations.

In moving from the cognition of individuals to organisations, the idea is that different organisations (and sub-parts thereof) develop organisational schemata that are sustained through myths, stories and metaphors. These organisational schemata guide how the organisation understands and responds to situations in much the same way as individual schemata, e.g. they influence what is important and what is not.

Bartunek and Moch (1987) then suggest that planned organisational change is aimed at trying to change organisational schemata. They propose that successful organisational change achieves one or more of three different orders of schematic change (Bartunek and Moch, 1987, p486):

  1. First-order change – the tacit reinforcement of present understandings.
  2. Second-order change – the conscious modification of present schemata in a particular direction.
  3. Third-order change – the training of organisational members to be aware of their present schemata and thereby more able to change these schemata as they see fit.

Hopefully, by now, you can see where the connections with the three stages of improving teaching and learning are going, i.e.

  1. First-order change == What the teacher is.
    Generally speaking, how teaching is understood by the academics doesn’t change. Their existing schemata are reinforced.
  2. Second-order change == What management does.
    Management choose a new direction and then lead a project that encourages/requires teaching academics to accept the new schema. When the next fad or the next set of management arrives, a new project is implemented and teaching academics once again have to accept a new schema. If you’re like me, then you question whether the academics are actually accepting this new schema or merely being seen to comply.

    The most obvious current example of this approach is the growing requirement for teaching academics to have formal teaching qualifications, i.e. by completing the formal teaching qualification they will change their schemata around teaching. Again, I question (along with some significant literature) the effectiveness of this.

  3. Third-order change == What the teacher does.
    The aim here is to have an organisational environment that encourages and enables individual academics to reflect on their current schemata around teaching and be able to change it as they see problems.

    From this perspective, I see the major problem within universities as being not that academics don’t have appropriate schemata to improve teaching, but that the environment within which they operate doesn’t encourage or enable them to implement, reflect on, or change their schemata.


I think there is a need for a 3rd way to improving learning and teaching within universities. It is not something that is easy to implement. The 2nd way of improving learning and teaching is so embedded into the assumptions of government and senior management that they are not even aware of (or at best not going to mention) the limitations of their current approach or that there exists a 3rd way.

Look down the “Traditional planning” column in the table above and you can see numerous examples of entrenched, “common-sense” perspectives that have to be overcome if the 3rd way is to become possible. For example, in terms of diversity and conflict, most organisational approaches place emphasis on consensus. Everyone has to be happy and reading from the same hymn sheet, “why can’t everyone just get along?”. The requirement to have a hero leader and hierarchical organisational structures are other “common-sense” perspectives.

Perhaps the most difficult aspect of implementing a 3rd way is that there is no “template” or set process to follow. There is no existing university that has publicly stated it is following the 3rd way. Hence, there’s no-one to copy. An institution would have to be first. Something that would require courage and insight. Not to mention that any attempt to implement a 3rd way should (for me) adopt an approach to planning based on the complex adaptive systems assumptions from the above table.


Baser, H. and P. Morgan (2008). Capacity, Change and Performance Study Report, European Centre for Development Policy Management: 166.

Bartunek, J. and M. Moch (1987). “First-order, second-order and third-order change and organization development interventions: A cognitive approach.” The Journal of Applied Behavioral Science 23(4): 483-500.

Biggs, J. (2001). “The Reflective Institution: Assuring and Enhancing the Quality of Teaching and Learning.” Higher Education 41(3): 221-238.

Fullan, M. (2008). The six secrets of change. San Francisco, CA, John Wiley and Sons.

Jones, D. (2000). Emergent development and the virtual university. Learning’2000. Roanoke, Virginia.

Jones, D., J. Luck, et al. (2005). The teleological brake on ICTs in open and distance learning. Conference of the Open and Distance Learning Association of Australia’2005, Adelaide.

Thaler, R. and C. Sunstein (2008). Nudge: Improving decisions about health, wealth and happiness. New York, Penguin.

Some thinking on analysing Webfuse usage

I’ve been back working on the PhD thesis, hopefully down to months before submission. At this stage, I need to work on the 2 chapters that reflect on the usage of Webfuse during two periods: 1996 through 1999 and 1999 through 2004 (and a bit later). In doing so, the main tasks I need to achieve are:

  • Show the difference in usage between the first and second stages.
  • Show how usage in the second stage is, hopefully, different and “better” than that reported elsewhere.

Now, I could do this in the same way I’ve been doing it in the past: an ad hoc collection of Perl scripts and spreadsheets. The benefit of this approach is that most of them already exist. The drawback, however, is that this makes it more difficult to compare usage with other systems. This is a problem that the indicators project is attempting to address. It might make sense to start thinking about something a little more platform independent.

The following is an attempt to think about what I might be able to do that would progress both the PhD and the indicators project. It eventually morphs into some rough initial design ideas of how I can implement something as a trial run.

Indicators in a box

A major aim of the Indicators Project is to enable and engage in cross-platform, cross-institutional, longitudinal comparisons of LMS usage. The “indicators in a box” idea is one we’ve been talking about since at least November 2009. Here’s an attempt to give one description of what it might be.

The indicators in a box is a zip file containing an application (probably a set of applications) that help an individual or institution examine LMS usage regardless of their LMS or institution. In a perfect world, you would:

  • Download the zip file and unpack it on your system.
    It would probably be a PHP web application so that it is broadly platform independent, simple enough for just about anyone to run, and has an easy-to-use and useful interface.
  • Configure the indicators in a box for your context.
    i.e. tell it which LMS you are using and where the usage data from the LMS is sitting (typically, which type of database, and details of how to get a connection to it).
  • Configure additional information sources.
    At the very least it would probably be useful to have some information about the students, courses, terms/semesters used at your institution. Some of this data might be put in configuration files, some in a database.
  • Choose which analysis you wish to do.
    Eventually there would be a broad collection of different analyses and comparisons to perform. You select the ones you want to implement.
  • Wait for the conversion and preparation.
    It’s likely, though not certain, that the indicators in a box would rely on a set of scripts that are abstracted away from the specific details of an LMS, i.e. the data in your LMS usage database would need to be converted into another database so that comparisons could be made. There are some drawbacks of this approach and some possible alternatives. The types of analysis you want to do would drive what conversions are done.
  • Use the indicators in a box user interface to view the results of the analysis.
    At this stage, you should be able to use the web-based interface to view various analyses of the usage data and perhaps compare it with other institutions etc. This is where graphs like this might get displayed.
  • Share the results.
    The real aim is to allow you to specify what you want to share with others. i.e. an aim is to allow different folk using the indicators to share their results, to enable more research.
  • Write your own analysis code and share it.
    The aim of having a cross-platform foundation is that anyone could write their analysis code that could be shared with the other folk using the platform.
  • Incorporate the generated patterns into your LMS.
    The idea is that academic staff and students need to be able to use the analysis and resultant patterns to inform what they do. They do things within the LMS. So, there needs to be a way to include the patterns/analysis into the LMS in a simple and visible way.
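As a concrete (and entirely hypothetical) illustration of the configuration steps above, the install might be driven by something like the following. Every key, value and file name here is an assumption for the sake of the sketch; the project has not settled on any format.

```python
# Hypothetical configuration for an "indicators in a box" install.
# All keys, values and file names are illustrative assumptions.
config = {
    "lms": {
        "type": "moodle",            # which LMS the usage data comes from
        "db": {
            "driver": "postgresql",  # type of database holding the data
            "host": "localhost",
            "name": "moodle_logs",
        },
    },
    "extra_sources": {
        # additional institutional data: students, courses, terms
        "terms": "terms.csv",
        "enrolments": "enrolments.csv",
    },
    # which analyses/comparisons to run
    "analyses": ["feature_adoption", "staff_student_usage"],
    # what the institution is willing to share with other users
    "share": {"aggregates": True, "raw_logs": False},
}

def validate(cfg):
    """Basic sanity check before any conversion is attempted."""
    required = {"lms", "analyses"}
    missing = required - cfg.keys()
    if missing:
        raise ValueError(f"missing config sections: {sorted(missing)}")
    return True

validate(config)
```

The point of the sketch is simply that the LMS-specific details, the extra institutional data sources, the chosen analyses and the sharing decisions are all declared up front, so everything downstream can be LMS-independent.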

This is still early days. Lots of options in the above. But the basic aim is to have something that is easy to install and which will start generating useful stats and allowing all the people using it to share the data and the analysis in appropriate ways.

What do I need to do?

This should be needs driven. I need to focus now on what I need for the PhD and hope it can be abstracted later.

In terms of comparison between the two stages, I am interested in seeing changes in:

  • Percentage of staff and students using Webfuse.
  • Overall amount of Webfuse usage by staff and students.
  • Feature usage within Webfuse course sites.

The first two could be viewed as total counts and counts by types of feature. Which links to the feature usage within course sites.

Currently, chapter 4 of the thesis has the following table for feature usage by course sites in Webfuse 1997-1999. Like the Indicators ascilite paper, this categorisation depends on the work of Malikowski et al (2007).

| Category | Malikowski et al % | 1997 | 1998 | 1999 |
| --- | --- | --- | --- | --- |
| Transmitting content | >50% | 45% | 40.6% | 41.2% |
| Class interactions | 20-50% | 1.8% | 3.6% | 7.9% |
| Evaluating students | 20-50% | 1.8% | 1.5% | 2.6% |
| Evaluating course and instructors | <20% | 9.2% | 1% | 9.5% |

Feature usage calculation

This seems to suggest a need to:

  1. Identify a way of categorising LMS features – such as Malikowski et al (2007) – and mapping the LMS functionality into that feature set.
    Malikowski doesn’t include class management. Also, in a Webfuse context, the idea of tracking page updates would be useful. This implies some flexibility in the feature categorisation, e.g. page updates might form a new category – course design and updating. It would be interesting to track the amount of time academics have to spend creating a course site over time. I suspect the more times they teach a course, the less they edit it.
  2. Calculating the percentage of a course site features which belong to each feature category.
  3. Calculating the number of times each feature is used by a student and staff member.
  4. Being able to examine some usage by date/time periods.
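Step 2 above – the percentage of a course site’s features belonging to each category – is a small enough calculation to sketch. The feature names and the scheme below are my own hypothetical examples, loosely based on the Malikowski et al categories with the “course design” category suggested above added.

```python
from collections import Counter

def category_percentages(features, scheme):
    """Map each feature in a course site to a category under the given
    scheme, then return the percentage of features in each category.
    `features` is a list of feature names; `scheme` maps name -> category."""
    counts = Counter(scheme.get(f, "uncategorised") for f in features)
    total = sum(counts.values())
    return {cat: round(100 * n / total, 1) for cat, n in counts.items()}

# Hypothetical scheme, loosely based on Malikowski et al (2007),
# plus the suggested "course design" category.
scheme = {
    "lecture_notes": "transmitting content",
    "study_guide": "transmitting content",
    "discussion_board": "class interactions",
    "quiz": "evaluating students",
    "page_update": "course design",
}

site = ["lecture_notes", "study_guide", "discussion_board", "page_update"]
result = category_percentages(site, scheme)
```

Because the scheme is passed in as data, swapping in a different categorisation (step 1) means no change to the calculation itself.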

Overall statistics

  • Total number of courses sites for a given period.
  • Total number of students and total number per course.
  • Total number of staff and total per course.

This implies a need for data from outside the LMS. Also a way of specifying term/semester/period and grouping/recognising course offerings by that term/period/semester. It also starts leading into generic demographic information about the students and perhaps the courses.
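The grouping of usage by course offering might look something like the sketch below, where the list of valid offerings comes from outside the LMS (e.g. the student records system). The course code and record shapes are made up for illustration.

```python
from collections import defaultdict

def usage_by_offering(usage_records, offerings):
    """Group raw usage records by course offering (course, period, year).
    `offerings` is external institutional data; usage records only carry
    a course code, a period and a year."""
    totals = defaultdict(int)
    for rec in usage_records:
        key = (rec["course"], rec["period"], rec["year"])
        if key in offerings:  # ignore hits outside known offerings
            totals[key] += 1
    return dict(totals)

# Hypothetical offerings and usage records.
offerings = {("COIT11133", "T1", 1998), ("COIT11133", "T2", 1998)}
usage = [
    {"course": "COIT11133", "period": "T1", "year": 1998},
    {"course": "COIT11133", "period": "T1", "year": 1998},
    {"course": "COIT11133", "period": "T2", "year": 1998},
]
totals = usage_by_offering(usage, offerings)
```

The same join against external data would be the natural place to hang student demographics off later.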

Design of feature usage

In order to track usage of features by staff/students it would be necessary to have a list of when they access features, who they are etc. Something like:

  • username – unique id for the user
  • feature id – unique id for the feature
  • descriptor – might be the URL from the system or some other descriptor. A connection back to the platform dependent data to help in debugging and understanding.
  • date time
  • course
  • period
  • year

The feature id would be some unique id that connects to a feature categorisation table (as well as other things?):

  • feature id – link back to feature usage
  • feature category id

Could also link to a category_descriptor table:

  • feature category id
  • category name
  • category scheme

The idea of all of this is that you can choose to perform the feature analysis using one of a number of different categorisation schemes.
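A minimal sketch of the three tables described above, using SQLite for brevity. The column names are my own rendering of the informal field lists; nothing here is a committed schema.

```python
import sqlite3

# Sketch of the three tables described above; column names are assumptions.
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE feature_usage (
    username   TEXT,     -- unique id for the user
    feature_id INTEGER,  -- unique id for the feature
    descriptor TEXT,     -- e.g. the original URL, for debugging
    date_time  TEXT,
    course     TEXT,
    period     TEXT,
    year       INTEGER
);
CREATE TABLE feature_category (
    feature_id          INTEGER,  -- links back to feature_usage
    feature_category_id INTEGER
);
CREATE TABLE category_descriptor (
    feature_category_id INTEGER,
    category_name       TEXT,
    category_scheme     TEXT      -- e.g. 'malikowski'
);
""")

# A feature can appear under several schemes via feature_category rows,
# which is what enables analysis under different categorisation schemes.
conn.execute("INSERT INTO feature_category VALUES (1, 10)")
conn.execute(
    "INSERT INTO category_descriptor VALUES (10, 'class interactions', 'malikowski')")
conn.execute(
    "INSERT INTO feature_usage VALUES "
    "('s123', 1, '/forum/view', '1998-03-01', 'COIT11133', 'T1', 1998)")

row = conn.execute("""
    SELECT cd.category_name
    FROM feature_usage fu
    JOIN feature_category fc ON fu.feature_id = fc.feature_id
    JOIN category_descriptor cd
      ON fc.feature_category_id = cd.feature_category_id
    WHERE cd.category_scheme = 'malikowski'
""").fetchone()
```

Running the same query with a different `category_scheme` value is all it takes to re-analyse the usage data under another categorisation.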

Implementation with Webfuse

I have two main sources of data about Webfuse:

  • The Webfuse course sites; and
    These are the actual files/directories from the web server – what is shown to and used by students/staff. This includes information about the type of feature a particular page is.
  • server logs.
    These contain who accessed what and when. Though the “who” is often anonymous, because most of the Webfuse course sites were not restricted.

To take the Webfuse data I have and put it into a set of tables like that described above, I would have to:

  • Come up with some way to categorise every URL in the server logs under a category scheme.
  • Read all the entries in the server logs, apply the categorise function to each entry, and then populate the usage table.

At this stage, other functions (mostly web-based) can be run on the resulting data to do the analysis.
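The two conversion steps above might be sketched as follows. The URL patterns and the log line are invented examples (the real categorisation of every Webfuse URL would be far larger), and the log format assumed is the standard Apache Common Log Format.

```python
import re

# Hypothetical mapping of URL patterns to feature categories.
URL_CATEGORIES = [
    (re.compile(r"/discussion|/forum"), "class interactions"),
    (re.compile(r"/quiz"),              "evaluating students"),
    (re.compile(r"\.html?$"),           "transmitting content"),
]

# Apache Common Log Format: host ident user [time] "request" status bytes
LOG_RE = re.compile(r'^(\S+) \S+ (\S+) \[([^\]]+)\] "(?:GET|POST) (\S+)')

def categorise(url):
    """Return the first matching category for a URL."""
    for pattern, category in URL_CATEGORIES:
        if pattern.search(url):
            return category
    return "uncategorised"

def parse_entry(line):
    """Turn one server-log line into a usage record, or None."""
    m = LOG_RE.match(line)
    if not m:
        return None
    host, user, when, url = m.groups()
    return {"username": user,  # '-' when the access was anonymous
            "descriptor": url,
            "category": categorise(url),
            "date_time": when}

line = ('1.2.3.4 - - [01/Mar/1998:10:00:00 +1000] '
        '"GET /units/81120/forum HTTP/1.0" 200 512')
record = parse_entry(line)
```

Each record produced this way maps directly onto a row of the feature usage table, with the original URL kept in `descriptor` as the connection back to the platform-dependent data.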

mmmm, something to do.


Malikowski, S., M. Thompson, et al. (2007). “A model for research into course management systems: bridging technology and learning theory.” Journal of Educational Computing Research 36(2): 149-173.

External factors associated with CMS adoption

This post follows on from a previous post and continues an examination of some papers written by Malikowski and colleagues examining the adoption of features of an LMS/VLE/CMS. This one focuses on the 2006 paper.

External factors associated with CMS adoption

The abstract of this paper (Malikowski, Thompson and Theis, 2006) is:

Course management systems (CMSs) have become a common resource for resident courses at colleges and universities. Researchers have analyzed which CMS features faculty members use most primarily by asking them which features are used. The study described builds on previous research by counting the number of CMS features a faculty member used and by analyzing how three external factors are related to the use of CMS features. The external factors are (a) the college in which a course was offered, (b) class size, and (c) the level of a class—such as 100 or 200. The only external factor showing a statistically significant relationship to the use of CMS features was the college in which a course was offered. Another finding was that CMSs are primarily used to transmit information to students. Implications are described for using external factors to increase effective use of more complex CMS features.

Implication: repeat this analysis with the Webfuse and Blackboard courses at CQU. We can do this automatically for a range of external factors beyond those.

Point of the research

Echoing what was said in the 2008 paper, and one reason I am interested in this work:

Faculty members often receive help in using a CMS (Arabasz, Pirani, & Fawcett, 2003; Grant, 2004). This help typically comes from professionals who focus on instructional design or technology. Information about which features are used most could provide these professionals with a starting point for promoting effective use of more complex CMS features. Information about how external factors influence use could identify situations in which more complex features can be successfully promoted.

Prior research

Points of difference between this research and prior work are listed as:

  • Few studies have focused on the use of the CMS in resident courses.
  • Generated from surveys.
  • Morgan’s suggestion that faculty use more features over time may only be partially correct.
  • Only one study used a statistical analysis.
  • Previous studies analyse usage for all staff or for a broad array of staff – focusing on a few factors might be a contribution.
  • Lastly, it adds to the research by including an examination of how people learn.

    Currently, research into CMS use has considered CMS features, opinions from teachers about these features, and student satisfaction with CMS features. Gagné, Briggs, and Wager summarize the importance of considering both learning psychology and technology, which they refer to as “media.” They emphasize “the primacy of selecting media based on their effectiveness in supporting the learning process. Every neglect of the consideration of how learning takes place may be expected to result in a weaker procedure and poorer learning results.” (Gagné, Briggs, & Wager, 1992, p. 221). For decades, researchers have studied how teaching methods affect learning outcomes. Several recent publications describe seminal research findings, research that has built on these findings, and learning theories that have emerged from this research (Driscoll, 2005; Gagné et al., 1992; Jonassen & Association for Educational Communications and Technology, 2004; Reigeluth, 1999).

    That is as may be. But given my suspicion that most academics don’t really make rational judgements about how they teach based on educational literature – would such an analysis be misleading and pointless?

They argue that the model from Malikowski, Thompson and Theis (2007) is what they use here and that it combines both features and theory and can be used to synthesise research.


Interestingly, they have a spiel about causation and relationship

An important point to clarify is that the method applied in this study was not intended to determine if external factors caused the use of CMS features. Identifying causation is an important but particularly challenging research goal (Fraenkel &Wallen, 1990). Instead, the current method and study only sought to determine if significant relationships existed between external factors and the adoption of specific CMS features.

Looks like basically the same methodology and perhaps the same data set as the 2008 paper. They do note some problems with the manual checking of course sites:

This analysis was a labor intensive process of viewing a D2L Web site for a particular course and completing a copy of the data collection form, by counting how often features in D2L were used. In some cases, members of the research team counted thousands of items.


The definition of adoption used is different than that in the 2008 paper

In this study, a faculty member was considered to have adopted a feature if at least one instance of the feature was present. For example, if a faculty member had one quiz question for students, that faculty member was considered as having adopted the quiz feature.

Only 3 of the 13 LMS features available were used by more than half the faculty – grade book, news/announcements, content files. Also the only 3 features where the percentage of adoption was greater than the standard deviation.
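This binary definition is simple enough to sketch in a few lines. The feature names and counts below are made up for illustration, not taken from the paper:

```python
# Binary adoption (Malikowski et al): a feature counts as "adopted" if at
# least one instance of it exists in the course site.
# Feature names and counts here are hypothetical.
def adopted_features(counts):
    """Return the set of features with one or more instances."""
    return {feature for feature, n in counts.items() if n >= 1}

site = {"quiz": 1, "grade_book": 0, "news": 12, "content_files": 3}
# A single quiz question is enough for the quiz feature to count as adopted.
print(sorted(adopted_features(site)))
```

Under this definition a course with one quiz question and a course with two hundred are counted identically, which is exactly why the choice of definition matters so much for the reported percentages.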

Implication: A comparison of Webfuse usage using different definitions of adoption could be interesting as part of a way to explore what would make sense as a definition of adoption.

In some cases STDDEV was twice as large as the percentage of faculty members using a feature.

They include the following pie chart that is meant to use the model from Malikowski et al (2007). But I can’t, for the life of me, figure out how they get to it.

Categories of CMS Features

Found that only the college (discipline) could be said to be the only external factor that was a significant predictor of feature usage.


Raises the question of norms and traditions within disciplines driving CMS feature adoption. I’m amazed more isn’t made of these being residential courses. This might play a role.

Implication: It might be argued that norms and tradition are more than just discipline based. I would argue that at CQU, when it comes to online learning that there were three main traditions based on faculty structures from the late 1990s through to early noughties:

  1. Business and Law – some courses with AIC students, very different approach to distance education and also online learning. Had a very strong faculty-based set of support folk around L&T and IT.
  2. Infocom – similar to Business and Law in terms of AIC courses and distance education. But infected by Webfuse and similar to BusLaw had a strong faculty-based set of support folk around L&T and IT.
  3. Others – essentially education, science, engineering and health. Next to no AIC students. Some had no distance education. No strong set of faculty-based support folk around IT and L&T. Though education did have some.

Would be interesting to follow/investigate these norms and traditions and how that translated to e-learning. Especially since the faculty restructure around 2004/2005 meant there was a mixing of the cultures. BusLaw and large parts of Infocom merged. Parts of Infocom merged with education and arts….


Study involved 81 faculty members, as opposed to 862, 730, 192 and 191 in other studies. The argument is that those other studies used surveys, not the more resource-intensive approach used by this work.

They recognise the problem of change

The current study analyzed CMS Web sites when they were on a live server. The limitation in this case is that a faculty member can change a Web site while it is being analyzed. Fortunately, the university at which this study occurred has faculty members create a different CMS Web site each time a course is offered.


Malikowski, S., Thompson, M., & Theis, J. (2006). External factors associated with adopting a CMS in resident college courses. Internet and Higher Education, 9(3), 163-174.

Malikowski, S., Thompson, M., & Theis, J. (2007). A model for research into course management systems: bridging technology and learning theory. Journal of Educational Computing Research, 36(2), 149-173.

Malikowski, S. (2008). Factors related to breadth of use in course management systems. Internet and Higher Education, 11(2), 81-86.

Factors related to the breadth of use of LMS/VLE features

As a step towards thinking about how you judge the success of an LMS/VLE, this post looks at some work done by Steven Malikowski. Why his work? Well he is co-author on three journal papers that provide one perspective on the usage of features of an LMS, including one that proposes a model for research into course management systems. A list of the papers in the references section.

This post focuses on looking at the 2008 paper. On the whole, there seems to be a fair bit of space for research to extend and improve on this work.

Factors related to breadth of use

The abstract of this paper (Malikowski, 2008) is

A unique resource in course management systems (CMSs) is that they offer faculty members convenient access to a variety of integrated features. Some features allow faculty members to provide information to students, and others allow students to interact with each other or a computer. This diverse set of features can be used to help meet the variety of learning goals that are part of college classes. Currently, most CMS research has analyzed how and why individual CMS features are used, instead of analyzing how and why multiple features are used. The study described here reports how and why faculty members use multiple CMS features, in resident college classes. Results show that nearly half of faculty members use one feature or less. Those who use multiple features are significantly more likely to have experience with interactive technologies. Implications for using and encouraging the use of multiple CMS features are provided.

Suggests that cognitive psychology is the theoretical framework used. In particular, the idea that there are discrete categories of learning goals ranging from simple to complex, and that learners who don't master the simple first will have difficulties if they attempt the more complex. An analogy is made with the use of a CMS: there are simple features that need to be learned before using complex features.

In explaining previous research on adoption of features of an LMS (mostly his own quantitative evaluations) the author reports that the college/discipline an academic is in explains most variation.

How to use these findings

The point is made that a CMS is used to transmit information more than twice as much as it is used for anything else. Also, that there are cheaper and better ways to transmit information.

The suggestion is then made that

Instructional designers, researchers, and others interested in increasing effective CMS use can use the research just summarized to emphasize factors that are related to the use of uncommon CMS features and deemphasize factors that are not related to increased use.

But the best advice presented is that if you wish to promote use of X, then encourage it in discipline Y first, since they have shown interest in related features. Then, after generating insight, seek to take it elsewhere…?

Use of multiple features

Only a small number of studies have focused on the use of multiple features, and most did so by asking academics how they use the CMS. A second way suggested is to visit course sites and observe which features are used, on the basis that observing behaviour is more accurate than asking people how they behave.

Implication: the approach Col and Ken are using for Blackboard and what I’m using for Webfuse is automated. Not manual. A point of departure.


Three bits of data were used

  1. Usage of 6 common CMS features
    • Random sample of 200 staff at US institution using D2L were asked to participate – 81 chose to participate.
    • 154 D2L sites were analysed as staff teach more than one course a semester
    • 2 research team members visited and manually analysed each course site – repeating until no discrepancies.
  2. External factors: class size, the college/discipline and class level (1st, 2nd year etc)
    Gathered manually from the course site.
  3. 10 internal factors focused primarily on the faculty members’ previous experience with technology.
    Gathered by surveying staff.

Limitation: I wonder if D2L has any adaptive release mechanisms like Blackboard. Potentially, if the team member visiting each course site has an incorrectly configured user account, they may not be able to see everything within the site.

The purpose was to determine if internal or external factors were related to the adoption of multiple CMS features. This was established using a regression analysis with the dependent variable being the number of features adopted and the independent variables being the 3 external and 10 internal factors.
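As a rough sketch of that regression setup, and nothing more: the data below is fabricated, the factor names are placeholders, and only the shape of the analysis (number of features adopted regressed on external and internal factors) follows the paper:

```python
import numpy as np

# Hypothetical illustration of the study's regression design:
#   dependent variable  = number of CMS features adopted (0..6)
#   independent variables = external factors (e.g. class size) and
#                           internal factors (e.g. prior technology experience)
# All values below are made up.
rng = np.random.default_rng(0)
n = 81                                      # participants in the 2008 study
class_size = rng.integers(10, 200, n)       # an "external" factor
tech_experience = rng.integers(0, 10, n)    # an "internal" factor
features_adopted = rng.integers(0, 7, n)    # dependent variable

# Ordinary least squares via the normal equations.
X = np.column_stack([np.ones(n), class_size, tech_experience])
coef, *_ = np.linalg.lstsq(X, features_adopted, rcond=None)
print("intercept and slopes:", coef)
```

The real study would then test each coefficient for significance; with this fabricated data the coefficients are meaningless, the point is only the structure of the model.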

What is adoption?

This is a problem Col and I have talked about and which I’ve mentioned in some early posts looking at Webfuse usage. The definition Malikowski used in this study was

In this study, adopting a feature was defined as a situation where a D2L Web site contained enough instances of a feature so this use was at or above the 25th percentile, for a particular feature. For example, if a faculty member created a D2L Web site with 10 grade book entries, the grade book feature would have been adopted in this Web site, since the 25th percentile rank for the grade book feature is 7.00. However, if the same Web site contained 10 quiz questions, the quiz feature would not have been adopted since the 25th percentile rank for quiz questions is 12.25.

I find this approach troubling. Excluding a course from adopting the quiz feature because it has only 10 questions seems harsh. What if the 10 questions were used for an important in-class test and were a key component of the course? What if a few courses have added all of the quiz questions provided with the textbook into the system?

Implication: There’s an opportunity to develop and argue for a different – better – approach to defining adoption.
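For comparison purposes, the percentile-based definition can be sketched as below. The two thresholds (7.00 for grade book entries, 12.25 for quiz questions) come straight from the quoted example; everything else is illustrative:

```python
# Percentile-based adoption (Malikowski, 2008): a feature is adopted only
# if its count is at or above the 25th percentile rank for that feature.
# The thresholds below are the two given in the paper's example; the
# course data is made up.
THRESHOLDS = {"grade_book": 7.00, "quiz": 12.25}

def adopted(feature, count, thresholds=THRESHOLDS):
    """True if the count meets the feature's 25th-percentile threshold."""
    return count >= thresholds[feature]

print(adopted("grade_book", 10))  # 10 entries >= 7.00, so adopted
print(adopted("quiz", 10))        # 10 questions < 12.25, so not adopted
```

Note that the thresholds are themselves derived from the observed distribution of use, so a course's adoption status depends on what every other course did, which is part of what makes the definition feel arbitrary.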

Sample of results

  • 36% of sites used only 1 feature
  • 72% of sites used 2 or less features
  • 0% of sites used all 6 features
  • Only four of the external/internal factors could be used to predict the number of CMS features adopted
    1. Using quizzes
    2. College of social science
    3. Using asynchronous discussions
    4. Using presentation software (negative correlation)


Suggests that the factors found to predict multiple feature use can be used to guide instructional designers to work with these faculty to determine what works before going to the others.

Limitation: I don’t find this a very convincing argument. I start to think of the technologists’ alliance and the difference between early adopters and the majority. The folk using multiple LMS features are likely to be very different from those not using many. Focusing too much on those already using many might lead to the development of insight that is inappropriate for the other category of user.

Implication: There seems to be some research opportunities that focuses on identifying the differences between these groups of users by actually asking them. i.e. break academics into groups based on feature usage and talk with them or ask them questions designed to bring out differences. Perhaps to test whether they are early adopters or not.


Malikowski, S., Thompson, M., & Theis, J. (2006). External factors associated with adopting a CMS in resident college courses. Internet and Higher Education, 9(3), 163-174.

Malikowski, S., Thompson, M., & Theis, J. (2007). A model for research into course management systems: bridging technology and learning theory. Journal of Educational Computing Research, 36(2), 149-173.

Malikowski, S. (2008). Factors related to breadth of use in course management systems. Internet and Higher Education, 11(2), 81-86.

How do you measure success with institutional use of an LMS/VLE?

My PhD is essentially arguing that most institutional approaches to e-learning within higher education (i.e. the adoption and long term use of an LMS) have some significant flaws. The thesis will/does describe one attempt to formulate an approach that is better. (Aside: I will not claim that the approach is the best; in fact I’ll argue that the notion of there being “one best way” to support e-learning within a university is false.) The idea of “better” raises an interesting/important question: “How do you measure success with institutional use of an LMS?” How do you know if one approach is better than another?

These questions are important for other reasons. For example, my current institution is currently implementing Moodle as its new LMS. During the selection and implementation of Moodle there have been all sorts of claims about its impact on learning and teaching. During this implementation process, management have also been making all sorts of decisions about how Moodle should be used and supported (many of which I disagree with strongly). How will we know if those claims are fulfilled? How will we know if those plans have worked? How will we know if we have to try something different? In the absence of any decent information about how the institutional use of the LMS is going, how can an organisation and its management make informed decisions?

This question is of increasing interest to me for a variety of reasons, but the main one is the PhD. I have to argue in the PhD and resulting publications that the approach described in my thesis is in some way better than other approaches. Other reasons include the work Col and Ken are doing on the indicators project and obviously my beliefs about what the institution is doing. Arguably, it’s within the responsibilities of my current role to engage in some thinking about this.

This post, and potentially a sequence of posts after, is an attempt to start thinking about this question. To flag an interest and start sharing thoughts.

At the moment, I plan to engage with the following bits of literature:

  • Malikowski et al and related CMS literature.
    See the references section below for more information. But there is an existing collection of literature specific to the usage of course management systems.
  • Information systems success literature.
    My original discipline of information systems has, not surprisingly, a large collection of literature on how to evaluate the success of information systems. Some colleagues and I have used bits of this literature in some publications (see references).
  • Broader education and general evaluation literature.
    The previous two bodies of literature tend to focus on “system use” as the main indicator of success. There is a lot of literature around the evaluation of learning and teaching, including some arising from work done at CQU. This will need to be looked at.

Any suggestions for other places to look? Other sources of inspiration?

Why the focus on use?

Two of the three areas of literature mentioned above draw heavily on the level of use of a system in order to judge its success. Obviously, this is not the only measure of success and may not even be the best one. Though the notion of “best” is very subjective and depends on purpose.

The advantage that use brings is that measuring it can, to a large extent, be automated. It is comparatively easy to generate information about levels of “success” that are, at least to some extent, better than having nothing.

At the moment, most universities have nothing to guide their decision making. Changing this by providing something is going to be difficult. After all, providing the information is reasonably straightforward. Changing the mindset and processes at an institution so that these results are taken into account when making decisions is the hard part…

Choosing a simple first step, recognising its limitations, and then hopefully adding better measures as time progresses is a much more effective and efficient approach. It enables learning to occur during the process and also means that if priorities or the context change, you lose less because you haven’t invested the same level of resources.

In line with this is that the combination of Col’s and Ken’s work on the indicators project and my work associated with my PhD provides us with the opportunity to do some comparisons of two different systems/approaches within the same university. This sounds like a good chance to leverage existing work into new opportunities and develop some propositions about what works around the use of an LMS and what doesn’t.

Lastly, there are some good references that suggest that looking at use of these systems is a good first step. e.g. Coates et al (2005) suggest that it is the uptake and use of features, rather than their provision, that really determines their educational value.


Behrens, S., Jamieson, K., Jones, D., & Cranston, M. (2005). Predicting system success using the Technology Acceptance Model: A case study. Paper presented at the Australasian Conference on Information Systems’2005, Sydney.

Coates, H., James, R., & Baldwin, G. (2005). A Critical Examination of the Effects of Learning Management Systems on University Teaching and Learning. Tertiary Education and Management, 11(1), 19-36.

Jones, D., Cranston, M., Behrens, S., & Jamieson, K. (2005). What makes ICT implementation successful: A case study of online assignment submission. Paper presented at the ODLAA’2005, Adelaide.

Malikowski, S., Thompson, M., & Theis, J. (2006). External factors associated with adopting a CMS in resident college courses. Internet and Higher Education, 9(3), 163-174.

Malikowski, S., Thompson, M., & Theis, J. (2007). A model for research into course management systems: bridging technology and learning theory. Journal of Educational Computing Research, 36(2), 149-173.

Malikowski, S. (2008). Factors related to breadth of use in course management systems. Internet and Higher Education, 11(2), 81-86.

PhD Update #19 – Falling just a little short

I’ve fallen a bit short of what I wanted to achieve this week; however, overall I’m feeling pretty good about progress. In particular, because some of the initial evaluation results point to the “Webfuse way” having some quantitative benefits. Also, if the work in the last week gets the okay from the esteemed supervisor it should make completing chapter 5 pretty straightforward.

What I’ve done

In the last update I said I would get a draft of chapter 4 complete and off to the supervisor.

Well, that didn’t happen but I’m just about there. Here’s a breakdown of progress on chapter 4 so far:

  • Introduction (4.1) and problem definition (4.2) – done.
    The e-learning@CQU post this week was the last part of section 4.2.
  • Intervention (4.3) – done
    This is where I spent most of this week and it’s covered in these posts: an early section on why build a system and design guidelines, and three posts on the design and implementation of Webfuse (1, 2 and 3)
  • Evaluation (4.4) – basically done.
    I’ve spent the last couple of days thinking about, preparing and doing some initial evaluation of the use of Webfuse from 1996 through 1999. Explained somewhat in three posts: early thinking, some more thinking and some early results and results of evaluation for 2006-2009.

    This is taking a bit longer because I’m essentially establishing the evaluation process that I’ll use in both chapters 4 and 5. I’m also having to grab the necessary archives so I can perform the evaluations.

    The structure of this section is essentially complete, as is the content. I’m waiting to see if I can get some data for a couple of years.

  • Reflection and learning (4.5) — nothing done yet.

What I’ll do next week

The overall aim remains pretty much the same as for last week:

  • Get a draft of chapter 4 off to the supervisor.
    This means completing the rough outline of section 4.4 – the evaluation – and leaving some holes for the data I don’t have access to at the moment. The main task will be converting existing descriptions of the ISDT from the Walls et al model to the Gregor and Jones one.
  • Get re-started on the remaining components of the Ps Framework – chapter 2.

Some early results from Webfuse evaluation

The following contains some early results from the evaluation of Webfuse course sites as mentioned in the last post. The aim is to get a rough initial feel for how the course sites created with Webfuse in the late 90s and early 00s stack up using the framework produced by Malikowski et al (2007). As opposed to other PhD work, this is a case of “showing the working”.

How many page types?

First, let’s see how many page types were used each year. The following table summarises the total number of pages and number of different page types (in some years there were page types with different names that had only very slightly different functionality – the following stats are rough and don’t take that into account) used in each year.

Year # pages # page Types
1999 4376 27
2000 3058 39
2001 1155 23
2002 9099 42
2003 9302 40

Can see from the above that the number of pages managed in Webfuse drops significantly from 2000 to 2001. 2001 is when the new default course site structure was put in place and when (I think) the courses 85321 and 85349 (which I taught) stopped including the archives of previous offerings. Check this. May need to look at excluding some of these from consideration.

During this time there were some page types which had different names, so would be counted more than once in the above, but were essentially the same. Count the same page types once.

I have to save the commands to do this somewhere, may as well do it here

find . -name CONTENT -exec grep PageType {} \; > all.pageTypes
sed -e '1,$s//1/' all.pageTypes | sort | uniq -c > all.pageTypes.count

Calculate the percentage of page type usage per framework

The next step is a simple calculation: allocate each page type to one of the categories of the Malikowski et al (2007) framework and show the percentage of the pages managed by Webfuse that fall into each category. This isn't exactly what Malikowski et al (2007) count; they count the percentage of courses that use features in each category.

The Malikowski et al (2007) framework includes the following categories:

  • transmitting content;
  • creating class interactions;
  • evaluating students;
  • evaluating course and instructors;
  • computer based instruction.
    Not included – there are no Webfuse page types that provide functionality that fits with this category.

The following table shows the percentage of pages managed by Webfuse that fall into each category per year. It’s fairly obvious from the first year done – 1999, and confirmed with the second, that this approach doesn’t really say a lot. Time to move on.

Category 1999 2000 2001 2002 2003
Transmitting content 97.5% 84.5%
Class interactions 1.9% 13.5%
Evaluating students 0.1% 1.5%
Evaluating course 0.5% 0.6%
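The calculation behind the table is trivial to sketch. The page-type names and the mapping below are hypothetical stand-ins, not the actual Webfuse page types or their real category assignments:

```python
from collections import Counter

# Hypothetical mapping of page types to Malikowski et al (2007) categories.
# Real names and assignments would come from the Webfuse CONTENT files.
CATEGORY = {
    "Lecture": "transmitting content",
    "StudyGuide": "transmitting content",
    "WebBBoard": "creating class interactions",
    "Quiz": "evaluating students",
    "Barometer": "evaluating course and instructors",
}

def category_percentages(page_type_counts):
    """Percentage of all pages falling into each Malikowski category."""
    totals = Counter()
    for page_type, n in page_type_counts.items():
        totals[CATEGORY[page_type]] += n
    all_pages = sum(totals.values())
    return {cat: 100.0 * n / all_pages for cat, n in totals.items()}

print(category_percentages({"Lecture": 950, "WebBBoard": 19, "Quiz": 1}))
```

With made-up counts like these, "transmitting content" dominates in exactly the way the 1999 column above does, which is part of why the per-page percentages say so little.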

Calculate the % of courses using each category

In this stage I need to:

  • Count the number of courses in each year.
  • Count the % of courses that have features of each category.

Technically, all of these courses will have features for transmitting content, so that category will be 100% for every year and I've not included it. Need to recheck the Malikowski definition.

Also, 2001 seems to be missing a couple of the main terms, so it’s had to be excluded – for now. See if the missing terms can be retrieved.

Category 1999 2000 2002 2003
Number of course sites 190 175 315 309
Class interactions 7.9% 43.5% 11% 66.6%
Evaluating students 2.6% 6.3% 12% 21.7%
Evaluating course 9.5% 7.5% 14% 91.6%

Commands I used to generate the above

find aut2000 spr2000 win2000 -name CONTENT -exec grep -H PageType {} \; > course.pageTypes   # gives period/course:pageTypeName
sort course.pageTypes | uniq  | sort -t: -k2 > course.pageTypes.uniq
... edit to move the page types around
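Given the period/course:pageTypeName pairs the shell pipeline produces, the percentage of courses using each category could be computed along these lines. The category mapping here is a made-up fragment, not the full real mapping:

```python
from collections import defaultdict

# Reduce "period/course:pageTypeName" lines to the percentage of courses
# with at least one feature in each Malikowski category.
# The mapping below is a hypothetical fragment for illustration.
CATEGORY = {"WebBBoard": "class interactions", "Quiz": "evaluating students"}

def courses_per_category(lines):
    courses = set()
    by_category = defaultdict(set)
    for line in lines:
        course, page_type = line.rsplit(":", 1)
        courses.add(course)                      # every course counts
        if page_type in CATEGORY:                # unmapped types ignored
            by_category[CATEGORY[page_type]].add(course)
    return {cat: 100.0 * len(c) / len(courses) for cat, c in by_category.items()}

lines = ["aut2000/85321:WebBBoard", "aut2000/85349:Quiz", "win2000/80294:Lecture"]
print(courses_per_category(lines))
```

This matches the per-course counting Malikowski et al actually do, as opposed to the per-page percentages tried first above.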

Now, there are some interesting results in the above. Have to check the 2000 and 2002 results for class interactions – an unusual dip.

The almost 92% of courses with a course evaluation feature in 2003 is due to the rise of the course barometer explained in Jones (2002).

Too late to reflect anymore on this. Off to bed.


David Jones, Student feedback, anonymity, observable change and course barometers, World Conference on Educational Multimedia, Hypermedia and Telecommunications, Denver, Colorado, June 2002, pp. 884-889.

Malikowski, S., Thompson, M., & Theis, J. (2007). A model for research into course management systems: bridging technology and learning theory. Journal of Educational Computing Research, 36(2), 149-173.

Thinking about evaluating Webfuse (1996 through 1999) – evaluation of an LMS?

For the last couple of weeks I’ve been working on chapter 4 of my thesis. I’ve worked my way through explaining the context (general context and use of e-learning), the design guidelines and the implementation (parts 1, 2 and 3). I’ve now reached the evaluation section, where I’m meant to describe what happened with the use of Webfuse and make some judgement calls about how it went.

The purpose of this post is to make concrete what I’m thinking about doing. A sort of planning document. I don’t think it will be of much use to most others, though the following section on related work might be some interest.

Other related work

Indicators project

Col and Ken, two colleagues at CQU have started the indicators project which is seeking to provide academics with tools to reflect on their own usage of LMSes. Their most recent presentation is up on Slideshare (where’s the video Col?).

They are currently drawing primarily on data from the Blackboard LMS which was used at CQU from about 2004 through 2009. Webfuse was essentially a parallel system, but it ran from 1997 through 2009. Both are being replaced by Moodle in 2009.

At some stage, I am hoping to mirror the work they are doing with Blackboard on Webfuse. This will complete the picture to encompass all e-learning at CQU and also potentially provide some interesting comparisons between Webfuse and Blackboard. This will be somewhat problematic as there are differences in assumptions between Webfuse and Blackboard. For example, Webfuse generally doesn’t require students to login to visit the course website. Most are freely available.

Some of the data from Ken’s and Col’s presentation about Blackboard:

  • 5147 courses – it would be interesting to hear the definition of course, as a number of Blackboard courses during this period were simply pointers to Webfuse courses.
  • Feature adoption using a framework adopted from Malikowski (2007) as a percentage of online courses from 2005 through 2009
    • Files: ranging from 50% to 78%
      Which raises the question, what did the other 22-50% of courses have in them, if no files? Just HTML?
    • News/Announcements: ranging from 77% to 91% (with a peak in 2007).
    • Gradebook: ranging from 17% to 41%
    • Forums: ranging from 28% to 61%
    • Quizzes: ranging from 8 through 15%
    • Assignment submission: ranging from 4 to 20%.

    An interesting peak: in most of the “lower level” features there seems to have been a peak, in percentage terms, in 2007. What does that mean? A similar, though lesser, peak is visible in the forums, quizzes and assignment submission categories.

    Might be interesting to see these figures as a percentage of students. Or perhaps with courses broken down into categories such as: predominantly AIC (CQU’s international campuses), predominantly CQ campuses, predominantly distance education, large (300+ students), small, complex (5+ teaching staff), simple.

  • Hits on the course site
    There’s a couple of graphs that show big peak at the start of term with slow dropping off, with the occasional peak during term.

    It might be interesting to see the hit counts for those courses that don’t have discussion forums, quizzes or assignment submission. I feel that these are the only reasons there might be peaks as the term progresses as students use these facilities for assessment.

  • Student visits and grades.
    There are a few graphs that show a potentially clear connection between number of visits on a course site and the final grade (e.g. High Distinction students – top grade – average bit over 500 hits, students who fail average just over 150 hits). It is more pronounced for distance education students than for on-campus students (e.g. distance ed high distinction students average almost 900 hits).
  • Average hits by campus.
    Distance education students averaged almost 600 hits. Students at the AICs, less than 150.
  • Average files per course in term 1.
    Grown from just over 10 in 2005 to just over 30 in 2009.

    I wonder how much of this is through gradual accretion? In my experience most course sites are created by copying the course site from last term and then making some additions/modifications. Under this model, it might be possible for the average number of files to grow because the old files aren’t being deleted.

Malikowski, Thompson and Theis

Malikowski et al (2007) proposed a model for evaluating the use of course management systems. The following figure is from their paper. I’ve made use of their work when examining the quantity of usage of LMS features (read this if you want more information on their work) in my thesis.

Malikowski Flow Chart

Purpose of the evaluation

The design guidelines underpinning Webfuse in this period were:

  • Webfuse will be a web publishing tool
  • Webfuse will be an integrated online learning environment
  • Webfuse will be eclectic, yet integrated
  • Webfuse will be flexible and support diversity
  • Webfuse will seek to encourage adoption

I’m assuming that the evaluation should focus on the achievement (or not) of those guidelines. The limitation I have is that I’m restricted to archives of websites and system logs. I won’t be asking people, as this was 1996 to 1999.

Some initial ideas, at least for a starting place:

  • Webfuse will be a web publishing tool
    How many websites did it manage? How many web pages on those sites? How much were the used by both readers and authors?
  • Webfuse will be an integrated online learning environment
    Perhaps use the model of Malikowski et al (2007) to summarise the “learning” functions that were present in the course sites. Some repeat of figures from the above.

    I recognise this doesn’t really say much about learning. But you can’t really effectively judge learning any better when using automated analysis of system logs.

  • Webfuse will be eclectic, yet integrated
    This will come down to the nature of the structure/implementation of Webfuse. i.e. it was eclectic, yet integrated
  • Webfuse will be flexible and support diversity
    Examine the diversity of usage (not much). Flexibility will arise to some extent from the different systems implemented.
  • Webfuse will seek to encourage adoption.
    This will come back to the figures above. Can be a reflection on the statistics outlined in the first two guidelines.


So, there’s a rough idea of what I’m going to do, what about a rough idea of how to implement it? I have access to copies of the course websites for 1998 and 1999. I’m hoping to have access to the 1997 course sites in the next couple of weeks, but it may not happen – some things are just lost to time – though the wayback machine may be able to help out there. I also have the system logs from 1997 onwards.

In terms of meeting Malikowski et al’s (2007) framework, I’ll need to

  • Unpack each year’s set of course websites.
  • Get a list of all the page types used in those sites.
  • Categorise those page types into the Malikowski framework.
  • Calculate percentages.
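A sketch of the second step, tallying page types across the CONTENT files of an unpacked year's course sites. The format of the PageType line is an assumption inferred from the grep commands used earlier, not a documented Webfuse format, so the regex would need adjusting against real files:

```python
import os
import re
from collections import Counter

# Assumed format of the relevant line in a CONTENT file, based on the
# grep commands used elsewhere; this is a guess, not a Webfuse spec.
PAGE_TYPE_RE = re.compile(r"PageType\s*=>?\s*[\"']?(\w+)")

def count_page_types(root):
    """Walk a tree of course sites, tallying the PageType of each CONTENT file."""
    counts = Counter()
    for dirpath, _dirs, files in os.walk(root):
        if "CONTENT" not in files:
            continue
        with open(os.path.join(dirpath, "CONTENT"), errors="replace") as f:
            for line in f:
                m = PAGE_TYPE_RE.search(line)
                if m:
                    counts[m.group(1)] += 1
                    break   # one page type per CONTENT file
    return counts
```

The resulting counts then feed the categorisation and percentage steps above.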

In terms of looking at the files uploaded to the sites, I’ll need to repeat the above, but this time on all the files and exclude those that were produced by Webfuse.

Author updates – I can parse the web server logs for the staff who are updating pages. The same parsing will be able to get records for any students who had to login. This will be a minority.
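That log parsing might look something like the sketch below. It assumes Apache common log format and that page updates show up as POST requests by authenticated (non "-") users; both assumptions would need checking against the actual Webfuse server logs:

```python
import re
from collections import Counter

# Assumption: Apache common log format ("host ident user [time] \"request\" ...")
# and page updates appearing as POST requests by authenticated users.
# Both would need verifying against the real Webfuse logs.
LOG_RE = re.compile(r'^\S+ \S+ (\S+) \[[^\]]+\] "(\w+) (\S+)')

def updates_by_author(lines):
    """Count POST requests per authenticated (non '-') user."""
    counts = Counter()
    for line in lines:
        m = LOG_RE.match(line)
        if m and m.group(1) != "-" and m.group(2) == "POST":
            counts[m.group(1)] += 1
    return counts

log = ['1.2.3.4 - djones [10/Jul/1999:10:00:00 +1000] "POST /85321/ HTTP/1.0" 200 512',
       '1.2.3.4 - - [10/Jul/1999:10:00:05 +1000] "GET /85321/ HTTP/1.0" 200 1024']
print(updates_by_author(log))
```

The same parse, keeping GET requests by authenticated users instead, would pick up the logged-in students mentioned above.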


Malikowski, S., Thompson, M., & Theis, J. (2007). A model for research into course management systems: bridging technology and learning theory. Journal of Educational Computing Research, 36(2), 149-173.

The design and implementation of Webfuse – Part 3

The following is the last of, what is now, a three part series of blog posts outlining the design and implementation of the Webfuse system. These are part of chapter four of my thesis. The previous two parts are here and here.

The structure of this section is based on the design guidelines developed for Webfuse and outlined in a section in this post. Each of the three posts outlining the design and implementation of Webfuse uses the design guidelines as the structure through which to explain the implementation of Webfuse. This post closes out the implementation by looking at the final two guidelines – be flexible and support diversity, and encourage adoption.

Webfuse will be flexible and support diversity

The aims that flexibility and support for diversity were meant to achieve, as outlined in Section 4.3.2, included enabling a level of academic freedom, handling the continual change seen as inherent in the Web, and providing a platform through which the design and use of Webfuse could change in response to increased knowledge arising from experience and research. Webfuse was intended to achieve these aims through a number of guidelines outlined in Section 4.3.2. The following seeks to explain how the design and implementation of Webfuse fulfilled these guidelines and subsequently the stated goals.

Do not specifically support any one educational theory. The design of Webfuse as a web publishing system and integrated online learning environment gave no consideration to educational theory. The design of the functionality offered by the page types was seen to be at a level below educational theory. That is, the four categories of tasks required of a Web-based classroom – information distribution, communication, assessment, and class management – were seen as building blocks that could be used to implement a number of different educational theories. For example, a social constructivist learning theory might use a simple combination of a discussion board and an interactive chat room as the primary tools on the course site. A more information centric or objectivist approach would focus more on the use of the information distribution tools and the quiz tool. In addition, if a strong case was built for providing greater support for a particular educational theory then this could be provided by developing a collection of page types – using COTS products where appropriate – specific to that educational theory. Only those staff interested in using that educational theory would be required to use those page types.

Separation of content and presentation. The separation of content and presentation was achieved through a combination of the page types and the Webfuse styles. As shown in Figure 4.1 and Figure 4.5, it was possible to change the appearance of a Webfuse web page without modifying the content.
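
The idea can be sketched as follows – an illustrative Python sketch only (Webfuse implemented its styles in Perl, and these style names are invented): the same stored content rendered through two interchangeable styles, so appearance changes without touching content.

```python
# Two interchangeable "styles": each turns the same stored content into a
# differently presented page, so appearance changes independently of content.
STYLES = {
    "plain": lambda title, body: f"<html><h1>{title}</h1>{body}</html>",
    "framed": lambda title, body: (
        f"<html><table><tr><td>{title}</td></tr>"
        f"<tr><td>{body}</td></tr></table></html>"),
}

def publish(content, style_name):
    return STYLES[style_name](content["title"], content["body"])

page = {"title": "Units", "body": "<p>Course 85321</p>"}
plain_version = publish(page, "plain")    # swap style_name, keep the content
framed_version = publish(page, "framed")
```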

Platform independence and standards. This guideline was achieved through an emphasis on the use of platform independent open-source software, the use of the Perl scripting language and active support for compliance with Web standards. Webfuse was written in the Perl scripting language with user interaction occurring via the Webfuse CGI scripts. To run a copy of Webfuse it was necessary to have a web server, a simple relational database, a version of Perl and a small number of other open source products used to implement some of the “micro-kernel” services and page types (e.g. Ewgie required Java). During 1997 two project students successfully ported Webfuse to the Windows platform (Walker, 1997).

Provide the tools not the rules. The main support for this guideline was the absence of any specification of how an online course might be structured. An academic was free to choose the structure and the page types used in the design of the online course, including simply using the Content page type, which allowed them to provide any HTML content. With the development resources available and the widespread novelty of the Web, it was not possible to develop functionality that would enable academics to modify the available styles or write their own page types. However, the design of Webfuse did initially attempt to provide enough flexibility in the presentation of the pages managed by Webfuse to enable students and staff to adapt use of the system to their personal situation. At the time of the development of Webfuse, Internet access for the majority of students was through fairly slow modem access, which was charged on a time basis, making it important to minimise time spent connected (Jones & Buchanan, 1996). To support this goal Webfuse automatically produced three different versions of every page: a text only version, a graphical version and a version using frames. Figure 4.4 shows a graphical version of a page from the original site and near the top of the page it is possible to see navigation links to the three versions of the page. Figure 4.6 is the text only version of the page shown in Figure 4.4.


Figure 4.6 – The Units web page (text version) for M&C for Term 2, 2007

Webfuse will seek to encourage adoption

In order to encourage adoption of Webfuse four separate design guidelines were established and described in Section 4.3.2. The following seeks to explain how those guidelines were realised in the implementation of Webfuse.

Consistent interface. The Webfuse authoring interface was implemented through the page update script and supported through the use of page types. The page update script implemented a consistent model and main interface for the authoring process. The page types, working as software wrappers, provided a “Webfuse encapsulation” interface to work within the page update script. Whether using the TextIndex page type or the EwgieChatRoom page type, the editing interface behaved in a consistent way. The websites produced by Webfuse also presented a consistent interface through the HTML produced by the page types and the Webfuse styles.

Increased sense of control and ownership. It is unlikely that technology alone could achieve this guideline. Webfuse sought to move towards fulfilling this guideline by providing academics with the ability to control their own course sites, where previously this was out of the reach of many. It was also hoped that the flexibility and support for diversity provided by Webfuse would help encourage a sense of ownership.
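
The consistent editing interface that page types provided to the page update script can be sketched as a common set of operations that every page type implements. The method names here are hypothetical, and the sketch is Python rather than Webfuse’s Perl:

```python
# Every page type offers the same operations, so the page update script can
# drive any of them -- TextIndex or EwgieChatRoom -- without special cases.
class PageType:
    def edit_form(self, content):
        """HTML form presented to the page author."""
        raise NotImplementedError

    def generate(self, content):
        """Final web page produced from the stored content."""
        raise NotImplementedError

class TextIndex(PageType):
    def edit_form(self, content):
        return f'<textarea name="items">{content}</textarea>'

    def generate(self, content):
        items = "".join(f"<li>{i}</li>" for i in content.split())
        return f"<ul>{items}</ul>"

def page_update(page_type, content):
    # The update script only ever relies on the shared interface.
    return page_type.edit_form(content)
```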

Minimise new skills. In 1996, the Web was for many people a brand new environment, and any web-publishing tool was going to require the development of new skills. Webfuse sought to minimise this by supporting and enhancing existing practice and by using common institutional terminology. This was achieved through the provision of page types such as Lecture, StudyGuide and Email2WWW that connected with existing practice and enabled it to be taken onto the Web. The page types also allowed the use of CQU-specific terminology in the interface, with the page type’s wrapper capability performing the translation between CQU and COTS product terminology. Lastly, the flexibility of Webfuse as a web publishing system allowed the use of URLs that used CQU-specific terminology. The components of the URL for the course site used in Table 4.2, including “Academic Programs”, “Units”, “85321” and “mc”, were all common terms used by the members of the M&C community – not a feature of other e-learning tools.

Automate. As described above, Webfuse automatically produced text only and graphical versions of all pages to help those users who required it to minimise download times. Each of the page types was designed, where possible, to automate tasks that staff or students might otherwise have to do manually. For example, the Lecture page type automatically converted Powerpoint slides into individual lecture slides. The LectureSlide page type automatically converted audio into four different formats to support the diversity of computer platforms of the time. The StudyGuide page type automatically produced tables of contents.


Jones, D., & Buchanan, R. (1996). The design of an integrated online learning environment. Paper presented at the Proceedings of ASCILITE’96, Adelaide.

Walker, M. (1997). Porting Webfuse to the Windows platform. Retrieved 29 July, 2009, from

The design and implementation of Webfuse – Part 2

This post continues the description of the design and implementation of Webfuse started with this post.

Webfuse will be an integrated online learning environment

The idea of Webfuse as an integrated online learning environment encapsulated three main ideas: there would be a consistent, easy-to-use interface; all tools and services would be available via that interface; and the system would, where possible, automate tasks for teachers and students. The design of Webfuse as a web publishing system based on hypermedia templates was intended to achieve this goal.

The primary interface for Webfuse was the Web. All services provided by Webfuse were managed and accessed through a web browser and were provided by web pages implemented through hypermedia templates – templates that could, where appropriate, provide additional support by automating tasks (e.g. the Lecture page type described in Table 4.3). The interface to create, modify and manage the websites was provided by the page update process and the hypermedia templates using the same consistent model.

Webfuse will be eclectic, yet integrated

The focus of this requirement was to achieve a system that could be more responsive to changes in requirements and the external context through the inclusion of existing services and tools. The eclectic, yet integrated structure of Webfuse was informed by a combination of concepts including: micro-kernel architecture for operating systems, hypermedia templates, and software wrappers. The following provides more detail of this design and how it was implemented and finishes with a complete listing of the functionality provided by Webfuse in the period from 1996 through 1999.

Micro-kernel architecture

The kernel of an operating system is the part that is mandatory and common to all software; the idea of a micro-kernel is to minimise the kernel in order to enforce a more modular system structure and make the system more flexible and tailorable (Liedtke, 1995). The micro-kernel approach helps meet the need to cope with growing complexity and to integrate additional functionality by structuring the operating system as a modular set of system servers sitting on top of a minimal micro-kernel (Gien, 1990). The micro-kernel should provide higher layers with a minimal set of appropriate abstractions that are flexible enough to allow implementation of arbitrary services and allow exploitation of a wide range of hardware (Liedtke, 1995).

The initial design of Webfuse included the idea of establishing a core “kernel” of abstractions and services relevant to the requirements of web publishing. These abstractions were built on underlying primitives provided by a basic Web server. Continuing the micro-kernel metaphor, the Webfuse page types were the modular set of system servers sitting on top of the minimal micro-kernel. The initial set of Webfuse “kernel” abstractions were implemented as libraries of Perl functions and included:

  • authentication and access control;
    The services of identifying users as who they claimed to be and checking whether they were allowed to perform certain operations were seen as key components of a multi-user web publishing system. The functionality was built on the minimal services provided by web servers and supplemented with institution-specific information, for example, the concept of courses.
  • validation services;
    In the early days of the Web the primitive nature of the publishing tools meant that there was a significant need for validation services, such as validating the correctness of HTML and searching for missing links.
  • presentation;
    This encapsulated the Webfuse style functionality that allowed the presentation of pages to be changed independently of the content.
  • data storage; and
    Content provided by content experts was a key component of the Webfuse publishing model. Page types needed to be able to store, retrieve and manipulate that content in standard ways.
  • page update.
    The page update process was the core of the Webfuse publishing model. It covered how the content experts provided and managed content and how that content was then converted into web pages. Part of this aspect of the Webfuse architecture was a specification of how the Webfuse page types would communicate and interact.
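
A toy rendering of this structure – the service names echo the list above, but the code is an illustrative Python sketch, not the actual Perl function libraries:

```python
# A minimal "kernel" of shared services that page types call, instead of each
# page type re-implementing authentication, storage and presentation itself.
KERNEL = {
    "auth": lambda user: user.endswith("@cqu"),          # access-control stub
    "store": {},                                         # data-storage stub
    "style": lambda body: f"<div class='graphical'>{body}</div>",
}

def render_page(name, body, user):
    """A page type built entirely from kernel services."""
    if not KERNEL["auth"](user):
        return "Access denied"
    KERNEL["store"][name] = body            # persist the provided content
    return KERNEL["style"](body)            # present it through a style
```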

Hypermedia templates as software wrappers

The simple “TableList” page type discussed above and used to produce the web page shown in Figure 4.1 and the page update form in Figure 4.2 was written entirely by the Webfuse developers. A key aspect of the design of Webfuse was the recognition that there would not be sufficient Webfuse developer time available to allow implementation, from scratch, of all the necessary page types – especially those page types necessary for more complex functionality, such as synchronous, interactive chat rooms. The idea of implementing hypermedia templates as software wrappers around commercial-off-the-shelf (COTS) software – mostly open source software – was adopted to address this problem.

In software engineering, the term wrapper refers to a type of encapsulation whereby a software component is encased within an alternative abstraction and it is only through this alternative interface that clients access the services of the wrapped component (Bass et al., 1998, p. 339). A wrapper leaves the existing code of the encapsulated component as is; however, new code is written around it to connect it to a new context (Sneed, 2000). In the case of Webfuse, the hypermedia templates – in the form of Webfuse page types – were used to encapsulate a variety of existing open source software applications and connect them to the Webfuse and CQU context.

Sneed (2000) identifies the introduction of the concept of wrappers with Dietrich, Nackman and Gracer (1989) and its use to re-use legacy applications within an object-oriented framework. Wrappers have also been used in reverse engineering, re-engineering (Sneed, 2000) and security. Wrappers were also one method used by the hypermedia community to integrate complex hypermedia systems with the World-Wide Web (e.g. Bieber, 1998; Gronbaek & Trigg, 1996), and to integrate third-party applications into open hypermedia systems that emphasise delivery of hypermedia functionality to the applications populating a user’s computing environment (e.g. Whitehead, 1997).

In the case of Webfuse the intent was that the Webfuse wrappers would wrap around commercial-off-the-shelf (COTS) software products, mostly in the form of open-source applications. In the mid to late 1990s there was, in part because of the spiraling cost of custom-developed software, a shift on the part of government from discouraging the use of commercial software to encouraging its use (Braun, 1999). Increasingly, solutions were built by integrating COTS products rather than building from scratch (Braun, 1999). By 2001, Sommerville (2001, p. 34) described it as normal for some sub-systems to be implemented through the purchase and integration of COTS products.

Boehm (1999) identifies four problems with the integration of COTS products: lack of control over functionality and performance; problems with COTS system interoperability; no control over system evolution; and reliance on support from COTS vendors. The use of software wrappers to encapsulate COTS products into the CQU context and the general reliance on open source COTS products was intended to help Webfuse address these issues. Another issue that arises when using a diverse collection of COTS products is the significant increase in the diversity and duplication of the user and management interfaces across the COTS products. It was intended that the Webfuse page types, in their role as software wrappers, would also be designed to provide Webfuse users with a consistent user interface – a user interface which, where possible, made use of CQU terms and labels rather than those of the COTS product.

Harnessing hypermedia templates, software wrappers and COTS products allowed Webfuse to combine the benefits of hypermedia templates – simplified authoring process, increased reuse, and reduced costs (Catlin et al., 1991; Nanard et al., 1998) – with the benefits of the COTS approach – shorter development schedules and reduced development, maintenance, training and infrastructure costs (Braun, 1999). While the use of open source COTS products provided access to source code and removed the influence of a commercial vendor (Gerlich, 1998), it did increase the level of technical skills required.

One example of the type of COTS product incorporated into Webfuse through the use of software wrappers is the MHonArc email-to-HTML converter (Hood, 2007). As mentioned previously, M&C courses were already making increasing use of Internet mailing lists as a form of class communication. An obvious added service that Webfuse could provide was a searchable, web-based archive of these mailing lists for use by both staff and students. Rather than develop this functionality from scratch, an Email2WWW page type was written as a wrapper around MHonArc. The Email2WWW page type also integrated with the Webfuse styles system to enable automatic modification of appearance, and was connected with the mailing list system used at CQU so that it was able to regularly and automatically update the web-based archives of course mailing lists.
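
The shape of such a wrapper can be sketched as a thin layer that invokes the external tool. The `-add` and `-outdir` options come from MHonArc’s documentation, but the surrounding function and paths are invented, and the real Webfuse wrapper was Perl, not Python:

```python
import subprocess

def email2www_update(mailbox_path, output_dir, run=subprocess.run):
    """Regenerate the web archive of a mailing list by invoking MHonArc."""
    # -add appends new messages to an existing archive; -outdir names it.
    command = ["mhonarc", "-add", "-outdir", output_dir, mailbox_path]
    return run(command, check=True)
```

The `run` parameter is just a seam so the wrapper can be exercised without MHonArc installed.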


The complete functionality provided by Webfuse is a combination of the services provided by the Webfuse “micro-kernel” (described above) and the functionality implemented in each of the available Webfuse page types. This section seeks to provide a summary of the functionality available in the Webfuse page types as at the end of 1999 – the end of this action research cycle. The initial collection of page types was designed on the basis of the four major tasks required of a Web-based classroom identified in McCormack and Jones (1997, p. 367): information distribution, communication, assessment, and class management.

The original purpose of the Web was to enable the distribution of and access to research information, which means that the Web can be extremely useful for the distribution of information (McCormack & Jones, 1997, p. 13). By the end of 1999 Webfuse had a collection of 11 page types providing information distribution related services. Table 4.3 provides a summary of these page types, their purpose and what, if any, COTS products the page types used for implementation of their purpose. The FAQ page, like a number of other page types, was written by a project student (Bytheway, 1997).

Table 4.3 – Webfuse information distribution related page types – 1999
Page Type | COTS Product | Purpose
Lecture, LectureSlide | Webify (Ward, 2000) for Postscript conversion to slides; SoX (SoX, 2009) for conversion of audio into various formats; raencoder (RealNetworks, 1996) for audio conversion into Real Audio format | Convert a Postscript file of a lecture (usually generated by Powerpoint) into an integrated collection of lecture slides. Each lecture slide could have audio converted into any one of four available formats.
Study guide, Study guide chapter | None | Conversion of a study guide into chapters of online material broken up into individual pages, single-chapter print versions, and the production of a table of contents and index
PersonContent, PersonDetails | None | Display information about teaching staff
FAQ (Bytheway, 1997) | None | Creation and management of lists of frequently asked questions
Content | None | Enable simple management of HTML content
File upload | None | Allow most people to upload files to the web site
TableList, Index, ContentIndex | None | Provide mechanisms to create index pages and associated child nodes in a hierarchical web structure
Search | htdig (The ht://Dig group, 2005) | Search the content of the site

Communication is an essential part of the learning experience and a task for which the Web offers a number of advantages and supports through a number of forms (McCormack & Jones, 1997, p. 15). Table 4.4 provides a summary of the five different communication related page types provided by Webfuse by the end of 1999. This list of page types illustrates two points: there are fuzzy boundaries and overlap between these categories, and the Webfuse eclectic, yet integrated structure meant it was possible to have multiple page types performing similar roles.

The FormMail page type listed in Table 4.4 could be used as a form of communication but was generally used to perform surveys, which would fit under the Assessment category below. Table 4.4 also shows that there were two page types providing web-based discussion boards; within a few years a third would be added. Each additional discussion board was added because it improved upon the previous functionality. However, it was not necessary to remove the previous discussion boards, and there were instances where this was useful, as some authors preferred the functionality of the older versions.

Table 4.4 – Webfuse communication related page types – 1999
Page Type | COTS Product | Purpose
EwgieChat | Ewgie (Hughes, 1996) | An interactive chat-room and shared whiteboard system
WWWBoard | WWWBoard (Wright, 2000) | Web-based asynchronous discussion board
WebBBS | WebBBS (AWSD, 2009) | Web-based asynchronous discussion board
Email2WWW | MHonArc (Hood, 2007) | Searchable, web-based archives of mailing list discussions
FormMail | FormMail (Wright, 2002) | HTML form to email gateway, used to implement surveys

Assessment is an important part of every course; it is essential for knowing how well students are progressing (student assessment) and also for being aware of how well the method of instruction is succeeding (evaluation) (McCormack & Jones, 1997, p. 233). Table 4.5 provides a summary of the four Webfuse page types associated with assessment that were in place by the end of 1999. Two of these page types (online quiz and assignment submission) are connected with student assessment, while the other two (UnitFeedback and Barometer) are associated with evaluation. The FormMail page type mentioned in Table 4.4 was also primarily used for evaluation purposes and is somewhat related to the far more CQU specific UnitFeedback page.

Table 4.5 – Webfuse assessment related page types – 1999
Page Type | COTS Product | Purpose
Online quiz | None | Management and delivery of online quizzes – multiple choice and short answer
Assignment submission | None | Submission and management of student assignments
UnitFeedback | None | Allow the paper-based CQU course survey to be applied via the Web
Barometer | No software, but concept based on an idea from Svensson et al. (1999) | Allow students to provide informal feedback during a course

Class management involves the clerical, administrative and miscellaneous support tasks necessary to ensure that a learning experience operates efficiently (McCormack & Jones, 1997, p. 289). Table 4.6 summarises the three Webfuse page types associated with class management by the end of 1999. There is some overlap between this category and that of assessment in terms of the management and marking of student assignments.

Table 4.6 – Webfuse class management related page types – 1999
Page Type | COTS Product | Purpose
Results management | None | Allows the display and sharing of student progress and results
Student tracking | Follow (Nottingham, 1997) | Session analysis of student visits to course web pages
TimetableGenerator | None | Allow students and staff to generate a personalised timetable of face-to-face class sessions


AWSD. (2009). WebBBS.   Retrieved 29 July, 2009, from

Bass, L., Clements, P., & Kazman, R. (1998). Software Architecture in Practice. Boston: Addison-Wesley.

Bieber, M. (1998). Hypertext and web engineering. Paper presented at the Ninth ACM Conference on Hypertext and Hypermedia, Pittsburgh, Pennsylvania.

Boehm, B. (1999). COTS integration: plug and pray? IEEE Computer, 32(1), 135-138.

Braun, C. L. (1999). A lifecycle process for the effective reuse of commercial off-the-shelf (COTS) software. Paper presented at the 1999 Symposium on Software Reusability, Los Angeles.

Bytheway, S. (1997). FAQ Project Report.   Retrieved 29 July, 2009, from

Catlin, K., Garret, L. N., & Launhardt, J. (1991). Hypermedia Templates: An Author’s Tool. Paper presented at the Proceedings of Hypertext’91.

Dietrich, W. C., Nackman, L. R., & Gracer, F. (1989). Saving legacy with objects. Paper presented at the Object-oriented programming systems, languages and applications, New Orleans, Louisiana.

Gerlich, R. (1998). Lessons Learned by Use of (C)OTS. Paper presented at the 1998 Data Systems in Aerospace, Athens, Greece.

Gien, M. (1990). Micro-kernel architecture: Key to modern operating systems design. UNIX Review, 8(11).

Gronbaek, K., & Trigg, R. (1996). Toward a Dexter-based model for open hypermedia: unifying embedded references and link objects. Paper presented at the Seventh ACM Conference on Hypertext, Bethesda, Maryland.

Hood, E. (2007). MHonArc: A mail-to-HTML converter.   Retrieved 10 January, 2008, from

Hughes, K. (1996). EWGIE – Easy Web Group Interaction Enabler.   Retrieved 29 July, 2009, from

Liedtke, J. (1995). On micro-kernel construction. Operating Systems Review, 29(5), 237-250.

McCormack, C., & Jones, D. (1997). Building a Web-Based Education System. New York: John Wiley & Sons.

Nanard, M., Nanard, J., & Kahn, P. (1998). Pushing Reuse in Hypermedia Design: Golden Rules, Design Patterns and Constructive Templates. Paper presented at the Proceedings of the 9th ACM Conference on Hypertext and Hypermedia.

Nottingham, M. (1997). Follow 1.5.1.   Retrieved 29 July, 2009, from

RealNetworks. (1996). Release notes: RealAudio encoder 2.0 for UNIX.   Retrieved 29 July, 2009, from

Sneed, H. (2000). Encapsulation of legacy software: A technique for reusing legacy software components. Annals of Software Engineering, 9(1-4), 293-313.

Sommerville, I. (2001). Software Engineering (6th ed.): Addison-Wesley.

SoX. (2009). SoX – Sound eXchange – Home page.   Retrieved 29 July, 2009, from

Svensson, L., Andersson, R., Gadd, M., & Johnsson, A. (1999). Course-Barometer: Compensating for the loss of informal feedback in distance education. Paper presented at the EdMedia’99, Seattle, Washington.

The ht://Dig group. (2005). ht://Dig – Internet search engine software.   Retrieved 29 July, 2009, from

Ward, S. (2000). Webify: Build web presentations from postscript.   Retrieved 29 July, 2009, from

Whitehead, E. J. (1997). An architectural model for application integration in open hypermedia environments. Paper presented at the Eighth ACM Conference on Hypertext, Southhampton, UK.

Wright, M. (2000). WWWBoard.   Retrieved 29 July, 2009, from

Wright, M. (2002). FormMail.   Retrieved 29 July, 2009, from

The design and implementation of Webfuse – Part 1

This continues the collection of content that goes into Chapter 4 of my PhD thesis. Chapter 4 is meant to tell the story of the first iteration of Webfuse from 1996 through 1999. The last section I posted describes the design guidelines that informed the implementation of Webfuse. This post and at least one following post seeks to describe the details of the design and implementation of Webfuse.

As with all the previous posts of content from the thesis, this content is in a rough first draft form. It will need more work. Comments and suggestions are more than welcome.

Design, implementation and support

This section outlines how the design guidelines for Webfuse introduced in the previous section (Section 4.3.2) were turned into a specific system design and how that system was implemented and supported during the period from 1996 through 1999. First it briefly outlines the process, people and technology used during this period to design and implement Webfuse. It then explains how the abstractions that form the design of Webfuse were intended to fulfil the design guidelines introduced in Section 4.3.2. Lastly, it describes the functionality offered by Webfuse towards the end of 1999. The next section (Section 4.3.4) will provide an overview of using Webfuse from both a student and academic staff member perspective.

Process, People and Technology

The initial design and implementation of Webfuse occurred over a period of about 12 months starting in mid-1996. The author performed most of the initial design and implementation work with additional assistance from a small number of project students who worked on particular components. In 1997, Webfuse was taken over by the Faculty of Informatics and Communication. The Faculty appointed a full-time Webmaster and used Webfuse for their faculty website and online learning. The Faculty webmaster helped staff use Webfuse, did some development and was supported by a small number of other Faculty technical staff. The processes used to develop Webfuse functionality during this period were fairly ad hoc.

From 1996 through 1999, Webfuse was implemented primarily as a collection of Perl CGI scripts and various support libraries and tools. The Perl scripting language was chosen because it was platform independent, and scripting languages like Perl allowed rapid development of applications via the gluing together of existing applications, with development 5 to 10 times faster than through the use of traditional systems programming languages (Ousterhout, 1998). An Apache web server served the Webfuse CGI scripts and the resulting web pages. For information storage, Webfuse used the file system and a variety of relational databases. All of the applications used in Webfuse were open source. During this period the available open source relational databases were not full-featured, and this lack of a full-featured relational database influenced some design decisions.
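
The “glue” style of development Ousterhout describes can be illustrated with a small dispatch sketch: a single script maps each incoming request to the piece of code responsible for it. Python is used here for brevity (the actual scripts were Perl CGI, and these handler names are invented):

```python
# A single script maps an incoming request to the handler responsible for it,
# gluing together existing pieces rather than building everything from scratch.
HANDLERS = {
    "view": lambda params: f"<html>page {params['page']}</html>",
    "update": lambda params: f"<form>editing {params['page']}</form>",
}

def dispatch(action, params):
    handler = HANDLERS.get(action)
    return handler(params) if handler else "404 Not Found"
```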

The design

The set of abstractions and decisions that underpinned the initial design of Webfuse drew on a number of existing concepts from the operating systems, information systems and hypermedia communities. The informing concepts included hypermedia templates (Catlin, Garret, & Launhardt, 1991), software wrappers (Bass, Clements, & Kazman, 1998, p. 339), micro-kernel architectures of operating systems (Liedtke, 1995) and known limitations of the World-Wide Web and its hypermedia model (Bieber, Vitali, Ashman, Balasubramanian, & Oinas-Kukkonen, 1997). The design was informed by the understanding of these concepts and the desire to fulfil the five broad design guidelines outlined in Section 4.3.2. The following links these guidelines to the informing concepts and explains the design of Webfuse.

A web publishing tool

From the start Webfuse was seen as a web-publishing tool. The implication of this is that Webfuse was seen as a system that produced web pages and websites. In particular, Webfuse was intended to manage the website of the Faculty of Applied Science, which included a range of different departments and would be managed by a number of different people. There were a number of known problems with the process of authoring websites at this point in time. The authoring process was usually carried out without a defined process, lacked suitable tool support, and did little to separate content, structure and appearance (Coda, Ghezzi, Vigna, & Garzotto, 1998). The process also made limited reuse of previous work (Rossi, Lyardet, & Schwabe, 1999) and required better group access mechanisms and online editing tools (K. Andrews, 1996).

The difficulty of authoring on the Web makes it difficult to create and maintain large websites, and the management of such content was often, at this stage, assigned to one person or group who became the bottleneck for maintenance (Thimbleby, 1997). This is especially troubling given that Nielsen (1997) suggested the rule of thumb that the annual maintenance budget for a website should be at least 50 percent of, and preferably the same as, the initial cost of building the site. The nature of learning and teaching and its reliance on communication and collaboration suggested that for e-learning such a recommendation might need to be increased.

The World-Wide Web, at this stage, was a particularly primitive hypermedia system where the lack of functionality made the authoring process more difficult (Gregor et al., 1999). One recognition of this was that a key part of the problem definition outlined in Section 4.2.2 was the difficult and time-consuming nature of web-based learning. It was also recognised that ease of use was a key part of encouraging adoption amongst academic staff. To address this problem it was decided that Webfuse would make use of the concept of hypermedia templates (Catlin et al., 1991; Nanard, Nanard, & Kahn, 1998).

Hypermedia templates (Catlin et al., 1991) are an approach to simplifying the authoring process while still ensuring the application of good information design principles. Hypermedia templates would enable content experts to become responsible for maintaining websites, thus increasing ownership, decreasing costs and addressing the authoring bottleneck problem (Jones, 1999b). Hypermedia templates also aid reuse, which is a strategic tool for reducing the cost and improving the quality of hypermedia design and development (Nanard et al., 1998). Their initial purpose was to improve the application of information design principles to hypermedia collections (Catlin et al., 1991).

In their initial development, hypermedia templates were sets of pre-linked documents, containing both content and formatting information, used by authors to create a new set of information (Catlin et al., 1991). The intent was that graphic designers would create the templates, which would subsequently be used by content experts to place material into hypermedia (Catlin et al., 1991). The content experts would not need to become experts in information design, nor would the graphic designers need to become content experts. Editing a template did not require learning any new software or skills.

Nanard, Nanard and Kahn (1998) extended the idea into constructive templates, with the intent of extending reuse in hypermedia design beyond information and software component reuse into the capture and reuse of design experience. A constructive template is a generic specification that makes it easier for a developer to build a hypermedia structure and populate it with data (Nanard et al., 1998). While a model describes a structure, a constructive template helps produce instances of that structure by mapping source data into a hypertext structure (Nanard et al., 1998). Template-based hypermedia generation can be implemented using either programming or declarative means. Constructive templates are built on the principle of separating source data from hypermedia presentation, enabling work on the structure to be done independently of the content and reducing the burden of production. By automating large parts of the production process, constructive templates drastically reduce cost (Nanard et al., 1998).
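The idea of a constructive template mapping source data into a hypertext structure can be made concrete with a small sketch. The following Python fragment is purely illustrative (Webfuse itself was written in Perl) and all names in it are invented:

```python
# Illustrative constructive template: a generic specification that
# maps source data into instances of a hypertext structure. The
# function and structure here are invented, not taken from Webfuse.

def index_template(title, entries):
    """Produce a hypertext index page from (label, target) pairs."""
    links = "".join(
        f'<li><a href="{target}">{label}</a></li>'
        for label, target in entries
    )
    return f"<h1>{title}</h1><ul>{links}</ul>"

# Work on the structure (this function) is independent of the
# content (the data passed in), as the separation principle requires.
page = index_template("Units", [("85321", "85321/")])
```

The same template regenerates the structure for any source data, which is the sense in which constructive templates automate production and capture design experience.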

As a web-publishing system, the primary output of Webfuse was web pages. Each web page was of a specific type, and the type of page specified which Webfuse hypermedia template (during this period called a page type) would be used to produce the web page. A page type was implemented as a collection of pre-defined Perl functions that would obtain the necessary content from the author, convert that content into the HTML necessary to display the body of the page, and carry out any additional necessary steps. Figure 4.1 is an example of a web page produced by Webfuse.
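The role of a page type as a collection of pre-defined functions might be sketched as follows. This is a hedged Python approximation of the interface just described, not the original Perl implementation; the class and method names are invented:

```python
# Hypothetical sketch of the page-type interface described above:
# each page type supplies code to gather content from the author and
# to convert that content into the HTML body of the page.

class PageType:
    """Base interface for a Webfuse-style page type (illustrative)."""
    def update_form(self, content):
        raise NotImplementedError   # build the page update form
    def to_html(self, content):
        raise NotImplementedError   # convert stored content to HTML

class SimpleText(PageType):
    """An invented minimal page type holding a single block of text."""
    def update_form(self, content):
        return f'<textarea name="body">{content.get("body", "")}</textarea>'
    def to_html(self, content):
        return f"<p>{content.get('body', '')}</p>"
```

Under this sketch, adding a new kind of page to the system means writing one new class, which is the extensibility property the page-type design was after.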

Content index page example

Figure 4.1 – A simple web page produced by Webfuse

Each web page produced by Webfuse carried an "Edit" link. If an authorised person clicked this link they were presented with a web form – called a page update form – that allowed them to provide and modify the content used to produce the web page. The structure and features of the page update form, as well as the conversion process applied to the content, were unique to the page type.

Figure 4.2 shows the page update form for the web page from Figure 4.1. A page type called TableList produces the web page shown in Figure 4.1. As the name suggests, this page type is used to manage a series of lists of individual elements, which are displayed as a series of separate tables. Each element in a list points to another web page that is created and then managed through Webfuse. In Figure 4.1 there is one list, called "Years", which consists of the elements "2008" and "2009". Figure 4.2 contains HTML form elements to manage two lists: one for the existing list called "Years" and one that can be used to add a new list. As well as managing the elements of lists, the form in Figure 4.2 also provides some formatting options, including how to sort the list elements, how many columns the table should have and how big the table borders should be.

Page update form for content index page

Figure 4.2 – Page update form for the web page shown in Figure 4.1

The design of Webfuse as a web publishing system made it necessary to include in Webfuse an abstraction for the websites it would manage. Such an abstraction was necessary in order to implement the services and interfaces Webfuse would provide to authors to manage their websites. Hypermedia and hypertext, of which the World-Wide Web is an example, have been defined on the basis of their support for non-linear traversal and navigation through a maze of interactive, linked, multiple-format information (Kotze, 1998). The "disorientation problem" – getting "lost in hyperspace" – refers to the greater potential for the user to become lost or disoriented within a large hypertext network (Conklin, 1987).

The topology or structure of a hypertext directly affects navigation performance (McDonald & Stevenson, 1996). Oliver, Herrington and Omari (1999) identify three main structures within hypermedia environments: linear, hierarchical, and non-linear or networked. Shin, Schallert and Savenye (1994) suggest that the most popular structures for hypertext are hierarchical and network (non-linear) structures. Garzotto, Paolini and Schwabe (1993, p. 8) point to the observation of many authors that hierarchies are very useful in helping user orientation when navigating a hypertext. Advantages of hierarchies include: a strong notion of place; documents with clear superior/inferior relationships, sometimes augmented with linear precedence relationships between nodes; familiarity due to their use in other domains; and a rigidity which, while creating some inflexibility, aids comprehension (Durand & Kahn, 1999). Hierarchical structures have also been recommended as the most appropriate structures for large websites (Sano, 1996).

The previous paragraphs draw on the research literature to identify a number of advantages justifying the selection of a hierarchical structure for the model of a website used by Webfuse. There were, however, also two pragmatic reasons for this choice of structure. The open source relational databases that were available at the time and used in the implementation of Webfuse were not capable of storing the amount and type of data that a large website would require. The use of a relational database was therefore limited to storing authentication and authorisation data. For the most part, the content used in generating web pages was stored on the file system of the computer hosting the web server. The file systems of computers did, and continue to, use a hierarchical structure of directories and files. Having the website structure used by Webfuse match the structure used to store the information considerably simplified implementation.
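Because both the site model and the file system are hierarchies, the correspondence between them amounts to a simple path mapping, along these lines. This Python fragment, and the content-root path in it, are hypothetical illustrations rather than Webfuse's actual code:

```python
# Illustrative mapping between a page's site-relative URL path and
# the directory where its content is stored. The content root is a
# hypothetical example path, not Webfuse's real location.

CONTENT_ROOT = "/var/webfuse/content"

def content_dir(url_path):
    """Map a site-relative URL path onto its content directory."""
    return CONTENT_ROOT + "/" + url_path.strip("/")

# Because both sides are hierarchies, the mapping is a simple
# string operation with no database lookup required.
d = content_dir("/mc/Academic_Programs/Units/85321/")
```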

Figure 4.3 is a partial, graphical representation of the hierarchical structure of the Faculty of Applied Science website created and managed via Webfuse during 1997. At the top level is the main science home page. The next level down has five main sections, including one for the Faculty's research centres and one for each of its four departments – Maths and Computing, Applied Physics, Biology and Chemistry. Each of the department websites followed a similar structure, with main sections for information, staff, academic programs, students, research and community. The websites for individual courses – prior to 1998 these were called units – are all contained in their own folders with names based on the course codes (e.g. 85321, Systems Administration).

Partial hierarchy of pages - 1997

Figure 4.3 – A partial hierarchy of the Faculty of Applied Science website in 1997

Each of the boxes shown in Figure 4.3 represents an individual web page, but also represents a collection of related material. The "Units" box represents the "Units" web page (Figure 4.4) and the folder "Units" that contains all of the websites for the units offered by the Department of Mathematics and Computing in the second term of 1997. By default all Webfuse pages are freely available to anyone on the Web. An access control facility can optionally restrict access to specific people or groups.

The Webfuse access control system does not make any distinction between types of accounts; there is no concept of a course designer, administrator, or student account in Webfuse (McCormack & Jones, 1997, p. 365). Each user account belongs to a number of groups. Groups can be assigned permissions to perform certain operations on Webfuse objects, which are either individual web pages or entire websites. The directory path that specifies where the object resides on the web server is used to uniquely identify each object. Initially, there were three valid operations that could be performed on an object (McCormack & Jones, 1997, p. 366):

  • access;
    The ability to access or view the page. By default all objects are able to be viewed by anyone on the web.
  • update; and
    The ability to modify the page using the page update process.
  • all.
    The ability to perform any and all operations on the object.

Home page for M&C in 1997

Figure 4.4 – The Units web page for M&C for Term 2, 1997

Some page types recognise additional operations that are specific to the operation of the page. For example, an early assignment management page type recognised a “mark assignment” operation (McCormack & Jones, 1997).

Table 4.2 provides an example of two different Webfuse permissions: one that gives members of the group "jonesd" permission to perform all operations on the entire website for the unit 85321, Systems Administration; and another that gives permission to edit just the home page of the 85321 website. An object that ends with a slash (/) indicates everything within that directory, while an object without the trailing slash indicates just that web page.

Table 4.2 – Example Webfuse permissions

            Modify 85321 website                  Modify 85321 web page
Object      /mc/Academic_Programs/Units/85321/    /mc/Academic_Programs/Units/85321
Operation   all                                   update
Group       jonesd                                jonesd
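Under the conventions just described – objects identified by directory path, a trailing slash covering a whole subtree, and the "all" operation subsuming the others – a permission check might look like the following. This is an illustrative Python sketch, not the original Perl implementation:

```python
# Sketch of a Webfuse-style permission check as described above.
# A permission is (object, operation, group); an object ending in
# "/" covers everything beneath that directory. Illustrative only.

PERMISSIONS = [
    ("/mc/Academic_Programs/Units/85321/", "all", "jonesd"),
    ("/mc/Academic_Programs/Units/85321", "update", "jonesd"),
]

def allowed(user_groups, obj, operation):
    """Return True if any permission grants `operation` on `obj`."""
    for perm_obj, perm_op, perm_group in PERMISSIONS:
        if perm_group not in user_groups:
            continue
        if perm_op not in (operation, "all"):   # "all" subsumes others
            continue
        if perm_obj.endswith("/"):              # whole subtree
            if obj.startswith(perm_obj):
                return True
        elif obj == perm_obj:                   # single page only
            return True
    return False
```

A page-specific operation such as "mark assignment" would, in this sketch, simply be another operation string checked the same way.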

A Perl script, called the page update script, included a check of the permission system to determine whether a particular person could edit the requested page. The page update script was also responsible for identifying the type of page being edited, accessing the appropriate code for the page type, and adding other information and services to the page update form. The other services available on the page update form fell into two main categories:

  1. Webfuse services; and
    A number of support services such as HTML validation, link checking, access control, file management and hit counters could be accessed via the page update form.
  2. Page characteristics.
    As well as the content managed by the page type, each web page also contained a number of characteristics, including the page type, title, colours used and the style template.
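The flow just described – check the permission system, identify the page type, dispatch to its code, then add services and characteristics – can be sketched as follows. Again, this is an illustrative Python approximation with invented names, not the original Perl page update script:

```python
# Illustrative sketch of the page update script's flow. The
# parameter shapes and names are hypothetical.

def handle_page_update(user_groups, page, allowed, page_types):
    """allowed: callable(groups, path, op) -> bool permission check;
    page_types: mapping from type name to form-building code."""
    if not allowed(user_groups, page["path"], "update"):
        return "403 Forbidden"
    page_type = page_types[page["type"]]        # e.g. "TableList"
    form = page_type(page["content"])           # build the update form
    # Webfuse services (HTML validation, link checking, file
    # management) and page characteristics (title, style template)
    # would be appended to the form at this point.
    return form
```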

The notion of a style or style template was used to further separate the appearance of a page from its content. This enabled the appearance of the same page, containing the same content, to evolve over time for whatever reason (this feature was added before the concept of cascading style sheets – CSS – was widely used). Figure 4.5 shows the same web page as Figure 4.1, but using a 1998 style for the Faculty of Informatics and Communication. This was done by editing the page, changing the style template and updating the page.

Content Index page example

Figure 4.5 – Guides web page (Figure 4.1) with a different style


Andrews, K. (1996). Position paper for the workshop, Hypermedia Research and the World-Wide Web. Paper presented at the Applying Hypermedia Research to the World-Wide Web, Hypertext’96.

Bass, L., Clements, P., & Kazman, R. (1998). Software Architecture in Practice. Boston: Addison-Wesley.

Bieber, M., Vitali, F., Ashman, H., Balasubramanian, V., & Oinas-Kukkonen, H. (1997). Fourth Generation Hypermedia: Some Missing Links for the World-Wide Web. International Journal of Human-Computer Studies, 47, 31-65.

Catlin, K., Garret, L. N., & Launhardt, J. (1991). Hypermedia Templates: An Author’s Tool. Paper presented at the Proceedings of Hypertext’91.

Coda, F., Ghezzi, C., Vigna, G., & Garzotto, F. (1998). Toward a Software Engineering Approach to Web Site Development. Paper presented at the 9th International Workshop on Software Specification and Design, Isobe, Japan.

Conklin, E. J. (1987). Hypertext: An introduction and survey. IEEE Computer, 20, 17-41.

Durand, D., & Kahn, P. (1999). MAPA: a system for inducing and visualizing hierarchy in websites. Paper presented at Hypertext'98, Pittsburgh, PA.

Garzotto, F., Paolini, P., & Schwabe, D. (1993). HDM – A model-based approach to hypertext application design. ACM Transactions on Information Systems, 11(1), 1-26.

Gregor, S., Jones, D., Lynch, T., & Plummer, A. A. (1999). Web information systems development: some neglected aspects. Paper presented at the Proceedings of the International Business Association Conference, Cancun, Mexico.

Jones, D. (1999). Webfuse: An integrated, eclectic web authoring tool. Paper presented at the Proceedings of EdMedia’99, World Conference on Educational Multimedia, Hypermedia & Telecommunications, Seattle.

Kotze, P. (1998). Why the hypermedia model is inadequate for computer-based instruction. Paper presented at the Sixth Annual Conference on the Teaching of Computing and the 3rd Annual Conference on Integrating Technology into Computer Science Education, Dublin City University, Ireland.

Liedtke, J. (1995). On micro-kernel construction. Operating Systems Review, 29(5), 237-250.

McCormack, C., & Jones, D. (1997). Building a Web-Based Education System. New York: John Wiley & Sons.

McDonald, S., & Stevenson, R. (1996). Disorientation in hypertext: the effects of three text structures on navigation performance. Applied Ergonomics, 27(1), 61-68.

Nanard, M., Nanard, J., & Kahn, P. (1998). Pushing Reuse in Hypermedia Design: Golden Rules, Design Patterns and Constructive Templates. Paper presented at the Proceedings of the 9th ACM Conference on Hypertext and Hypermedia.

Nielsen, J. (1997). Top ten mistakes of web management. Retrieved 27 July, 2009.

Oliver, R., Herrington, J., & Omari, A. (1999). Creating effective instructional materials for the World Wide Web. Paper presented at the AUSWEB’96, Gold Coast, Australia.

Ousterhout, J. (1998). Scripting: Higher Level Programming for the 21st Century. IEEE Computer, 31(3), 23-30.

Rossi, G., Lyardet, F., & Schwabe, D. (1999). Developing Hypermedia Applications with Methods and Patterns. ACM Computing Surveys, 31(4es).

Sano, D. (1996). Designing large scale web sites. New York: John Wiley & Sons.

Shin, E. C., Schallert, D. L., & Savenye, W. C. (1994). Effect of learner control, advisement, and prior knowledge on young students’ learning in a hypertext environment. Educational Technology, Research and Development, 42(1), 33-46.

Thimbleby, H. (1997). Gentler: A Tool for Systematic Web Authoring. International Journal of Human-Computer Studies, 47, 139-168.

Another spectrum for using indicators to place course websites

This post adds another perspective, borrowed from Gonzalez (2009), as a framework for reporting or evaluating findings from Col and Ken's indicators project. Col added an update on his work recently. Like the previous post, this one borrows a table of dimensions around conceptions of online learning because it may be helpful.

First the table and then how it might be used.


Dimensions delimiting approaches to online teaching (Gonzalez, 2009, p. 311)

  • Intensity of use
    Informative/individual learning focused: small range of media and tools used to support learning tasks and activities (mainly sources of information, with few opportunities for interaction and communication).
    Communicative/networked learning focused: wide range of media and tools used to support learning tasks and activities (with emphasis on interaction and communication).
  • Resources
    Informative/individual: web pages with information, lecture notes, links to websites.
    Communicative/networked: web pages with information, lecture notes, links to websites, discussion boards, chat, blogs, spaces for sharing, animations, videos, still images.
  • Role of the lecturer
    Informative/individual: select and present information.
    Communicative/networked: design spaces for sharing and communication; support the process.
  • Role of the students
    Informative/individual: study individually the information provided.
    Communicative/networked: participate in a process of knowledge building.

How might it be used

The above dimensions could be used to develop “analysis routines” that would place courses within these dimensions. Some potential approaches:

  • Variety and use of tools and media within a course site. (Intensity of use and Resources)
    Group the different tools available in the course management system into different types. e.g. those used for information distribution and those for interaction/communication. Count the number of different types of tools present in a course site and the level of usage.

    The difficulty here is the increasing use of non-CMS based tools for communication. e.g. I know of an increasing number of staff and students who are using external tools such as Messenger to work around the limitations of CMS services.

  • Measure student and staff activity (Role of the lecturer/students)
    I believe Blackboard, the main CMS at our institution, tracks in some detail the activity of each course site participant. If the types of activity can be categorised into groups (e.g. adding information to the site, using information on the site, posting to a discussion forum, responding to a post in a discussion forum) then analysis could be run against the activity of all participants. This would identify the type of role the main groups are taking on.
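A first cut at such an analysis routine – grouping activity records into informative versus communicative categories – might look like the following. This is a hypothetical Python sketch: the action names are invented, and real Blackboard tracking data will certainly differ:

```python
# Hypothetical sketch: classify raw CMS activity records into the
# two broad categories suggested by the dimensions above. The action
# names are invented placeholders, not real Blackboard event types.

CATEGORIES = {
    "view_content": "informative",
    "download_file": "informative",
    "post_forum": "communicative",
    "reply_forum": "communicative",
    "chat_message": "communicative",
}

def summarise(activity_log):
    """Count informative vs communicative actions per participant.

    activity_log is a sequence of (user, action) records."""
    counts = {}
    for user, action in activity_log:
        category = CATEGORIES.get(action, "other")
        counts.setdefault(user, {}).setdefault(category, 0)
        counts[user][category] += 1
    return counts
```

The resulting per-participant counts would place a course, roughly, along the informative/communicative dimension, subject to the caveat above about activity happening outside the CMS.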

What’s the value of this?

I can hear some thinking, “so what!”. What is the value of this sort of thing? A couple of thoughts.

  • As a framework to help make sense of the data.
    From my perspective it appears that the project is “drowning in data” and could use some sort of reviewed framework with which to organise or structure their investigations. These dimensions might provide it.
  • Enable institutions to get a handle on what is happening.
    Most of it ain’t great. The combination of the dimensions and the data potentially enable institutions, that are spending a lot of money on course management systems, to improve the awareness they have of what is actually happening. At the very least some sort of indication of where online courses site within the institution, as imperfect as it will be, sit within the dimensions might start some conversations about online practice that is actually somewhat informed by the reality of what is going on.
  • As a demonstration of building on the work of others.
    It is possible to argue with the value/validity of the knowledge generated by Gonzalez (2009) – but then it's possible to argue against the validity of the knowledge generated by just about any research project, depending on your perspective. However, this work is in a fairly prestigious journal, so it comes with a certain stamp of approval. This will help Col and Ken.
  • Perfect opportunity for a publication.
    Building on the last point, complementing the qualitative nature of Gonzalez (2009) with some more quantitative measures from a broad collection of students and courses sounds like a pretty good publication opportunity (or three).

It’s the potential for discussion within the organisation that is, I believe, of potentially the most beneficial for the most people.

The potential for publication is probably the most interesting to the project participants and frankly by far the easiest.

Further publications

The publication idea would be strengthened if previous work in this area (e.g. the recent ALTC project Learning and teaching performance indicators report) either doesn't do something like this or uses a different set of dimensions.

In addition, Gonzalez interviewed only 7 academics within a single discipline at a single institution. Chances are the results and dimensions identified in the paper exhibit some limitations, potentially caused by the nature of the context. Using a different approach in a different context will at least complement/reinforce the findings and potentially identify additional dimensions.


Gonzalez, C. (2009). “Conceptions of, and approaches to, teaching online: a study of lecturers teaching postgraduate distance courses.” Higher Education 57(3): 299-314.
