Assembling the heterogeneous elements for (digital) learning

Month: July 2009

Some early results from Webfuse evaluation

The following contains some early results from the evaluation of Webfuse course sites as mentioned in the last post. The aim is to get a rough initial feel for how the course sites created with Webfuse in the late 90s and early 00s stack up using the framework produced by Malikowski et al (2007). Unlike other PhD work, this is a case of “showing the working”.

How many page types?

First, let’s see how many page types were used each year. The following table summarises the total number of pages and the number of different page types used in each year (in some years there were page types with different names but only very slightly different functionality; the following stats are rough and don’t take that into account).

Year # pages # page types
1999 4376 27
2000 3058 39
2001 1155 23
2002 9099 42
2003 9302 40

The table above shows that the number of pages managed in Webfuse dropped significantly from 2000 to 2001. 2001 is when the new default course site structure was put in place and when (I think) the courses 85321 and 85349 (which I taught) stopped including the archives of previous offerings. Check this. May need to look at excluding some of these from consideration.

During this time there were some page types which had different names, and so would be counted more than once in the above, but which were essentially the same. The same page types should be counted once.

I have to save the commands used to do this somewhere, so it may as well be here:

find . -name CONTENT -exec grep PageType {} \; > all.pageTypes
sed -e '1,$s/^.*PageType *//' all.pageTypes | sort | uniq -c > all.pageTypes.count
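As a quick sanity check, the pipeline can be exercised on a tiny made-up sample tree. The directory layout and the `PageType Name` line format are my assumptions about what the CONTENT files contain:

```shell
# Sketch only: exercise the page-type counting pipeline on a throwaway
# sample tree. The layout and CONTENT file format are assumptions,
# not the real Webfuse archive.
mkdir -p sample/site1 sample/site2
printf 'PageType Index\n' > sample/site1/CONTENT
printf 'PageType Index\n' > sample/site2/CONTENT
# Same pipeline as above, pointed at the sample tree
find sample -name CONTENT -exec grep PageType {} \; > all.pageTypes
sed -e 's/^.*PageType *//' all.pageTypes | sort | uniq -c > all.pageTypes.count
cat all.pageTypes.count
```

With two sites both using the Index page type, the count file should report a single page type used twice.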

Calculate the percentage of page type usage per framework

The next step is a simple calculation. Allocate each page type to one of the categories of the Malikowski et al (2007) framework and show the percentage of the pages managed by Webfuse that fall into each category. This isn’t exactly what Malikowski et al (2007) count; they count the percentage of courses that use features in each category.
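Mechanically, this calculation is just a join of the per-page-type counts against a hand-made pageType-to-category mapping. A rough shell sketch; all file names, formats and numbers here are made up for illustration:

```shell
# Sketch: join per-page-type counts with a hand-made category mapping
# and report the percentage of pages per category. Everything here
# (file names, formats, numbers) is an illustrative assumption.
cat > type2cat.txt <<'EOF'
Index content
Discussion interactions
Quiz evaluating_students
EOF
cat > all.pageTypes.count <<'EOF'
90 Index
8 Discussion
2 Quiz
EOF
# First pass loads the mapping; second pass accumulates counts per category
awk 'NR==FNR { cat[$1] = $2; next }
     { count[cat[$2]] += $1; total += $1 }
     END { for (c in count)
             printf "%s %.1f%%\n", c, 100 * count[c] / total }' \
    type2cat.txt all.pageTypes.count | sort > category.percent
cat category.percent
```

With the sample numbers above, 90 of 100 pages land in the content category, so it reports 90.0% for content.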

The Malikowski et al (2007) framework includes the following categories:

  • transmitting content;
  • creating class interactions;
  • evaluating students;
  • evaluating course and instructors;
  • computer based instruction.
    Not included – there are no Webfuse page types that provide functionality that fits with this category.

The following table shows the percentage of pages managed by Webfuse that fall into each category per year. It’s fairly obvious from the first year done (1999), and confirmed by the second, that this approach doesn’t really say a lot. Time to move on.

Category 1999 2000 2001 2002 2003
Transmitting content 97.5% 84.5%
Class interactions 1.9% 13.5%
Evaluating students 0.1% 1.5%
Evaluating course 0.5% 0.6%

Calculate the % of courses using each category

In this stage I need to:

  • Count the number of courses in each year.
  • Count the % of courses that have features of each category.

Technically, all of these courses will have features for transmitting content, so that category would be 100% for every year; consequently I’ve not included it. Need to recheck the Malikowski definition.

Also, 2001 seems to be missing a couple of the main terms, so it’s had to be excluded – for now. See if the missing terms can be retrieved.

Category 1999 2000 2002 2003
Number of course sites 190 175 315 309
Class interactions 7.9% 43.5% 11% 66.6%
Evaluating students 2.6% 6.3% 12% 21.7%
Evaluating course 9.5% 7.5% 14% 91.6%

Commands I used to generate the above

find aut2000 spr2000 win2000 -name CONTENT -exec grep -H PageType {} \; > course.pageTypes
...vi to get period/course:pageTypeName
sort course.pageTypes | uniq  | sort -t: -k2 > course.pageTypes.uniq
... edit to move the page types around
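A sketch of how the per-course percentages might then be computed, assuming the edited file has been reduced to course:category lines. Everything here (course codes, categories, numbers) is illustrative, not real data:

```shell
# Sketch: percentage of courses using each category at least once,
# from made-up course:category lines. All names/numbers are invented.
cat > course.categories <<'EOF'
85321:interactions
85321:evaluating_students
85349:interactions
80212:content
EOF
# Number of distinct courses
total=$(cut -d: -f1 course.categories | sort -u | wc -l)
# Count each course at most once per category, then turn into percentages
sort -u course.categories |
awk -F: -v total="$total" \
    '{ n[$2]++ }
     END { for (c in n) printf "%s %.1f%%\n", c, 100 * n[c] / total }' |
sort > course.category.percent
cat course.category.percent
```

With three distinct courses and two of them using class interactions, interactions comes out at 66.7%.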

Now, there are some interesting results in the above. Have to check the 2000 and 2002 results for class interactions – an unusual dip.

The almost 92% of courses with a course evaluation feature in 2003 is due to the rise of the course barometer explained in Jones (2002).

Too late to reflect anymore on this. Off to bed.

References

Jones, D. (2002). Student feedback, anonymity, observable change and course barometers. World Conference on Educational Multimedia, Hypermedia and Telecommunications, Denver, Colorado, June 2002, pp. 884-889.

Malikowski, S., Thompson, M., & Theis, J. (2007). A model for research into course management systems: bridging technology and learning theory. Journal of Educational Computing Research, 36(2), 149-173.

Thinking about evaluating Webfuse (1996 through 1999) – evaluation of an LMS?

For the last couple of weeks I’ve been working on chapter 4 of my thesis. I’ve worked my way through explaining the context (general context and use of e-learning), the design guidelines and the implementation (parts 1, 2 and 3). I’ve now reached the evaluation section, where I’m meant to describe what happened with the use of Webfuse and make some judgement calls about how it went.

The purpose of this post is to make concrete what I’m thinking about doing. A sort of planning document. I don’t think it will be of much use to most others, though the following section on related work might be of some interest.

Other related work

Indicators project

Col and Ken, two colleagues at CQU, have started the indicators project, which is seeking to provide academics with tools to reflect on their own usage of LMSes. Their most recent presentation is up on Slideshare (where’s the video Col?).

They are currently drawing primarily on data from the Blackboard LMS which was used at CQU from about 2004 through 2009. Webfuse was essentially a parallel system, but it ran from 1997 through 2009. Both are being replaced by Moodle in 2009.

At some stage, I am hoping to mirror the work they are doing with Blackboard on Webfuse. This will complete the picture to encompass all e-learning at CQU and also potentially provide some interesting comparisons between Webfuse and Blackboard. This will be somewhat problematic as there are differences in assumptions between Webfuse and Blackboard. For example, Webfuse generally doesn’t require students to login to visit the course website. Most are freely available.

Some of the data from Ken’s and Col’s presentation about Blackboard:

  • 5147 courses – it would be interesting to hear the definition of “course”, as a number of Blackboard courses during this period were simply pointers to Webfuse courses.
  • Feature adoption, using a framework adapted from Malikowski (2007), as a percentage of online courses from 2005 through 2009:
    • Files: ranging from 50% to 78%
      Which raises the question, what did the other 22-50% of courses have in them, if no files? Just HTML?
    • News/Announcements: ranging from 77% to 91% (with a peak in 2007).
    • Gradebook: ranging from 17% to 41%
    • Forums: ranging from 28% to 61%
    • Quizzes: ranging from 8 through 15%
    • Assignment submission: ranging from 4 to 20%.

    An interesting peak: in most of the “lower level” features there seems to have been a peak, in percentage terms, in 2007. What does that mean? A similar, though less pronounced, peak is visible in the forums, quizzes and assignment submission categories.

    Might be interesting to see these figures as a percentage of students. Or perhaps with courses broken down into categories such as: predominantly AIC (CQU’s international campuses), predominantly CQ campuses, predominantly distance education, large (300+ students), small, complex (5+ teaching staff), simple.

  • Hits on the course site
    There are a couple of graphs that show a big peak at the start of term with a slow drop-off, and the occasional peak during term.

    It might be interesting to see the hit counts for those courses that don’t have discussion forums, quizzes or assignment submission. I feel that these are the only reasons there might be peaks as the term progresses as students use these facilities for assessment.

  • Student visits and grades.
    There are a few graphs that show a potentially clear connection between the number of visits to a course site and the final grade (e.g. High Distinction students – top grade – average a bit over 500 hits, while students who fail average just over 150 hits). It is more pronounced for distance education students than for on-campus students (e.g. distance ed high distinction students average almost 900 hits).
  • Average hits by campus.
    Distance education students averaged almost 600 hits. Students at the AICs, less than 150.
  • Average files per course in term 1.
    Grown from just over 10 in 2005 to just over 30 in 2009.

    I wonder how much of this is through gradual accretion? In my experience most course sites are created by copying the course site from last term and then making some additions/modifications. Under this model, it might be possible for the average number of files to grow because the old files aren’t being deleted.

Malikowski, Thompson and Theis

Malikowski et al (2007) proposed a model for evaluating the use of course management systems. The following figure is from their paper. I’ve made use of their work when examining the quantity of usage of LMS features in my thesis (read this if you want more information on their work).

Malikowski Flow Chart

Purpose of the evaluation

The design guidelines underpinning Webfuse in this period were:

  • Webfuse will be a web publishing tool
  • Webfuse will be an integrated online learning environment
  • Webfuse will be eclectic, yet integrated
  • Webfuse will be flexible and support diversity
  • Webfuse will seek to encourage adoption

I’m assuming that the evaluation should focus on the achievement (or not) of those guidelines. The limitation I have is that I’m restricted to archives of websites and system logs. I won’t be asking people, as this was 1996 to 1999.

Some initial ideas, at least for a starting place:

  • Webfuse will be a web publishing tool
    How many websites did it manage? How many web pages on those sites? How much were they used by both readers and authors?
  • Webfuse will be an integrated online learning environment
    Perhaps use the model of Malikowski et al (2007) to summarise the “learning” functions that were present in the course sites. Some repeat of figures from the above.

    I recognise this doesn’t really say much about learning. But you can’t really effectively judge learning any better when using automated analysis of system logs.

  • Webfuse will be eclectic, yet integrated
    This will come down to the nature of the structure/implementation of Webfuse. i.e. it was eclectic, yet integrated
  • Webfuse will be flexible and support diversity
    Examine the diversity of usage (not much). Flexibility will arise to some extent from the different systems implemented.
  • Webfuse will seek to encourage adoption.
    This will come back to the figures above. Can be a reflection on the statistics outlined in the first two guidelines.

Process

So, there’s a rough idea of what I’m going to do, what about a rough idea of how to implement it? I have access to copies of the course websites for 1998 and 1999. I’m hoping to have access to the 1997 course sites in the next couple of weeks, but it may not happen – some things are just lost to time – though the wayback machine may be able to help out there. I also have the system logs from 1997 onwards.

In terms of meeting Malikowski et al’s (2007) framework, I’ll need to

  • Unpack each year’s set of course websites.
  • Get a list of all the page types used in those sites.
  • Categorise those page types into the Malikowski framework.
  • Calculate percentages.
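The four steps above might be scripted roughly as follows. Every path, archive name and file format here is an illustrative assumption, not the real archive layout:

```shell
# Sketch of one year's pipeline (steps 1-4 above); all names are invented.
year=1999
mkdir -p "unpacked-$year/85321"
# 1. Unpack the year's course websites (hypothetical archive name):
#    tar xzf "sites-$year.tar.gz" -C "unpacked-$year"
# Stand-in content so the sketch runs end to end:
printf 'PageType Index\n' > "unpacked-$year/85321/CONTENT"
# 2. Get a list of all the page types used in those sites
find "unpacked-$year" -name CONTENT -exec grep PageType {} \; |
    sed -e 's/^.*PageType *//' | sort | uniq -c > "pageTypes.$year"
# 3 & 4. Categorising and calculating percentages would then join this
#        output against a hand-edited pageType->category mapping.
cat "pageTypes.$year"
```

The per-year count file is then the input to the categorisation step.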

In terms of looking at the files uploaded to the sites, I’ll need to repeat the above, but this time on all the files and exclude those that were produced by Webfuse.

Author updates – I can parse the web server logs for the staff who are updating pages. The same parsing will be able to get records for any students who had to login. This will be a minority.

References

Malikowski, S., Thompson, M., & Theis, J. (2007). A model for research into course management systems: bridging technology and learning theory. Journal of Educational Computing Research, 36(2), 149-173.

Gaps, shadow systems and the VLE/LMS

One of my continuing “rants” that long-time readers of this blog will be familiar with is the lack of fit between enterprise systems and what people want to do with them. I’ve blogged about this with enterprise systems, learned to live and thrive in spite of that gap and drawn some lessons from it for enterprise systems.

It’s even become a bit of a family activity with my wife’s Masters research being aimed at attempting to explain the most common response to the lack of fit between people’s requirements and the enterprise systems put in place to fulfill them – shadow systems. The following image is of the model that arose out of Sandy’s work (Behrens and Sedera, 2004).

Sandy's Shadow System Model

One description of the model is that a gap arises (almost inevitably in my opinion) between the enterprise system and the needs of the users. It is created by a range of conditions and can be increased or reduced by two others. The existence of this gap leads to the development of shadow systems. These might simply be employing lots of other people to perform tasks manually that a system should provide. Or it might include developing additional systems to fill the gap.

The gap, shadow systems and the VLE/LMS

My current institution is in the process of adopting Moodle as its sole VLE/LMS. From one perspective Moodle will become another enterprise system supported by the IT folk to achieve business outcomes. That’s certainly one perspective of how Moodle is being rolled out at the institution. For me this raises some interesting questions:

  • Will Moodle suffer the same problems in terms of the gap and shadow system as other enterprise systems?
    I tend to think this is almost certain to happen, given the diversity and contrary nature of academics, the diversity inherent in L&T, attempts at the institution to standardise L&T at the same minimum, and the on-going lack of institutional support for L&T.
  • What form will those shadow systems take?
    The rise of social media, web 2.0 etc tools and their broader use by academics – especially those in the younger generation – and students offers one likely source of the shadow systems.
  • How will the organisation respond?
    The traditional, almost reactionary, response from organisations is that shadow systems are evil and need to be stamped out. That is, if the organisation even becomes aware of them.
  • What are more appropriate ways for the organisation to respond?
    Some colleagues and I have made suggestions previously.
  • Where will the gaps arise?

Where will the gaps arise? An example.

Thomas Duggan, another member of staff at my current institution, has recently posted an outline of a paper he is working on which seems to detail one of the sources of the gap. It’s a source that seems to potentially fall within the “People” causal condition from Sandy’s model above.

Tom teaches at the institution’s indigenous learning centre Nulloo Yumbah. The paper Tom is working on is built on literature around indigenous learning styles and seeks to see how well Moodle can accommodate those styles. It will be interesting to see what he finds out.

My guess is that how well Moodle will fit these learning styles will depend on many of the factors covered in Sandy’s model. For example:

  • Technology/People;
    Moodle is meant to embody/support a specific learning theory. If you accept that (I still question it), there will be a good fit if that learning theory matches the indigenous learning styles as outlined in the literature Tom is drawing on. If the Moodle learning theory and the indigenous learning styles don’t match, then there will be trouble.
  • Organisation;
    Much is made of Moodle being open source, and of open source meaning flexible and able to be changed. This point was pushed quite hard in the various sessions promoting Moodle at the institution. However, such a point misses the significant role played by the policies adopted by an institution implementing Moodle. Open source might mean more flexibility, but if the organisation decides on a vanilla implementation, that flexibility is lost. If the organisation doesn’t set up the resources and processes to support and inform that flexibility, then open source is meaningless.
  • Business processes;
    The institution has adopted a minimum standard for online courses. If there is a mismatch between the indigenous learning styles and the minimum standards, then it might be interesting.
  • Organisation (again);
    The minimum standards are mostly (almost entirely) being driven by the two faculties at our institution and their management. Nulloo Yumbah and the courses it teaches do not, I believe, fit within a faculty. Perhaps the minimum standards don’t apply, or don’t apply as strictly.
  • People; and
    Tom has a background in technology. This means that even if there is a mismatch between Moodle and the indigenous learning styles he may be able to come up with kludges within Moodle that overcome that mismatch. In part, this will come from Tom really understanding the Moodle model and then being able to innovate around it.
  • People (again).
    Tom, through the blogosphere, twitter and general disposition has established social networks with a range of people that also have experience in L&T and technology, including e-learning. He’s also likely to be able to draw upon those people to come up with workarounds to any gaps.

Just one example

There will be anywhere from about 500 to 1000 courses at this institution that will have to go into Moodle over the next year or so. The above process/set of conditions is likely to apply in each of them. There will be a large number of people having to go through this process. My fear/belief is that most of them, because of a range of contextual and personal reasons, simply won’t bother. They will do the bare minimum of work necessary to meet the set minimal standard and won’t bother overcoming the gap that exists.

Another fear is that many of the people who want to overcome the gaps won’t have the knowledge, time or support to overcome the gap. Instead they will have to make do with what they have and over time get increasingly dispirited.

Lastly, those people that have the knowledge and time (a very small minority) will spend a large amount of time experimenting and will end up developing workarounds. The knowledge that goes into those workarounds will never be captured and disseminated, which makes it likely that many people will repeat the exploration process over and over again. Make the same mistakes and reinvent the wheel. Worse, few or none of these people will be recognised.

Solutions?

Well, there’s probably quite a few interesting little research projects and publications in keeping a close eye on how this rolls out. What are the gaps that people face?

Even more interesting would be putting in processes and resources that would enable people to effectively respond to these gaps. And this doesn’t mean using the traditional process for requirements gathering with enterprise systems.

References

Behrens, S., & Sedera, W. (2004). Why do shadow systems exist after an ERP implementation? Lessons from a case study. Proceedings of PACIS 2004, Shanghai, China.

Jones, D. (2003). How to live with ERP systems and thrive. Presented at the Tertiary Education Management Conference 2003, Adelaide.

Jones, D., Behrens, S., Jamieson, K., & Tansley, E. (2004). The rise and fall of a shadow system: Lessons for enterprise system implementation. Paper presented at the Managing New Wave Information Systems: Enterprise, Government and Society, Proceedings of the 15th Australasian Conference on Information Systems, Hobart, Tasmania.

BAM into Moodle #9 – a working eStudyGuide block?

The last post finalised some bits of knowledge I needed; now it is time to put it into action and complete the eStudyGuide block to a barely useful level.

Steps required include:

  • Add the username/password to global config.
  • Retrieve the xml file for the course using curl.
  • Parse the InDesign xml using Moodle.
  • Modify the HTML produced to use that information.
  • Retrieve the file for the name of the module, chapter etc.
  • Generate the HTML for the block based on that content
  • Initially, retrieve the PDF files via normal http connections to where the guides are located (will require user to login again).
  • Replace that with the use of curl.

There’s still an outstanding problem with the naming used in some courses. i.e. those that have an “introduction”.

Add username/password to global config

Fairly simple to add the form elements to the global config – simply edit config_global.html. However, a small problem: the text elements are giving the following errors:

Notice: Undefined property: stdClass::$block_estudy_guide_username in … on line 8

Interesting, there doesn’t seem to be any difference between the use of those variables in the code and the existing one for base_url. The one difference is that base_url already has a manually set value. Surely there should be a way to initialise these to empty?

Ahh, it turns out it’s connected with the level of debug options – I had everything turned on for development. Returned it to normal levels for the live box – no worries.

Retrieve the xml file

All the necessary variables have been calculated; let’s add a function to return the xml file as a variable:

function getXml() {
    global $CFG;

    // $base_url/YEAR/PERIOD/COURSE/eStudyGuide/COURSE.xml
    $url = $CFG->block_estudy_guide_base_url .
           $this->content->year . "/" . $this->content->period . "/" .
           $this->content->course . "/eStudyGuide/" .
           $this->content->course . ".xml";
    $auth = $CFG->block_estudy_guide_username . ":" .
            $CFG->block_estudy_guide_password;

    $curl_handle = curl_init();

    if ( $curl_handle )
    {
        // $fp = fopen("tmpfile", "w");
        // Configure curl options
        curl_setopt($curl_handle, CURLOPT_URL, $url);
        curl_setopt($curl_handle, CURLOPT_CONNECTTIMEOUT, 2);
        curl_setopt($curl_handle, CURLOPT_RETURNTRANSFER, 1);
        curl_setopt($curl_handle, CULTOPT_HTTPAUTH, CURLAUTH_ANY);
        curl_setopt($curl_handle, CURLOPT_USERPWD, $auth);

        // get the stuff
        $buffer = curl_exec($curl_handle);
        curl_close($curl_handle);

        return $buffer;
    }
}

Well that compiles, all I have to do now is figure out how to call it properly. Ahh, $this of course.

Oops, undefined constant CULTOPT_HTTPAUTH – dyslexic fingers – that should be CURLOPT_HTTPAUTH.

Next problem: the base url doesn’t seem to be coming across properly. Ahh, the web server error only gives the path – I had the wrong course code. The testing course code doesn’t have a guide for 2092. Yep, that’s working now. Now to parse the bugger.

Parse the XML

The first question is exactly what information I need to get out of the XML file. The file basically gives a summary of the chapters and headings within the study guide. The TOClev1 tag is used for the chapter titles. The original version gets the titles for each chapter from the XML file and displays them next to the number of the chapter. Given the sparse real estate in a Moodle block, the title of the chapter isn’t going to fit. So we don’t need that.

Essentially, all we need to do is count the number of TOClev1 entries in the XML file.

Xmlize uses a collection of nested associative arrays where, at least for some, the key is the tag. So, to some extent, I should be able to simply count the number of TOClev1 keys.
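As a quick sanity check outside PHP, the chapter count can be approximated straight from the XML with grep. The sample below is a guessed, simplified version of the InDesign export format, not the real file:

```shell
# Sketch: count chapter (TOClev1) entries straight from the XML.
# The sample document is a guessed, simplified InDesign export.
cat > COURSE.xml <<'EOF'
<Story>
  <Heading1>Contents</Heading1>
  <TOClev1>Internal control 65</TOClev1>
  <TOClev1>Audit sampling 97</TOClev1>
  <TOClev2>Introduction 9</TOClev2>
</Story>
EOF
# Count lines containing a TOClev1 opening tag
grep -c '<TOClev1>' COURSE.xml > toc.count
cat toc.count
```

For the two-chapter sample above this prints 2, which is the number the block needs.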

Ahh, there’s a nice little function traverse_xmlize within Xmlize that displays the array Xmlize produces in a format that is pretty readable. Here’s an example:

$xml_[Story][#][Heading1][0][#] = "Contents"
$xml_[Story][#][TOClev1][0][#] = "The auditing and assurance services profession 9"
$xml_[Story][#][TOClev1][1][#] = "Ethics, independence and corporate governance 19"
$xml_[Story][#][TOClev1][2][#] = "The legal liability of auditors 29"
$xml_[Story][#][TOClev1][3][#] = "The financial report audit process 43"
$xml_[Story][#][TOClev1][4][#] = "Planning and evaluating business risk 51"
$xml_[Story][#][TOClev1][5][#] = "Assessing specific business risks and materiality 59"
$xml_[Story][#][TOClev1][6][#] = "Internal control 65"
$xml_[Story][#][TOClev1][7][#] = "Tests of controls 73"
$xml_[Story][#][TOClev1][8][#] = "Substantive tests of transactions and balances 81"
$xml_[Story][#][TOClev1][9][#] = "Audit sampling 97"
$xml_[Story][#][TOClev1][10][#] = "Completion and review 105"
$xml_[Story][#][TOClev1][11][#] = "The auditor's reporting obligations 111"
$xml_[Story][#][TOClev2][0][#] = "Introduction 9"
$xml_[Story][#][TOClev2][1][#] = "Learning objectives 9"

Very helpful. Essentially all I need to do is count the number of elements in one of the arrays. How do you count array elements in PHP? Why, the count function of course. That’s easy:

print "count is " . count( $data['Story']['#']['TOClev1'] ) . "\n";

Of course, Rolley was thinking about using the chapter titles in a rollover or some other GUI rubbish. We should probably get the titles after all. So, a simple loop through and a bit of RE replacement to get rid of the page number:

$title = $data['Story']['#']['TOClev1'][0]['#'];
$title = preg_replace( '/\s+[0-9]+$/', '', $title );

Modify the HTML

Well that’s worked. Simple mod of existing for loop with all the above data.

Small problem: the title is being set to “Array”, so something is going wrong. There’s also no code in there yet to get rid of the page number. Need to look at this.

Ahh, forgot the ['#'] needed at the end of the array de-reference. You’ve gotta love complicated, large nested associative arrays – maybe there was some benefit to all those years of Perl programming.

And here’s the “proof”, a “working” eStudyGuide block for Moodle – though it still needs a bit of polishing.

Moodle eStudyGuide block

Retrieve the name of the module

Different courses use different titles for the chapters. So far the options include: module, chapter, topic, and week. Need the block to use the appropriate name. Am wondering if the possible options should be part of the global configuration — probably. Can I be bothered? Yes, probably should.

So, add a textarea to the global config and allow those options to be entered – one to a line. Idea will be that the code will split it up into an array and work on that. A simple kludge.

Oops, not so simple. I enter data into the chapter titles and it disappears. Why?

You know it’s getting to be a long day – perhaps past when you should stop coding – when you make a mistake like this. They are disappearing because you’re not displaying the variable you are storing them in when you are showing the form.

How do you split a string in PHP into an array? Spoilt for choice. I’ll go with preg_split – like my REs.

Okay, got curl checking for the various files. However, there appear to be some issues with checking whether the retrieval actually worked. We’re returning straight away with the first title in the config, even though there should be a file for it. When in doubt, try the negation of what you just did – and that worked – ! $buffer

So, this should be fully working. Time for some tests.

This is why you shouldn’t test. SOCL11056 is a bit different. Not all the files use the file naming format that involves the “module title”. The first one has “introduction”. Bugger. And the old Perl scripts handle it. Will have to see what the deal is there. How did that work?

Ahh, they relied on being able to access the file system. That’s not going to be possible here. That’s going to have to change. Need to talk to some folk about that. Solution can wait.

Serve PDFs with curl

This will be interesting. The problem is that the location of the eStudyGuide PDFs is behind HTTP basic auth. Student accounts have permission to access the files; however, they will need to login again (having already logged into Moodle). Want to avoid this. One solution might be to have the block generate a link that points back to itself or another service. The “other service” uses curl to go through the HTTP auth, get the file and then send it back to the user.

Question: can you generate a URL to access a service provided by a block? This sounds like it might be beyond the scope of a block.

Actually, it might be as simple as putting a “standard” PHP file into the directory for the block and calling it directly from the block. This seems to work. Probably only need to pass the URL as the form element. The getPDF.php file simply takes a URL, checks that it is within the BASE_URL and sends it back to the user’s browser.

That means, I need to figure out how to:

  • send a URL via http appropriately – urlencode probably, maybe not worry about it, at least for now.
  • have the getPDF.php file access the global variables so it can get base_url

Ahh, there’s a whole rigmarole (good value though) in setting up form processing. No time to do that. Will have to leave it there.

BAM into Moodle #8 – finishing the eStudyGuide building block

The last post in this series described the start of a little project to learn more about PHP/Moodle programming in order to get BAM into Moodle. Essentially everything is done; there are two main tasks left:

  • Identify how to “properly” retrieve a file over http in PHP/Moodle and figure out how to use it.
  • Confirm that phpxml is the best way to parse XML in PHP/Moodle and figure out how to use it.

Once those are done, a rudimentary eStudyGuide block will be complete and I’ll have filled in two of the main holes in my knowledge necessary to put BAM into Moodle.

How to retrieve a file over http in PHP/Moodle

What a difference some time makes. I spent a bit of time Tuesday hunting the web and Moodle for information on this. This morning, apparently, it took 5 minutes. curl seems to be the go.

Starting with this curl tutorial – not to mention the examples here

Here’s a list of questions I think I need to answer around the use of curl, and hopefully the answers I’ve found:

  • How do you use curl to get through basic auth?
     curl_setopt($curl_handle, CURLOPT_HTTPAUTH, CURLAUTH_ANY);
    curl_setopt($curl_handle, CURLOPT_USERPWD, 'username:password');

    CURLAUTH_ANY is a ?constant? that says use any HTTP auth method.

  • How do you set a mime-type on what’s going back to the client?
    The simplest examples simply get the remote file and return it to the browser. If you do this with a non-HTML file there appear to be some issues around the client handling it appropriately.

    One solution I’ve found is to use the CURLOPT_FILE option to save what is returned by curl to the file system. Then use the header and readfile functions to set everything up appropriately i.e.

    header("Content-type: image/jpeg");
    header("Content-Disposition: attachment; filename=imageName.jpg");
    readfile("tmpfile");

    Would imagine you’d have to use some sort of session variable to keep the filename unique and also remember to remove the file.

    Wonder if you can use header without the need for readfile? Yep, that works – use the CURLOPT_RETURNTRANSFER option so that the file is returned as a string, and then use the following:

    header("Content-type: image/jpeg");
    header("Content-Disposition: attachment; filename=imageName.jpg");
    print $buffer;

    Of course the question now becomes what if you are transferring really large files. Won’t that consume “RAM” for the web server and on a heavily used site cause some “issues”? So maybe the file option is better.

  • What are the necessary checks etc you should do when using curl?
    Seem to be all fairly standard ones: check return values etc., don’t do horrible security stuff. That said, there seems to be some variability within the existing Moodle code that is using curl – some of it is quite anal about checks.
  • What’s TRUE in php?
    CURLOPT_BINARYTRANSFER needs to be set to TRUE for transferring binary files. What’s the numeric value for TRUE in PHP? Okay, 0 is false and TRUE casts to 1. Somewhat familiar.
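Pulling the answers above together, a minimal sketch of the string-based approach might look like the following. The URL, credentials, filename and content type are all placeholders, not anything from Moodle or BAM:

```php
<?php
// Sketch: fetch a remote file through basic auth and return it to
// the browser via CURLOPT_RETURNTRANSFER (the string approach).
// URL, credentials and content type below are placeholders.
$curl_handle = curl_init('https://example.com/protected/image.jpg');
curl_setopt($curl_handle, CURLOPT_HTTPAUTH, CURLAUTH_ANY);
curl_setopt($curl_handle, CURLOPT_USERPWD, 'username:password');
curl_setopt($curl_handle, CURLOPT_RETURNTRANSFER, true);
curl_setopt($curl_handle, CURLOPT_BINARYTRANSFER, true);

$buffer = curl_exec($curl_handle);
if ($buffer === false) {
    // One of those "fairly standard" checks.
    die('curl error: ' . curl_error($curl_handle));
}
curl_close($curl_handle);

// Tell the client what it is getting, then send the bytes.
header('Content-type: image/jpeg');
header('Content-Disposition: attachment; filename=imageName.jpg');
print $buffer;
?>
```

For large files the CURLOPT_FILE variant would replace the RETURNTRANSFER/print pair with a temporary file and readfile, as described above.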

Parsing XML

It appears, at the moment, that the “xmlize” library in Moodle is the simplest method to parse XML. It produces a nested data structure containing the content, pretty similar to what is done at the moment. Is there something better?
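For the record, usage looks something like the following sketch. The include path and the exact nesting of the returned array are based on my reading of Moodle’s lib/xmlize.php, and the XML fragment is made up:

```php
<?php
// Sketch of parsing a small XML fragment with Moodle's xmlize
// library. xmlize() returns a nested array keyed as:
//   tag => '#' => child tag => index => '#' => content
require_once('lib/xmlize.php');

$xml  = '<guide><chapter>Introduction</chapter></guide>';
$data = xmlize($xml);

// Drill down to the text content of the first <chapter> element.
print $data['guide']['#']['chapter'][0]['#'];
?>
```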

Given that parsing XML isn’t a main requirement for BAM, I won’t bother going any further. I think I’ll be using Magpie to parse the RSS that BAM needs to manipulate.

xmlize is simple to use, looks like it is time for lunch. After lunch will be trying to code all this up. I want a working eStudyGuide block by the end of the day.

The design and implementation of Webfuse – Part 3

The following is the last of, what is now, a three part series of blog posts outlining the design and implementation of the Webfuse system. These are part of chapter four of my thesis. The previous two parts are here and here.

The structure of this section is based on the design guidelines developed for Webfuse and outlined in a section in this post. Each of the three posts outlining the design and implementation of Webfuse uses the design guidelines as the structure through which to explain the implementation of Webfuse. This post closes out the implementation by looking at the final two guidelines – be flexible and support diversity, and encourage adoption.

Webfuse will be flexible and support diversity

The aims that flexibility and support for diversity, as outlined in Section 4.3.2, were meant to achieve included enabling a level of academic freedom, handling the continual change seen as inherent in the Web, and providing a platform that allowed the design and use of Webfuse to change in response to the increased knowledge arising from experience and research. It was intended to achieve these aims through a number of guidelines outlined in Section 4.3.2. The following seeks to explain how the design and implementation of Webfuse fulfilled these guidelines and subsequently the stated goals.

Do not specifically support any one educational theory. The design of Webfuse as a web publishing system and integrated online learning environment gave no consideration to educational theory. The design of the functionality offered by the page types was seen to be at a level below educational theory. That is, the four categories of tasks required of a Web-based classroom – information distribution, communication, assessment, and class management – were seen as building blocks that could be used to implement a number of different educational theories. For example, a social constructivist learning theory might use a simple combination of a discussion board and an interactive chat room as the primary tools on the course site. A more information centric or objectivist approach would focus more on the use of the information distribution tools and the quiz tool. In addition, if a strong case was built for providing greater support for a particular educational theory then this could be provided by developing a collection of page types – using COTS products where appropriate – specific to that educational theory. Only those staff interested in using that educational theory would be required to use those page types.

Separation of content and presentation. The separation of content and presentation was achieved through a combination of the page types and the Webfuse styles. As shown in Figure 4.1 and Figure 4.5 it was possible to change the appearance of a Webfuse web page without modifying the content.

Platform independence and standards. This guideline was achieved through an emphasis on the use of platform independent open-source software, the use of the Perl scripting language and active support for compliance with Web standards. Webfuse was written in the Perl scripting language with user interaction occurring via the Webfuse CGI scripts. To run a copy of Webfuse it was necessary to have a web-server, simple relational database, a version of Perl and a small number of other open source products used to implement some of the “micro-kernel” services and page types (e.g. Ewgie required Java). During 1997 two project students successfully ported Webfuse to the Windows platform (Walker, 1997).

Provide the tools not the rules. The main support for this guideline was the absence of any specification of how an online course might be structured. An academic was free to choose the structure and the page types used in the design of the online course, including simply using the Content page type, which allowed them to provide any HTML content. With the development resources available and the widespread novelty of the Web, it was not possible to develop functionality that would enable academics to modify the available styles or write their own page types. However, the design of Webfuse did initially attempt to provide enough flexibility in the presentation of the pages managed by Webfuse to enable students and staff to adapt use of the system to their personal situation. At the time of the development of Webfuse, Internet access for the majority of students was through fairly slow modem access, which was charged on a time basis and made it important to minimise time spent connected (Jones & Buchanan, 1996). To support this goal Webfuse automatically produced three different versions of every page: a text only version, a graphical version and a version using frames. Figure 4.4 shows a graphical version of a page from the original science.cqu.edu.au site and near the top of the page it is possible to see navigation links to the three versions of the page. Figure 4.6 is the text only version of the page shown in Figure 4.4.


Figure 4.6 – The Units web page (text version) for M&C for Term 2, 2007

Webfuse will seek to encourage adoption

In order to encourage adoption of Webfuse four separate design guidelines were established and described in Section 4.3.2. The following seeks to explain how those guidelines were realised in the implementation of Webfuse.

Consistent interface. The Webfuse authoring interface was implemented through the page update script and supported through the use of page types. The page update script implemented a consistent model and main interface for the authoring process. The page types, working as software wrappers, provided a “Webfuse encapsulation” interface to work within the page update script. Whether using the TextIndex page type or the EwgieChatRoom page type the editing interface behaved in a consistent way. The websites produced by Webfuse also produced a consistent interface through the HTML produced by the page types and the Webfuse styles.
Increased sense of control and ownership. It is unlikely that technology alone could achieve this guideline. Webfuse sought to move towards fulfilling this guideline by providing academics with the ability to control their own course sites where previously this was out of the reach of many. It was also hoped that the flexibility and support for diversity provided by Webfuse would help encourage a sense of ownership.

Minimise new skills. In 1996, the Web was for many people a brand new environment. Any web-publishing tool was going to require the development of new skills. Webfuse sought to minimise this by supporting and enhancing existing practice and by using common institutional terminology. This was achieved through the provision of page types such as Lecture, StudyGuide and Email2WWW that connected with existing practice and enabled it to be taken onto the Web. The page types also allowed for the use of CQU specific terminology in the interface, with the page type’s wrapper capability performing the translation between CQU and COTS product terminology. Lastly, the flexibility of Webfuse as a web publishing system allowed the use of URLs that used CQU specific terminology. The URL for the course site used in Table 4.2 was http://science.cqu.edu.au/mc/Academic_Programs/Units/85321/. The components of this URL, including “Academic Programs”, “Units”, “85321” and “mc”, were all common terms used by the members of the M&C community. This was not a feature of other e-learning tools.

Automate. As described above Webfuse automatically produced text only and graphical versions of all pages to help those users who required it to minimise download times. Each of the page types was designed, where possible, to automate tasks that staff or students might otherwise have to do manually. For example, the Lecture page type automatically converted Powerpoint slides into individual lecture slides. The LectureSlide page type automatically converted audio into four different formats to support the diversity of computer platforms of the time. The StudyGuide page type automatically produced tables of contents.

References

Jones, D., & Buchanan, R. (1996). The design of an integrated online learning environment. Paper presented at the Proceedings of ASCILITE’96, Adelaide.

Walker, M. (1997). Porting Webfuse to the Windows platform. Retrieved 29 July, 2009, from http://web.archive.org/web/19981205071012/webfuse.cqu.edu.au/People/Developers/Matthew_Walker/

The design and implementation of Webfuse – Part 2

This post continues the description of the design and implementation of Webfuse started with this post.

Webfuse will be an integrated online learning environment

The idea of Webfuse as an integrated online learning environment encapsulated three main ideas: there would be a consistent, easy-to-use interface; all tools and services would be available via that interface; and the system would, where possible, automate tasks for teachers and students. The design of Webfuse as a web publishing system based on hypermedia templates was intended to achieve this goal.

The primary interface for Webfuse was the web. All services provided by Webfuse were managed and accessed through a Web browser. All services were provided by web pages implemented through hypermedia templates – templates that could, where appropriate, provide additional support by automating tasks (e.g. the Lecture page type described in Table 4.3). The interface to create, modify and manage the websites was provided by the page update process and the hypermedia templates using the same consistent model.

Webfuse will be eclectic, yet integrated

The focus of this requirement was to achieve a system that could be more responsive to changes in requirements and the external context through the inclusion of existing services and tools. The eclectic, yet integrated structure of Webfuse was informed by a combination of concepts including: micro-kernel architecture for operating systems, hypermedia templates, and software wrappers. The following provides more detail of this design and how it was implemented and finishes with a complete listing of the functionality provided by Webfuse in the period from 1996 through 1999.

Micro-kernel architecture

The kernel of an operating system is the part that is mandatory and common to all software; the idea of a micro-kernel is to minimise the kernel in order to enforce a more modular system structure and make the system more flexible and tailorable (Liedtke, 1995). The micro-kernel approach helps meet the need to cope with growing complexity and integrate additional functionality by structuring the operating system as a modular set of system servers sitting on top of a minimal micro-kernel (Gien, 1990). The micro-kernel should provide higher layers with a minimal set of appropriate abstractions that are flexible enough to allow implementation of arbitrary services and allow exploitation of a wide range of hardware (Liedtke, 1995).

The initial design of Webfuse included the idea of establishing a core “kernel” of abstractions and services relevant to the requirements of web publishing. These abstractions were built on underlying primitives provided by a basic Web server. Continuing the micro-kernel metaphor, the Webfuse page types were the modular set of system servers sitting on top of the minimal micro-kernel. The initial set of Webfuse “kernel” abstractions were implemented as libraries of Perl functions and included:

  • authentication and access control;
    The services of identifying users as who they claimed to be and of checking whether they were allowed to perform certain operations were seen as key components of a multi-user web publishing system. The functionality was built on the minimal services provided by web servers and supplemented with institution specific information, for example, the concept of courses.
  • validation services;
    In the early days of the Web the primitive nature of the publishing tools meant that there was a significant need for validation services, such as validating the correctness of HTML and checking for broken links.
  • presentation;
    This encapsulated the Webfuse style functionality that allowed the presentation of pages to be changed independently of the content.
  • data storage; and
    Content provided by content experts was a key component of the Webfuse publishing model. Page types needed to be able to store, retrieve and manipulate that content in standard ways.
  • page update.
    The page update process was the core of the Webfuse publishing model. It involved how the content experts provided and managed content and how that content was then converted into web pages. A part of this aspect of the Webfuse architecture was a specification of how the Webfuse page types would communicate and interact.

Hypermedia templates as software wrappers

The simple “TableList” page type discussed above and used to produce the web page shown in Figure 4.1 and the page update form in Figure 4.2 was written entirely by the Webfuse developers. A key aspect of the design of Webfuse was the recognition that there would not be sufficient Webfuse developer time available to allow implementation, from scratch, of all the necessary page types, especially those necessary for more complex functionality such as synchronous, interactive chat rooms. The idea of implementing hypermedia templates as software wrappers around commercial-off-the-shelf (COTS) software – mostly open source software – was adopted to address this problem.

In software engineering, the term wrapper refers to a type of encapsulation whereby a software component is encased within an alternative abstraction, and it is only through this alternative interface that clients access the services of the wrapped component (Bass et al., 1998, p. 339). A wrapper leaves the existing code of the encapsulated component as is; however, new code is written around it to connect it to a new context (Sneed, 2000). In the case of Webfuse, the hypermedia templates – in the form of Webfuse page types – were used to encapsulate a variety of existing open source software applications and connect them to the Webfuse and CQU context.

Sneed (2000) identifies the introduction of the concept of wrappers with Dietrich, Nackman and Gracer (1989) and its use to re-use legacy applications within an object-oriented framework. Wrappers have also been used in reverse and re-engineering (Sneed, 2000) and security. Wrappers were also one method used by the hypermedia community to integrate complex hypermedia systems with the World-Wide Web (e.g. Bieber, 1998; Gronbaek & Trigg, 1996). Wrappers were also used to integrate third-party applications into open hypermedia systems that emphasize delivery of hypermedia functionality to the applications populating a user’s computing environment (e.g. Whitehead, 1997).

In the case of Webfuse the intent was that the Webfuse wrappers would wrap around commercial-off-the-shelf (COTS) products, mostly in the form of open-source applications. In the mid to late 1990s there was, in part because of the spiraling cost of custom-developed software, a shift on the part of government from discouraging the use of commercial software to encouraging its use (Braun, 1999). Increasingly, solutions were built by integrating COTS products rather than building from scratch (Braun, 1999). By 2001, Sommerville (2001, p. 34) described it as normal for some sub-systems to be implemented through the purchase and integration of COTS products.

Boehm (1999) identifies four problems with the integration of COTS products: lack of control over functionality and performance; problems with COTS system interoperability; no control over system evolution; and reliance on support from COTS vendors. The use of software wrappers to encapsulate COTS products into the CQU context, and the general reliance on open source COTS products, was intended to help Webfuse address these issues. Another issue that arises when using a diverse collection of COTS products is the significant increase in the diversity and duplication of the user and management interfaces of each COTS product. It was intended that the Webfuse page types, in their role as software wrappers, would also be designed to provide Webfuse users with a consistent user interface – a user interface which, where possible, made use of CQU terms and labels rather than those of the COTS product.

Harnessing hypermedia templates, software wrappers and COTS products allowed Webfuse to combine the benefits of hypermedia templates – simplified authoring process, increased reuse, and reduced costs (Catlin et al., 1991; Nanard et al., 1998) – with the benefits of the COTS approach – shorter development schedules and reduced development, maintenance, training and infrastructure costs (Braun, 1999). While the use of open source COTS products provided access to source code and removed the influence of a commercial vendor (Gerlich, 1998), it did increase the level of technical skills required.

One example of the type of COTS product incorporated into Webfuse through the use of software wrappers is the MHonArc email-to-HTML converter (Hood, 2007). As mentioned previously, M&C courses were already making increasing use of Internet mailing lists as a form of class communication. An obvious added service that Webfuse could provide was a searchable, web-based archive of these mailing lists for use by both staff and students. Rather than develop this functionality from scratch, an Email2WWW page type was written as a wrapper around MHonArc. The Email2WWW page type also integrated with the Webfuse styles system to enable automatic modification of appearance, and was connected with the mailing list system used at CQU so that it was able to regularly and automatically update the web-based archives of course mailing lists.

Functionality

The complete functionality provided by Webfuse is a combination of the services provided by the Webfuse “micro-kernel” (described above) and the functionality implemented in each of the available Webfuse page types. This section seeks to provide a summary of the functionality available in the Webfuse page types as at the end of 1999 – the end of this action research cycle. The initial collection of page types was designed on the basis of the four major tasks required of a Web-based classroom identified in McCormack and Jones (1997, p. 367): information distribution, communication, assessment, and class management.

The original purpose of the Web was to enable the distribution of, and access to, research information, which means that the Web can be extremely useful for the distribution of information (McCormack & Jones, 1997, p. 13). By the end of 1999 Webfuse had a collection of 11 page types providing information distribution related services. Table 4.3 provides a summary of these page types, their purpose and what, if any, COTS products the page types used for implementation of their purpose. The FAQ page type, like a number of other page types, was written by a project student (Bytheway, 1997).

Table 4.3 – Webfuse information distribution related page types – 1999
Page Type | COTS Product | Purpose
Lecture, LectureSlide | Webify (Ward, 2000) for Postscript conversion to slides; SoX (SoX, 2009) for conversion of audio into various formats; raencoder (RealNetworks, 1996) for audio conversion into RealAudio format | Convert a Postscript file of a lecture (usually generated by Powerpoint) into an integrated collection of lecture slides. Each lecture slide could have audio converted into any one of four available formats.
StudyGuide, StudyGuideChapter | None | Conversion of a study guide into chapters of online material broken up into individual pages, single-chapter print versions, and the production of a table of contents and index
PersonContent, PersonDetails | None | Display information about teaching staff
FAQ (Bytheway, 1997) | None | Creation and management of lists of frequently asked questions
Content | None | Enable simple management of HTML content
File upload | None | Allow most people to upload files to the web site
TableList, Index, ContentIndex | None | Provide mechanisms to create index pages and associated child nodes in a hierarchical web structure
Search | htdig (The ht://Dig group, 2005) | Search the content of the site

Communication is an essential part of the learning experience and a task for which the Web offers a number of advantages and supports through a number of forms (McCormack & Jones, 1997, p. 15). Table 4.4 provides a summary of the five different communication related page types provided by Webfuse by the end of 1999. This list of page types illustrates two points: there are fuzzy boundaries and overlap between these categories, and the Webfuse eclectic, yet integrated structure meant it was possible to have multiple page types performing similar roles.

The FormMail page type listed in Table 4.4 could be used as a form of communication but was generally used to perform surveys that could fit under the Assessment category below. Table 4.4 also shows that there were two page types providing web-based discussion boards. Within a few years a third would be added. Each additional discussion board was added as it improved upon the previous functionality. However, it was not necessary to remove the other previous discussion boards and there were instances where this was useful as some authors preferred the functionality of the older versions.

Table 4.4 – Webfuse communication related page types – 1999
Page Type | COTS Product | Purpose
EwgieChat | Ewgie (Hughes, 1996) | An interactive chat-room and shared whiteboard system
WWWBoard | WWWBoard (Wright, 2000) | Web-based asynchronous discussion board
WebBBS | WebBBS (AWSD, 2009) | Web-based asynchronous discussion board
Email2WWW | MHonArc (Hood, 2007) | Searchable, web-based archives of mailing list discussions
FormMail | FormMail (Wright, 2002) | HTML form to email gateway; used to implement surveys

Assessment is an important part of every course; it is essential for knowing how well students are progressing (student assessment) and also for being aware of how well the method of instruction is succeeding (evaluation) (McCormack & Jones, 1997, p. 233). Table 4.5 provides a summary of the four Webfuse page types associated with assessment that were in place by the end of 1999. Two of these page types (online quiz and assignment submission) are connected with student assessment, while the other two (UnitFeedback and Barometer) are associated with evaluation. The FormMail page type mentioned in Table 4.4 was also primarily used for evaluation purposes and is somewhat related to the far more CQU specific UnitFeedback page type.

Table 4.5 – Webfuse assessment related page types – 1999
Page Type | COTS Product | Purpose
Online quiz | None | Management and delivery of online quizzes – multiple choice and short answer
Assignment submission | None | Submission and management of student assignments
UnitFeedback | None | Allow the paper-based CQU course survey to be applied via the Web
Barometer | No software, but concept based on an idea from Svensson et al (1999) | Allow students to provide informal feedback during a course

Class management involves the clerical, administrative and miscellaneous support tasks necessary to ensure that a learning experience operates efficiently (McCormack & Jones, 1997, p. 289). Table 4.6 summarises the three Webfuse page types associated with class management by the end of 1999. There is some overlap between this category and that of assessment in terms of the management and marking of student assignments.

Table 4.6 – Webfuse class management related page types – 1999
Page Type | COTS Product | Purpose
Results management | None | Allows the display and sharing of student progress and results
Student tracking | Follow (Nottingham, 1997) | Session analysis of student visits to course web pages
TimetableGenerator | None | Allow students and staff to generate a personalised timetable of face-to-face class sessions

References

AWSD. (2009). WebBBS.   Retrieved 29 July, 2009, from http://awsd.com/scripts/webbbs/

Bass, L., Clements, P., & Kazman, R. (1998). Software Architecture in Practice. Boston: Addison-Wesley.

Bieber, M. (1998). Hypertext and web engineering. Paper presented at the Ninth ACM Conference on Hypertext and Hypermedia, Pittsburgh, Pennsylvania.

Boehm, B. (1999). COTS integration: plug and pray? IEEE Computer, 32(1), 135-138.

Braun, C. L. (1999). A lifecycle process for the effective reuse of commercial off-the-shelf (COTS) software. Paper presented at the 1999 Symposium on Software Reusability, Los Angeles.

Bytheway, S. (1997). FAQ Project Report.   Retrieved 29 July, 2009, from http://web.archive.org/web/19990503041438/webfuse.cqu.edu.au/People/Developers/Scott_Bytheway/Report/index.html

Catlin, K., Garret, L. N., & Launhardt, J. (1991). Hypermedia Templates: An Author’s Tool. Paper presented at the Proceedings of Hypertext’91.

Dietrich, W. C., Nackman, L. R., & Gracer, F. (1989). Saving legacy with objects. Paper presented at the Object-oriented programming systems, languages and applications, New Orleans, Louisiana.

Gerlich, R. (1998). Lessons Learned by Use of (C)OTS. Paper presented at the 1998 Data Systems in Aerospace, Athens, Greece.

Gien, M. (1990). Micro-kernel architecture: Key to modern operating systems design. UNIX Review, 8(11).

Gronbaek, K., & Trigg, R. (1996). Toward a Dexter-based model for open hypermedia: unifying embedded references and link objects. Paper presented at the Seventh ACM Conference on Hypertext, Bethesda, Maryland.

Hood, E. (2007). MHonArc: A mail-to-HTML converter.   Retrieved 10 January, 2008, from http://www.mhonarc.org/

Hughes, K. (1996). EWGIE – Easy Web Group Interaction Enabler.   Retrieved 29 July, 2009, from http://www.alts.net/Java/Ewgie/docs/

Liedtke, J. (1995). On micro-kernel construction. Operating Systems Review, 29(5), 237-250.

McCormack, C., & Jones, D. (1997). Building a Web-Based Education System. New York: John Wiley & Sons.

Nanard, M., Nanard, J., & Kahn, P. (1998). Pushing Reuse in Hypermedia Design: Golden Rules, Design Patterns and Constructive Templates. Paper presented at the Proceedings of the 9th ACM Conference on Hypertext and Hypermedia.

Nottingham, M. (1997). Follow 1.5.1.   Retrieved 29 July, 2009, from http://www.mnot.net/follow/README

RealNetworks. (1996). Release notes: RealAudio encoder 2.0 for UNIX.   Retrieved 29 July, 2009, from http://service.real.com/help/encoder/unix2.0/how_to.html

Sneed, H. (2000). Encapsulation of legacy software: A technique for reusing legacy software components. Annals of Software Engineering, 9(1-4), 293-313.

Sommerville, I. (2001). Software Engineering (6th ed.): Addison-Wesley.

SoX. (2009). SoX – Sound eXchange – Home page.   Retrieved 29 July, 2009, from http://sox.sourceforge.net/

Svensson, L., Andersson, R., Gadd, M., & Johnsson, A. (1999). Course-Barometer: Compensating for the loss of informal feedback in distance education. Paper presented at the EdMedia’99, Seattle, Washington.

The ht://Dig group. (2005). ht://Dig – Internet search engine software.   Retrieved 29 July, 2009, from http://www.htdig.org/

Ward, S. (2000). Webify: Build web presentations from postscript.   Retrieved 29 July, 2009, from http://www.fnal.gov/docs/products/webify/webifydoc/

Whitehead, E. J. (1997). An architectural model for application integration in open hypermedia environments. Paper presented at the Eighth ACM Conference on Hypertext, Southampton, UK.

Wright, M. (2000). WWWBoard.   Retrieved 29 July, 2009, from http://www.scriptarchive.com/wwwboard.html

Wright, M. (2002). FormMail.   Retrieved 29 July, 2009, from http://www.scriptarchive.com/formmail.html

The design and implementation of Webfuse – Part 1

This continues the collection of content that goes into Chapter 4 of my PhD thesis. Chapter 4 is meant to tell the story of the first iteration of Webfuse from 1996 through 1999. The last section I posted described the design guidelines that informed the implementation of Webfuse. This post and at least one following post seek to describe the details of the design and implementation of Webfuse.

As with all the previous posts of content from the thesis, this content is in a rough first draft form. It will need more work. Comments and suggestions are more than welcome.

Design, implementation and support

This section outlines how the design guidelines for Webfuse introduced in the previous section (Section 4.3.2) were turned into a specific system design and how that system was implemented and supported during the period from 1996 through 1999. First it briefly outlines the process, people and technology used during this period to design and implement Webfuse. It then explains how the abstractions that form the design of Webfuse were intended to fulfil the design guidelines introduced in Section 4.3.2. Lastly, it offers a description of the functionality offered by Webfuse towards the end of 1999. The next section (Section 4.3.4) will provide an overview of using Webfuse from both a student and academic staff member perspective.

Process, People and Technology

The initial design and implementation of Webfuse occurred over a period of about 12 months starting in mid-1996. The author performed most of the initial design and implementation work with additional assistance from a small number of project students who worked on particular components. In 1997, Webfuse was taken over by the Faculty of Informatics and Communication. The Faculty appointed a full-time Webmaster and used Webfuse for its faculty website and online learning. The Faculty webmaster helped staff use Webfuse, did some development and was supported by a small number of other Faculty technical staff. The development processes used to build Webfuse functionality during this period were fairly ad hoc.

From 1996 through 1999, Webfuse was implemented primarily as a collection of Perl CGI scripts and various support libraries and tools. The Perl scripting language was chosen because it was platform independent and because scripting languages like Perl allowed rapid development of applications via the gluing together of existing applications – development 5 to 10 times faster than with traditional systems programming languages (Ousterhout, 1998). An Apache web server served the Webfuse CGI scripts and the resulting web pages. For information storage, Webfuse used the file system and a variety of relational databases. All of the applications used in Webfuse were open source. During this period the available open source relational databases were not full-featured, and the lack of a full-featured relational database influenced some design decisions.

The design

The set of abstractions and decisions that underpinned the initial design of Webfuse drew on a number of existing concepts from the operating systems, information systems and hypermedia communities. The informing concepts included hypermedia templates (Catlin, Garret, & Launhardt, 1991), software wrappers (Bass, Clements, & Kazman, 1998, p. 339), micro-kernel architectures of operating systems (Liedtke, 1995) and known limitations of the World-Wide Web and its hypermedia model (Bieber, Vitali, Ashman, Balasubramanian, & Oinas-Kukkonen, 1997). The design was informed by the understanding of these concepts and the desire to fulfil the five broad design guidelines outlined in Section 4.3.2. The following links these guidelines to the informing concepts and explains the design of Webfuse.

A web publishing tool

From the start Webfuse was seen as a web-publishing tool. The implication of this is that Webfuse was seen as a system that produced web pages and web sites. In particular, Webfuse was intended to manage the website of the Faculty of Applied Science, which included a range of different departments and would be managed by a number of different people. There were a number of known problems with the authoring process for websites at this point in time. The authoring process was usually carried out without a defined process, lacked suitable tool support, and did little to separate content, structure and appearance (Coda, Ghezzi, Vigna, & Garzotto, 1998). The process also made limited reuse of previous work (Rossi, Lyardet, & Schwabe, 1999) and required better group access mechanisms and online editing tools (K. Andrews, 1996).

The difficulty of authoring on the Web makes it difficult to create and maintain large websites, and the management of such content was often, at this stage, assigned to one person or group who became the bottleneck for maintenance (Thimbleby, 1997). This is especially troubling when Nielsen (1997) suggested the rule of thumb that the annual maintenance budget for a website should be at least 50 percent of, and preferably the same as, the initial cost of building the site. The nature of learning and teaching and its reliance on communication and collaboration suggested that for e-learning such a recommendation might need to be increased.

The World-Wide Web, at this stage, was a particularly primitive hypermedia system where the lack of functionality made the authoring process more difficult (Gregor et al., 1999). One recognition of this was that a key part of the problem definition outlined in Section 4.2.2 was the difficult and time-consuming nature of web-based learning. It was also recognised that ease of use was a key part of encouraging adoption amongst academic staff. To address this problem it was decided that Webfuse would make use of the concept of hypermedia templates (Catlin et al., 1991; Nanard, Nanard, & Kahn, 1998).

Hypermedia templates (Catlin et al., 1991) are an approach to simplifying the authoring process while still ensuring the application of good information design principles. Hypermedia templates would enable content experts to become responsible for maintaining websites, thus increasing ownership, decreasing costs and addressing the authoring bottleneck problem (Jones, 1999b). Hypermedia templates also aid in reuse, which is a strategic tool for reducing the cost and improving the quality of hypermedia design and development (Nanard et al., 1998). Their initial purpose was to improve the application of information design principles to hypermedia collections (Catlin et al., 1991).

In their initial development hypermedia templates were sets of pre-linked documents that contain both content and formatting information used by authors to create a new set of information (Catlin et al., 1991). The intent was that graphic designers would create the templates, which would subsequently be used by content experts to place material into hypermedia (Catlin et al., 1991). The content experts would not need to become experts in information design, nor would the graphic designers need to become content experts. Editing a template did not require learning any new software or knowledge.

Nanard, Nanard and Kahn (1998) extended the idea into constructive templates with the intent of extending reuse in hypermedia design beyond information and software component reuse into the capture and reuse of design experience. A constructive template is a generic specification which makes it easier for a developer to build a hypermedia structure and populate it with data (Nanard et al., 1998). While a model describes a structure, a constructive template helps produce instances of that structure by mapping source data into a hypertext structure (Nanard et al., 1998). Template-based hypermedia generation can be implemented using either programming or declarative means. Constructive templates are built on the principle of separating source data from hypermedia presentation, enabling work on the structure to be done independently from the content and reducing the burden of production. By automating large parts of the production process, constructive templates drastically reduce cost (Nanard et al., 1998).

As a web-publishing system, the primary output of Webfuse was web pages. Each web page was of a specific type. The type of page specified which Webfuse hypermedia template – during this period these were called page types – would be used to produce the web page. A page type was implemented as a collection of pre-defined Perl functions that would obtain the necessary content from the author, convert that content into the HTML necessary to display the body of the page, and carry out any additional necessary steps. Figure 4.1 is an example of a web page produced by Webfuse.

Content index page example

Figure 4.1 – A simple web page produced by Webfuse

On each web page produced by Webfuse there is an “Edit” link. If an authorised person clicks on this link they are presented with a web form – called a page update form – that allows them to provide and modify the content used to produce the web page. The structure and features of the page update form, as well as the conversion process applied to the content, are unique to the page type.

Figure 4.2 shows the page update form for the web page from Figure 4.1. A page type called TableList produces the web page shown in Figure 4.1. As the name suggests this page type is used to manage a series of lists containing individual elements, which are displayed in a series of separate tables. Each element in the list points to another web page that is created and then managed through Webfuse. In Figure 4.1 there is one list called “Years” which consists of the elements “2008” and “2009”. Figure 4.2 contains HTML form elements to manage two lists. One for the existing list called “Years” and one that can be used to add a new list. As well as managing the elements of lists the form in Figure 4.2 also provides some formatting options including how to sort the list elements, how many columns to have in the table and how big the table borders should be.

Page update form for content index page

Figure 4.2 – Page update form for the web page shown in Figure 4.1
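The division of labour within a page type like TableList can be sketched as follows. This is illustration only – Webfuse page types were collections of Perl functions, not Python, and all names below are hypothetical:

```python
# Sketch of the page-type abstraction: each page type pairs a function
# that gathers content from the author's page update form with a
# function that converts that content into the HTML body of the page.

def table_list_get_content(form_data):
    # Extract the named lists and their elements from the update form.
    return {name: list(elements) for name, elements in form_data.items()}

def table_list_to_html(content):
    # Display each list as a separate HTML table, as TableList does.
    tables = []
    for name, elements in content.items():
        rows = "".join(f"<tr><td>{e}</td></tr>" for e in elements)
        tables.append(f"<h2>{name}</h2><table>{rows}</table>")
    return "".join(tables)

# A page type is then just the pairing of these functions.
PAGE_TYPES = {
    "TableList": (table_list_get_content, table_list_to_html),
}

get_content, to_html = PAGE_TYPES["TableList"]
html = to_html(get_content({"Years": ["2008", "2009"]}))
```

The point of the structure is that adding a new page type means supplying a new pair of functions; the surrounding machinery (the edit link, the update script, storage) stays the same.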

The design of Webfuse as a web publishing system made it necessary to include within Webfuse an abstraction for the websites it would manage. Such an abstraction was necessary in order to implement the services and interfaces Webfuse would provide to authors to manage their websites. Hypermedia and hypertext, of which the World-Wide Web is an example, have been defined on the basis of their support for non-linear traversal and navigation through a maze of interactive, linked, multiple format information (Kotze, 1998). The “disorientation problem” – getting “lost in hyperspace” – refers to the greater potential for the user to become lost or disoriented within a large hypertext network (Conklin, 1987).

The topology or structure of a hypertext directly affects navigation performance (McDonald & Stevenson, 1996). Oliver, Herrington and Omari (R. Oliver, Herrington, & Omari, 1999) identify three main structures within hypermedia environments: linear, hierarchical and non-linear or networked. Shin, Schallert and Savenye (1994) suggest that the most popular structures for hypertext are hierarchical and network (non-linear) structures. Garzotto, Paolini and Schwabe (1993, p. 8) point to the observation of many authors that hierarchies are very useful in helping user orientation when navigating a hypertext. Advantages of hierarchies include: a strong notion of place; documents have clear superior/inferior relationships that are sometimes augmented with linear precedence relationships between nodes; they are familiar due to their use in other domains; and their rigidity, while creating some inflexibility, aids comprehension (Durand & Kahn, 1999). Hierarchical structures have also been recommended as the most appropriate structures for large websites (Sano, 1996).

The previous paragraphs draw on research literature to identify a number of advantages that justify the selection of a hierarchical structure for the model of a website used by Webfuse. There were, however, also two pragmatic reasons for this choice of structure. The open source relational databases that were available at the time and used in the implementation of Webfuse were not capable of storing the amount and type of data that a large website would require. The use of a relational database to store information was limited to authentication and authorization data. For the most part, the content used in generating web pages was stored on the file system of the computer hosting the web server. The file systems of computers did, and continue to, use a hierarchical structure of directories and files. Having the website structure used by Webfuse match the structure used to store the information considerably simplified implementation.
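The simplification this match buys can be sketched as follows (Python for illustration; the content root path is a hypothetical placeholder, and the assumption that each page's content lives in a CONTENT file within its directory is mine):

```python
from pathlib import PurePosixPath

# Hypothetical content root on the machine hosting the web server.
CONTENT_ROOT = PurePosixPath("/var/webfuse/content")

def content_location(url_path):
    # Because the website hierarchy mirrors the directory hierarchy,
    # finding where a page's content is stored is a simple path join --
    # no database lookup or mapping table is required.
    return CONTENT_ROOT / url_path.lstrip("/") / "CONTENT"

location = content_location("/mc/Academic_Programs/Units/85321")
```

With a non-hierarchical site model, each page would instead need an explicit mapping from its place in the site to its place in storage.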

Figure 4.3 is a partial, graphical representation of the hierarchical structure of the Faculty of Applied Science website created and managed via Webfuse during 1997. At the top level is the main science home page. The next level down has five main sections including one for the Faculty’s research centres and one for each of its four departments – Maths and Computing, Applied Physics, Biology and Chemistry. Each of the department websites followed a similar structure with main sections for information, staff, academic programs, students, research and community. The websites for individual courses – prior to 1998 these were called units – are all contained in their own folders with names based on the course codes (e.g. 85321, Systems Administration).

Partial hierarchy of science.cqu.edu.au pages - 1997

Figure 4.3 – A partial hierarchy of the Faculty of Applied Science website in 1997

Each of the boxes shown in Figure 4.3 represents both an individual web page and a collection of related material. The “Units” box represents the “Units” web page (Figure 4.4) and the folder “Units” that contains all of the websites for the units offered by the Department of Mathematics and Computing in the second term of 1997. By default all Webfuse pages are freely available to anyone on the Web. There is an access control facility that can optionally restrict access to specific people or groups.

The Webfuse access control system does not make any distinction between types of accounts; there is no concept of a course designer, administrator, or student account in Webfuse (McCormack & Jones, 1997, p. 365). Each user account belongs to a number of groups. Groups can be assigned permissions to perform certain operations on Webfuse objects, which are either individual web pages or entire websites. The directory path that specifies where the object resides on the web server is used to uniquely identify each object. Initially, there were three valid operations that could be performed on an object (McCormack & Jones, 1997, p. 366):

  • access;
    The ability to access or view the page. By default all objects are able to be viewed by anyone on the web.
  • update; and
    The ability to modify the page using the page update process.
  • all.
    The ability to perform any and all operations on the object.

Home page for M&C in 1997

Figure 4.4 – The Units web page for M&C for Term 2, 1997

Some page types recognise additional operations that are specific to the operation of the page. For example, an early assignment management page type recognised a “mark assignment” operation (McCormack & Jones, 1997).

Table 4.3 provides an example of two different Webfuse permissions. The first gives permission for members of the group “jonesd” to perform all operations on the entire website for the unit 85321, Systems Administration. The second gives permission to edit just the home page for the 85321 website. An object that ends with a slash (/) indicates everything within that directory, while an object without the trailing slash indicates just that web page.

Table 4.3 – Example Webfuse permissions

             Modify 85321 website                 Modify 85321 web page
  Object     /mc/Academic_Programs/Units/85321/   /mc/Academic_Programs/Units/85321
  Operation  all                                  update
  Group      jonesd                               jonesd
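The way such permissions are interpreted can be sketched as follows (Python for illustration only; Webfuse implemented this in Perl, and the function and variable names here are invented):

```python
# Each permission grants a group an operation on an object. An object
# ending in "/" covers everything beneath that directory; otherwise it
# names a single web page. These are the Table 4.3 examples.
PERMISSIONS = [
    ("/mc/Academic_Programs/Units/85321/", "all", "jonesd"),
    ("/mc/Academic_Programs/Units/85321", "update", "jonesd"),
]

def may_perform(user_groups, operation, obj):
    for perm_obj, perm_op, perm_group in PERMISSIONS:
        if perm_group not in user_groups:
            continue
        if perm_op not in ("all", operation):
            continue  # "all" grants every operation
        if perm_obj.endswith("/"):
            if obj.startswith(perm_obj):   # anywhere in the subtree
                return True
        elif obj == perm_obj:              # exactly this page
            return True
    return False
```

So a member of “jonesd” can update any page under the 85321 site via the first permission, while the second permission on its own would cover only the site's home page.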

A Perl script, called the page update script, included a check of the permissions system to determine if a particular person could edit the requested page. The page update script was also responsible for identifying the type of page being edited, accessing the appropriate code for the page type and adding other information and services to the page update form. Other services available on the page update form fall into two main categories:

  1. Webfuse services; and
    A number of support services such as HTML validation, link checking, access control, file management and hit counters could be accessed via the page update form.
  2. Page characteristics.
    As well as the content managed by the page type, each web page also contained a number of characteristics including the page type, title, colours used and the style template.

The notion of a style or style template was used to further separate the appearance of a page from its content. This enabled the appearance of the same page, containing the same content, to evolve over time for whatever reason (this feature was added before the concept of cascading style sheets – CSS – was widely used). Figure 4.5 is the same web page as shown in Figure 4.1; however, it is using a 1998 style for the Faculty of Informatics and Communication. This was done by editing the page, changing the style template and updating the page.

Content Index page example

Figure 4.5 – Guides web page (Figure 4.1) with a different style

References

Andrews, K. (1996). Position paper for the workshop, Hypermedia Research and the World-Wide Web. Paper presented at the Applying Hypermedia Research to the World-Wide Web, Hypertext’96.

Bass, L., Clements, P., & Kazman, R. (1998). Software Architecture in Practice. Boston: Addison-Wesley.

Bieber, M., Vitali, F., Ashman, H., Balasubramanian, V., & Oinas-Kukkonen, H. (1997). Fourth Generation Hypermedia: Some Missing Links for the World-Wide Web. International Journal of Human-Computer Studies, 47, 31-65.

Catlin, K., Garret, L. N., & Launhardt, J. (1991). Hypermedia Templates: An Author’s Tool. Paper presented at the Proceedings of Hypertext’91.

Coda, F., Ghezzi, C., Vigna, G., & Garzotto, F. (1998). Toward a Software Engineering Approach to Web Site Development. Paper presented at the 9th International Workshop on Software Specification and Design, Isobe, Japan.

Conklin, E. J. (1987). Hypertext: An introduction and survey. IEEE Computer, 20, 17-41.

Durand, D., & Kahn, P. (1999). MAPA: a system for inducing and visualizing hierarchy in websites. Paper presented at Hypertext’98, Pittsburgh, PA.

Garzotto, F., Paolini, P., & Schwabe, D. (1993). HDM – A model-based approach to hypertext application design. ACM Transactions on Information Systems, 11(1), 1-26.

Gregor, S., Jones, D., Lynch, T., & Plummer, A. A. (1999). Web information systems development: some neglected aspects. Paper presented at the Proceedings of the International Business Association Conference, Cancun, Mexico.

Jones, D. (1999). Webfuse: An integrated, eclectic web authoring tool. Paper presented at the Proceedings of EdMedia’99, World Conference on Educational Multimedia, Hypermedia & Telecommunications, Seattle.

Kotze, P. (1998). Why the hypermedia model is inadequate for computer-based instruction. Paper presented at the Sixth Annual Conference on the Teaching of Computing and the 3rd Annual Conference on Integrating Technology into Computer Science Education, Dublin City University, Ireland.

Liedtke, J. (1995). On micro-kernel construction. Operating Systems Review, 29(5), 237-250.

McCormack, C., & Jones, D. (1997). Building a Web-Based Education System. New York: John Wiley & Sons.

McDonald, S., & Stevenson, R. (1996). Disorientation in hypertext: the effects of three text structures on navigation performance. Applied Ergonomics, 27(1), 61-68.

Nanard, M., Nanard, J., & Kahn, P. (1998). Pushing Reuse in Hypermedia Design: Golden Rules, Design Patterns and Constructive Templates. Paper presented at the Proceedings of the 9th ACM Conference on Hypertext and Hypermedia.

Nielsen, J. (1997). Top ten mistakes of web management.   Retrieved 27 July, 2009, from http://www.useit.com/alertbox/9706b.html

Oliver, R., Herrington, J., & Omari, A. (1999). Creating effective instructional materials for the World Wide Web. Paper presented at the AUSWEB’96, Gold Coast, Australia.

Ousterhout, J. (1998). Scripting: Higher Level Programming for the 21st Century. IEEE Computer, 31(3), 23-30.

Rossi, G., Lyardet, F., & Schwabe, D. (1999). Developing Hypermedia Applications with Methods and Patterns. ACM Computing Surveys, 31(4es).

Sano, D. (1996). Designing large scale web sites. New York: John Wiley & Sons.

Shin, E. C., Schallert, D. L., & Savenye, W. C. (1994). Effect of learner control, advisement, and prior knowledge on young students’ learning in a hypertext environment. Educational Technology, Research and Development, 42(1), 33-46.

Thimbleby, H. (1997). Gentler: A Tool for Systematic Web Authoring. International Journal of Human-Computer Studies, 47, 139-168.

BAM into Moodle #7 – an eStudyGuide block

The last post provided an overview of what is required to put BAM into Moodle and generated a list of things I have to learn in order to implement it.

This post will tell at least some of the story of developing my first CQU Moodle block. Whether the block ever gets used in action is beside the point. The main aim is to give me the opportunity to engage in a bit of constructionism. In particular, the block I’ve decided to have a crack at will help me learn answers to the following questions developed at the end of the last post.

  • In Moodle/PHP, how do you retrieve remote documents over HTTP? Is there a LWP::Simple equivalent?
  • In Moodle/PHP, how do you parse XML?

Introducing the eStudyGuide block

CQU has a history that includes a significant investment in print-based distance education (“The institution” section in this post offers some background). That means that this year there are at least 10,500 students enrolled at CQU studying by distance education. For many of those students the primary scaffolding of their study, which occurs off-campus, is a study guide: a print-based guide written by CQU staff that summarises what they should read and do each week.

For the last couple of years CDDU has been working on a variety of innovations around these study guides, including developing a process that produces better quality versions of the guides in both hard copy and online. Some work has been done to integrate the online study guides with the VLEs used by CQU. However, the institution has now adopted Moodle and, while there is a level of integration, it’s not great.

The aim here is to develop a Moodle block (an eStudyGuide block) that allows the online version of a CQU study guide to be added to a course.

Strictly speaking the online study guide should be included in the main guts of the course home page, not as a block. But the aim here is to learn more while producing something reasonably useful, without wasting too much time.

Functionality

The eStudyGuide block will display a bit of HTML that will provide a list of links to each module/chapter of the study guide. The PDFs of the study guide will be stored on a remote web server. When the block is added to the course site it will need to:

  • Identify the course, period and year associated with the current course.
    I believe that CQU currently uses the format
    COIS20025_2092

    for Moodle courses. This translates into the Term 2, 2009 offering of the course COIS20025.

  • Formulate the URL of the folder containing the e-study guide.
    This will be
    $BASE_URL/Guides/$YEAR/$PERIOD/$COURSE/
  • Check that the folder/URL exists.
  • Retrieve and parse the XML file that details the study guide.
    The XML file is produced by InDesign, the publishing system used to generate the guides. It contains information such as the number of chapters/modules, the names of the files, the titles of each module/chapter etc.

    The XML file will be protected by Basic AUTH so it will need to authenticate before getting the XML file.

  • Generate a list of links to each module/chapter.
    Initially these will be just straight URLs.
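The first two steps above can be sketched like this. Python is used for illustration only – the actual block will be PHP – and the base URL and period naming are placeholder assumptions, not CQU's real values:

```python
# Placeholder for the globally configured base URL of the guide server.
BASE_URL = "https://example.cqu.edu.au"

def parse_course_shortname(shortname):
    # Split a CQU Moodle shortname like "COIS20025_2092" into the course
    # code and the offering code. How the offering code decodes into year
    # and term (here, Term 2 of 2009) still needs to be confirmed.
    course, offering = shortname.split("_")
    return course, offering

def guide_folder_url(year, period, course):
    # Formulate the folder URL for the e-study guide:
    # $BASE_URL/Guides/$YEAR/$PERIOD/$COURSE/
    return f"{BASE_URL}/Guides/{year}/{period}/{course}/"

course, offering = parse_course_shortname("COIS20025_2092")
url = guide_folder_url("2009", "T2", course)  # period naming is a guess
```

The existence check for the resulting folder/URL would then be a simple HTTP request against that URL.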

The development process

The following tells the story of the process I used to put the block together; it may not be complete, but it includes the following steps:

  • Create a dummy eStudyGuide block that generates dummy HTML. DONE
  • Add in a global configuration for the block for the BASE_URL for the files. DONE
  • Get it to parse the CQU Moodle course format and use the new URL in the static HTML generated. DONE
  • Get it to retrieve the XML file.
  • Get it to parse the XML file.
  • Dynamically generate the HTML.

Getting a dummy eStudyGuide block

BAM into Moodle post #5 details some of the mechanics for this process and it in turn draws heavily on this page on the Moodle site.

The process goes something like this:

  • Create the dummy block_estudy_guide.php file using the template on the Moodle site.
  • Login to Moodle, click on notifications, dummy estudy_guide up and going, eStudyGuide block added to course
  • No need to add configure options for the block; in real life the block will get the course code from some variables, so there’s nothing to configure.
  • Add a specialization function to set the title.
    Eventually the title will include the course code, which is set from variables. To set the title this way we need the specialization function. Set this to a constant for now. Will replace this with the real course code in a later stage.
  • Add in the global configuration data.
    In this case the BASE_URL for the location of the eStudyGuides on the external website. Needs a file with the HTML/form for the configuration, at this stage BASE_URL. Done: config is even saving from action to action.

    Had some trouble using the global configuration data in the instance; it turned out I needed the global PHP statement to bring the $CFG variable into scope.

  • Create HTML guide links
    Going to do this by creating a hard coded associative array and a for loop. The idea being that eventually the parsed XML will replace the hard coding.
  • Convert to using block_list – not done for now
    Moodle’s block abstraction includes a special case where the block is used to display a list, where each item has its own image. I don’t have easy access to an image set. Addition: Talk to Rolley about the idea of a specific image.

Parse the CQU course format

The task here is to get the Moodle course ID/code, assume it’s in CQU format and parse it into its constituent parts.

  • Where is the course variable?
    I’m assuming this is a global variable which is discussed here in the Moodle programming course. Ahh, there’s a global $COURSE with $COURSE->id being the Moodle ID, but there are also entries for fullname and shortname. Assume id.

  • Modify the block to use this in the title.
    Ahh, id is the unique number id. What about shortname? That seems to be the one. At least until further confirmation.

    Need to look at REs etc in PHP. Okay, that’s over. Difficult getting used to the slightly new approaches.

  • Parse the format and stick in content variables – done.

Retrieving the XML file

Now the interesting stuff.

  • Get the full path format for the XML file
    Currently it’s BASE_URL/Guides/YEAR/PERIOD/COURSE/eStudyGuide/COURSE.xml
  • Find out how to retrieve files over HTTP within Moodle/PHP
    Well, using xref it’s possible to see phpxml within lib – probably useful for XML parsing. Couldn’t see anything else useful.

    Looking through existing modules might be useful. There’s a flickr module that uses a class called RSSCache, which looks very interesting and is included as part of the Magpie RSS parser. This came with the default install of Moodle – so one problem solved for the broader BAM project.

And that’s where I have to leave it. Haven’t found the retrieval mechanism. But once I have it, it should be straightforward.
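Whatever Moodle/PHP mechanism eventually does the retrieval, the logic amounts to fetching a Basic Auth protected XML file and pulling out the module details. Here it is sketched in Python; the URL, credentials and XML element/attribute names are all invented for the sketch – the real InDesign output will differ:

```python
import urllib.request
import xml.etree.ElementTree as ET

def fetch_guide_xml(url, username, password):
    # Retrieve an XML file protected by HTTP Basic Auth.
    mgr = urllib.request.HTTPPasswordMgrWithDefaultRealm()
    mgr.add_password(None, url, username, password)
    opener = urllib.request.build_opener(
        urllib.request.HTTPBasicAuthHandler(mgr))
    with opener.open(url) as response:
        return response.read()

def parse_modules(xml_text):
    # Pull out each module's file name and title. The element and
    # attribute names here are placeholders for whatever InDesign emits.
    root = ET.fromstring(xml_text)
    return [(m.get("file"), m.get("title")) for m in root.iter("module")]

sample = '<guide><module file="ch1.pdf" title="Module 1"/></guide>'
modules = parse_modules(sample)
```

The list returned by parse_modules is what the block would turn into its list of links.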

BAM into Moodle #6 – Planning and some real coding

The previous post in this series started me along the lines of actually coding something in Moodle. It was only a pretend thing but indicated that blocks are fairly simple to implement. That previous post also pondered the need to do some planning. Which brings me to the two main tasks for today:

  1. Put some rough planning down on “paper”.
    I still don’t know enough about Moodle and its model to get into detailed planning. This planning will be a rough outline of the major tasks that need to be done to give me a heads up.
  2. Start some real coding.
    I need to develop my PHP/Moodle skills and real life tasks are the best way to do this – isn’t this in keeping with the social constructionism (or is that constructivism) at the heart of Moodle – so I’m looking for useful blocks I can produce that help me develop the skills I need for BAM.

Big up front design

Traditional software projects love to promote their rationality through their use of Big Up Front Design (BUFD). i.e. they gather all the requirements, analyse all the tasks and then come up with the perfect design. At this stage they can pass the design over to the lowly technicians who will implement the design.

Far from being rational, I’m a strong believer, from both a theoretical and practical perspective, that BUFD is irrational; it’s plain stupid, even insane. It doesn’t have any basis in the cognition of people nor the nature of complex systems. It’s an approach that is certain to fail.

So anyone looking for BUFD (and there will be a few) in this project is going to be disappointed. Actually, there’s a chance some might be pleased, because for them the absence of BUFD will indicate that I’m “cowboy” coding, that I’m not being rational.

To put it simply, I don’t know enough about Moodle to engage in BUFD. I doubt there is anyone at my current institution who knows enough about Moodle, BAM and how academics might like to use BAM. The aim of the next bit of planning is to allow me to identify the necessary tasks I need to undertake to increase my level of understanding, so that within the later stages of the project I’ll be able to develop something closer to a BUFD.

For a related perspective, have to love Dilbert (click on the cartoon to see a version you can read).

Dilbert.com

Planning

Some “planning” points

  • BAM will not be part of the Moodle assignment system, it will integrate with it.
    Currently, BAM is used mostly for assignments. Students post to their blogs in order to be marked and receive feedback. However, BAM is not a part of the assignment submission system at CQU. It does integrate with that system, but it’s not part of it. I plan to continue this approach.
  • One Moodle module with many activities
    My current assumption is that I’ll be able to implement a single Moodle module that will be able to provide each of the activities required to implement BAM. I suspect this will be possible, given Moodle’s strong modular nature, but I don’t know for certain.
  • The main BAM activities
    The following is a summary of the main activities that users will perform with BAM:
    • Configure BAM for a course – Coordinator
      The first step in using BAM is configuring it. This requires the coordinator to provide the following information:
      • Can students register their blog?
      • Should the student blogs be mirrored?
      • For each question the students have to respond to: question title, question body. Addition: include some dates about when the question should be answered??
    • Register blog with BAM – student
      Having created a blog, the student provides a copy of their blog URL to BAM. BAM checks that it can find an RSS feed associated with that blog and also that the student hasn’t made some common mistakes (e.g. registered the WordPress home page/blog, rather than their individual blog)
    • Check progress – student
      Visiting this page allows the student to check their progress with BAM on two fronts. First, what posts from their blog has BAM matched with the required questions (Addition: maybe a good idea for BAM to show the questions). Second, they can see the marks and comments made by the markers. (Addition: would be nice for the marks/comments to be able to be posted back to the student’s blog as comments – perhaps a step too far at the moment.)
    • Check progress – staff
      Staff can see a page that lists all of their students and gives an indication of whether they’ve registered their blogs, how many entries there are, when the last post was, and a link to the live blog.
    • Check student posts – staff
      Similar to the above, but this one gives an overview of all the required questions and the status of the student’s responses to those questions. This is the main starting place for the marking process.
    • Mark a student post – staff
      Usually linked to from the previous page (Addition: would be good to provide a cooked RSS feed of student posts for markers. Each cooked item could include a link back to the mark a post activity. Would allow markers to use an RSS reader to keep up with student posts and then mark them from there)
    • Mirror blog entries – cron
      At a configured time interval, visit the RSS feed for each individual student blog and, if updated, save a new copy of the RSS feed on the Moodle server.
    • Modify student blog feed/posts – staff member
      While BAM tries to match student posts to the required questions, it doesn’t always work. This interface is for the marker to handle these problems. Essentially displays a list of all the student posts and whether or not they have been allocated to a question. Allows the marker to “de-allocate” a post or allocate it as the answer to a question.
    • Manage marking – coordinator
      The coordinator of a course, based on CQU practice, needs to be able to manage if and when marks/comments are returned to the student. Depending on how Moodle works, the coordinator may also need to be able to manage which staff are marking which students. Personally, I’d like to avoid doing this.
    • Integrate with assignments – coordinator
      Provide some form of control/management over how the data within BAM is integrated with assignments in Moodle.

Functionality uncertainty

The following is an attempt to take the major user activities listed above and summarise the major functionality required to implement these as currently used in the Webfuse version of BAM. The point is that I probably don’t know how to implement this functionality in Moodle/PHP or if it can be implemented. i.e. it’s the stuff I need to learn.

  • Configure BAM for a course
    This will be a fairly standard web application. Present a form, allow the user to modify the form, store the data in a database. I don’t see this being all that difficult. Probably a good place to start with BAM coding.
  • Register blog with BAM
    Fairly standard web application, however, once the blog URL is inserted/changed there are some additional tasks including:
    • Is the URL valid?
    • Does it exist? Can it be retrieved?
    • Does it have an attached RSS/Atom feed?
      In Perl this is done using LWP::Simple to retrieve the file and XML::Feed to check the resulting file to see if it has an attached feed and to discover what the URL for that feed is.
  • Check progress – student
    A simple web application: given the student’s details, retrieve information from the database and display it for the student.
  • Check progress – staff
    Same as the above
  • Check student posts
    Same as the above
  • Mark a student post
    Same as the above
  • Mirror blog entries
    This is perhaps the most difficult one. It goes through each student blog in courses that are currently being mirrored and
    • Compares the feed against the one saved on disk. If no change, stop now. Otherwise
    • Parse the XML of the feed into internal data structures
    • Look through all the posts in the feed looking for new ones.
    • Compare each new post against the unanswered questions, if there’s a match stick details in the marking database, ready for the marker.
  • Modify student blog posts/answers
    This one is also difficult, as it shares with Mirror the need to parse XML and must compare the parsed feed with the data in the database.
  • Manage marking – coordinator
    Fairly straight forward web application. Need to identify if there are already ways in Moodle for storing this information.
  • Integrate with assignments
    Currently, this is essentially
    • Apply a formula to translate marks for each answer to a single mark result.
    • Copy that result to the assignment system database table.

    Need to find out how this works (the assignment database) in Moodle.
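The registration-time checks listed above (is the URL valid, can it be retrieved, does it have an attached feed) can be sketched roughly as follows. This is an illustrative Python stand-in for the Perl LWP::Simple/XML::Feed approach, not BAM’s actual code; all names are made up, and the network fetch is omitted so the feed-discovery logic stands on its own.

```python
from html.parser import HTMLParser
from urllib.parse import urlparse, urljoin

def is_valid_url(url):
    """Basic syntactic check: must be an absolute http(s) URL."""
    parts = urlparse(url)
    return parts.scheme in ("http", "https") and bool(parts.netloc)

class FeedLinkFinder(HTMLParser):
    """Scan a blog's HTML for an attached RSS/Atom feed, i.e. a
    <link rel="alternate" type="application/rss+xml" ...> element."""
    FEED_TYPES = ("application/rss+xml", "application/atom+xml")

    def __init__(self, base_url):
        super().__init__()
        self.base_url = base_url
        self.feed_url = None

    def handle_starttag(self, tag, attrs):
        a = dict(attrs)
        if (tag == "link" and a.get("rel", "").lower() == "alternate"
                and a.get("type") in self.FEED_TYPES
                and self.feed_url is None):
            # Resolve a relative href against the blog's URL.
            self.feed_url = urljoin(self.base_url, a.get("href", ""))

def discover_feed(blog_url, html):
    """Given the blog URL and its retrieved HTML, return the feed URL or None."""
    if not is_valid_url(blog_url):
        return None
    finder = FeedLinkFinder(blog_url)
    finder.feed(html)
    return finder.feed_url
```

In the Moodle version the retrieval step would sit in front of `discover_feed`, which is one of the open questions below (the PHP equivalent of LWP::Simple).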
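The four mirroring steps above can be sketched as a single pass over one student blog. Python is used purely for illustration; the function names, the change-detection via an md5 digest, and the matching callback are all assumptions about how a Moodle version might work, not a description of the Webfuse code.

```python
import hashlib
import xml.etree.ElementTree as ET

def parse_feed(feed_xml):
    """Minimal RSS parsing into internal data structures (id and title only)."""
    root = ET.fromstring(feed_xml)
    return [{"id": item.findtext("guid") or item.findtext("link"),
             "title": item.findtext("title") or ""}
            for item in root.iter("item")]

def mirror_blog(feed_xml, saved_copy, seen_ids, unanswered, match):
    """One mirroring pass over a single student blog.

    feed_xml   - the newly fetched feed (string)
    saved_copy - the copy saved on disk from the last pass, or None
    seen_ids   - ids of posts already processed
    unanswered - question identifiers not yet allocated a post
    match      - callable(post, question) -> bool, the matching heuristic
    Returns (post id, question) pairs ready for the marking database.
    """
    # 1. Compare the feed against the one saved on disk; if no change, stop now.
    if saved_copy is not None and \
            hashlib.md5(feed_xml.encode()).digest() == \
            hashlib.md5(saved_copy.encode()).digest():
        return []
    allocations = []
    # 2. Parse the XML of the feed, and 3. look through all posts for new ones.
    for post in parse_feed(feed_xml):
        if post["id"] in seen_ids:
            continue
        # 4. Compare each new post against the unanswered questions.
        for question in unanswered:
            if match(post, question):
                allocations.append((post["id"], question))
                break
    return allocations
```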
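The two integration steps above amount to something like the following sketch. The weighted-sum formula and the cap are illustrative assumptions (the actual formula is whatever the coordinator configures), and a plain dict stands in for the Moodle assignment table, which is exactly the part still to be investigated.

```python
def assignment_result(answer_marks, weights, max_mark):
    """Step 1: apply a formula to translate the marks for each answer into a
    single result. Here an assumed weighted sum, capped at the assignment
    maximum; unweighted questions default to a weight of 1.0."""
    total = sum(mark * weights.get(question, 1.0)
                for question, mark in answer_marks.items())
    return min(total, max_mark)

def integrate_with_assignment(gradebook, student, result):
    """Step 2: copy that result to the assignment system's store. A dict
    stands in for the (yet to be investigated) Moodle database table."""
    gradebook[student] = result
    return gradebook
```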

Where to now?

Some specific technical questions to answer

  • In Moodle/PHP, how do you retrieve remote documents over HTTP? Is there a LWP::Simple equivalent?
  • In Moodle/PHP, how do you parse XML?
  • Can a Moodle module support multiple activities?
  • How does “integration” with the Moodle assignment system work?
  • Exactly how do you code up a fairly standard database-backed web app/form in Moodle/PHP?

Have to come up with projects that let me learn the answers to those questions.

The intervention – Webfuse design 1996-1999

The previous couple of posts (one and two) described the context in which the Webfuse e-learning system was designed. This focused primarily on the context at CQU up to 1996 or so. These posts form the definition of the problem which Webfuse was meant to address.

This post begins a description of the intervention undertaken to address this problem. i.e. the early design of Webfuse. This post introduces the Intervention section, explains why it was decided to build another system and then outlines the design guidelines that underpinned Webfuse.

Intervention

As described in the previous section, the problem to be addressed during 1996 was the development of a system, processes and resources to support the use of web-based learning in all of the courses offered by the Department of Mathematics and Computing (M&C) at Central Queensland University (CQU). The same system was also expected to support the operation of the organisational website for the Faculty of Applied Science to which M&C belonged. This section offers a description of the design and implementation of the intervention intended to address this problem.

The description starts with an explanation (Section 4.3.1) of why it was decided to build a CQU-specific system rather than use one of the already existing systems. Next, the design guidelines established at the start of the intervention are explained (Section 4.3.2). Following is a detailed description (Section 4.3.3) of the overall design and implementation details of the resulting system – Webfuse – using the design guidelines as an organising structure. A part of the design and implementation section will be a summary of the functionality for e-learning provided by Webfuse during 1996 through 1999.

Why build another system?

By 1996, when this work commenced, there were already a number of existing systems offering support for web-based learning. Many of these systems originated as solutions to this exact problem at other universities, while still others were adaptations of CML-based systems to the Internet. For example, Web Educational Support Tools (WEST) was developed at University College Dublin and during 1996 was being used at the University of Western Sydney (Nepean) (Pennell, 1996). The World-Wide Web Course Tool (WebCT) was developed at the University of British Columbia (Goldberg et al., 1996) and went on to be a successful commercial product used at many universities throughout the world. The decision to engage in the design and construction of a unique system at CQU could be seen as an example of the not invented here phenomenon resulting in the reinvention of the wheel (Simon, 1991, p. 130), which had been recognised as a growing problem in the development of multimedia learning resources (Bryant, 1998; Zelmer, 1996). This section explains how the mix of a number of different factors led to the decision to build another system.

The most obvious factor was the background and discipline mix within the Department of Mathematics and Computing. The computing, or information technology, side of the department included staff with interests and expertise in software development. Some of these staff, including the author, had a history of research interests in the Internet and the application of information technology to learning (Carter et al., 1995; Jones, 1994, 1995). In addition, it was thought that the M&C context offered the chance of unique perspectives on e-learning based on a combination of on-campus, distance and international students and a student population with significantly greater computing expertise and access to technology (Jones & Buchanan, 1996). This was backed up by existing experience that had already, within a limited time frame, identified features, approaches and ideas that had not yet been implemented in existing systems (Jones & Buchanan, 1996).

This belief that there were still discoveries to be made was based on the perception that the online learning environment was still fairly youthful and that there were new insights to be gained. In 1997, a year after this work commenced, Macpherson et al (1997) identified that experience in teaching and learning online continued to be fragmentary and that few teaching staff had the knowledge to fully assess the implications of online learning or realistically determine possible future applications. The same authors (Macpherson et al., 1997) discovered through experience one such limitation with WEST (Pennell, 1996), one of the existing systems. The strictly sequential and linear course structure embedded in the system design required the designers to discover ways to subvert it in order to support the nonlinear design approach to which they were committed (Macpherson et al., 1997).

For these reasons, it was felt that the design, implementation and use of another online learning environment within the M&C context would, as well as providing support for online learning by M&C staff and students, provide an opportunity to experiment with new services, enable a comparison to be drawn between different systems, identify mistakes to avoid and practices to replicate and hopefully identify unique possibilities for e-learning (Jones & Buchanan, 1996). Lastly, a key guideline for the design of Webfuse was to be “do not reinvent the wheel” (Jones & Buchanan, 1996).

Design guidelines

Design of a system, Webfuse, to support learning and teaching within the Department of Mathematics and Computing (M&C) and the broader website for the Faculty of Applied Science commenced in mid-1996. The design guidelines underpinning Webfuse and the associate rationale were outlined in publications written at the time (Jones & Buchanan, 1996; McCormack & Jones, 1997) and others reflecting back on that design after the fact (Gregor et al., 1999; Jones, 1999a, 1999b; Jones & Gregor, 2004, 2006). This section provides an overview of those design guidelines while the following section (Section 4.3.3) explains how those guidelines were implemented through the design and implementation of Webfuse.

Webfuse will be a web publishing tool

The problem definition required Webfuse to not only provide online learning services to the students and staff of M&C, but it also had to support the website for the Faculty of Applied Science (and later the Faculty of Informatics and Communication). This meant that from the start Webfuse was envisaged as a Web publishing tool. That is, a system that helps people create and maintain Web pages and Web sites. Webfuse was designed as a general Web publishing tool that also provided a number of specific tools and facilities to support the creation and maintenance of Web-based classrooms (McCormack & Jones, 1997, p. 362).

This is somewhat different to most of the other e-learning systems available at that time. Systems such as TopClass and WebCT were designed only for learning and teaching. A consequence of this design was that these systems had a more pre-defined purpose and structure, and a subsequent lack of flexibility. As a more general web publishing tool, capable of supporting an organisational website, Webfuse had to satisfy a broader set of requirements.

Webfuse will be an integrated online learning environment

It was intended that Webfuse would be a totally integrated online learning environment in that it should provide all of the features and systems required by both students and teachers using a consistent and easy-to-use interface (Jones & Buchanan, 1996). An integrated online learning environment encapsulates a set of tools, systems, procedures and documentation that supports any and all parts of the learning and teaching experience. The implication was that students and teachers could perform all necessary tasks, regardless of technology, via Webfuse.

As part of this, e-learning was seen as more than converting lecture overheads and other course resources into HTML and placing them on the Web (Jones & Buchanan, 1996). An integrated online learning environment should provide support for tasks including, but not limited to, assignment submission, automated (self-)assessment, evaluation, and both synchronous and asynchronous communication. As an integrated online learning environment Webfuse also had to provide appropriate support for non-Web e-learning. For example, by 1996 M&C was making increasing use of course mailing lists as a means of communication. Rather than require the use of mailing lists to cease, Webfuse should integrate with this use and preferably provide additional functionality.

Webfuse will be eclectic, yet integrated

The majority, if not all, of the e-learning systems available in 1996 were tightly integrated systems produced and supported by a single vendor. All additions and modifications to these systems had to be made by that single vendor. While the tightly integrated nature of these tools meant they were reasonably easy to install, manage and use with the supplied documentation, it also meant that they were less than responsive to new developments from either the broader online community or the local context.

It was recognised from the start of the Webfuse project that it would not be possible for M&C to provide all the necessary human resources to build and maintain a Web authoring tool (Jones, 1999b). A tightly integrated structure with M&C providing all tools would not be possible. M&C would run the risk of either retaining an out of date system because it was too expensive to replace, or having to throw away the investment in a system because it had not kept up with change (Jones & Buchanan, 1996). This was seen as a significant problem because of recent experience with the difficulty CQU and other institutions faced in moving from text-based, computer-mediated communications systems to the more recent Internet system, and also because on-going and rapid change was seen as a key characteristic of the Internet (Jones & Buchanan, 1996). In addition it was recognised that the broader community using the Web would be better able to develop a range of tools, such as web-based discussion or interactive chat systems and that it would be more efficient for M&C to re-use those systems, rather than reinvent the wheel.

Consequently, the focus of the integrated online learning environment would be on providing the infrastructure necessary to integrate existing and yet to be developed Internet and e-learning tools developed by the broader community (Jones & Buchanan, 1996). The M&C OLE would provide the management infrastructure and consistent interface to combine existing tools such as WWW servers, online quizzes, assignment submission, discussion forums and others into a single integrated whole (Jones & Buchanan, 1996). While some components would be developed specifically for the local context, the emphasis should be on integrating existing tools into the OLE (Jones & Buchanan, 1996).

Webfuse will be flexible and support diversity

From the start, an ability to handle the diversity and continual change inherent in web-based learning (Jones, 2004) was seen as the key requirement of any web-based learning system. Freedom of choice, for both staff and students, was seen as one of the important advantages provided by e-learning (Jones & Buchanan, 1996). This was in part a reaction to the necessary consistency inherent in large-scale print-based distance education. This need for consistency created a number of problems and issues due to the diversity present in the disciplines, courses, academics and students within the department (Jones, 1996a; Jones & Buchanan, 1996). Less than user-friendly consistency had also previously extended to requiring students to have and to use specific computer platforms while studying at CQU. Flexibility and the ability to change was also seen as important since one purpose of Webfuse was to enable research and experimentation with forms of e-learning. It was important that the design of Webfuse was not frozen before experience gained in using the system was able to inform on-going change.

To achieve the desired levels of flexibility and support for diversity a number of guidelines were adopted. These included (Jones & Buchanan, 1996):

  1. do not specifically support any one educational theory;
    There is a large variety of possible learning theories, with different theories being more appropriate depending on the context and individuals involved (Leidner & Jarvenpaa, 1995). Rather than seek to embody the principles of a single learning theory, Webfuse should enable individual academics to use those theories they deem most suitable, and also handle change in preferred learning theories as experience and knowledge expand.
  2. platform independence and standards; and
    In an era of diverse and changing computer platforms, placing artificial constraints on the computer platforms that could use Webfuse was seen as unnecessarily restrictive. Dependence on a single or limited number of platforms would restrict choice, limit the number of people who could use the system, and could influence future use of the system as platforms become dated. It was intended that the M&C OLE would use platform-independent technologies such as scripting languages and broadly accepted standards.
  3. provide the tools, not the rules.
    Computer systems, unlike human organizations, are rigid and incapable of adaptation on their own and consequently tend to better support the regularities than the particularities of a situation (Harris & Henderson, 1999). For an activity like learning and teaching that is characterised by diversity, rigid computer systems that expect consistent, regular practices are less than appropriate. Strict procedures leave little room for the unique characteristics of individual disciplines, courses, academics and students (Jones & Buchanan, 1996). Where possible, Webfuse should aim to provide the tools to assist in the development of Web-based classrooms, but have sufficient flexibility to enable staff and students to adapt these tools to their personal situation (Jones & Buchanan, 1996).

Webfuse will seek to encourage adoption

In 1996, it was recognised that “if you build it, they will come” is not an approach likely to work within an academic environment where staff development and improvements in learning and teaching have been described as “herding cats” (Jones & Buchanan, 1996). It was recognised that once the system was built staff must be: encouraged to use the system, convinced of the system’s usefulness, and provided with appropriate training and documentation (Jones & Buchanan, 1996). Design guidelines intended to help encourage use of the system included (Jones & Buchanan, 1996):

  • consistent interface;
    The eclectic, yet integrated guideline requires that Webfuse have a consistent interface and system metaphor for all tools. This should help ease-of-use and subsequently adoption.
  • increased sense of control and ownership;
    One rationale for requiring Webfuse to support diversity and flexibility was so that staff and students could adapt the system to their needs and subsequently encourage a greater sense of control and ownership.
  • minimise new skills; and
    Even in 1996, the students and staff within M&C brought existing experience with computers, software and the Internet. For example, many students already had email accounts and associated email programs. Academics were already using mailing lists and other aspects of the Internet. Rather than reinvent the wheel and force these people to learn new skills and tools, Webfuse should leverage these existing skills, software and processes to minimise the need for new skills and reduce workload.
  • automate.
    Where possible the system should automate tasks, while maintaining a balance with the other guidelines. This would include both support and administrative services specific to the Web (e.g. HTML validation and link checking) and other higher-level tasks such as creating an initial course website.

References

Bryant, S. (1998). Overcoming the ‘Not Invented Here’ Syndrome – Experience with Sourcing Education Multimedia Developed Elsewhere. Paper presented at the Proceedings of ASCILITE’98.

Carter, B., Lockwood, J., O’Kelly, S., Parry, C., Atkinson, S., Manderson, T., et al. (1995). CQ-PAN: Putting schools into cyberspace. Paper presented at the Information On-Line and On-Disk’95, Sydney.

Goldberg, M., Salari, S., & Swoboda, P. (1996). World-Wide Web – Course Tool: An environment for building WWW-based courses. Computer Networks and ISDN Systems, 28, 1219-1231.

Gregor, S., Jones, D., Lynch, T., & Plummer, A. A. (1999). Web information systems development: some neglected aspects. Paper presented at the Proceedings of the International Business Association Conference, Cancun, Mexico.

Harris, J., & Henderson, A. (1999). A better mythology for system design. Paper presented at the SIGCHI conference on Human factors in computing systems: the CHI is the limit, Pittsburgh, Pennsylvania.

Jones, D. (1994). A workstation in every home! Paper presented at the Asia Pacific Information Technology in Education Conference, Brisbane.

Jones, D. (1995). 1000 users on a 486. Paper presented at the SAGE-AU’95, Wollongong.

Jones, D. (1996). Computing by distance education: Problems and solutions. Paper presented at the Integrating Technology into Computer Science Education.

Jones, D. (1999a). Solving some problems with university education: Part II. Paper presented at the Ausweb’99, Balina, Australia.

Jones, D. (1999b). Webfuse: An integrated, eclectic web authoring tool. Paper presented at the Proceedings of EdMedia’99, World Conference on Educational Multimedia, Hypermedia & Telecommunications, Seattle.

Jones, D. (2004). The conceptualisation of e-learning: Lessons and implications. Best practice in university learning and teaching: Learning from our Challenges.  Theme issue of Studies in Learning, Evaluation, Innovation and Development, 1(1), 47-55.

Jones, D., & Buchanan, R. (1996). The design of an integrated online learning environment. Paper presented at the Proceedings of ASCILITE’96, Adelaide.

Jones, D., & Gregor, S. (2004). An information systems design theory for e-learning. Paper presented at the Managing New Wave Information Systems: Enterprise, Government and Society, Proceedings of the 15th Australasian Conference on Information Systems, Hobart, Tasmania.

Jones, D., & Gregor, S. (2006). The formulation of an Information Systems Design Theory for E-Learning. Paper presented at the First International Conference on Design Science Research in Information Systems and Technology, Claremont, CA.

Leidner, D., & Jarvenpaa, S. (1995). The use of information technology to enhance management school education: A theoretical view. MIS Quarterly, 19(3), 265-291.

Macpherson, C., Bennett, S., & Priest, A.-M. (1997). The DDCE Online Learning Project. Paper presented at the ASCILITE’97, Perth.

McCormack, C., & Jones, D. (1997). Building a Web-Based Education System. New York: John Wiley & Sons.

Pennell, R. (1996). Managing online learning. Paper presented at the AUSWEB’96. from http://ausweb.scu.edu.au/aw96/educn/pennell/index.htm.

Simon, H. (1991). Bounded rationality and organizational learning. Organization Science, 2(1), 125-134.

Zelmer, A. C. L. (1996). The more things change…memoirs of a computer-based educator. Paper presented at the ASCILITE’96, Perth.

Use of "e-learning" @ CQU up to 1996 (or so)

The following is the next completed (to a rough first draft stage) section of chapter 4 of my thesis. It follows on from a post from yesterday that started defining the problem being faced. This section completes the definition of this problem by giving a broad summary of the use of “e-learning” at CQU up until 1996.

Apologies to all those folk at CQU whose work I have not referenced. If you are such a person, please let me know what I’ve missed and I’ll add your work in. You should be able to see a bias towards work from the Department of Mathematics and Computing which was the organisational unit I belonged to back then.

Use of e-learning

In defining e-learning, this thesis draws on the OECD (2005) definition in which e-learning is “the use of information and communications technology to enhance and/or support learning in tertiary education”. By 1996 there was a long history at CQU of individuals experimenting with e-learning (Buchanan & Farrands, 1995; Chernich, Jamieson, & Jones, 1995; Clayton, Farrands, & Kennedy, 1990; Farrands & Cranston, 1993; Farrands & Lynch, 1996; Gregor & Cuskelly, 1994; Jones, 1994, 1996b; Oliver, 1985, 1994; Zelmer & Pace, 1994). The limitations, problems and lessons learned from these experiments contributed to the understanding and definition of the problem to be solved. This section offers a brief overview of this work, using the broadest possible definition of e-learning, to illustrate this contribution. The previous work is divided along the lines of the technologies used and includes: audio and video; multimedia and computer-aided learning; and computer-mediated communications and the Internet.

Audio and video

For much of its existence the nature of learning and teaching at CQU has been characterised by significant geographic distance between individual students and the teaching staff. Given the established expectation of learning and teaching involving face-to-face interactions this geographic distance has created significant disquiet amongst both students and staff. As a consequence CQU has a history of fairly significant usage and experimentation with various technologies intended to provide students with audio and video and in some way re-create the face-to-face learning experience.

For distance education students, audio teleconferencing and telephone tutorials have been used to provide better access and support (Davison, 1996). For many distance education students the telephone remained the main form of interaction with academic staff. The importance of this medium led to a variety of hotline services, first provided by the central distance education division and subsequently by at least one academic department, that offered a managed approach to answering student queries (Jones, 1996a). By the mid-1990s, the installation of an institutional telephone voicemail system enabled some academics to record short lectures and responses to study questions that students could access as the need arose (Davison, 1996).

During the early 1990s conditions became conducive to more widespread consideration of audiographics (Rehn & Towers, 1994). Ellis, Debreceny and Crago (1996) define audiographics as the linking of educational sites into a distributed classroom to provide a combination of audio, over a telephone line, and graphics, shared by computers linked by modems. During the mid-1990s there was some encouragement and use at CQU (Crock & Andrews, 1997; Thompson, Winterfield, & Flanders, 1998), though there were problems with the preparedness of students and staff and with the accessibility and cost of the required technology. The use of audiographics at CQU mirrored the broader context, and it largely disappeared with the increasing availability of the Web (Rowe & Ellis, 2008).

During the early 1990s, audio cassettes were used in a first year programming course, primarily for distance education students, to provide example tutorial sessions between lecturer and student (Jones, 1996a). Tutored Video Instruction (TVI) was a more organised approach to the use of recorded media, in this case video-tapes, to capture face-to-face interaction, aimed primarily at students on CQU’s regional campuses. The identification that school leavers, the primary students at the regional campuses, did not have the independent learning skills to study successfully from predominantly print-based distance education materials was a major reason for the adoption of TVI (T. Andrews & Klease, 1998). TVI was first experimented with at CQU in 1983 and used more broadly thereafter (McConachie et al., 2006). It involved the production of videotapes of regular classroom lectures at the main delivery campus and the physical distribution of these tapes to non-delivery campuses, where they were played for students in the presence of a tutor (T. Andrews & Klease, 1998). Some conclusions about TVI were generally positive (Appleton, Dekkers, & Sharma, 1989). However, this only worked if TVI was used not simply to watch the tape, but as a stimulus for discussion by students and interaction with the tutor (T. Andrews & Klease, 1998).

The ability to provide a more interactive learning experience across campuses became possible in 1992, when interactive video-conferencing facilities were introduced at CQU using a ‘rollabout’ system in which all the technology was located on a trolley that could be wheeled in and out of rooms as required (Luck, 2009). In 1996, to address attrition and in order to become a true regional institution, students were able to complete the second and third years of some degrees at the non-Rockhampton regional campuses (David Oliver & Van Dyke, 2004). The interactive videoconferencing facilities were significantly expanded to support the necessary multi-campus teaching of advanced courses (Luck, 1999).

Multimedia and computer aided learning

By the late 1980s and early 1990s, in keeping with the broader history of technology-mediated learning (insert cross reference to chapter 2), a number of CQU-based projects were experimenting with computer-managed and computer-assisted learning (CML/CAL). Zelmer and Pace (1994) report on such work in disciplines including biology, chemistry, mathematics and health science. By the mid to late 1990s the rise of multimedia-capable personal computers increased interest, especially given improving audio and video capabilities. By this time the CQU distance education centre had created an Interactive Multimedia Unit that included instructional designers (Macpherson & Smith, 1998). The unit provided assistance in the production of multimedia resources to supplement traditional distance education resources (e.g. Stewart & Cardnell, 1998) and the development of multimedia training materials for external clients (Bennett & Reilly, 1998). While some useful multimedia resources were developed, there remained problems with this approach, including inadequate development tools, incompatible computer platforms, large development costs and concerns about equity and access (Zelmer, 1995; Zelmer & Pace, 1994). By the mid-1990s, with growing recognition of the benefits of the World-Wide Web, such personal computer based applications were no longer considered state of the art (Zelmer, 1995).

The nature of print-based distance education is such that approaches often used to help students understand difficult concepts, such as live demonstrations, are not possible. CQU staff, especially those within M&C have developed a number of computer aided learning packages to address these problems and assist student learning (Jones, 1996a) with concepts such as calculus (Clayton et al., 1990), procedures and parameter passing (Buchanan & Farrands, 1995) and the internals of operating systems and the operation of concurrent programming (Chernich et al., 1995; Chernich & Jones, 1994). Even with the use of computing project students the development of quality computer-aided learning tools still requires considerable resources in providing suitable documentation and the integration of the tools into teaching (Jones, 1996a).

Computer-mediated communications and the Internet

Australian universities are linked to each other and the broader Internet through the Australian Academic and Research network (AARNET) which was introduced in June 1990 (McCann, Christmass, Nicholson, & Stuparich, 1998, p. 4). Until this time the use of e-learning was limited to dial up terminal access to mainframe computers. As early as 1985 the university provided access to mainframe computers for information technology students via dial up terminals (Dave Oliver, 1985). Difficulties associated with this practice arose from the poor quality of telephone exchanges and the high cost of telephone connections due to the distances involved (Dave Oliver, 1985). In the early 1990s the cost of these connections was addressed by the formation of the Australian Distance Education Network (ADEnet) as a way to provide low cost computer communications capabilities for distance education students from anywhere in Australia (Atkinson & Castro, 1991).

The main form of computer-mediated communication used by staff and students was still provided by institutional main-frame computers through text-based email and discussion forums such as bulletin boards and Usenet newsgroups. Oliver (1994) reports on the use of Usenet newsgroups as forums for discussion about a collection of readings in a software engineering course in 1990 and 1991. Gregor and Cuskelly (1994) report on the use of similar technologies within a postgraduate information systems course. While these experienced high levels of participation, there remained significant usability problems with learning the primitive software and low amounts of social student/student and student/instructor interaction (Gregor & Cuskelly, 1994).

Throughout the early 1990s the application of computer-mediated communication moved away from a host-centric approach towards a more distributed, Internet-based approach. The use of Internet mailing lists within M&C commenced in 1992, with 13 courses having a course mailing list in 1995 (Jones, 1995) and 22 courses in the first semester of 1996 (Jones, 1996a). Other applications included the use of email for individual student/teacher communication, the use of email for automated assignment submission (Jones & Jamieson, 1997), and, starting in 1994, the use of the World-Wide Web for the distribution of learning material. By 1995, the Department of Mathematics and Computing had 11 courses with a web presence. By 1996 at least three of these courses were making significant use of “hand-coded” web sites to distribute course material, including the institution’s first fully online course (Jones, 1996b).

The rise of the Internet and commercial Internet Service Providers (ISPs) during the mid-1990s both reduced the cost of such access and helped improve its ease-of-use. However, for some CQU students, asking them to use this technology represented a misunderstanding of their reality, with the necessary costs of owning a computer and using an ISP being equivalent to the deposit on a reasonable car and the subsequent hire purchase repayments (Davison, 1996). This and other work was seen by some as indicating that CQU was evolving into a fourth generation university through the incorporation of interactive multimedia and computer-mediated communication technologies (Crock & Andrews, 1997).

There remained, however, the issue of widespread staff adoption and use. By 1996, many CQU academics used no more than the written word for distance education, with some making little or no attempt to utilise other existing technologies such as teleconferencing, audio-cassettes or even pictures within study materials (Davison, 1996). It was also observed that, although pockets of expertise existed at CQU and there had been some useful dabbling in online delivery, the majority of academics and administrators had little or no idea of what this new approach to teaching was all about (Macpherson, Bennett, & Priest, 1997). This was in line with the broader recognition that it was difficult for educators who lacked a technical background to create sophisticated WWW-based courses (Goldberg, Salari, & Swoboda, 1996).

It was recognised within M&C that the Web and online learning offered one approach that could address problems with existing teaching media and methods, improve the overall learning experience of students, and possibly expand the student base (Jones & Buchanan, 1996). However, given the difficult and time-consuming nature of web-based learning, it was believed that for web-based learning to become widespread within M&C it would be necessary to implement appropriate tools, automated systems, procedures, documentation and training to reduce the burden (Jones & Buchanan, 1996). This was the problem set for the author when he was given teaching relief for the second half of 1996. The task was to lead the development of a system, processes and resources to support the use of web-based learning in all of the department’s courses (Jones & Buchanan, 1996). From the perspective of M&C it was expected that the resulting system would enable the use of online learning in all department courses and provide M&C with a distinct advantage over its competitors (Jones & Buchanan, 1996). As an additional requirement it was expected that the same system would be used to provide the organisational website for the Faculty of Applied Science, the broader faculty to which M&C belonged.

References

Andrews, T., & Klease, G. (1998). Challenges of multisite video conferencing: The development of an alternative teaching/learning model. Australian Journal of Educational Technology, 14(2), 88-97.

Appleton, A., Dekkers, J., & Sharma, R. (1989). Improved teaching excellence by using tutored video instruction: an Australian case study. Paper presented at the 11th EAIR Forum.

Atkinson, R., & Castro, A. (1991). The ADEnet project: Improving computer communications for distance education students. Paper presented at the Quality in Distance Education: ASPESA Forum 91, Bathurst, NSW: Australia.

Bennett, S., & Reilly, P. (1998). Using interactive multimedia to improve operator training at Queensland Alumina Limited. Australian Journal of Educational Technology, 14(2), 75-87.

Buchanan, R., & Farrands, P. (1995). Can simulations help students understand programming concepts: a case study. Paper presented at the The Twelfth Annual Conference of the Australian Society for Computers in Learning in Tertiary Education, Melbourne, Victoria.

Chernich, R., Jamieson, B., & Jones, D. (1995). RCOS: Yet another teaching operating system. Paper presented at the First Australasian Conference on Computer Science Education, Sydney.

Chernich, R., & Jones, D. (1994). The design and construction of a simulated operating system. Paper presented at the Asia Pacific Information Technology in Education Conference, Brisbane.

Clayton, D., Farrands, P., & Kennedy, M. (1990). Using the microcomputer to enhance calculus teaching. Collegiate Microcomputer, 8(1), 47-50.

Crock, M., & Andrews, T. (1997). Providing staff and student support for alternative learning environments [Electronic Version]. ultiBASE. Retrieved 19 July, 2009 from http://ultibase.rmit.edu.au/Articles/dec97/crock1.htm.

Davison, T. (1996). Distance learning and information technology: Problems and solutions in balancing caring, access and success for students. Distance Education, 17(1), 145-158.

Ellis, A., Debreceny, R., & Crago, R. (1996). Half a decade of audiographics development: A case history of Electronic Classroom and its users. Paper presented at the Third International Interactive Multimedia Symposium, Perth, Western Australia.

Farrands, P., & Cranston, M. (1993). Computing facilities of distance students. Paper presented at the Distance Education Futures, 11th Biennial ASPESA Forum.

Farrands, P., & Lynch, T. (1996). Using computer generated software metrics to improve the quality of students’ programs. Paper presented at the 1st Australasian Conference on Computer Science Education, Sydney.

Goldberg, M., Salari, S., & Swoboda, P. (1996). World-Wide Web – Course Tool: An environment for building WWW-based courses. Computer Networks and ISDN Systems, 28, 1219-1231.

Gregor, S., & Cuskelly, E. (1994). Computer-mediated communication in distance education. Journal of Computer Assisted Learning, 10(3), 161-181.

Jones, D. (1994). A workstation in every home! Paper presented at the Asia Pacific Information Technology in Education Conference, Brisbane.

Jones, D. (1995). 1000 users on a 486. Paper presented at the SAGE-AU’95, Wollongong.

Jones, D. (1996a). Computing by distance education: Problems and solutions. Paper presented at the Integrating Technology into Computer Science Education.

Jones, D. (1996b). Solving Some Problems of University Education: A Case Study. Paper presented at the AusWeb’96, Gold Coast, QLD.

Jones, D., & Buchanan, R. (1996). The design of an integrated online learning environment. Paper presented at the Proceedings of ASCILITE’96, Adelaide.

Jones, D., & Jamieson, B. (1997). Three Generations of Online Assignment Management. Paper presented at the ASCILITE’97, Perth, Australia.

Luck, J. (1999). Teaching and learning using interactive videoconferencing: screen-based classrooms require the development of new ways of working. Paper presented at the AARE-NZARE, Melbourne, Australia.

Luck, J. (2009). Fusing technological design with social concerns: A socio-technical study of implementing interactive videoconferencing. Paper presented at the World Conference on Educational Multimedia, Hypermedia and Telecommunications 2009, Honolulu, Hawaii.

Macpherson, C., Bennett, S., & Priest, A.-M. (1997). The DDCE Online Learning Project. Paper presented at the ASCILITE’97, Perth.

Macpherson, C., & Smith, A. (1998). Academic authors’ perceptions of the instructional design and development process for distance education: A case study. Distance Education, 19(1), 124-141.

McCann, D., Christmass, J., Nicholson, P., & Stuparich, J. (1998). Educational technology in higher education. Canberra, ANU: Department of Employment, Education, Training and Youth Affairs.

McConachie, J., Harreveld, R. E., Luck, J., Nouwens, F., & Danaher, P. (2006). Editor’s introduction. In J. McConachie, R. E. Harreveld, J. Luck, F. Nouwens & P. Danaher (Eds.), Doctrina perpetua: brokering change, promoting innovation and transforming marginalisation in university learning and teaching. Teneriffe, Qld: Post Pressed.

OECD. (2005, 17 January 2006). Policy Brief: E-learning in Tertiary Education.   Retrieved 5 December, 2006, from http://www.oecd.org/dataoecd/55/25/35961132.pdf

Oliver, D. (1985). Off campus computing. ACM SIGCSE Bulletin, 17(2), 21-26.

Oliver, D. (1994). Software engineering project work in combined distance and on campus modes. ACM SIGCSE Bulletin, 26(2), 31-35.

Oliver, D., & Van Dyke, M. (2004). Looking back, looking in and looking on: Treading over the ERP battleground. In L. von Hellens, S. Nielsen & J. Beekhuyzen (Eds.), Qualitative case studies on implementation of enterprise wide systems (pp. 123-138). Hershey, PA: Idea Group.

Rehn, G., & Towers, S. (1994). Audiographic teleconferencing: The Cinderella of interactive multimedia. Paper presented at the Second International Interactive Multimedia Symposium, Perth, Western Australia.

Rowe, S., & Ellis, A. (2008). Can one size fit all? Using web-based audio-graphics to support more flexible delivery and learning. Paper presented at the ASCILITE’2008, Melbourne, Victoria.

Stewart, S., & Cardnell, D. (1998). Computer Hardware Fundamentals using multimedia: The sequel. Paper presented at the ASCILITE’1998, Wollongong, NSW.

Thompson, R., Winterfield, J., & Flanders, M. (1998). Into the world of electronic classrooms: a passport to flexible learning. British Journal of Educational Technology, 29(2), 177-179.

Zelmer, A. C. L. (1995). Re-examining the myth: Developing truly affordable multimedia. Paper presented at the Learning with Technology: The 12th Annual Conference of the Australian Society for Computers in Learning in Tertiary Education, Melbourne, Victoria.

Zelmer, A. C. L., & Pace, S. (1994, 23-28 January, 1994). Unrealised expectations: Developing (truly) affordable multimedia. Paper presented at the Second International Interactive Multimedia Symposium, Perth, Western Australia.

PhD update #18 – moving along

Last week’s update reported on a bit of a brick wall that had been struck. Thankfully, the strategies outlined in that update and feedback from the esteemed supervisor have well and truly demolished said brick wall, and progress is steaming ahead with a renewed sense of vigour and perhaps just the vaguest glimmer of light at the end of a long, long tunnel.

What I’ve done

Last week, I said I would

  • Bundle up chapter 2, send it to the supervisor and await some independent feedback. DONE
    This one was already done when I completed the last update. Since then I’ve received feedback from the supervisor, positive feedback and also some good directions on where to go next.
  • Aim to complete a first draft of chapter 4.
    This hasn’t been completed. However, progress has been made. I’ve just posted a first draft of the introduction and section 4.2.1. The status of the other parts of this chapter is:
    • Section 4.2.2 (last bit of 4.2) is essentially done. Some minor additional references to add in.
    • Section 4.3 (a description of the design and rationale of Webfuse) is almost all done. About 16 pages so far, including a few graphics.
    • Section 4.4 (evaluation of Webfuse: 1996 to 1999) some initial thoughts and stats, but needs some work. However, shouldn’t be too large.
    • Section 4.5 (essentially the abstraction into an ISDT) is a vague collection of quotes from previous papers, all in the Walls et al format for an ISDT. These need to be updated and put into the Gregor and Jones format.
  • Complete first draft of chapter 6. – obviously not done.

What I’ll do next week

The main aim is to get a complete first draft of chapter 4 completed and sent off to the esteemed supervisor.

At this stage, I’ll revisit feedback on chapter 2 and set out again to get the last remaining 3.5 components of the Ps Framework complete.

Build it and they will come – starting with the institution

In the last PhD update I outlined a change of tack. I’ve moved from working on chapter 2 (the lit review) to working on chapter 4. Chapter 4 is the first of two chapters, each describing one of the two action research cycles that make up the core contribution of the thesis. Chapter 4 focuses on the period from 1996 through to 1999 and is titled “Build it and they will come”.

The following is the introduction and first part of the first major section of that chapter. Most of the content seeks to describe CQU as it stood in 1996, i.e. it attempts to outline the context in which the development of Webfuse arose. The next post/section will offer a description of the state of “e-learning” use at CQU by the end of 1996.

You should be aware that, as with previous posts containing sections of the thesis, the following is at a rough draft stage. Most of the prose is there, in the right structure, but it hasn’t been gone over with a fine-tooth comb.

Introduction

The aim of this work is to formulate an Information Systems Design Theory (ISDT) for e-learning within a university setting. As outlined in Chapter 3, the work has used an iterative, action-research process over a number of years to develop and evolve a real information system with thousands of users and to provide the foundation and insight to formulate the ISDT. Previous publications (Jones & Gregor, 2004, 2006) have described the formulation of the ISDT using three separate phases; this thesis will use two. This chapter describes the first phase of ISDT formulation, from 1996 through 1999, and its use of a somewhat unique technical solution married with a fairly naïve, traditional and misguided approach to dissemination. Chapter 5 takes up the story from 2000 through 2004, during which more informed approaches to both technology and process were adopted, with improved outcomes.

Both chapters use a common structure adapted from the synthesised design and action research approach proposed by Cole, Purao, Rossi and Sein (2005). This structure starts with a definition of the problem (Section 4.2) to be addressed in terms of the context in which this work commenced in 1996 and the organisational requirements at that stage. Next, section 4.3 describes the design and implementation of the information system designed to fulfil those organisational requirements. Section 4.4 presents an evaluation of the resulting system and its use from 1996 through 1999. The chapter closes with a reflection and learning section (Section 4.5) that seeks to abstract the knowledge gained during this intervention with the aim of making a practical and theoretical contribution. For this work this abstraction will take the form of the first generation of the ISDT using the anatomy of an ISDT proposed in Gregor and Jones (2007).

While originally conceptualised in 1996 (Jones & Buchanan, 1996) as a research project, the implementation of the system discussed in this thesis was not initially seen as a process that would produce an ISDT. This is one reason why the first three sections of this chapter do not mention design theory or design research. Instead, they seek to describe the principles, ideas and approaches taken as expressed during 1996 to 1999. This description draws upon a number of publications from that time (Gregor, Jones, Lynch, & Plummer, 1999; Jones, 1995, 1996a, 1996b, 1999a, 1999b; Jones & Buchanan, 1996; McCormack & Jones, 1997), supplemented with email and log archives, and has also been shared with other individuals involved in the activities. The abstraction into an ISDT, outlined in Section 4.5, is being written in 2009 and has been informed by prior attempts to abstract the principles and processes from 1996-1999 into an ISDT (Jones & Gregor, 2004, 2006; Jones, Gregor, & Lynch, 2003).

Section 4.2 – Problem definition

This work commences in mid-1996 within the Department of Mathematics and Computing (M&C) at Central Queensland University (CQU) with the recognition that the department needed to make greater use of the World-Wide Web and other Internet-based technologies in its teaching and learning. This need arose due to the increasing quantity and diversity of the department’s students, prior experience with e-learning, increasing interest in the Web, and perceived limitations of traditional teaching methods. The problem was how to enable the department to adopt e-learning across its teaching and learning. This section provides more background to this problem by first describing the institutional context (Section 4.2.1) within which this research takes place and the experience with e-learning within this institution in the period leading up to 1996 (Section 4.2.2). Section 4.3 then moves on to describe the design and nature of the intervention undertaken to address the problem.

4.2.1 – The institution

Central Queensland University (CQU) is an Australian university which started life in the town of Rockhampton in 1967 (Bowser, Danaher, & Somasundaram, 2007). Since that time it has undergone a series of name changes: the Queensland Institute of Technology (Capricornia) in 1967, the Capricornia Institute of Advanced Education in June 1971, the University College of Central Queensland in 1990, the University of Central Queensland in 1992, Central Queensland University in 1994 and CQUniversity in 2008 (McConachie, Harreveld, Luck, Nouwens, & Danaher, 2006; Oliver & Van Dyke, 2004). The 1990 name change was part of the abolition of the binary system within Australian higher education and marked the institution’s transition towards full university status, which was achieved in January 1992, initially under the name the University of Central Queensland, changed to Central Queensland University in 1994 (Central Queensland University, 2006).

Throughout the 70s, 80s and 90s significant changes were made to how and where the institution drew its students. These changes arose from a combination of institutional need, environmental and sector influences and an on-going need to increase student enrolment to ensure long-term viability. Three significant shifts in student population and methods of learning and teaching experienced by CQU included: the adoption of distance education; development of additional Central Queensland campuses; and expansion into international campuses through commercial partnership. Each of these is briefly explained in the following.

The adoption of distance education. The large geographic distances and small population base within the institution’s local area made distance education an appropriate response to community needs for higher education (Oliver & Romm, 2001). In 1974 the institution became the first Australian provider of a Bachelor of Applied Science via distance education (Oliver & Van Dyke, 2004), with Biology, Mathematics and Management following in subsequent years. By 1983 the number of students enrolled to study via distance education exceeded the number enrolled as on-campus students (Cryle, 1992). By 1995, of the approximately 9000 people enrolled with CQU, 4500 were studying by distance education, with many of these unable to easily access the various sites supporting distance education (Davison, 1996).

The development of additional Central Queensland regional campuses. From the mid-1980s a variety of community pressures contributed to the establishment of additional campuses in the Central Queensland towns of Mackay (350 kilometres to the north), Gladstone (120 kilometres to the south), Bundaberg (330 kilometres to the south) and Emerald (280 kilometres to the west). This produced a network of campuses covering a geographical area of some 616,121 square kilometres (Oliver & Romm, 2001). Until 1996, these campuses only offered the first year of courses, with students having to move to Rockhampton or study by distance education to complete their studies (Luck, 1999). This resulted in some students transferring to other universities after their first year. To address this attrition, and to become a true regional institution, second and third years of some degrees were introduced on other regional campuses (Oliver & Van Dyke, 2004). Interactive videoconferencing facilities (discussed in more detail in Section 4.2.2) were implemented to support the necessary multi-campus teaching of advanced courses (Luck, 1999).

The development of international campuses through commercial partnership. During 1998, CQU’s Vice-Chancellor continued an on-going argument that the survival of a regional university like CQU depended on its ability to raise funds from non-government sources. At this time CQU had commenced planned growth into overseas student markets, both internationally and within Australia, in order to strengthen CQU’s local campuses (Singh, 1998, pp. 13-14). Throughout the 1990s CQU formed partnerships with a small number of overseas companies to teach students within Singapore, Hong Kong, Fiji and Dubai. From the early 1990s, through a commercial partnership with a private company, the institution also established a number of campuses, mainly in major Australian cities – Sydney (1994), Melbourne (1996), Brisbane (1998), Fiji (1998) and the Gold Coast (2001) – to cater specifically for overseas students (Oliver & Van Dyke, 2004). Students at these campuses are tutored by locally appointed academic staff, employed specifically for teaching rather than research, who give face-to-face tutorials and lectures supplemented with distance education materials (Marshall & Gregor, 2002, p. 29). Consequently, it was possible that some courses with large enrolments at multiple campuses could have 40 or more academic staff teaching the course in different locations.

Table 4.1 provides an overview of the student cohort at CQU during the time period 1996 through 1999. The overview shows the percentage of individual students enrolled at CQU through the various modes. Distance education students relied primarily on print-based materials and rarely attended a campus. Regional campus students attended one of the institution’s Central Queensland campuses. International campus students attended one of the campuses within Australia, created by CQU’s commercial partner primarily for international students; during this time period only the Sydney, Melbourne and Brisbane campuses were operating. Overseas international students were studying in Dubai, Singapore or Hong Kong using CQU learning materials and supported by a local, commercial partner of CQU.

Table 4.1 – Overview of CQU student numbers (1996-1999) by mode
  1996 1997 1998 1999
Distance education 59.4% 55.6% 53.7% 52.3%
Regional campus 34.7% 34.7% 32.6% 31.1%
International campus 4.4% 7.7% 10.5% 13.1%
Overseas international 1.6% 3.1% 3.3% 3.6%

In 1996, CQU’s academic units were divided into six faculties (Arts, Applied Science, Business, Education, Health Science and Engineering), each made up of departments. The Department of Mathematics and Computing (M&C) was part of the Faculty of Applied Science. The department had a history of teaching programs in Mathematics and Information Technology (applied computing) to students studying on-campus or via print-based distance education; distance education students rarely, if ever, set foot on a university campus. M&C had significant experience in print-based distance education, becoming amongst the first in the world to offer a professional computing course via print-based distance education when it offered Computer Science I in 1975 (Hinz, 1977).

Many of CQU’s distance computing students were mature, highly motivated people, many of whom had already completed previous tertiary studies or had worked in the computing industry. The majority (87%) of CQU distance computing students studied part-time while working full-time (Farrands & Cranston, 1993), in many cases while also supporting a family. By 1996, CQU was essentially a second generation distance education (Nipper, 1989) dual-mode provider. This means that the same courses were delivered to both on-campus and distance students, generally by the same teaching staff, with distance education students relying predominantly on print, in the form of study guides, textbooks and resource materials, as the primary teaching medium (Jones, 1996b). University policy required that all courses offered by distance education pass through the DDCE system (Macpherson & Smith, 1998).

The reputation of CQU’s predominantly paper-based distance education resources was the result of a largely collaborative effort between academics, instructional designers, editors, printery staff and other employees such as maintenance workers and administrative staff (Davison, 1996). In 1996, the Division of Distance and Continuing Education (DDCE) was responsible for the production and distribution of all distance learning material and, consequently, for the specification of deadlines and the style of distance education material (Jones, 1996a). DDCE also offered a range of services including instructional design, editing, management of assignment submission, and various other student support services. A wide range of computing and communications facilities were provided and maintained by the Information Technology Division (ITD). However, a small number of academic departments, such as the Department of Mathematics and Computing, funded and maintained their own information technology resources.

During 1997 and 1998 the institution undertook a comprehensive review of academic structures. The primary intent was to make the institution more competitive in an increasingly aggressive higher education marketplace (Macpherson & Smith, 1998). As a result of this review, a new structure of faculties and schools was created through innovative combinations of complementary disciplines that offered potential synergies that could be exploited to improve both teaching and research programs (Higher Education Division, 1999). The original six faculties were reduced to five through the combination of some existing faculties and the creation of a new one. The Department of Mathematics and Computing moved from the Faculty of Applied Science to the new Faculty of Informatics and Communications (Infocom), which brought together the discipline areas of information technology, information systems, communication, cultural studies, journalism, mathematics and health informatics (Condon, Shepherd, & Parr, 2003). At the same time, the institution changed from a two-semester academic year to a four-term academic year with the intent of attracting new students by enabling them to complete degrees over shorter periods of time (Macpherson & Smith, 1998).

The nature of a dual-mode, second generation distance education institution, the capabilities of the existing technologies, and the resulting organisational policies and processes necessary to support this practice across a large number of courses created a range of problems. These problems were widely known within the distance education literature (Caladine, 1993; Galusha, 1997; Jones, 1996a; Keegan, 1993; Sherry, 1995) and included, amongst others: high attrition in initial courses; loss of student motivation; significant up-front costs; limited interaction, collaboration or active learning; inflexibility in processes and materials; limited recognition and reward for staff; the out of sight, out of mind problem; and constraints of the print medium. The existence of these problems and the availability of a range of technologies and media had led members of the CQU community to undertake a range of experiments with e-learning. A brief overview of these experiments leading up to the start of this project in 1996 is provided in the following section.

References

Bowser, D., Danaher, P., & Somasundaram, J. (2007). Indigenous, pre-undergraduate and international students at Central Queensland University, Australia: three cases of the dynamic tension between diversity and commonality. Teaching in Higher Education, 12(5), 669-681.

Caladine, R. (1993). Overseas experience in non-traditional modes of delivery in higher education using state-of-the-art technologies: A literature review. Canberra: Department of Employment, Education and Training.

Central Queensland University. (2006). The history of Central Queensland University.   Retrieved 9 Jan, 2007, 2007, from http://www.cqu.edu.au/about/history.htm

Cole, R., Purao, S., Rossi, M., & Sein, M. (2005). Being proactive: Where action research meets design research. Paper presented at the Twenty-Sixth International Conference on Information Systems.

Condon, A., Shepherd, J., & Parr, S. (2003). Managing the evolution of a new faculty in the 21st century. Paper presented at the ATEM’2003.

Cryle, D. (1992). Academia Capricornia: A history of the University of Central Queensland. Rockhampton, QLD: University of Central Queensland.

Davison, T. (1996). Distance learning and information technology: Problems and solutions in balancing caring, access and success for students. Distance Education, 17(1), 145-158.

Farrands, P., & Cranston, M. (1993). Computing facilities of distance students. Paper presented at the Distance Education Futures, 11th Biennial ASPESA Forum.

Galusha, J. (1997). Barriers to learning in distance education. Interpersonal Computing and Technology, 5(3-4), 6-14.

Gregor, S., & Jones, D. (2007). The anatomy of a design theory. Journal of the Association for Information Systems, 8(5), 312-335.

Gregor, S., Jones, D., Lynch, T., & Plummer, A. A. (1999). Web information systems development: some neglected aspects. Paper presented at the Proceedings of the International Business Association Conference, Cancun, Mexico.

Higher Education Division. (1999). The quality of Australian higher education: An overview. Canberra, ACT: Department of Education, Training and Youth Affairs.

Hinz, T. (1977). Teaching computing subjects externally. Paper presented at the Conference on Research in Mathematics Education, Melbourne.

Jones, D. (1995). 1000 users on a 486. Paper presented at the SAGE-AU’95, Wollongong.

Jones, D. (1996a). Computing by distance education: Problems and solutions. Paper presented at the Integrating Technology into Computer Science Education.

Jones, D. (1996b). Solving Some Problems of University Education: A Case Study. Paper presented at the AusWeb’96, Gold Coast, QLD.

Jones, D. (1999a). Solving some problems with university education: Part II. Paper presented at the Ausweb’99, Ballina, Australia.

Jones, D. (1999b). Webfuse: An integrated, eclectic web authoring tool. Paper presented at the Proceedings of EdMedia’99, World Conference on Educational Multimedia, Hypermedia & Telecommunications, Seattle.

Jones, D., & Buchanan, R. (1996). The design of an integrated online learning environment. Paper presented at the Proceedings of ASCILITE’96, Adelaide.

Jones, D., & Gregor, S. (2004). An information systems design theory for e-learning. Paper presented at the Managing New Wave Information Systems: Enterprise, Government and Society, Proceedings of the 15th Australasian Conference on Information Systems, Hobart, Tasmania.

Jones, D., & Gregor, S. (2006). The formulation of an Information Systems Design Theory for E-Learning. Paper presented at the First International Conference on Design Science Research in Information Systems and Technology, Claremont, CA.

Jones, D., Gregor, S., & Lynch, T. (2003). An information systems design theory for web-based education. Paper presented at the IASTED International Symposium on Web-based Education, Rhodes, Greece.

Keegan, D. (1993). Theoretical principles of distance education. London: Routledge.

Luck, J. (1999). Teaching and learning using interactive videoconferencing: screen-based classrooms require the development of new ways of working. Paper presented at the AARE-NZARE, Melbourne, Australia.

Macpherson, C., & Smith, A. (1998). Academic authors’ perceptions of the instructional design and development process for distance education: A case study. Distance Education, 19(1), 124-141.

Marshall, S., & Gregor, S. (2002). Distance education in the online world: Implications for higher education. In R. Discenza, C. Howard & K. Schenk (Eds.), The design and management of effective distance learning programs (pp. 21-36). Hershey, PA, USA: IGI Publishing.

McConachie, J., Harreveld, R. E., Luck, J., Nouwens, F., & Danaher, P. (2006). Editor’s introduction. In J. McConachie, R. E. Harreveld, J. Luck, F. Nouwens & P. Danaher (Eds.), Doctrina perpetua: brokering change, promoting innovation and transforming marginalisation in university learning and teaching. Teneriffe, Qld: Post Pressed.

McCormack, C., & Jones, D. (1997). Building a Web-Based Education System. New York: John Wiley & Sons.

Nipper, S. (1989). Third generation distance learning and computer conferencing. In R. Mason & A. Kaye (Eds.), Mindweave: Communication, Computers and Distance Education (pp. 63-73). Oxford, UK: Pergamon Press.

Oliver, D., & Romm, C. (2001). Integrated systems: Management approaches to acquiring them in Australian Universities. In K. Pearlson (Ed.), Managing and using information systems: A strategic approach: John Wiley & Sons.

Oliver, D., & Van Dyke, M. (2004). Looking back, looking in and looking on: Treading over the ERP battleground. In L. von Hellens, S. Nielsen & J. Beekhuyzen (Eds.), Qualitative case studies on implementation of enterprise wide systems (pp. 123-138). Hershey, PA: Idea Group.

Sherry, L. (1995). Issues in distance learning. International Journal of Educational Telecommunications, 1(4), 337-365.

Singh, M. (1998). Globalism, cultural diversity and tertiary education. Australian Universities Review, 41(2), 12-17.

ePortfolios in universities – forget it?

I continue to be highly skeptical of the idea of universities investing in ePortfolios. I feel it is another example of how people within universities tend to over-emphasise their importance in the scheme of things, extend the university's role into areas where it should never have gone, and consequently waste resources and, more importantly, the time and energy of academic staff that would be better spent on other approaches to improving learning and teaching. In particular, I see ePortfolios as another approach being over-run by the technologists' alliance.

This latest restating of my prejudice arises from a find in Stephen Downes' OLDaily newsletter, which eventually traces back to this post from a Spanish high school teacher, which in turn draws on this post from Derek Wenmoth.

Perhaps this is some limitation of mine. I just don’t see the point of ePortfolios. What is all the fuss about?

The diagram

The core of the post is the following image which, at least for me, does a good job of providing a road map of what learners do within their learning: do stuff, manage the outcomes, present it to various audiences, and share it with others.

ePortfolio roadmap by Perfil de Sonia Guilana

My immediate thought was: where in any of this is there a need for a formal institution of learning (e.g. a university or school) to provide the learner with the tools to perform any of it? Why does the advent of e-learning technologies change any of these relationships?

From the discussion it appears that the institution's role can be seen in providing a VLE – shown as one place the learner might “do stuff”, and also mentioned as one place they may “manage stuff” – and in one part of “presenting stuff”. The institution's role in “presenting stuff” is in assessment and accreditation.

Already the VLEs provided by institutions are falling behind the usability and functionality of external tools. Sorry, but having seen both Moodle and Blackboard up close, I'd much prefer to use external tools. I even prefer, for functionality and ease of use reasons, Google Mail to the email system provided by my institution. Given institutions are already falling behind, why should an institution believe it can provide a better suite of systems for the learner to “present stuff” with?

Institutions providing portfolio systems becomes a bit more silly when you add in the observations that informal learning far outweighs formal learning and that learners will increasingly engage in formal learning from many different providers. One solution proposed to address these issues is for education systems to standardise portfolio systems, so that either everyone uses the same one or the systems talk to each other. Given the long history of failure of such attempts at standardisation, I'm surprised anyone doesn't laugh uproariously when such a project is suggested.

What is an alternative?

Only very briefly (I have to stop procrastinating and get back to the thesis), the following are some initial suggestions:

  • Ensure that institutional systems integrate/interface simply and effectively with all the other tools that make up the above diagram.
    e.g. it should be easy for learners to export the “stuff” they produce in a VLE into their own tools. As part of this, VLEs should generate RSS feeds for most, if not all, of their functions. Ensure institutional systems work with global authentication systems (e.g. OpenID), rather than institutional or sector-specific authentication systems (e.g. the Australian Access Federation).
  • Focus institutional technology on only those tasks the institution must perform, and aim to do them well.
    e.g. rather than providing an ePortfolio system that helps learners present their work (something they can do themselves), focus on implementing significant improvements to the systems around assessment and accreditation. The assignment submission systems in most VLEs are woeful, and that's just in terms of simple implementation details that could significantly increase the efficiency of the assessment process. Most don't offer any support for activities that might significantly improve learning and assessment from an educational perspective.
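To make the first suggestion concrete, the following is a minimal sketch of what a VLE exporting its content as RSS might look like. The course code, URLs and announcement data are hypothetical examples, not anything from Webfuse or an existing VLE.

```python
# Illustrative sketch only: one way a VLE could expose course announcements
# as an RSS 2.0 feed so learners can pull content into their own tools.
# All course data, URLs and names below are hypothetical.
from xml.etree import ElementTree as ET

def announcements_to_rss(course, items):
    """Build a minimal RSS 2.0 document from (title, link, text) tuples."""
    rss = ET.Element("rss", version="2.0")
    channel = ET.SubElement(rss, "channel")
    ET.SubElement(channel, "title").text = f"{course} announcements"
    ET.SubElement(channel, "link").text = f"http://example.edu/{course}/"
    ET.SubElement(channel, "description").text = "Course announcement feed"
    for title, link, text in items:
        item = ET.SubElement(channel, "item")
        ET.SubElement(item, "title").text = title
        ET.SubElement(item, "link").text = link
        ET.SubElement(item, "description").text = text
    return ET.tostring(rss, encoding="unicode")

# A learner's aggregator could then subscribe to this feed like any other.
feed = announcements_to_rss("85321", [
    ("Assignment 1 released", "http://example.edu/85321/a1", "Due week 6."),
])
```

The point is how little is required: a feed like this lets learners manage course “stuff” in whatever tools they prefer, rather than in tools the institution chooses for them.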

In part, this is one aspect the BAM project is trying to experiment with. Rather than requiring students to use the blogs provided within an institutional LMS (which are mostly very limited), it allows them to use real-world blog engines and focuses the institutional information technology on the assessment aspects.
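The division of labour behind BAM can be sketched as follows. This is not BAM's actual implementation; it is an illustrative example, with an inlined sample feed so it runs without a network, of the general idea that the institution fetches each student's externally-hosted feed and extracts only what assessment needs.

```python
# Hypothetical sketch of the BAM idea: the institution does not host the
# student's blog, it just reads the blog's public RSS feed and pulls out
# what a marker needs (post titles and dates to check against deadlines).
# The feed content below is an inlined example, not real student data.
from xml.etree import ElementTree as ET

SAMPLE_FEED = """<rss version="2.0"><channel>
<title>Student blog</title>
<item><title>Week 1 reflection</title>
<pubDate>Mon, 06 Jul 2009 10:00:00 GMT</pubDate></item>
<item><title>Week 2 reflection</title>
<pubDate>Mon, 13 Jul 2009 10:00:00 GMT</pubDate></item>
</channel></rss>"""

def posts_for_marking(feed_xml):
    """Return (title, pubDate) pairs a marker could check against deadlines."""
    root = ET.fromstring(feed_xml)
    return [(item.findtext("title"), item.findtext("pubDate"))
            for item in root.findall("channel/item")]

posts = posts_for_marking(SAMPLE_FEED)
```

Everything to the left of that function call (the blog engine itself) stays in the student's hands; only the assessment side lives in institutional systems.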

Wicked problems, requirements gathering and the LMS approach to e-learning

Increasingly, the IT requirements of organisations are being met through the application of “enterprise systems”: large systems created by commercial vendors (though increasingly there are also open source variants which, while offering small improvements, still suffer many of the same problems) that are meant to provide an integrated solution to a large-scale problem by combining “best practice” processes and techniques with information technology that will “scale” to meet the requirements of the organisation. Examples include ERP systems like PeopleSoft for finance, human resources and, at universities, student enrolment. In e-learning at universities the current dominant approach is also to employ “enterprise systems”. With e-learning the “enterprise system” is known as the learning management system (LMS), course management system (CMS), virtual learning environment (VLE) or some other three-letter acronym. Examples include Blackboard, Moodle and Sakai.

In this context, based on the experience and observations of myself and colleagues from around the world, I’m suggesting the following as a nascent (and fairly cynical) process model for how IT departments approach development of feature requests from users. Have you got any additional steps you’d like added?

The process model is

  • Ignore the request.
  • Explain that the request can’t be done.
  • Explain to the requester how the same outcome can be achieved using another process within the existing system. The suggested approach will be so time and resource consuming for the requester that they are unlikely to use it.
  • Explain how the cost and resource implications of the request mean it can't be implemented at this point in time.
  • Explain how, given the need to upgrade to the next version of the enterprise system, IT must spend all of its technical resources on the upgrade and consequently can't implement the requested feature.
  • Funnel the request through a reference group, project board or governance committee, who are meant to identify whether or not the request is sensible and worth expending scarce resources on. Such groups are usually made up of users – typically management or innovative end-users – and IT people. The user representatives generally have no IT knowledge and have to rely on the “objective” expert knowledge of the IT people.
  • Explain how the given feature doesn't neatly fit within the model on which the enterprise system is built, how implementing it would require IT to extend the enterprise system beyond “vanilla”, and how that can't be done: if the system goes beyond vanilla, then the next time it is upgraded to a new version the feature will have to be re-implemented, and that's expensive.
  • If we get to this stage, the feature request might be implemented. The first stage of this implementation will be to funnel the request to a business analyst who will be tasked to determine the complete requirements for the request. The business analyst will, at the start of this project, usually have no knowledge of the business (e.g. the nature of learning and teaching) or of the technology that will be used to implement the feature. They are meant to develop an objective and complete set of requirements that doesn’t need to be sullied by additional knowledge.

    It is highly likely that the implementation will not be completed due to a range of factors.

So, in an enterprise system environment, I would suggest that it is highly unlikely that any feature request from a coal-face user will be implemented. If the request originates from someone important within the organisation, chances are that it won’t be implemented either, but it will go a slightly different route (e.g. it probably won’t have to go to the reference group).

But even if the request makes it all the way to the final step, there’s a problem.

Fundamental difficulty in establishing system requirements

The following is a quote from Sommerville (2001, p32). This is the 6th edition of one of the standard textbooks on software engineering. This is what it has to say about establishing system requirements.

A fundamental difficulty in establishing system requirements is that the problems which complex systems are usually built to help tackle are usually ‘wicked problems’ (Rittel and Webber 1973). A ‘wicked problem’ is a problem which is so complex and where there are so many related entities that there is no definitive problem specification. The true nature of the problem only emerges as a solution is developed.

This is the source of the well known problem in software engineering – “The user won’t know what they want until they see it, and then they will want something different to what they told you during the requirements gathering stage”. This is the reason why the business analyst approach and the related teleological approach to systems development is deeply flawed in just about any context, but especially those that are diverse and less than stable.

I’m hard pressed to think of any context that is more diverse and less stable than that involved with the implementation of e-learning within a university.

Disclaimers

I know any number of really talented, nice people who work within IT departments and are driven to provide the best service they can to their clients. I've also seen a few who are not so nice, talented or appropriately motivated.

Personally, I don't believe in universal models. I don't think that all systems/institutions follow the model above, and I do think that in some situations the above model might even be appropriate – and not just for e-learning. However, most IT departments profess a belief in universal models (i.e. a single templated process for implementing any system, regardless of its type). Most profess that you must first gather requirements, only then start implementation, get sign-off, and then not touch the system for years. They don't see that some situations need alternatives.

Yes, I have developed alternative solutions and approaches; I'm not just being critical.

References

Rittel, H. W. J. and M. M. Webber (1973). “Dilemmas in a general theory of planning.” Policy Sciences 4(2): 155-169.

Sommerville, I. (2001). Software Engineering, Addison-Wesley.
