Assembling the heterogeneous elements for (digital) learning

Month: July 2010

The grammar of school, psychological dissonance and all professors are rather ludditical

Yesterday, via a tweet from @marksmithers, I read this post from the author of the DIYU book, titled “Vast Majority of Professors Are Rather Ludditical”. This is somewhat typical of the deficit model of academics, which is fairly prevalent and rather pointless. It’s pointless for a number of reasons, but the main one is that it is not a helpful starting point for bringing about change: it ignores the broader problem, and consequently most solutions that arise from a deficit model won’t work.

One of the major problems this approach tends to ignore is the broader impact of the grammar of school (first from Tyack and Cuban and then Papert). I’m currently reading The nature of technology (more on this later) by W. Brian Arthur. The following is a summary and a little bit of reflection upon a section titled “Lock-in and Adaptive Stretch”, which seems to connect closely with the grammar of school idea.

Psychological dissonance and adaptive stretch

Arthur offers the following quote from the sociologist Diane Vaughan on psychological dissonance:

[In the situations we deal with as humans, we use] a frame of reference constructed from integrated sets of assumptions, expectations and experiences. Everything is perceived on the basis of this framework. The framework becomes self-confirming because, whenever we can, we tend to impose it on experiences and events, creating incidents and relationships that conform to it. And we tend to ignore, misperceive, or deny events that do not fit it. As a consequence, it generally leads us to what we are looking for. This frame of reference is not easily altered or dismantled, because the way we tend to see the world is intimately linked to how we see and define ourselves in relation to the world. Thus, we have a vested interest in maintaining consistency because our own identity is at risk.

Arthur goes on to suggest that “the greater the distances between a novel solution and the accepted one, the larger is this lock-in to previous tradition”. He labels this lock-in of the older approach adaptive stretch: the situation where it is easier to reach for the old approach and stretch it to fit the new circumstances.

Hence professors are ludditical

But haven’t I just made the case that professors are ludditical? This is exactly what happens with the vast majority of academic practice around e-learning. If they are using e-learning at all – and not simply sticking with face-to-face teaching – most teaching academics are still using lectures, printed notes and other relics of the past that they have stretched into the new context.

They don’t have the knowledge to move on, so we have to make them non-ludditical. This is when management and leadership at universities roll into action and identify plans and projects that will help generate non-ludditical academics.

The pot calling the kettle black

My argument is that, if you step back a bit further, the approaches being recommended and adopted by researchers and senior management – the way those approaches are implemented and the way they are evaluated for success – are themselves suffering from psychological dissonance and adaptive stretch. The approaches almost without exception borrow from a traditional project management approach and go something like:

  • Small group of important people identify the problem and the best solution.
  • Hand it over to a project group to implement.
  • The project group tick the important project boxes:
    • Develop a detailed project plan with specific KPIs and deadlines.
    • Demonstrate importance of project by wheeling out senior managers to say how important the project is.
    • Implement a marketing push involving regular updates, newsletters, posters, coffee mugs and presentations.
    • Develop compulsory training sessions which all must attend.
    • Downplay any negative experiences and explain them away.
    • Ensure correct implementation.
    • Get an evaluation done by people paid for and reporting to the senior managers who have been visibly associated with the project.
    • Explain how successful the project was.
  • Complain about how the ludditical academics have ruined the project through adaptive stretching.

Frames of reference and coffee mugs

One of the fundamental problems with these approaches to projects within higher education is that they effectively ignore the frames of reference that academics bring to the problem. Rather than start with the existing frames of reference and build on those, this approach to projects is all about moving people straight into a new frame of reference. In doing this, there is always incredible dissonance between how the project people think an action will be interpreted and how it actually is interpreted.

For example, a few years ago the institution I used to work for (at least as of CoB today) adopted Chickering and Gamson’s (1987) 7 principles for good practice in undergraduate education as a foundation for the new learning and teaching management plan. The project around this decision basically followed the above process. As part of the marketing push, all academics (and perhaps all staff) received a coffee mug and a little palm card with the 7 principles in nice text and a link to the project website. The intent of the project was to increase academics’ awareness of the 7 principles and of how important they were to the institution.

The problem was that at around this time the institution was going through yet more restructures and there were grave misgivings from senior management about how much money the institution didn’t have. The institution was having to save money and this was being felt by the academics in terms of limits on conference travel, marking support etc. It is with this frame of reference that the academics saw the institution spending a fair amount of money on coffee mugs and palm cards. Just a touch of dissonance.

What’s worse, a number of academics were able to look at the 7 principles and see principle #4 “gives prompt feedback” and relate that to the difficulty of giving prompt feedback because there’s no money for marking support. Not to mention the push from some senior managers about how important research is to future career progression.

So, the solution is?

I return to a quote from Cavallo (2004) that I’ve used before

As we see it, real change is inherently a kind of learning. For people to change the way they think about and practice education, rather than merely being told what to do differently, we believe that practitioners must have experiences that enable appropriation of new modes of teaching and learning that enable them to reconsider and restructure their thinking and practice.

Rather than tell academics what to do, you need to create contextualised experiences for academics that enable appropriation of new models of teaching and learning. What most senior managers at universities and many of the commentators don’t see, is that the environment at most universities is preventing academics from having these experiences and then preventing them from appropriating the new models of teaching.

The policies, processes, systems and expectations senior managers create within universities are preventing academics from becoming “non-ludditical”. You can implement all the “projects” you want, but if you don’t work on the policies, processes, systems and expectations in ways that connect with the frames of reference of the academics within the institution, you won’t get growth.

References

Cavallo, D. (2004). Models of growth – Towards fundamental change in learning environments. BT Technology Journal, 22(4), 96-112.

Chickering, A. W., & Gamson, Z. F. (1987). Seven principles for good practice in undergraduate education. AAHE Bulletin, 39(7), 3-7.

First the fridge dies, and then…

For the last couple of days our LG side-by-side fridge has been dying. Not a great situation, especially given the problems we’ve had with it. And then today I find out that my employment at my current institution is about to cease after 20 years. The following is a bit of reflection about what might happen now.

The last couple of years at the institution have been a boring cycle: org restructure, imminent redundancy, last minute “saving”, period of uncertainty, org restructure, imminent redundancy, glimmer of last minute “saving”…redundancy. So, while there is a tinge of sadness (mostly for the folk “left behind”), and a touch of worry (disruption to the family), this is actually a great relief. We’re in a position where this causes no great financial pressure, so not only is there relief, there is some wonder at the possibilities that have opened up.

The PhD

I’ve been working part time on the PhD for almost ten years. The first possibility is to finish the beast and get it off my back. The end is near; the time is now to put my head down and get it done.

What do I want to do then?

Then what? Some of the possibilities include:

  • More L&T support/instructional design/educational development within higher education;
This is the field I’ve been in for a while, and there are some interesting possibilities within it. It wouldn’t be too hard to do a lot of stuff much better than how it is being done at the moment. However, there’s also a lot of inertia that makes it hard. Not amongst the academics. The inertia I’ve struggled with is the short-term perspectives of senior institutional management and the limited depth and diversity of their insights. Most academics want to engage in context-appropriate innovation in their teaching. Most senior leaders want to tick the AUQA boxes and satisfy techno-rational notions of management/leadership – don’t rock the boat. (Note: given my current circumstances, I am probably overstating this case just a bit due to a somewhat lessened sense of objectivity, but the case is there.)

    There are some positives here, but there are some negatives.

  • e-learning;
Where my skills and experience are best used, I think, is in harnessing information technologies to support learning and teaching. It’s where the PhD is located and where most of my experience is. The dividing line between this possibility and the previous one is vague to non-existent, especially in the similarity that it is fairly simple to do something significantly better than the status quo. However, the problem with senior management continues to exist, and for good measure you get some potentially/typically very limited thinking from IT departments. This is the problem with e-learning: it’s currently seen as an IT problem, not a learning and teaching problem.
  • A return to information systems;
    For most of my 20 years I’ve been a faculty academic teaching and doing research. The PhD is within the information systems field, I have some contacts and publications in the area. I could return there. Especially given that my real interest is in developing and understanding new ways of helping organisations harness information technologies – for me e-learning is an application of that. The positioning of IS in relation to IT and other business disciplines is a bit troubling. Also there are a lot of IS PhD graduates, and not many positions.
  • technical development.
    Software development is one of the activities I like a lot. Software like BIM is an example of what I can do, though it’s also an example of the above. There’s a big move towards Moodle, BIM is Moodle…so perhaps a software development role. Perhaps not as intellectually rewarding, but more practically fulfilling.

So, no clear cut choices. What’s the dream job? At the moment, a research and development job focused on helping a university harness information technology to effectively and innovatively improve the quality of its learning and teaching. Something that straddles information systems and e-learning and is focused on innovation. Especially one that involves being part of a team of talented folk; I’ve had enough of the lone-ranger stuff.

Actually, for some time the institutional inertia around learning and teaching has been getting me down. Perhaps it is time to look for an L&T/e-learning role outside of formal institutional settings. Does such a role exist?

Actually, perhaps it’s time to really open up to the possibilities? What comes, comes. Perhaps taking me to a place I could never have imagined.

Where?

This is the question that is currently causing the most heartache. We have almost the perfect home for a young family. It would be hard to better and even harder to leave. However, it’s located in a regional area and the only place I’m likely to get the type of work I’ve described above is the institution that is letting me go. Which suggests three options:

  • tele-commute;
    Not a lot of jobs around that do this, especially of the type I want. It doesn’t make a lot of sense. Though a straight software development role would fit well here.
  • contractor;
    i.e. short periods away and then back. Again not an ideal approach for the type of work I’d like. But limits family disruption.
  • moving.
    Pack up the family and move to where the work is. A disruption to be sure, but it opens up possibilities.

Alternatives?

So what have I forgotten or not even thought of? Anyone got an opportunity? Anyone interested in a software developer/information systems/e-learning teaching academic?

Off to buy a fridge. And then see what they have in the way of interesting and productive careers.

PLEs and the institution: the wrong problem

Yesterday, I rehashed/summarised some earlier thoughts about “handling the marriage of PLEs and institutions”. Since then, I’ve been reflecting on that post and am coming to the belief that this is just the wrong problem, or perhaps just a symptom of a deeper problem. (Signal the start of the broken record.)

All of the students and academic staff (the learners) of a university have always had their own PLEs. It’s only with the rise of Web 2.0, e-learning 2.0 and related movements/fads that the PLE (and/or PLN) label has become a concern. And only because of this has the question of how, or indeed if, an institution should provide a PLE arisen. In much the same way that universities – at least within Australia – have had to deal with distance education, flexible learning, lifelong learning, open learning, blended learning and a whole range of similar labels and fads.

The product focus

The problem that I am seeing is that university teaching and learning – and the systems that underpin and support that teaching and learning – are “product” or fad focused. That is, folk within the institution note that “X” (e.g. open learning, blended learning, PLEs, e-portfolios etc) is currently the buzz word within the sector around learning and teaching, and hence the organisation and its practices are re-organised – or at least seen to be re-organised – to better implement “X”. From this you get a whole bunch of folk within institutions (from senior management down) whose professional identity becomes inextricably linked with “X”. Their experiences and knowledge grow around “X”. Any subsequent criticism of “X” is a criticism of their identity and thus can’t be allowed; it has to be rejected. Worse still, “X” becomes the grammar of the institution: everything must be considered as part of “X” (thus good) or not part of “X” (thus bad).

Various factors such as short term contracts for senior managers; top-down management; certain research strategies that generate outputs through investigating “learning and teaching with “X””; the increasing prevalence of “project managers” within universities and the simplistic notions many of them have; deficit models of academics; and the wicked nature of the learning and teaching problem all contribute to the prevalence of this mistake.

The process focus

What I described as a way to handle the marriage of PLEs and institutions is no different from the approach we used to implement Webfuse and no different from the process I would use to attempt to support and improve learning and teaching within any university. It’s an approach that doesn’t focus on a particular “X”, but instead on adopting a process that enables the institution to learn and respond to what happens within its own context and outside.

Some broad steps:

  • Ensure that there’s a L&T support team full of the best people you can get, with a breadth and depth of experience in learning, teaching, technology and contextual knowledge.
    This is not a one-off; it’s an on-going process of bringing new people in and helping the people within grow to exceed their potential.
  • Implement a process where the L&T support team is working closely and directly with the academics teaching within the context during the actual teaching.
    i.e. not just on design of courses before delivery, but during teaching in a way that enables the support team to help the teaching academics in ways that are meaningful, contextual and build trust and connections between the teaching academics and the support staff.
  • Adopt approaches that encourage greater connections between the L&T support team, the teaching academics, students and the outside world.
  • Support and empower the support team and teaching academics to experiment with interventions at a local level, with a minimum of management intervention or institutional constraints.
  • Observe what happens in the local interventions and cherry pick the good ideas for broader incorporation into the institution’s L&T environment in a way that encourages and enables adoption by others.
  • Implement mechanisms where senior management are actively encouraged to understand the reality of teaching within the institutional context and actively charged with identifying and removing those barriers standing in the way of teaching and learning.
    The job of the leaders is not to choose the direction, but to help the staff doing the work get to where they want to go.

What’s important

The identity of “X” is not important, be it graduate attributes, constructive alignment, PLEs, Web 2.0, social media, problem-based learning, blended learning etc, all these things are transitory. What’s important is that the university has the capability and the on-going drive to focus on a process through which it is reflecting on what it does, what works, what doesn’t and what it could do better, and subsequently testing those thoughts.

How to handle the marriage of PLEs and institutions

The following is my attempt to think about how the “marriage” of the PLE concept and educational institutions can be handled. It arises from reading some of the material that has come out of the PLE conference held in Barcelona a few weeks ago and some subsequent posts, most notably this one on the anatomy of a PLE from Steve Wheeler.

The following is informed by a paper some colleagues and I wrote back in 2009 around this topic. That paper aimed to map the landscape for a project we were involved with that was attempting to implement/experiment with just such a marriage. By the time the paper was presented (end 2009) the project was essentially dead in the water – at least in terms of its original conceptualisation – due to organisational restructures.

The paper and this post attempt to use the Ps framework as one way to map this landscape.

In summary, people (students and staff) already have PLEs; the question is how to create a marriage between each person’s PLE and the institution that is effective, open, and responsive to contextual and personal needs.

Product – what is a PLE?

The assumption is that the definition of what a PLE is, is both uncertain and likely to change and emerge as the “marriage” is consummated (taking the metaphor too far?). I like the following quote as a summary of the emergence aspect:

Broader conceptualisations see technology as one of a number of components of an emergent process of change where the outcomes are indeterminate because they are situationally and dynamically contingent (Markus & Robey, 1988). Ongoing change is not solely “technology led” or solely “organisational/agency driven”, instead change arises from a complex interaction among technology, people and the organization (Marshall & Gregor, 2002)

But we found some value in defining what a PLE is not:

  • a single tool;
  • specified, owned or hosted by the university;
  • common across all students;
  • necessarily reliant on the use of information and communication technologies;
  • a replacement for, or duplication of, the institutional learning management system.

Picking up on the last point, we position the PLE as a counterpoint to the LMS:

The PLEs@CQUni project emphasises the role of PLEs as a counterpoint (in the musical sense where two or more very different sounding tunes harmonise when played together) to the institutional LMS

The design guidelines we generated from this were:

  • The "PLE product" is not owned, specified or provided by the university.
  • Each learner makes their own decisions about the collection of services and tools that will form their "PLE Product".
  • The University needs to focus on enabling learners to make informed choices between services and tools and on allowing for integration of institutional services with learners’ chosen services and tools.
  • The PLE work will act as a counterpoint to existing and new investments in enterprise systems, by combining them with the students’ customised environment in order to provide previously unavailable services.
  • The final nature of the PLE product and its relationship with the institution will emerge from the complex interaction between technology, people and the organization.

People

When looking at the people involved, we developed these guidelines:

  • The PLE project will fail if learners (both staff and students) do not engage with this concept.
  • People are not rational decision makers. They make decisions based on pattern matching of their personal or collective experiences.
  • There is little value in asking people who have limited experience with a new paradigm or technology what they would like to see or do with the technology.
  • The project focus should be on understanding, working with and extending the expectations of the participants within the specific conditions of the local context.
  • A particular emphasis must be on providing the scaffolding necessary to prepare learners for the significant changes that may arise from the PLE concept.

Process

It’s long been a bugbear of mine that universities are so project centric that they believe big up-front design/traditional IT development processes actually work for projects involving innovation and change. This is evident in the guidelines around process we developed:

  • Classic, structured project management practices are completely inappropriate for the PLEs@CQUni project.
  • An approach based on ateleological or naturalistic design is likely to be more appropriate.
  • Project aims should be based on broad strategic aims and place emphasis on organisational learning.

Purpose

A project has to have a purpose, doesn’t it? At the very least, for political reasons, the project has to be seen to have a purpose. The guidelines for purpose were:

  • The project will cultivate an emergent methodology.
  • The project will focus on responding to local contextual needs.
  • The overall purpose of the project is to support the institution’s new brand.

The last point is likely to bring shudders to most folk. Branding! Are you owned by “the man”? This was partly political; however, the “branding” really does gel with the concept of the PLE. The tag line is “Be what you want to be” and one of the “messages” on the corporate website was:

CQUniversity interacts in a customized way to your individual requirements. Not all universities can say that and few can say it with confidence. We can.

For me, there is a connection with PLEs.

Place

To some extent, the discussions from the PLE conference that I have seen seem to assume that all universities are the same. I disagree. I think there are unique differences between institutions that can and should be harnessed. What works at the OU will not work at my current institution. So the guidelines for place we developed are:

  • The project must engage with broader societal issues without sacrificing local contextual issues.
  • It must aim to engage and work with the different cultures that make up the institution.
  • It should use a number of safe-fail projects, reinforcing those with positive outcomes and eliminating others.

What’s missing?

There are other aspects of the Ps framework not considered in the paper or above – Pedagogy and Past Experience. However, the above suggests how these would be handled, i.e. connecting with current practice within the specific place and trying to extend it to better fit with the ideas underpinning a PLE. Such extension would be done in diverse ways, with different disciplines and different individuals within those disciplines trying different things, talking to each other and working out new stuff.

What would it look like?

There were two concrete changes the project implemented before it was canned:

  1. BAM/BIM;
    Provided an LMS-based method for staff to manage, aggregate and use individual student blogs (the PLE).
  2. Generating RSS feeds from Blackboard 6.3 discussion forums.
    The institutional LMS at the time was the ancient Blackboard 6.3. We implemented an intermediary system that generated RSS feeds of forum posts – a way for students and staff using newsreaders (their PLE) to track what was happening within the LMS while saving them time. They no longer needed to log in to the LMS, go to each course and check each discussion forum for new posts. A sketch of the idea follows.
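
To make the second intervention concrete, here’s a minimal sketch of the intermediary idea in Python (the fetch function, course id and URLs are hypothetical stand-ins – the real system read posts from Blackboard 6.3):

```python
# Minimal sketch: take forum posts from the LMS (stubbed here) and republish
# them as an RSS 2.0 feed that any newsreader can subscribe to.
import xml.etree.ElementTree as ET
from email.utils import formatdate

def fetch_forum_posts(course_id):
    """Stand-in for querying the LMS; the real source was Blackboard 6.3."""
    return [
        {"title": "Assignment 1 clarification", "author": "student_a",
         "url": f"https://lms.example.edu/{course_id}/forum/123",
         "body": "Does question 2 require references?", "ts": 1279000000},
    ]

def posts_to_rss(course_id, posts):
    rss = ET.Element("rss", version="2.0")
    channel = ET.SubElement(rss, "channel")
    ET.SubElement(channel, "title").text = f"{course_id} discussion forums"
    ET.SubElement(channel, "link").text = f"https://lms.example.edu/{course_id}"
    ET.SubElement(channel, "description").text = "New discussion forum posts"
    for post in posts:
        item = ET.SubElement(channel, "item")
        ET.SubElement(item, "title").text = f'{post["title"]} ({post["author"]})'
        ET.SubElement(item, "link").text = post["url"]
        ET.SubElement(item, "description").text = post["body"]
        ET.SubElement(item, "pubDate").text = formatdate(post["ts"])
    return ET.tostring(rss, encoding="unicode")

print(posts_to_rss("COIT11222", fetch_forum_posts("COIT11222")))
```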

These two bits only really touched the surface. In fact, these interventions were intended as easy ways to scaffold and encourage the greater use and integration of “PLE”-like concepts into the daily practice of learning and teaching within a university. The start of a journey, and valuable because of the journey to come, more so than the destination they represented. A journey we were never able to carry through for an interesting distance. Here’s hoping that someone can start it again.

Features used in Webfuse course sites

Time to get back into the thesis. The following is the next completed section from the evaluation part of chapter 5 of my thesis. It’s the result of much data munging and some writing; it still needs a bit more reflection and thought, but it’s getting close.

Features used in course sites

The previous sub-section examined the number of pages used in default course sites from 2000 through 2004. This sub-section examines in more detail the question of feature adoption within Webfuse course sites. In particular, it seeks to describe the impact of the introduction of the default course sites approach and to compare its results with feature adoption in course websites from other systems at other institutions. This is done using the Malikowski et al (2007) model introduced in Chapter 4, which abstracts LMS features into five system-independent categories (see Figure 4.8). This sub-section first describes the changes in the available Webfuse features – both through new Webfuse page types and Wf applications – from 2000 onwards in terms of the Malikowski et al (2007) model. It then outlines how Webfuse feature adoption within course sites changed over the period from 2000 through 2004 and compares that with other systems at other institutions. Finally, it compares and contrasts feature adoption during 2005 through 2009 at CQU between Webfuse and Blackboard.

As described in Chapter 4, the fifth Malikowski et al (2007) category – Computer-Based Instruction – is not included in the following discussions because Webfuse never provided features that fit within this category. In addition, it is a category of feature rarely present or used in other LMS, especially from 2000 through 2004. Table 5.12 lists the remaining four Malikowski et al (2007) categories and the Webfuse features within those categories for 1997-1999 and for 2000 onwards. The 2000-onwards features include those provided by both page types and Wf applications.

Table 5.12 – Allocation of Webfuse page types (1997-1999) and features (2000-) to Malikowski et al (2007) categories

| Category | Page types (1997-1999) | Webfuse features (2000-) |
| --- | --- | --- |
| Transmitting content | Various content and index page types; Lecture and study guide page types; File upload and search page types | CourseHome; CourseResources; CourseSchedule; CourseStaff; CourseAssessment; RSSUpdates; LectureRepository; Timetable generator (Jones 2003) |
| Creating class interactions | Email2WWW; EwgieChatRoom; WWWBoard and WebBBS | Yabb; CourseGroup, CourseGroups; CourseMailingLists; Email Merge; Etutes |
| Evaluating students | AssignmentSubmission | Quiz; Assignment extension management; Academic misconduct database; OASIS (Jones and Behrens 2003); BAM (Jones and Luck 2009); Plagiarism detection; IROG (Jones 2003); Peer Review; Topic Allocation |
| Evaluating course and instructors | Barometer | UnitFeedback/FormMail; Survey |
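
Expressed another way, the allocation in Table 5.12 amounts to a lookup from feature to Malikowski et al (2007) category. The following is a rough, illustrative sketch only (a small subset of the features, and hypothetical data structures rather than the actual Webfuse code):

```python
# Illustrative subset of the Table 5.12 allocation as a lookup table.
FEATURE_CATEGORY = {
    "CourseResources": "Transmitting content",
    "LectureRepository": "Transmitting content",
    "Yabb": "Creating class interactions",
    "CourseMailingLists": "Creating class interactions",
    "AssignmentSubmission": "Evaluating students",
    "Quiz": "Evaluating students",
    "Barometer": "Evaluating course and instructors",
}

def categories_adopted(features_used):
    """Return the Malikowski categories a course has adopted, given the
    Webfuse features observed in its course site."""
    return {FEATURE_CATEGORY[f] for f in features_used if f in FEATURE_CATEGORY}

# e.g. a course using a quiz, a discussion forum and additional resources
print(categories_adopted(["Quiz", "Yabb", "CourseResources"]))
```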

Table 5.13 shows the percentage of Webfuse courses that adopted features in each of the categories proposed by Malikowski et al (2007) from 1997 through 2009. The “Malikowski %” row represents the level of feature adoption found by Malikowski et al (2007) in the LMS literature for usage reported before 2004. The “Blackboard %” row represents feature adoption within Blackboard by CQU courses during 2005. Blackboard was adopted as the official institutional LMS by CQU in 2004. The subsequent rows show the level of feature adoption within Webfuse courses from 1997 through 2009. The following describes some limitations and context for the data in Table 5.13, after which some additional visualisations of this data are shown and some conclusions are drawn.

Table 5.13 – Feature adoption in Webfuse course sites (1997-2009)

| Usage | Transmitting content | Class interactions | Evaluating students | Evaluating courses and instructors |
| --- | --- | --- | --- | --- |
| Malikowski % | >50% | 20-50% | 20-50% | <20% |
| Blackboard % | 94% | 28% | 17% | 2% |
| 1997 | 34.9% | 1.8% | 0.9% | 9.2% |
| 1998 | 38.4% | 48.6% | 1.4% | 0.7% |
| 1999 | 46.0% | 9.0% | 2.1% | 9.5% |
| 2000 | 46.6% | 43.7% | 24.7% | 6.9% |
| 2001 | 51.6% | 32.4% | 47.1% | 28.3% |
| 2002 | 69.6% | 63.8% | 57.7% | 44.2% |
| 2003 | 69.2% | 68.5% | 93.7% | 37.7% |
| 2004 | 61.3% | 61.9% | 91.8% | 35.7% |
| 2005 | 64.2% | 69.2% | 93.6% | 39.8% |
| 2006 | 70.0% | 68.7% | 105.1% | 31.6% |
| 2007 | 68.5% | 102.0% | 168.1% | 33.1% |
| 2008 | 72.9% | 110.7% | 192.0% | 51.6% |
| 2009 | 69.2% | 105.7% | 211.4% | 42.7% |

A variety of contextual factors and limitations need to be kept in mind when interpreting the data presented in Table 5.13. These include:

  • Missing course sites;
    As mentioned for previous tables, the course website archives for 1998 and 2000 are each missing course sites for a single term. The percentages shown in Table 5.13 represent the percentage of courses offered in the terms for which archival information is available.
  • Missing mailing lists;
    For most of the period shown in Table 5.13 a significant proportion of courses made use of electronic mailing lists for course communication. These lists, while supported by the Webfuse team, did not have an automated web interface until after the introduction of the default course sites. Information about the use of mailing lists before the default course sites is somewhat patchy: apart from the 1998 archives, none is available before 2000, and only some information exists for 2000 and the first half of 2001.
  • Optional versus compulsory content transmission;
    All Webfuse course sites, including both manually produced sites (pre 2nd half of 2001) and the default course sites (post 2nd half of 2001), included content. Rather than simply show 100%, Table 5.13 shows the percentage of courses where additional content was transmitted through the course site by teaching staff, an optional practice.
  • The definition of adoption and the course barometer;
    From 2001 through 2005 the presence of a course barometer was part of the Infocom default course site. This means that 100% of all Webfuse course sites had a course barometer. However, this is not represented in the figures for “evaluating courses and instructors” in Table 5.13. Instead, Table 5.13 includes the percentage of courses where the course barometer was actually used within the course.
  • Greater than 100% adoption.
    From 2006 onwards, both the class interactions and evaluating students columns suggest that greater than 100% of Webfuse course sites had adopted features in these categories. This arises from the ability of courses to use a number of the Webfuse-provided features (e.g. email merge and results upload) without using Webfuse for course sites. The sketch below illustrates how this plays out in the percentages.
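
As a rough sketch of the calculation behind Table 5.13 (hypothetical data shapes, not the actual munging code, and assuming the denominator is the number of Webfuse course sites in a year):

```python
from collections import defaultdict

def adoption_percentages(sites_per_year, usage_records, feature_category):
    """sites_per_year: {year: number of Webfuse course sites}
    usage_records: iterable of (year, course, feature) tuples
    feature_category: feature -> Malikowski category (cf. Table 5.12)"""
    adopters = defaultdict(set)  # (year, category) -> distinct adopting courses
    for year, course, feature in usage_records:
        category = feature_category.get(feature)
        if category:
            adopters[(year, category)].add(course)
    # Courses could use features such as email merge without having a Webfuse
    # course site, so the numerator can exceed the denominator -- which is how
    # the greater than 100% figures from 2006 onwards arise.
    return {(year, cat): 100.0 * len(courses) / sites_per_year[year]
            for (year, cat), courses in adopters.items()}

# 60 distinct courses using a feature against 50 course sites -> 120%
records = [(2009, f"course{i}", "Email Merge") for i in range(60)]
print(adoption_percentages({2009: 50}, records,
                           {"Email Merge": "Creating class interactions"}))
```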

The following graphs enable a visual comparison between the levels of feature adoption within Webfuse, Blackboard and the ranges reported by Malikowski et al (2007), and are also used to draw some conclusions about that adoption. These graphs use almost the same data as shown in Table 5.13, only separated into the four Malikowski et al (2007) categories. The only difference is that the graphs also show how feature adoption for Blackboard changed over the period 2005 through 2009, rather than simply showing the level of adoption for 2005 as in Table 5.13. The Blackboard figures for 2009 only include data from the first CQU term, not the entire year.

Figure 5.9 provides a visualisation of the percentage of courses using features associated with content transmission. The Malikowski et al (2007) range is identified by the dotted lines, which represent the finding that, as of around 2004, it was common to find between 50% and 100% of course sites using content transmission features. The dashed line in Figure 5.9 shows that from 2005 through 2009 between 80% and 100% of CQU Blackboard course sites were using content transmission features. The thicker black line with data labels represents the percentage of Webfuse course sites using the option of adding content transmission features to the default course sites.

Figure 5.9 – Percentage course sites adopting content transmission: Webfuse, Blackboard and Malikowski et al (2007)

From Figure 5.9 it is possible to see that there was an increase in the optional use of content transmission features when the default course site approach was introduced during the second half of 2001. In 2002, the first full year of operation for the default course site approach, use of content transmission features was more than 20 percentage points higher than in 2000, the last full year without the default course site approach. From 2002 the adoption rate stayed above 60%.

Figure 5.10 shows the percentage of course websites adopting class interaction features such as discussion forums, chat rooms etc. As of 2004, Malikowski et al (2007) found that it was typical to find between 20% and 50% of course sites adopting these features. From 2005 through 2009, the percentage of Blackboard courses adopting class interaction features increased from 28% to 61%. The data series with the data labels represents the adoption of class interactions within Webfuse course sites and highlights some of the limitations and contextual issues discussed above about Table 5.13.

Figure 5.10 – Percentage course sites adopting class interactions: Webfuse, Blackboard and Malikowski et al (2007)

As mentioned in the previous chapter, the Department of Mathematics and Computing (M&C) – in which the Webfuse work originated – had started using email lists in 1992 as a way of interacting with distance education students. These lists arose from the same place as Webfuse. As outlined above, prior to 2001 the archives of these mailing lists were kept separate from the Webfuse course sites and records are somewhat patchy. For example, there are archives of the mailing lists for 1998, hence the peak of 48.6% in 1998. The 1.8% and 9% adoption figures for 1997 and 1999 represent years for which mailing list data is missing. In addition, the greater than 100% adoption rates in 2007-2009 arise from increased use of the email merge facility by courses that did not have Webfuse course sites. These courses accessed the email merge facility through Staff MyCQU.

Figure 5.10 shows that adoption of class interaction features was significantly higher within Webfuse than both the Malikowski ranges and Blackboard. Given that, once adopted, it was unusual for a course mailing list to be dropped unless replaced by a web-based discussion forum, it is thought that complete archives of the pre-2001 mailing lists would indicate that, as early as 1997, almost 50% of Webfuse course sites had adopted some form of class interaction. Most of this adoption arose from M&C courses’ continuing use of mailing lists. The increased adoption of class interaction features post 2002 arises from the increased prevalence of Web-based discussion forums, especially amongst non-M&C courses.

Figure 5.11 shows the percentage adoption of features related to student assessment – typically quizzes and online assignment submission. The typical Malikowski et al (2007) adoption rate is expected to be between 20% and 50%, while CQU Blackboard adoption from 2005 through 2009 ranged between 17% and 30%. Webfuse adoption, on the other hand, after being minimal from 1997 through 1999, increased to over 90% from 2003 through 2005 before exceeding 100% from 2006 onwards.

Figure 5.11 – Percentage course sites adopting student assessment: Webfuse, Blackboard and Malikowski et al (2007)

The almost non-existent adoption of student assessment features within Webfuse from 1997 through 1999 reflects the almost non-existent provision of these features. A primitive online assignment submission system was used in a small number of courses during these years, mostly those taught by the Webfuse designer. From 2000 onwards an online quiz system became available and a new assignment submission system began to be developed. From this point on, adoption grew to over 90% by 2003. The use of Webfuse student assessment features far outstrips both the Malikowski ranges and those of CQU Blackboard courses.

Figure 5.12 shows the adoption of course evaluation features. The expected Malikowski et al (2007) range is between 0% and 20%. The adoption of course evaluation features by CQU Blackboard courses ranges from 2% in 2005 through to 5% in 2009. Prior to 2001, the Webfuse adoption rate is less than 10%, but it then increases to range between 28% and 52% from 2001 on. This increase is generally due to the increased availability of the Webfuse course barometer feature (see Section 5.3.6).

Figure 5.12 – Percentage course sites adopting course evaluation: Webfuse, Blackboard and Malikowski et al (2007)

Two of the peaks in the Webfuse adoption of course evaluation features in Figure 5.12 coincide with concerted efforts to encourage broader use of the course barometer. The 2002 peak of 44.2% coincides with the work of a barometer booster within Infocom during 2001 and early 2002, as described in Jones (2002). The 2008 peak of 51.6% coincides with a broader whole-of-CQU push to use the barometer for student evaluation purposes.

The above suggests that, in terms of feature adoption by courses, Webfuse and the default course site approach have been somewhat successful. They ensured that 100% of all courses offered by the organisational unit using Webfuse had a course site with some level of content transmission, with a significant level of additional content added to the course sites. Overall, there was broader adoption of content transmission with less effort required by academics. In terms of course interactions, student assessment and course evaluation features, the services provided by Webfuse after 2001 have resulted in levels of adoption greater than broadly expected (as indicated by the Malikowski ranges) and greater than those found in the use of the Blackboard system at the same institution.

References

Jones, D. (2003). How to live with ERP systems and thrive. Paper presented at the 2003 Tertiary Education Management Conference, Adelaide.

Jones, D., & Behrens, S. (2003). Online Assignment Management: An Evolutionary Tale. Paper presented at the 36th Annual Hawaii International Conference on System Sciences, Hawaii.

Jones, D., & Luck, J. (2009). Blog Aggregation Management: Reducing the Aggravation of Managing Student Blogging. Paper presented at the World Conference on Education Multimedia, Hypermedia and Telecommunications 2009. from http://www.editlib.org/p/31530.

Malikowski, S., Thompson, M., & Theis, J. (2007). A model for research into course management systems: bridging technology and learning theory. Journal of Educational Computing Research, 36(2), 149-173.

An overview of BIM

Just uploaded a screencast/presentation that is essentially a cut down version of the presentation I gave at Moodlemoot’AU 2010 last week. There’s an embedded version below or you can find the original on vimeo. There’s also more information about BIM elsewhere and you can download BIM from here (for Moodle 1.9.x) or BIM for Moodle 2.x here (look for the “Zip” button).

The ethics of learning analytics: initial steps

Col’s recent blog post has just started the necessary process of the Indicators project paying closer attention to the question of ethics as applied to learning analytics. The following are some of my initial responses to Col’s post and an attempt to invite additional suggestions from other folk around the question:

What are the ethical problems and considerations that should form part of work around learning analytics?

Feel free to comment.

Pointers to literature

I’ve tried a quick search for literature around ethics and analytics, but have not been able to find anything specific. Will need to search further, would welcome any pointers to relevant literature.

Web data mining and learning analytics

Col’s post seems to depend mostly on a paper that examines ethical issues in web data mining (Wel and Royakkers, 2004). While learning analytics could certainly be seen as a subset of web data mining, I’m not convinced that it doesn’t have important differences.

Especially given that the Indicators project is currently focused on using usage data from institutional learning management systems. For example, Col uses the following quote from Wel and Royakkers (2004):

Web mining does, however, pose a threat to some important ethical values like privacy and individuality. Web mining makes it difficult for an individual to autonomously control the unveiling and dissemination of data about his/her private life

When a student is using the institutional LMS, is this really a part of his/her private life? Like it or not, the institutional LMS is owned by the institution, it’s being used by the student for learning, and the purpose of learning analytics is to help improve that learning.

In addition, Col repeats the point that Wel and Royakkers (2004) make that there are issues when the data is analysed without user knowledge. Well, all LMS contain a certain level of “analytics” functionality; it’s built into the systems. I’m not sure that students are made explicitly aware of this functionality or how it is used. Is this a problem?

Internet research

The CMC/Internet research community is one amongst many fields/sub-groupings dealing with these sorts of issues. Herring (2002) offers an overview of this field, including ethical issues.

In summary,

  • ease of data collection creates ethical concerns;
  • participants may not be aware their actions are being collected and studied;
  • while identities may be masked, some systems/archives and ways of expressing material may make identification possible;
  • there has been some debate (e.g. Mann and Stewart 2000);
  • some researchers advocate obtaining informed consent;
  • others suggest asking permission when quoting comments and/or masking identifying information;
  • informed consent can cause problems, especially with critical research;
  • a balance needs to be struck between quality research and protecting users from harm;
  • there are debates around the definition of harm.

This is a fairly old reference; there is likely to be more up-to-date information in more recent research publications and in research methodology texts. Need to look at those.

This chapter from the SAGE handbook of online research methods seems a good place to start.

References

Herring, S. (2002). Computer-mediated communication on the Internet. Annual Review of Information Science and Technology, 36(1), 109-168.

Wel, L. v., & Royakkers, L. (2004). Ethical issues in web data mining. Ethics and Information Technology, 6, 11.

Trip report – Moodlemoot'AU 2010

The following is a report of my attendance at Moodlemoot’AU 2010 during the first half of this week. The aim is to engage in a touch of reflection, outline tasks to do, and inform colleagues back at CQUniversity about the conference.

My contribution

I was mainly responsible for two presentations at the conference. The following presentation links include a range of resources, including slides. However, the planned audio/video wasn’t generated. The two talks were:

  1. A short show and tell of the idea for adding and harnessing curriculum mapping and alignment within Moodle.
    The 3 minute limit on this presentation was interesting, but was kept to (sort of). Some interest expressed by folk and a couple of links to follow up.
  2. A presentation showing off BIM and talking very briefly about limitations in developing innovative pedagogy.
    I decided to focus mostly on showing off BIM and how it worked. That was, I think, a good move, though as it turns out a bit more thought on the limitations side might have gone down well. Some good feedback on this presentation via twitter; more on that below.

I was also somewhat associated with two presentations from the Indicators project. Almost all the work for these presentations was done by Colin Beer and Ken Clark, and a great job they did. Seems there is growing interest in the Indicators project; the next year or so looks like being very interesting.

Reflections

For me personally, the conference was – for a variety of reasons – perhaps the most valuable I’ve been to in recent times. Most of the reasons had nothing to do with the actual presentations. There were some interesting presentations; however, it was the connections made and the possibility of future work (in a range of senses) that made it incredibly worthwhile.

This is especially important given that it was obvious that adoption of Moodle is rapidly expanding, with UNE, LaTrobe and Monash announcing moves to Moodle in the months leading up to the conference. Each of these institutions had groups of staff at the conference. In addition, a number of non-Moodle universities also had representatives at the conference. Checking out the competition was the reason given, though I do suspect there are likely to be a few more Australian universities adopting Moodle in the coming years.

This suggests that the institutions that get into Moodle early and effectively have the opportunity to make useful contributions. It also suggests the potential for a critical mass of institutions sharing and collaborating around Moodle and its use for learning, teaching and beyond.

The Gold Rush

The other side to this is the observation that, to some extent, there was a feel of a “Moodle” (gold) rush about the conference. Perhaps the settling of the wild west is a better metaphor. Lots of excitement from new settlers exploring a new land, trying to establish how it all works and planning what the future might bring. There were also a few old hands there to help and occasionally shake their heads at the new arrivals. There was certainly a sense of excitement amongst the new settlers about the possibilities. I have to admit that, at times and at least for my somewhat cynical tastes, this fervour went a bit too far and on occasion started to take on the air of a gathering of evangelical Christians.

At the same time there was also a sense of there not being any collective history. Without a connection to the land, the settlers were making some fundamental mistakes, implementing practices that don’t make sense in the new land. Reporting and discussing these mistakes at the Moot is a step toward developing a collective history, however, it was somewhat disappointing that some of the insights developed around e-learning, educational technology, distance education and many other “groupings” from past literature weren’t widely known about.

Even more scary is the observation that, at times, this lack of awareness wasn’t limited to individuals new to a field. It was also evident in some of the large scale, “strategic” organisational projects implementing Moodle.

Presentation feedback and twitter

In the past I’ve belonged to academic units where it was compulsory to produce a “trip report” on return to the host institution. An almost obligatory or compulsory part of these trip reports was the

My session was received positively by those there.

statement. Given this was a case of self-reporting by the person who gave the presentation, it always felt a bit self-serving. Not to mention that the use of “self-reporting” had to bring into question the validity of the statement.

Of course, I was as guilty as anyone else of using this comment, and in the context of my presentations at this conference,

My sessions were received positively by those there.

However, we’re much more modern now: we have Twitter, a tool that’s increasingly being used at conferences to make explicit what was generally implicit. At this conference there was a healthy twitter stream, and due to that stream I have some hard evidence of good comments. See the following image (click it to make it bigger).


Twitter feedback on Moodlemoot'AU BIM presentation

Of course, most of the folk shown in that image are people who follow me on twitter. So, the validity of such comments might remain questionable.

Use of innovations by other institutions

One of the nicer aspects of moving to Moodle (from an institution-specific system) has been the adoption of the tools I’ve developed (e.g. BIM) by other people at other institutions. Alan Arnold used BIM as an example in his presentation with James Strong about how the University of Canberra worked with Netspot to maintain the balance between “staying vanilla” and innovation with UC’s Moodle.

Though I wasn’t too sure about the naming scheme Alan adopted – “CQU BIM”. According to the re-branding, it probably should’ve been (at the least) “CQUni BIM”.

The lack of TPACK

At least for me, there seemed to be a fairly visible division between the teaching academics, the teaching support folk and the information technology folk. There didn’t seem to be a lot of really strong cross-fertilization between the different groups. And that’s before we start talking about management.

The TPACK folk argue that it’s the effective combination of the knowledge that each of these groups hold that is needed to make really innovative and high quality use of IT for learning and teaching. And at least for me, the moot experience bears this out. Most of the really interesting presentations were those that drew on effective combinations of the various different types of knowledge. Michael de Raadt’s presentation, described below, is an example.

Marnie Hughes-Warrington

The conference was opened by the Monash PVC for Learning and Teaching – Professor Marnie Hughes-Warrington – who outlined Monash’s plans for Moodle and their broader VLE. The comment that stuck with me, probably because it mirrors my own thoughts, was:

learning with technology is about the connections and relationships

In describing some of the directions they are taking she mentioned that one of the first steps was listening to teaching academic staff and actually fixing the problems they’ve been reporting for the last five years.

There are lessons here for other institutions. At least currently, it appears that Monash is going to be amongst the most interesting to watch of all the institutional adoptions of Moodle. It shall be interesting to learn more about their migration to Moodle, the strategies and thinking underpinning that migration, and the resulting outcomes. In particular, it would be interesting to hear from a collection of Monash teaching staff to see how/if their perspective differs from that of those driving the migration.

Peer review, progress bar, distributive “leadership” and behaviour change

Michael de Raadt – an information technology academic from USQ – gave a presentation on two plugins he’s developed for Moodle: a peer review assignment type and a progress bar block. As an IT academic with an interest in educational research, Michael had first hand experience of a teaching and learning problem, insights into educational solutions, and the technical ability to implement those solutions within Moodle. Both of Michael’s Moodle plugins could be useful for CQU staff and students.

I found the progress bar block particularly interesting. Michael described how students could become almost compulsive about ensuring that the bar was “all green”. The on-going presence of “red” in the bar was visible every time they used Moodle and acted as an encouragement to complete all the tasks, to be more active.

It is this sort of modification to Moodle and the broader learning and teaching environments within universities that I am most interested in. The progress bar block appears to be a particularly effective example of a “nudge theory” approach to improving learning and teaching.

To some extent, this type of approach is related to a presentation from some folk at ANU titled “Translating Learning Outcomes in Moodle”, designed to help teaching staff make the connection between constructive alignment and the activities available in Moodle. The approach described in this presentation offers some interesting ideas about how the Moodle environment can be extended to improve the capacity of staff to design more aligned activities. In particular, the approach has some potential to complement the ideas behind the alignment project.

Bridging the gap between Moodle, institutional practice and academic requirements

A number of the presentations at the conference examined how to automate and/or ease the workload associated with creating Moodle course sites, or how to integrate that creation with other related organisational processes (e.g. linking it with course outlines/profiles). Though none really seem to have moved beyond fairly limited “administrative automation”. The ANU outcomes approach in the last section looked at this task from another perspective.

No-one seems yet to be moving beyond these fairly limited forms of course site creation to improving the level of abstraction.

Online assignment submission and management

Perhaps the most obvious collection of presentations at the conference was associated with various questions/issues around online assessment. For example:

  • de Raadt’s presentation that discussed his peer review assignment type;
    Essentially a Moodle assignment type that provides a higher level of abstraction to help with managing student peer marking of assignments with a reasonable level of staff oversight.
  • a presentation on the Lightwork tool for managing/marking online assignments;
    I didn’t attend this presentation, but Lightwork is a project that’s been going for a while now.
  • a presentation that covered a fair bit, including a Word template/document approach to marking.
    I didn’t go to this either, but know of some folk who did.

The supporting page for the last presentation does make the point that assignment marking and management remains a difficult, time consuming and expensive consideration within universities. Not something that is always done well. A practice within which there is significant capacity for innovation and improvement.

Tasks to do

Throughout a conference there are generally long lists of interesting stuff to follow up on. After some less than perfect recollection and some reflection, the following is the list of important tasks I need to follow through on:

  • Follow up with Michael de Raadt around getting more insight into how to make BIM “more moodle like”.
  • Prepare a video version of the BIM presentation that can be uploaded for folk to view.
    Done: video is available here
  • Give more thought to when/how I’ll start moving BIM to Moodle 2.0.
  • Talk with Col, Ken and Damien about how and when we continue the development of the Indicators project’s Moodle block.
  • Turn the idea behind the BIM presentation into a conference paper and subsequently a journal paper critiquing conceptions of, and attempts to implement, blended learning.

Whether or not I work on the following list of tasks depends more on the outcome of imminent job applications and interviews (not to mention work on the thesis):

  • Follow up with the ANU crew about how and what we might do in partnership around alignment.
  • Follow up with Jonathan Moore from Remote-Learner about connections between the alignment project and the work they are doing around K-12 standards in Moodle.
  • Try and figure out how the alignment project can be progressed beyond a thought experiment into something concrete.
  • Think about how the increasing number of Australian universities adopting Moodle can most appropriately harness this new community and the open source nature of Moodle.
  • Think how the insights from OASIS can be combined with other work around online assignment submission and management to develop innovations and improvements, especially given issues at my current institution.
  • See how and if both the alignment and indicators projects can be turned into successful cross-institutional projects and also successful ALTC grant applications.
  • Would love to see how different Australian universities might review each other’s Moodle migrations in an attempt to make reporting on these projects more independent and hopefully useful for future action.

There is much, much more to think about and do arising from presentations and conversations at Moodlemoot. I’ve only captured a small sample.

Integrating alignment into Moodle and academic practice: A proposal and an RFI

I’m off to the 2010 Australian MoodleMoot next week. The conference program includes a collection of 3 minute show and tell sessions on the Tuesday afternoon. The following is a summary of what I think I’m going to talk about and a call for suggestions.

I’m starting to add all the resources associated with the presentation to this post.

More information

Other resources/information around this idea include:

  • A blog post introducing how curriculum mapping might work in Moodle.
  • A detailed, draft grant proposal for a broader project around embedding mapping/alignment into a university.
    This proposal includes a fairly long reference list which points to some of the literature that informed this idea.

Video

The following video is a slightly extended version of the talk, using the same slides, recorded after the Moodlemoot.

Slides

The purpose

The title of this post is probably going to be the title of the talk. From that you can assume that this is not a show and tell of something that is working, but instead a proposal of an idea. The aim is to find out if there are other people interested in this project or already working on something similar. The aim is to start a conversation. The talk is also a request for interest (an RFI). I’m keen to hear from folk interested in working on this idea, especially in terms of a potential ALTC grant for next year.

The proposal is based on previous ideas posted here. At the core is the idea of how curriculum mapping might work in Moodle. However, the intent is to do much more than simply modify Moodle. The broader aim is to modify the environment and processes within which teaching academics work so that consideration of alignment (be it constructive, instructional, curriculum or graduate attributes) is part of everyday practice.

A more detailed description of this idea is available here. The rest of this is a written summary of what I think the 3 minute show and tell will cover next week at the Moot.

The problem

Within Australian universities, the alignment of what happens within a course (sometimes known as a unit) against some outcomes or graduate attributes is becoming widespread, even standard practice. For example, there’s a presentation at the Moot with the title “Translating Learning Outcomes in Moodle”. This presentation draws on Biggs’ (1996) idea of constructive alignment, which is probably the most common conception of alignment in current use. The push toward graduate attributes for everything is perhaps the other common application of alignment within Australian higher ed.

The Moot presentation identifies as a problem the difficulty of translating learning outcomes into an effective course design within an LMS. The problem I’m interested in is connected to this, but is also a little different. The problem I’m interested in is that the everyday, regular experience of an academic doesn’t require them to think about alignment. More broadly, the everyday experience of teaching academics neither encourages nor enables them to think about learning and teaching from an educational perspective. Instead the focus is on low-level tasks like uploading documents, because of the low level of abstraction in most LMS.

Experience is important

What people experience is important. There’s a growing body of literature from neuroscience (e.g. Zull, 2002) and psychology (e.g. Bartunek & Moch, 1987) that suggests your experiences shape who you are, what you think and how you see the world. This in turn is related to insights like Kolb’s learning cycle.

Kolb's Learning Cycle

If alignment is not something academics experience regularly, and experience within a context that encourages and enables them to reflect and experiment with it, then how can they really be expected to learn and adopt alignment?

The proposal

The proposal aims to modify the environment in which academics operate such that they are encouraged and enabled to consider alignment as a regular component of their everyday teaching experience. To provide an environment in which they can move through all of the stages of Kolb’s learning cycle. The proposal is based on the following assumptions and propositions:

  • The most common teaching experience for university academics is teaching and slightly tweaking a course that has been taught before.
  • It is fairly simple to modify Moodle to enable the mapping of alignment relationships between Moodle activities and resources and outcomes or graduate attributes (a hypothetical sketch of such a mapping follows this list).
  • Once this alignment information is being maintained, an ecosystem of services can be added to Moodle that enable reflection, abstraction, and active testing of ideas around alignment in a collaborative and open way.
  • If such an ecosystem enabled and encouraged effective, on-going use, then the quality of learning and teaching would improve.
  • On-going use of such an ecosystem would raise interesting questions about the design and operation of Moodle.
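
To make the second proposition a little more concrete, the following is a minimal sketch of one possible shape for that mapping data. This is purely illustrative: the table and column names are my invention, not part of Moodle, and an actual implementation would need to fit Moodle’s schema and coding conventions.

[sourcecode lang="sql"]
-- Hypothetical sketch only: one way alignment mappings between
-- Moodle activities/resources and outcomes might be stored.
create table alignment_map
(
course int not null, -- Moodle course id
coursemodule int not null, -- the activity/resource being aligned
outcome varchar(255) not null, -- outcome or graduate attribute
strength enum ( 'weak', 'moderate', 'strong' ), -- strength of the claimed alignment
index mapping (course,coursemodule)
)
[/sourcecode]

With something like this in place, the “ecosystem of services” mentioned above becomes, in large part, a collection of queries, reports and visualisations over the mapping data.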

Disclaimer: I have some reservations about alignment; however, it has almost become a requirement within Australian higher education and I do believe that consideration of alignment could provide a useful McGuffin for learning and teaching.

The 3 minute show and tell will focus on showing some proposed screen shots of how curriculum mapping might work within Moodle and some initial ideas of how the resulting alignment information could be used to create an ecosystem of services.

Request for interest

Effectively implementing something like this is not easy. It would be improved by having a good combination of skills and perspectives. I’m keen to work with people who are interested in trying to further develop and eventually implement this idea.

I’m especially interested in hearing about projects that are related to, or already implementing something like this.

References

Bartunek, J., & Moch, M. (1987). First-order, second-order and third-order change and organization development interventions: A cognitive approach. The Journal of Applied Behavioral Science, 23(4), 483-500.

Biggs, J. (1996). Enhancing teaching through constructive alignment. Higher Education, 32(3), 347-364.

Zull, J. (2002). The art of changing the brain. Sterling, Virginia: Stylus Publishing.

McGuffins, learning, teaching and universities

D’Arcy Norman suggests that Edupunk is a McGuffin. I like the metaphor. But I think it breaks down a bit, at least in the context I’m interested in.

Wikipedia uses a definition of a McGuffin that suggests it is “a plot element that catches the viewers’ attention or drives the plot of a work of fiction”. Wikipedia suggests that the defining characteristic of a McGuffin is

the major players in the story are (at least initially) willing to do and sacrifice almost anything to obtain it, regardless of what the MacGuffin actually is.


Importantly, as Wikipedia suggests

the specific nature of the MacGuffin may be ambiguous, undefined, generic, left open to interpretation or otherwise completely unimportant to the plot.

What is important is not the details or nature of edupunk, top-down quality assurance, problem-based learning, teacher-of-the-year awards, or anything else. What is important is what happens as a result of the characters wanting to obtain the McGuffin. In movies, what’s important is a good plot.

I work in a university context. In that context, I think what’s important is improving the quality of learning and teaching. I don’t see enough of that happening. To a large extent I think this is due to the absence of appropriate McGuffins. The current McGuffins within a university context aren’t driving the majority of academics to improve the quality of learning and teaching.

Edupunk is the right McGuffin for some. But I’m not sure how widespread that is. The folk interested in Edupunk are generally not the ones that need a McGuffin.

So, what is the McGuffin for improving L&T within a university? Does it make sense for there to be one, or even a small number of McGuffins?

The VLE model and the wrong level of abstraction

I’m currently trying to formalise the information systems design theory for e-learning that is meant to be the contribution of my thesis, i.e. the model, set of principles etc. that I think are important.

As it happens, I recently came across this post on models around the use of VLEs/LMS from Barking College. It’s part of a discussion involving a group of folk (James Clay, David Sugden, and Louise Jakobsen) talking about models for getting academics to use the complete functionality of the VLE/LMS.

This is interesting for two reasons. First, it helps me see what others are thinking in this sphere. Second, it provides a spark for me to think about my “model”. As an example there is an interesting point made on the Barking College post that I want to pick up on in the following.

The basic idea is that the functionality of a bog-standard VLE/LMS – like Moodle – embodies the wrong level of abstraction, at least in terms of encouraging and enabling effective use of the VLE/LMS by academics within a university. A traditional VLE model sits at a very low level of abstraction, which means lots of flexibility, but also lots of problems. I think there is some value (and also some danger) in moving the level of abstraction up a few notches.

Moodle document management and the wrong level of abstraction

The post from Barking College makes the point that uploading and maintaining document content on Moodle “is one of the most round-about and time consuming things anyone can do”. I agree. But it doesn’t end there. Even when academics understand and engage in the uploading process, there are more problems.

Then we teach staff to upload files to the VLE … and in no time they have transferred their heap and bad habits onto the file areas of VLE courses. If, in addition to their own unique ways of filing things, they are not the only editing teacher on a course then total chaos is almost guaranteed.

There are some exceptions. There are folk who are well organised and anally retentive enough that they have their files organised into well thought out directory structures with meaningful file names. But most academics don’t. As pointed out, this causes enough problems when it is just the originating academic having to deal with the resulting chaos, but when a course is taught by multiple academics….

This highlights one of the major flaws I see in most VLEs/LMS. They are designed at the wrong level of abstraction. The file upload capability is very basic: it simply provides the ability to manage files. It provides no additional cognitive support for structuring content, and certainly none that connects with the act of learning and teaching. The file management capability can’t tell the difference between a course synopsis, a tutorial sheet, a trial exam, a set of solutions or a set of lectures. Most can’t even tell you if it’s a PDF file, a Powerpoint file or a Word document.

This low level of abstraction is necessary to enable the broadest possible flexibility in use. The more abstraction you build into a system, the more specific you make it.

The CQU eStudyGuides

One example of what I mean here is the CQU eStudyGuide Moodle block I played with last year. Some brief context:

  • CQU has a long history of being a print-based distance education provider.
  • A print-based study guide was a standard component of the package of materials traditionally provided to distance education students.
  • The study guide was written by CQU academics and intended to guide the distance education student through the learning materials and activities they were meant to complete.
  • The production of the study guides was a formal process involving a central “desktop publishing unit” which would produce “professional” quality documents that were then printed.
  • The “desktop publishing” group is part of the unit I work with.
  • Back in 2007/2008 the unit modernised the study guide production process so that it improved the quality of the final document and produced both a print and electronic version.

The electronic version was long overdue. For a long time academic staff had wanted to provide electronic versions of the course study guide on the course website. Some had done this via an ad hoc, manual process, but it was time for something better.

Producing the electronic version of the study guide was only the first step. We also had to produce an automated process that would allow academics to place the eStudyGuide on their course website. Requiring academics to do this manually was inefficient and likely to result in less than professional outcomes. This connects with the observation made in the Barking College post about file management.

So, we implemented the automatic generation of an eStudyGuide web page for every course. The following image is one example; click on it to see it in a bigger size.

CQU eStudyGuide web page

The eStudyGuide page for a course was produced by a script aimed at a much higher level of abstraction. The script knew about CQU courses, it knew about the format we used for the eStudyGuide, and it was able to use that knowledge to produce the page, e.g. it pulled the title of each part of the eStudyGuide from the guide itself.

The CQU eStudyGuide Moodle block

By 2010 CQU had moved to Moodle as its LMS. As part of learning Moodle development I played with the creation of a Moodle eStudyGuide block: a block that would embody a greater level of knowledge about CQU’s eStudyGuide than a more general Moodle block, consequently significantly simplifying the process of uploading an eStudyGuide to a Moodle course site. The following compares and contrasts the eStudyGuide block approach with the more traditional manual approach.

eStudyGuide block:

  1. Login to Moodle and go to course site.
  2. Turn editing on.
  3. Choose “eStudyGuide” block from the “Add a block” menu.
  4. Position it where you want.

Moodle file upload:

  1. Get a zip file containing the eStudyGuide from somewhere.
  2. Login to Moodle and go to course site.
  3. Go to the “Files” link under administration.
  4. Create a directory for the eStudyGuide.
  5. Upload the zip file containing the eStudyGuide.
  6. Unzip it.
  7. Return to the course site.
  8. Turn editing on.
  9. For each of the chapters (usually 10 to 12) of the eStudyGuide:
    • Manually add a link to the chapter.
    • To make sure each link is meaningful you may have to open the chapter PDF to remember what the title of the chapter was.
  10. Add another link to the PDF containing the entire eStudyGuide.
  11. Add the blurb about how to use PDF files and where to get a copy of Acrobat or other PDF viewer.

The trade-off

There isn’t a perfect solution. Both low and high levels of abstraction involve a tradeoff between different strengths and weaknesses.

A low level of abstraction means the solution is more flexible and can be used in more institutions and for unexpected uses (good and bad). It also means that the user needs to have greater knowledge. If they do, then good things happen. If they don’t, it’s not so good. It also means that the user has to spend a lot more time doing manual activities, which increases the likelihood of human error.

A high level of abstraction, especially one that connects with practices at a specific institution, reduces workload on the part of the users, reduces the chance of errors and perhaps allows users to focus on other, more important tasks. But it also limits the portability of the practice, i.e. the CQU eStudyGuide process probably wouldn’t work elsewhere, which means it requires additional resources to implement and maintain.

A greater level of abstraction also removes some flexibility from what the user can do. The simple solution to this is not to mandate the higher level of abstraction, e.g. at CQU we provided the automated eStudyGuide page, but academics weren’t required to use it. They could do their own thing if they wanted to. Most didn’t.

Other examples

Providing a high level of abstraction to the VLE is almost certainly going to be a component of my ISDT, of my model. This is exactly what the default course site approach attempted to do: provide a much higher level of abstraction on top of the VLE.

Into the future, it’s also a key part of what I’m interested in investigating. I think the addition of a “default course site” approach to Moodle, especially one that increases the level of abstraction but at the same time can be used at multiple institutions, is especially interesting.

Webfuse feature adoption – 1997 through 2009

The following presents details of feature adoption by courses using the Webfuse system from 1997 through 2009. It presents the data generated by the work described in the last post.

No real analysis or propositions, mostly just the data. Analysis is the next step.

How features are grouped

To provide some level of Webfuse independence I’ve used the model developed by Malikowski et al (2007) and represented in the following image.

Reworked Malikowski model

Essentially, each Webfuse feature was assigned to one of four categories:

  1. Transmitting content.
  2. Interactions.
  3. Evaluating students.
  4. Evaluating courses.

Feature adoption

The following table shows the results. The adoption rate is shown as a percentage of Webfuse courses that used a particular feature category.

Year   # course sites   Transmit content   Interactions   Student eval   Course eval
1997   109              34.9               1.8            0.9            9.2
1998   138              38.4               48.6           1.4            0.7
1999   189              46.0               9.0            2.1            9.5
2000   174              46.6               43.7           24.7           6.9
2001   244              51.6               32.4           47.1           28.3
2002   312              69.6               63.8           57.7           44.2
2003   302              69.2               68.5           93.7           37.7
2004   328              61.3               61.9           91.8           35.7
2005   299              64.2               69.2           93.6           39.8
2006   297              70.0               68.7           105.1          31.6
2007   251              68.5               102.0          168.1          33.1
2008   225              72.9               110.7          192.0          51.6
2009   211              69.2               105.7          211.4          42.7

When is a course a Webfuse course?

The most surprising aspect of the previous table is that some of the percentages are greater than 100%. How do you get more than 100% of the Webfuse courses adopting a feature?

Due to the nature of Webfuse, its features, and the political context, it was possible for a course to use a Webfuse feature without having a Webfuse course website. Webfuse was never the official institutional LMS; that honour fell upon WebCT and then Blackboard. However, a number of the features provided by Webfuse were still of use to folk using WebCT or Blackboard.

For the purpose of the above, a Webfuse course is a course that has a Webfuse course site.
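
To make the arithmetic concrete, the following is a rough sketch of the calculation using the courses and feature_adoption tables from the last post. The year and category are just examples; the point is that the numerator counts any course using a feature in the category, while the denominator counts only Webfuse course sites.

[sourcecode lang="sql"]
-- Rough sketch: percentage adoption for one category in one year.
-- The numerator can include courses without a Webfuse course site,
-- which is how the percentage can exceed 100.
select
( select count(distinct course) from feature_adoption
  where year = 2009 and category = 'evaluateStudents' ) /
( select count(*) from courses
  where year = 2009 and lms = 'webfuse' ) * 100 as pct_adoption;
[/sourcecode]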

Discussion

Each of the following sections shows a graph for a feature category and offers some brief notes and initial propositions based on the data.

Click on the graphs to see them bigger.

Transmit content

Adoption of content transmission features Webfuse 1997 through 2009

Background

  • 100% of Webfuse course sites had content transmission.
  • From the 2nd half of 2001 these were automatically created.
  • Feature adoption above only counts courses where teaching academics have placed additional content onto the course site.
  • Main Webfuse features in this category are: course web pages, uploading various files and using the course announcements/update features.

Observations and propositions:

  • The introduction of the automated and expanded default course sites in 2001 appears correlated with an increase in use. However, that could simply reflect broader acceptance of the Web.
  • Even at its most accepted, almost 30% of staff teaching courses did not place additional content on the course site. This could be seen as bad (30% didn’t do anything) or good (30% could focus on other tasks).

Interactions

Adoption of interaction features Webfuse 1997 through 2009

Background:

  • Percentage adoption from 1997 through 1999 is probably higher than shown, as significant numbers of courses used Internet mailing lists. However, records of these aren’t tightly integrated with Webfuse course sites.
  • The 1998 archives have been found, so the table above shows 48.6% of courses having mailing lists.
  • From 2000 onwards Webfuse default sites included a web-based mail archive of mailing lists.
  • Features include: interactive chat rooms, web-based discussion forums, email-merge, BAM and web-based archives of mailing lists.
  • The push over 100% in 2007 onwards comes from a combination of more widespread use of mailing lists/discussion forums in default course sites and broader adoption of the email merge facility by non-Webfuse courses.

Evaluate students

Adoption of student evaluation features Webfuse 1997 through 2009

Background:

  • Main traditional features are online quizzes and online assignment submission and management.
  • Other non-traditional student evaluation features include an academic misconduct application, assignment extension system, informal review of grade system etc.
  • From about 2005 onwards, some of the non-traditional features became institutional systems.

Evaluate courses

Adoption of course evaluation features Webfuse 1997 through 2009

Background:

  • In 2000 and before, primary evaluation was via web-based forms, with a bit of course barometer usage.
  • Post-2001, the course barometer became standard in all Webfuse courses.
  • But not all courses have contributions. The percentages only count a barometer if someone has posted a comment to it. If measured simply as having a course barometer, the figure would be 100% from 2001 through 2004/5.
  • The spike in usage in 2008 comes from a small institutional project using the barometer in non-Webfuse courses.
  • A similar spike in 2002 comes from active encouragement of the barometer idea.

Examining feature adoption – slightly better approach

I’m in the throes of finalising the second-last bit of data analysis for the thesis. For this I’m trying to examine the level of feature adoption within courses supported by the Webfuse system (the main product of the thesis). The following describes an attempt to formalise the process for this evaluation.

This post has been under construction for almost a week and is now complete. The following just documents what was done; probably not all that interesting. I’ll present some results in the next post.

The main outcome of the below is that I now have a database that has abstracted Webfuse feature adoption data from 1997 through 2009.

Rationale

There are three reasons to do this:

  1. Improve the process;
    So far, I’ve been doing this with a collection of UNIX scripts and commands, text files and the odd bit of a database. It works, but is not pretty.
  2. Record what I do; and
    I need to document what I’m doing so that I can re-create/check it later on. I could do this in a Word document but this way I can share what I’m doing.
  3. Move the Indicators project along a bit.
    Given contextual reasons, not sure how much further the project might go, but this might help a little.

The problem

The aim is to understand what features of an e-learning system are being used, i.e. how many courses are using the discussion forum, the quiz system, etc. The aim is not to just understand this in the context of a single term, single institution or a single e-learning system. The idea is to examine feature adoption across systems, time and institutions in order to see if there are interesting patterns that need further investigation. This is the underlying aim of the Indicators project and, more immediately important for me, what I have to do for my thesis around the Webfuse system.

So, I need to gather all the information about Webfuse feature adoption and turn it into a form that can be compared with other systems. I’ve done this before. It was first blogged about and then became part of an ASCILITE paper (Beer et al, 2009).

But since that work, I’ve gotten some additional Webfuse data and also had the opportunity to revisit the design and implementation of Webfuse through writing this second-last chapter. I’ve also come up with a slightly different way to interpret the data. This means I need to revisit this usage data with some new insights.

One of the problems is that the original calculations in the ASCILITE paper did not draw on the full set of Webfuse features that fit into the Malikowski et al (2007) categories (represented in the diagram below). I need to add a bit more in and that means trawling a range of data sources. I need to have this done through a single script.

Reworked Malikowski model

In some ways, this need for a “single script” encapsulates a key component of what the indicators project needs: an LMS-independent computer representation of feature adoption in e-learning systems. A representation that can be queried and analysed quickly and easily.

What follows is my first attempt. I believe I’ll learn just by doing this. Hopefully, this means that when/if the indicators project does this in anger, it will be better informed.

The plan

I’m essentially going to create a couple of very simple database tables:

  • courses: period, year, course, lms
    Which courses were offered in which period by which LMS. I’m using a very CQU centric period/year combination as I’m not going to waste my time and cognition establishing some sort of general schema. That’s for the next step, if it ever comes. I want to solve my problem first.
  • feature_adoption: period, year, course, category, feature
    Which features (in terms of specific feature and the Malikowski feature category) have been used in which courses.

It’s neither pretty, complex, nor technically correct (from a relational database design perspective), but it should be simple and effective for my needs.

To populate this set of tables I am going to write a collection of scripts that parse various databases, course website archives and Apache system logs to generate feature adoption figures.

Once populated, I should be able to write other scripts to generate graphs, CSV files and various forms of analysis to suit my purpose or that of others.

The rest of this post documents the implementation of these plans.

Create the tables

courses

Simple (simplistic?) and straightforward. I’ve used an enum for the LMS and included the list of systems I’m likely to deal with at my current institution.

[sourcecode lang="sql"]
create table courses
(
period char(2) not null,     -- teaching period/term
year int(4) not null,
course varchar(10) not null, -- course code
lms enum ( 'webfuse', 'blackboard', 'moodle' ),
index offering (period,year,course)
)
[/sourcecode]

feature_adoption

Another simple one; the category values “match” the five categories proposed by Malikowski et al (2007).

[sourcecode lang="sql"]
create table feature_adoption
(
period char(2) not null,
year int(4) not null,
course varchar(10) not null,
-- the Malikowski et al (2007) category the feature falls into
category enum ( 'transmitContent', 'interactions', 'evaluateStudents', 'evaluateCourses', 'cbi' ) not null,
feature varchar(20) not null, -- the specific Webfuse feature used
index offering (period,year,course)
)
[/sourcecode]

Fill the database tables

With the database tables in place it is now time to fill them with data representing feature adoption within courses using Webfuse. I’m going to do this via Perl scripts, because Webfuse is written in Perl, I’m very comfortable with Perl, and Webfuse provides some classes that will help make this process a bit simpler. I’m going to work through this Malikowski category by category, leaving out the computer-based instruction category as Webfuse never provided this feature. But first, I have to populate the courses table.

courses

There is an existing database table that tracks Webfuse course sites; however, I’m going to use a slightly different tack based on some existing text files I’ve generated for earlier analysis. These text files contain lists of Webfuse course sites per year. I’m simply going to use vi to turn them into SQL commands and insert them into the database. This took three commands in vi and is working.

Done.
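
For illustration, the generated statements would look something like the following (the course codes here are made up):

[sourcecode lang="sql"]
-- Illustrative only: each line of the per-year text files becomes
-- an insert like these. Course codes are invented examples.
insert into courses (period, year, course, lms)
values ('T1', 1997, 'COIT11133', 'webfuse');
insert into courses (period, year, course, lms)
values ('T2', 1997, 'MGMT20001', 'webfuse');
[/sourcecode]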

Interactions

For each of the following sections, the process is:

  • Identify the features within Webfuse that fit this category.
    Webfuse provides features via two means: page types and Wf applications. Calculating usage of each is somewhat different, so they’ll need to be considered separately.
    • Page Types – WebBBS, YaBB, Etutes, CourseGroup, EwgieChat, WWWBoard. DONE
      For some years, splitting page types into categories has already been done. Just a matter of doing the vi translation into SQL.
    • Wf applications – EmailMerge : DONE
      The problem with email merge is that while it generally originates from a specific course, it is implemented via a list of student ids. This makes it hard to associate emailmerge usage with a course. Attempts to find a solution to this are described below.
  • Identify percentage adoption of these features per year.
  • Stick it in the database

The attempt to associate use of EmailMerge with a course used the following steps:

  • Look at referer in Apache log
    This gives a range of courses that have used email merge. So, some data could be retrieved. There’s also mention of JOBID – i.e. mail merge stores information about jobs in a database table.
  • Look at email merge database tables;
    One has the username of the staff member associated with the job and the date created. This could be used to extract the course, but it’s a bit ad hoc.

The solution is to parse out the referers that mention course/period/year and convert them into SQL for insertion. This should capture some of the uses of EmailMerge, but won’t get them all.
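
Again for illustration only, each matching referer would be converted into an insert along these lines (the course code is invented):

[sourcecode lang="sql"]
-- Illustrative only: an EmailMerge use extracted from the Apache
-- referer, recorded against the interactions category.
insert into feature_adoption (period, year, course, category, feature)
values ('T1', 2007, 'COIT11134', 'interactions', 'EmailMerge');
[/sourcecode]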

Evaluating students

  • page types DONE
    All done using the same vi translation into SQL as for the interactions page types.
  • Wf applications DONE
    Need to write a script to extract info from various databases and update stats. The additional Wf applications are:
    • BAM – EAST:BAM_CONFIGURE
    • IROG – DATA:IROG
    • AES – EAST:REQUEST
    • AMD – Plagiarism:PlagCase

Evaluating courses

  • page types: UnitFeedback, FormMail and Survey
  • Wf applications: Barometer

Transmitting content

This category is a bit more difficult. All Webfuse course sites transmit content; there’s a basic level incorporated into all sites. What I need to calculate here is the percentage of courses that have additional content, beyond the default, added by the teaching staff. Evidence of this could include use of the following by teaching staff (not support staff):

  • course updates DONE
    This generates an RSS file, which I think is mostly put into the CONTENT file of the course site. Each element has a dc:creator tag with the name of the user.

    One approach would be to find all updates content files, grep for creator (including course/period), and remove creators that are support staff. From 2002 this is done in a separate RSS file, but all good.

  • fm DONE
    This is recorded in the Apache logs. I’ll need to parse that.
  • page update
    Again, parsing of Apache log files.

Calculating adoption

Once the data is in the database, the next step is to calculate the adoption rate, which is essentially:

  • Get the total # of courses in a year.
  • For each Malikowski category
    • calculate the percentage of courses adopting features in the category (a rough SQL sketch follows)
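
As a rough sketch of how this might be expressed against the two tables above (illustrative, not the actual analysis script):

[sourcecode lang="sql"]
-- Rough sketch: percentage of Webfuse courses per year adopting
-- features in each Malikowski category.
select fa.year, fa.category,
count(distinct fa.course) /
( select count(*) from courses c
  where c.year = fa.year and c.lms = 'webfuse' ) * 100 as pct
from feature_adoption fa
group by fa.year, fa.category;
[/sourcecode]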

I’ll show the results in the next post.

References

Beer, C., Jones, D., & Clark, K. (2009). The indicators project identifying effective learning: Adoption, activity, grades and external factors. ASCILITE 2009, Auckland, NZ.

Malikowski, S., Thompson, M., et al. (2007). A model for research into course management systems: Bridging technology and learning theory. Journal of Educational Computing Research, 36(2), 149-173.
