Assembling the heterogeneous elements for (digital) learning

Month: March 2009

Virtual learning environments: three implementation perspectives

The aim of this post is to summarise my current reading – Keller (2005). I believe it will have some connection with the thesis.

Aside: I'm using the United Kingdom term – virtual learning environment (VLE) – as a synonym for the term more common in Australia and elsewhere – learning management system (LMS). I have to say I still prefer course management system (CMS) as a more appropriate label.

Apart from the PhD, this article has contextual relevance, as my current institution is, as mentioned elsewhere, adopting Moodle.

Summary

This is an interesting conceptual paper – there are no empirical data – that is a bit light on detail (e.g. how the CoP approach can be used to improve a VLE implementation is very abstract and repetitive). This may also be true for the innovation and acceptance perspectives; I am more familiar with those, so may be automatically inserting my own experiences.

For me it provides a reference for the complementary value of adding an IS perspective to VLE implementation.

It does open up some possibilities for interesting empirical work examining what is being done, and how, in VLE implementations within institutions.

Abstract

Seems to suggest that the common theoretical framework for implementing VLEs – instructional design – can be complemented by three different perspectives on VLE implementation drawn from information systems implementation research and organisation theory. Would appear to suggest that these perspectives have important things to say about the successful use and implementation of VLEs. The three complementary perspectives are

  • technology acceptance;
    Sees the VLE as a new technology that will be accepted or rejected by users.
  • diffusion of innovations; and
    VLE implementation is seen as the effort to diffuse the VLE within the user community.
  • learning process.
    Implementation is seen as a learning process involving the different communities of practice within the organisation.

Introduction

Some typical stuff about the impact of ICTs and e-learning on students and institutions.

Refers to an earlier publication of the author's (Keller & Cernerud, 2002) in an attempt to justify why there is a “strong need for closer study of models of implementation and an exploration of their underlying theoretical frameworks”. But I haven't got it yet – is it too late on a Friday afternoon to be starting this?

The basic point seems to be that factors such as age, gender, learning style, degree programme and previous knowledge of computers have been assumed to influence students' perceptions of e-learning and, consequently, the implementation strategy chosen. They reference (Mitra et al., 2000; Nachmias & Shany, 2002) for this point. However, they claim that the author's earlier study found that these factors only exert a minor influence.

Sounds weak to me.

The rest of the introduction is broken up into sections:

  • Virtual learning environments;
    Mentions that e-learning encompasses an awful lot. Narrows things down to VLEs and defines what they are.
  • Concept of implementation;
    This is the bit that explains that the instructional design view of VLE implementation is somewhat narrow – has some references. Argues for a view of VLE implementation that sees the VLE as an information system and the university as an organisation. Outlines a number of different theoretical views of information systems implementation – most built around initiation, development and implementation/termination. Includes mention of Lewin's unfreeze/change/refreeze model, but makes this point

    [Figure: six-phase view of the information systems implementation process compared to Lewin's model of organizational change (adapted from Kwon & Zmud, 1987)]

    Within these models, implementation is seen as a continuous process. This is in accordance with Mintzberg and Quinn's (1996) view of implementation as being intertwined with formulation of organizational goals in a complex interactive process.

    This is an important distinction and salves my problems with this view somewhat; however, I don't think it goes far enough. I think the reality in most organisations and most information systems demands that evolution and termination receive specific treatment – but that's an argument for elsewhere.

  • Three perspectives of implementation
    Briefly names the three implementation perspectives to be examined and explains that the point is to see what these perspectives can tell us about VLE implementation. The next 3 sections introduce each perspective.

The concept of implementation section refers to a figure like the following to show the linkage between Lewin’s model of organisational change and the stages of an implementation model.

[Figure: IS implementation phases and Lewin's organisational change]

There seems to be some connection with George Siemens' IRIS model – some similarities and some differences. I've expressed some reservations about both the IRIS model and Lewin's model. A few things have come together that mean I do need to revisit these.

Implementation as technology acceptance

Explains how technology acceptance is seen as one of the most mature research streams in IS. In fact, some see it as one of the few original contributions that IS has made. Others see it as an example of a flawed adoption of a perspective from another discipline (psychology and sociology mentioned in this paper) that has failed to keep up with the improved understandings of that original discipline.

Keller explains

The models focus on explaining individual decisions of accepting and using a new technology. The factors influencing these decisions are seen as variables measured at a specific point. Relationships between the variables are identified by statistical correlation analysis. Among the most influential models of this research stream are technology acceptance model (TAM) and social cognitive theory (SCT) (Venkatesh et al., 2003).

And then goes on to examine each of these.

Technology acceptance model (TAM)

Some friends and I have used the TAM in a couple of previous papers. One for an e-learning audience and another for an information systems audience. But we used an older version with an emphasis on perceived usefulness and perceived ease of use.

Keller uses a later version from Venkatesh & Davis (2000) that adds subjective norm and behavioural intention. See the following figure.

[Figure: TAM with the extension of subjective norm]

Social cognitive theory (SCT)

A different theory, based on Bandura's work and developed by other researchers. Includes elements such as computer self-efficacy, outcome expectations (performance), outcome expectations (personal), affect, anxiety, and usage. Won't go into detail, because Keller mentions UTAUT next.

Unified theory of acceptance and use of technology (UTAUT)

This is where eight of the influential models of user acceptance have been integrated into a theory that has been found to explain 70% of the variance in users' acceptance and use of information systems.

[Figure: UTAUT]

Implementation as diffusion of innovations

Mostly drawing on Rogers' diffusion of innovations work – we gave a summary of this in this paper. Though I did like this quote, particularly the last part

Innovation research indicates that there is a significant positive relationship between participation in innovation decisions and rate of adoption of innovations (Jahre & Sannes, 1991) and that internally induced innovations are more likely to be accepted than those induced externally (Marcus & Weber, 2000).

Uses the following structure to suggest how the innovation process occurs within an organisation

  1. Initiation – information gathering, conceptualizing and planning of the innovation adoption
    1. Agenda-setting – define the problem or need for the innovation. Identify a performance gap.
    2. Matching – tailor the innovation to fill the gap.
  2. Implementation – all events, actions and decisions involved in putting an innovation into use.
    1. Redefining/restructuring – modify innovation to accommodate org needs more closely.
    2. Clarifying – meaning of innovation becomes more clear to members of the organisation and use broadens
    3. Routinizing – the innovation becomes a part of the organisation and ceases to be an innovation.

Damn, that’s a teleological view of diffusion. No surprise in guessing I don’t like that characterisation. But I guess that is how it is likely to be used within an organisation.

Implementation as a process of learning

Suggests that “learning in organisations” can be studied from different perspectives including:

  • action theory
  • organisational learning
  • knowledge management
  • communities of practice

The emphasis here is on community of practice because of

its capability to describe social learning, but also interactions that occur between man (communities of practice) and technology (boundary objects).

CoP arises from situated learning and is based on two basic premises

  1. activity-based nature of knowledge (practice)
  2. group-based character of organisational activities (communities)

Talks about work done by Hislop (2003) examining innovation in IT from the perspective of CoP. The finding is that CoPs and innovation implementation are mutually dependent. Innovation creates new communities and changes the knowledge distribution within the organisation. The CoPs affect how the innovation is supported.

CoPs connect through boundary objects. A VLE connects the student and teacher communities, and is hence a boundary object. Four characteristics enable artefacts to be boundary objects:

  1. Modularity – different users, different views
  2. Abstraction – distinguishing certain important features of described concepts in the system
  3. Accommodation – different functions to support different activities
  4. Standardisation – functions can be organised in the same way?? This sounds somewhat funny.

Suggests Wenger wants information systems to be designed to facilitate participation, rather than to facilitate use. Connected with Brown and Duguid's (1998) statement that technology aimed at supporting knowledge distribution should support informal communication between communities and deal with reach and reciprocity.

Conclusions

The guts of this appear to be summarised in two tables. The first summarises the differences between the perspectives. The second derives implications for the implementation of VLEs.

A comparison of 3 implementation perspectives (adapted from Keller, 2005)

  • Technology acceptance
    Basic concepts: variables influencing decisions of acceptance or rejection by individual users at specific points.
    Regards the VLE as: a new technology to be accepted or rejected by users.
    Regards the users of the VLE as: individual users making personal decisions of accepting or rejecting a technology.
    Considers the different roles of teachers and students: no.
  • Diffusion of innovations
    Basic concepts: the individual decision process of adopting an innovation; the diffusion process of innovations in organizations.
    Regards the VLE as: an innovation to be diffused in an organization.
    Regards the users of the VLE as: individuals making personal decisions of adopting or rejecting an innovation; an organization adopting or rejecting an innovation.
    Considers the different roles of teachers and students: no.
  • Learning process
    Basic concepts: the learning process of different communities of practice within an organization.
    Regards the VLE as: a boundary object connecting different communities of practice.
    Regards the users of the VLE as: different communities of practice interacting through a boundary object.
    Considers the different roles of teachers and students: yes.

I have some disagreements with the above

  • Both UTAUT (TAM) and diffusion theory include consideration of the social system in adoption. While the group is perhaps not as central as with CoP etc., it is still a consideration. To some extent it would depend on how the approaches were implemented. To say for certain that they see the users as individuals making decisions is not entirely true.
  • Similarly, to suggest that diffusion theory and TAM don't consider the students is not necessarily entirely correct. In applying diffusion theory to the implementation of online assignment submission we employed it to encourage use by both students and staff. Yes, CoP may well support the notion of boundary objects to connect students and staff, but a lot of CoP work doesn't necessarily take that on board, just like a lot of TAM/DoI work doesn't – they all can, but don't have to.

Implications for the use and implementation of VLEs

  • Technology acceptance
    Successful use: the VLE should enhance the resolving of educational tasks; be easy to use; improve users' self-efficacy.
    Successful implementation: the implementation process should be supported by formal and informal leaders and a reliable technological infrastructure.
  • Diffusion of innovations
    Successful use: the VLE should fill a performance gap; create positive visible outcomes; be consistent with existing beliefs; be less complex to use.
    Successful implementation: the implementation process should be internally induced; be based on a consensus decision; provide possibilities of trying the VLE beforehand.
  • Learning process
    Successful use: the VLE should provide modularity, abstraction, accommodation and standardization; support informal communication; be designed for participation.
    Successful implementation: the implementation process should allow peripheral participation; consider the impact of the VLE on different communities of practice.

Again, some potential points of disagreement:

  • This one is probably more a matter of definition. “Educational tasks” in the TAM/successful use cell could be interpreted as mostly emphasising the instructional design perspective, i.e. aimed directly and only at learning and teaching. In some, perhaps many, contexts the administrative tasks associated with education (results processing, assignment submission etc.) are of more interest to the academics, especially if the context is problematic. That's the point we made in the papers applying TAM to online assignment submission – the main reason academics perceived it as useful was that it solved a range of workload issues. Learning perspectives (e.g. rapid return of feedback to students) were a distant last.
  • I’m not sure the DoI approach necessarily excludes input from leaders. In fact, appropriate change agents (informal leaders?) can be important.

Implications

One obvious application of this work would be to apply the different lenses to understanding what is happening in the actual implementation of a new LMS at a university. Is the project being sold from any one of these three perspectives, and if so, is the implementation plan following any of the guidelines? What, if any, impact will this have on adoption and use?

Or is the implementation taking a different perspective entirely? For example, the “build it and they will come” approach: is it simply a matter of implementing the technology and assuming people will use it?

Are these the only perspectives that could inform VLE implementation? Are there others? What?

As with most discussion of this sort of thing, I don't think this paper pays sufficient attention to more ateleological processes. All of these perspectives assume that the VLE will be used; it's just a question of what processes we surround the VLE with. i.e. is it technologically determinist?

It's a very conceptual paper. The obvious thing would be to use this type of approach to examine an actual implementation.

Different performance gaps and the impacts

The diffusion perspective requires that “decisions of implementing a VLE must be based on a performance gap, and hence create a visible and tangible positive outcome for the university”. What happens if the performance gap is perceived differently by different folk? For example, at my institution I can see the following performance gaps being discussed:

  • A prime performance gap seen by IT and management was cost. The institution was seen to have two LMSes; one would be cheaper. We were also paying licence fees to a commercial VLE vendor. A single, open source LMS solves this performance gap nicely.
  • Some “educational determinists” see the performance gap as the vast majority of online courses at our institution not following a particular type of “good pedagogy”. The new LMS is seen as a solution to this. As a different system it will enable, support or perhaps require “good” pedagogy.
  • Some pragmatic folk see the current systems as old and out of date. In need of updating. The new LMS is seen as a way to be more modern.

Perhaps talking to folk, observing documents and meetings would be a way of surfacing additional performance gaps that the new LMS is seen to solve.

The presence of different perceived gaps raises some other questions:

  • How will a single implementation project plan cater for all these different performance gaps? i.e. how you solve the problem of licence fees is very different from how you solve the problem of “good” pedagogy or modern features. Is the plan focusing on solving a particular performance gap at the expense of others?
  • Which gap is the most important? Are some gaps just silly? How do you handle this?

References

Keller, C. (2005). “Virtual learning environments: three implementation perspectives.” Learning, Media and Technology 30(3): 299-311.

PhD update – week #3

A new record: a renewed interest in the PhD has lasted three weeks. I've even made a comeback from the weak second album problem and probably had the most fulfilling week so far. Though I could have been more productive; perhaps that's an aim for next week.

This week I did cross a lot of things off the PhD to do list, but I also added a fair few.

7 March to 13 March

All that said, I claimed that I would aim to complete the following this week:

  • Complete first draft of at least 1 Ps component section for chapter 2 – let's start with “Past Experience” – Not done, not even started.
  • Complete reading and give feedback on Shirley’s DESRIST paper.
  • Finalise a structure with rough content for chapter 3. Done and sent to the supervisor for feedback.

The other main work on the thesis this week included:

  • Gathering and a bit of reading of additional literature for both chapters 2 and 3.
  • A few blog posts on the PhD or ideas arising from it.
    The past week's output includes
    • The biggest flaw in university L&T/e-learning – is connected to thinking associated with chapter 2 and the Ps Framework. In particular, a big problem any approach to e-learning within a university has to address.
    • How to improve L&T and e-learning at universities – provides one perspective on the “solutions” that have arisen from the thesis work to the problem outlined in the previous post.
    • Moving from scarcity to abundance changes things – music – draws on a very recent example to illustrate how a scarce resource becoming abundant changes many of the fundamental assumptions of prior practice. A key part of my thesis work is a suggestion that e-learning and information systems are having to face this paradigm change with the rise of the internet, social media and more. The assumption of scarcity is one of the major flaws of many current approaches to e-learning and organisational information systems.

There is a double-edged sword with the blog posts. They take time away from writing on the PhD, however, they also help deal with the need for a quick sense of completion and also encourage me to get ideas down into writing. Writing that I should be able to re-use in the thesis…theoretically. I need to keep an eye on this.

Next week

For the next week I’d like to:

  • Complete as many sections of the Ps Framework (chapter 2) as possible and have most put onto the blog.
  • Need to complete reading the theory building paper and provide feedback.
  • Need to tidy up a bit of the other outstanding literature I have gathered.

Moving from scarcity to abundance changes things – music

Growing up in Rockhampton in the 70s and 80s, I had no real access to music that wasn't pop or C&W. Different types of music were scarce. In these days of iTunes, peer-to-peer etc. that has radically changed. There is an abundance.

The impact of this change is difficult to overstate, and difficult to illustrate. This YouTube video and what it embodies does a really good job.

This post has more about it. This is the core of it for my point

Israeli musician Kutiman has taken hundreds of YouTube samples – often non-musical ones – and turned them into an album that’s awesome on so many levels that it leaves you stunned. First of all, the music is good; really good, especially if you’re a fan of Ninja Tune’s catalog. Secondly, it’s amazing to see all those unrelated YouTube bits and pieces fit together so perfectly

The question is…

The same migration from scarcity to abundance is happening in learning and teaching and e-learning at universities. How is the practice of those tasks going to change?

Another perspective for the indicators project

The indicators project is seeking to mine data in the system logs of a learning management system (LMS) in order to generate useful information. One of the major problems the project is facing is how to turn the mountains of data into something useful. This post outlines another potential track based on some findings from Lee et al (2007).

The abstract from Lee et al (2007) includes the following summary

Sample data were collected online from 3713 students….The proposed model was supported by the empirical data, and the findings revealed that factors influencing learner satisfaction toward e-learning were, from greatest to least effect, organisation and clarity of digital content, breadth of digital content’s coverage, learner control, instructor rapport, enthusiasm, perceived learning value and group interaction.

Emphasis on learner satisfaction???

This research seeks to establish factors which impact on learner satisfaction. Not the actual quality of learning itself, but how satisfied students are with it. For some folk, this emphasis on student satisfaction is not necessarily a good thing and at best is only a small part of the equation, mainly because it's possible for students to be really happy with a course but to have learnt absolutely nothing from it.

However, given that most evaluation of learning at individual Australian universities, and within the entire sector, relies almost entirely on “smile sheets” (i.e. low level surveys that test student satisfaction), an emphasis on improving student satisfaction may well be a pragmatically effective pastime.

How might it be done?

The following uses essentially the same process used in a previous post that described another method for informing the indicators project's use of the mountains of data. At least that suggested approach had a bit more of an emphasis on the quality of learning.

The process is basically:

  • Identify a framework that claims to illustrate some causality between staff/institutional actions and good outcomes.
  • Identify the individual factors.
  • Identify data mining that can help test the presence or absence of those factors.
  • Make the results available to folk.

In this case, the framework is the empirical testing performed by the authors to identify factors that contribute to increased student satisfaction with e-learning. The individual factors they’ve identified are:

  • organisation and clarity of digital content;
  • breadth of digital content’s coverage;
  • learner control;
  • instructor rapport;
  • enthusiasm;
  • perceived learning value; and
  • group interaction.

Now some of these can't be tested for by the indicators project. But some can (a rough sketch of how a couple might be computed follows this list). For example,

  • Organisation of digital content
    Usually put into a hierarchical structure (weeks/modules and then resources), is the hierarchy balanced?
  • Breadth of content coverage
    In my experience, it’s not unusual for the amount of content to significantly reduce as the term progresses. If breadth is more even and complete, greater student satisfaction?
  • Group interaction – participation in discussion forums.
  • Instructor rapport – participation in discussion forums and presence in the online course.
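
As a rough illustration of how a couple of these mappings might be computed, here's a minimal Python sketch. The log records, field names and course code are assumptions for the purpose of the example, not the indicators project's actual schema.

```python
from collections import defaultdict

# Hypothetical LMS activity-log records (normally pulled from the LMS database).
# The field names and course code are made up for illustration.
logs = [
    {"course": "EXMP12345", "week": 1, "role": "student", "event": "forum_post"},
    {"course": "EXMP12345", "week": 1, "role": "staff", "event": "forum_post"},
    {"course": "EXMP12345", "week": 2, "role": "student", "event": "resource_view"},
]

def forum_posts_per_week(logs, course, role):
    """Proxy for group interaction (role='student') or instructor rapport
    (role='staff'): count forum posts per week for the course."""
    counts = defaultdict(int)
    for row in logs:
        if row["course"] == course and row["role"] == role and row["event"] == "forum_post":
            counts[row["week"]] += 1
    return dict(counts)

def content_breadth(logs, course, weeks_in_term=12):
    """Crude proxy for breadth of content coverage: the proportion of term
    weeks in which students actually viewed some resource."""
    active_weeks = {row["week"] for row in logs
                    if row["course"] == course and row["event"] == "resource_view"}
    return len(active_weeks) / weeks_in_term

print(forum_posts_per_week(logs, "EXMP12345", "student"))  # group interaction
print(forum_posts_per_week(logs, "EXMP12345", "staff"))    # instructor rapport
print(content_breadth(logs, "EXMP12345"))                  # breadth of coverage
```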

Questions

I wonder if the perception of there being a lot of course content for the entire course is sufficient. Are students happy enough that the material is there? Does whether or not they use it become academic?

References

Lee, Y., Tseng, S., et al. (2007). “Antecedents of Learner Satisfaction toward E-learning.” Journal of American Academy of Business 11(2): 161-168.

Messiness of information systems – another reason institutional e-learning struggles

My current disciplinary home is within the information systems community, which, not surprisingly, concerns itself with research and practice around information systems. This raises the question, “What is an information system?”. This post provides one answer to that question and, in doing so, suggests another reason why most institutional, university-based e-learning implementations enjoy less than stellar success. Of course, this isn't limited to just e-learning, but that's my focus.

What is an information system?

DuPlooy (2003) describes an information system as consisting of three subsystems: the hardware, software and “otherware” and uses the following figure to represent their relationship.

[Figure: Neat representation of an information system]

The hardware component is the computer hardware: the processing units, printers, network equipment, monitors etc. The software component, as you might expect, comprises the software applications that make use of the hardware to help users of the system perform various tasks. “Otherware” is defined as including the system's goals, the owner, users, operational procedures, and the tasks and responsibilities of the people involved.

DuPlooy (2003) makes the point that the hardware and software components are deterministic. That is, given the same inputs, the outputs of these components will generally be the same. On the other hand, he makes the point that otherware is not deterministic. One reason is the involvement of people, and the observation by Markus (1983) that people may have agendas and goals that differ vastly from those of the organization.

Given this fundamental difference in the nature of otherware when compared to the other two components, I believe DuPlooy’s (2003) representation is somewhat less than effective. I suggest the following representation is more appropriate.

[Figure: Modified representation of an information system]

This better captures the messy, non-deterministic nature of otherware. It illustrates that otherware is different, that it can’t be treated the same as software and hardware.

It is the consideration of all three subsystems, and in particular the addition of “otherware”, which differentiates information systems from other related disciplines such as computer science and information technology. The inadequacy of computer science in addressing problems associated with the use of computers in organisational contexts has played a large part in the emergence of the IS discipline (Fitzgerald and Adam 1996).

The inadequacy of computer science and information technology in addressing problems with organisational information systems is the main reason I’ve moved from the information technology discipline to that of information systems. Of course, it must be said that much of the research into information systems doesn’t fully engage with the messiness of otherware. For one example take a look at Behrens (2007).

What’s wrong with e-learning?

Essentially, the vast majority of practice in university-based e-learning radically under-estimates the messiness of the otherware involved within a university context. Within the pantheon of organisations, I believe that universities are amongst the elite in terms of just how messy their otherware can be.

Whether it be a computer scientist developing an “adaptive LMS” by applying some algorithm and mathematics, the central information technology support division applying some project management methodology or management applying some top-down management approach, they all under-estimate the messiness of the otherware and the impact this has on their nice, neat plans and assumptions.

In some cases, it appears that the neatness of the software, hardware or of the traditional methodologies is used as a haven from the messiness of the context. The messiness is too hard, so one must ignore it and focus on the neatness of the process or the product.

For example, institutions select one piece of software (e.g. an LMS) from a set of such software, all of which have a pre-defined, limited set of features, and expect that limited set of features to fulfil all the requirements of a messy “otherware”. Rather than recognising that the otherware is inherently messy, will continue to be messy, and will continue to change the nature of its messiness over time, and adopting an approach that is able to respond to that messiness, institutions insist on attempting to limit the messiness within the confines of the product. The phrase “we'll implement a vanilla version” in connection with an enterprise system, including learning management systems, sums this up very nicely.

It shows a perspective that thinks it is too hard to change the information system, so the solution is to force the messiness of the otherware to comply with the confines of the information system. This perspective assumes that you can force messy otherware to conform.

Personally, I don’t think this is possible. It is possible to create the illusion that it is conforming, but scratch the surface and you will find it isn’t. Personally, I think it is better to engage with, seek to understand and respond to the messiness of the otherware.

References

Behrens, S. (2007). Diversity in IS Research; Metaphor, Meaning and Myth. ICIS’2007, Montreal, Canada.

duPlooy, N. F. (2003). Information systems as social systems. Critical Reflections on Information Systems: A Systematic Approach. J. Cano. Hershey, IDEA Group Inc.

Markus, M. L. (1983). “Power, politics and MIS implementation.” Communications of the ACM 26: 430-440.

Validity is subjective

Just another quote, but a good one. And one that connects with a recent post about the difficulty of getting agreement around learning and teaching and the difficulty this creates within universities when you want to try and improve the quality of that learning and teaching.

Validity is subjective rather than objective: the plausibility of the conclusion is what counts. And plausibility, to twist a cliché, lies in the ear of the beholder.
Cronbach (1982)

Perhaps then, it is no surprise to see where the quote comes from: this book on the evaluation of educational programs.

References

Cronbach, L. J. (1982). Designing evaluations of educational and social programs. San Francisco, Jossey-Bass.

Initial steps toward an education aggregation taxonomy – community versus individual?

In a previous post I talked about the rationale and need for thinking about a taxonomy of educational aggregation projects. Something that I haven’t really given a lot of thought to, just yet.

About 15 minutes ago I posted this about “cooked” course feeds. WordPress.com's “possibly related posts” feature (where it automatically appends 3 or 4 other posts from WordPress.com blogs which it deems to be possibly related) included a link to this post entitled “Prologue as an eLearning Blog Portal”.

It’s a post by Mike Bogle in which he outlines one of the key problems we saw with using blogs for individual student reflective journals.

the main challenge for educators using blogs in their courses is how to keep track of them all. By nature blogs function completely independently of one another, so the prospect of monitoring the individual blogging activities of several dozen or more people presents a substantial time investment for instructors and students alike.

In the post, Mike outlines some problematic solutions before suggesting the use of Prologue as a way to create a blog portal, the aim being to create a “dynamic, interactive student network”.

It's this purpose, the features Prologue provides, and the possible applications those features can support, that seem to suggest another place in a taxonomy of aggregation projects.

The “portal/community” aggregator

What Mike is suggesting could perhaps fit under the title of “portal aggregator”. Perhaps community rather than portal might be a better label – portal has too many connotations from the late 90s and corporate Web 1.0 centralised IT constraints. After all, the purpose of the suggestion is to enable the creation and maintenance of a community.

Community aggregators tend to provide many of the features Mike mentions including various filtering options and the idea of social tagging of posts.

This seems to be the category into which EduGlu would fit. There is now even an EduGlu sandbox which allows you to play. The workflow image in that post provides a good representation.

Perhaps this type of workflow might be the way to differentiate different aggregation projects?

Based on the little I know of it, I believe that Cloudworks probably belongs in this category as well.

What’s the difference? Is there one?

Suggesting a need for a taxonomy of aggregation projects assumes that there are different types of projects. The “community aggregator” approach seems to be fairly general, perhaps general enough to provide all of the necessary features. At the moment, I think not. I think some aspects of BAM identify a potential difference, though there is also some overlap.

So what’s the difference between BAM and tools like Cloudworks, EduGlu and Prologue? Well, apart from the quality of code and implementation, I believe the main differences are

  • An initial focus on the individual student.
  • The integration of institutional needs.

A focus on the individual student

The original rationale and design for BAM was to support and improve student usage of individual reflective journals and to enable academic staff to be aware of student progress, i.e. the only person directly and regularly reading a student's blog posts was their tutor. Hence the social aspects of filtering and tagging aren't in BAM, because they weren't immediately useful.

There are plusses and minuses to this approach. There would certainly have been benefits for student reflections to be seen and commented upon by other students. But there also would’ve been negatives.

In terms of Mike’s post the emphasis in BAM was on helping staff keep track of individual student blogs, rather than creating a “dynamic, interactive student network”.

That's not to say that BAM can't help support this approach. It was used to implement a simple form of the community approach for the portfolio and weblog networks on the Creative Futuring course site, but without the tagging and filtering.

Integration of institutional needs

D’arcy Norman in his post on EduGlu describes the magic combination of features for EduGlu as

Aggregation of feeds + Groups + Social Rating + Tagging

A summarised version of a similar magic combination for BAM would be

Aggregation of feeds + institutional needs

Where “institutional needs” gets expanded out into:

  • Institutional data;
    Students register their blog with BAM. BAM associates their student number and course with their blog. This then allows BAM to determine which staff member (actually, which hierarchy of staff members) is responsible for the student. It associates the use of the blog with the specific assignment for the course. This is all drawn from existing institutional systems and processes.
  • Institutional services; and
    The institution already has services such as a staff portal, online assignment submission and results uploading. BAM integrates the use of the blogs into these systems. BAM provides a marking interface for the student blog posts. The staff member accesses this interface through the staff portal. The marks are available via the online assignment submission and then at the end of term are integrated into the end of term results processing system.

    All this minimises work for the staff.

  • Institutional requirements.
    One of the most common concerns raised about using a blog service that is not serviced and hosted by the institution is the “dog ate my homework” fear, i.e. what happens if the external blog service disappears and the students' blog posts are lost? To address this concern, BAM maintains a mirror of the RSS feed from student blogs on an institutional system (a rough sketch of this mirroring follows).
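
A minimal sketch of how that mirroring might work: fetch each registered feed and keep a timestamped copy on an institutional server. The registration data, URLs and storage path are made up for illustration; this is not the actual BAM code.

```python
import os
import urllib.request
from datetime import datetime

# Hypothetical registrations: student number -> external blog feed URL.
REGISTRATIONS = {
    "s0011223": "https://example.wordpress.com/feed/",
    "s0011224": "https://example.blogspot.com/feeds/posts/default?alt=rss",
}

MIRROR_DIR = "/data/bam/mirror"  # assumed institutional storage location

def mirror_feeds(registrations, mirror_dir):
    """Keep a dated copy of each student's raw feed so posts survive even if
    the external blog service disappears."""
    os.makedirs(mirror_dir, exist_ok=True)
    stamp = datetime.now().strftime("%Y%m%d%H%M")
    for student, url in registrations.items():
        try:
            with urllib.request.urlopen(url, timeout=30) as response:
                raw = response.read()
        except OSError as exc:
            print(f"Could not fetch feed for {student}: {exc}")
            continue
        with open(os.path.join(mirror_dir, f"{student}-{stamp}.xml"), "wb") as out:
            out.write(raw)

if __name__ == "__main__":
    mirror_feeds(REGISTRATIONS, MIRROR_DIR)
```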

Where does BAM fit?

Does this mean that BAM is an “individual” aggregator? An “institutional” aggregator?

That type of labeling doesn’t seem to work all that well. Suggestions?

Cooked course feeds – An approach to bringing the PLEs@CQUni, BAM and Indicators projects together?

The following is floating an idea that might be useful in my local context.

The idea

The idea is to implement a “cooked feed” for a CQUniversity course. An RSS or OPML feed that either students or staff or both can subscribe to and receive a range of automated information about their course. Since some of this information would be private to the course or the individuals involved, it would be password protected and could be different depending on the identity of the person pulling the feed.

For example, a student of the course would receive generic information about the course (e.g. any recent posts to the discussion forums, details of resources uploaded to the course site) as well as information specific to them (e.g. that their assignment has been marked, or someone has responded to one of their discussion posts). A staff member could receive similar generic and specific information. Since CQU courses are often offered across multiple campuses staff and student information could be specific to the campus or the sets of students (e.g. a tutor would receive regular updates on their students – have they logged into the course site etc)

A staff member might get a set of feeds like this:

  1. Student progress – perhaps containing a collection of feeds. One might be summary that summarises progress (or the lack thereof) for all students and then one feed per student.
  2. Course site – provides posts related to the course website. For example, posts to discussion forums, usage statistics of resources and features etc.
  3. Tasks and events – updates of when assignments are due, when assignments are meant to be marked, when results need to be uploaded. These updates would not only contain information about what needs to be done, but also provide links and advice about how to perform them.

The “cooked” adjective suggests that the feeds are not simply raw data from original sources, but that they undergo additional preparation to increase the value of the information they contain. For example, rather than a single post simply listing the students who have (or have not) visited a course site, the post might contain the students' GPA for previous courses, some indication of how long into a term they normally access a course site, when they added the course (in both date and week format – i.e. week 2 of term), links back to institutional information systems to see photos and other details of the students, links to an email merge facility to send a private/bulk email to all students in a particular category, a list of which staff are responsible for which students, etc.

The point is that the “cooking” turns generic LMS information into information that is meaningful for the institution, the course, the staff, and the students. It is this contextual information that will almost always be missing from generic systems, simply because they have to be generic and each institution is going to be different.
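
As a rough sketch of what this sort of “cooking” might look like, the following takes a raw LMS event and wraps it with institutional context before emitting it as an RSS item. The student data, URLs and course code are hypothetical; the point is only to show raw data being enriched with context.

```python
from xml.sax.saxutils import escape

# Hypothetical institutional context, normally drawn from student records.
STUDENT_INFO = {
    "s0011223": {"name": "Jane Example", "gpa": 5.2, "campus": "Rockhampton",
                 "enrolled_week": 2},
}

def cook_item(student, course, raw_event):
    """Turn a raw LMS event into an RSS item enriched with institutional
    context and follow-up links for the responsible staff member."""
    info = STUDENT_INFO[student]
    title = f"{info['name']} ({student}): {raw_event} in {course}"
    body = (
        f"GPA for previous courses: {info['gpa']}. "
        f"Enrolled in week {info['enrolled_week']} of term, campus {info['campus']}. "
        f"Student details: https://staff.example.edu/student/{student} | "
        f"Email merge: https://staff.example.edu/mailmerge?course={course}"
    )
    return (
        "<item>"
        f"<title>{escape(title)}</title>"
        f"<description>{escape(body)}</description>"
        "</item>"
    )

print(cook_item("s0011223", "EXMP12345", "no logins recorded in weeks 1-3"))
```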

Why?

The PLEs@CQUni project already has a couple of related sub-projects doing work in this area – discussion forums and BAM.

Discussion forums. The slideshow below explains how staff can currently access RSS feeds generated from the discussion forums of CQU’s current implementation of Blackboard version 6.3. A similar feature has already been developed for the discussion forum used in the other “LMS” being used at CQU.

The above slideshow uses the idea of the “come to me” web. This meme encompasses one reason why doing this might be a good thing. It saves time; it makes information more visible to the staff and the students. Information they can draw upon to decide what to do next. Information in a form that allows them to re-purpose and reuse it for tasks that make sense to them, but would never be apparent to a central designer.

BAM. The Blog Aggregation Management (BAM) project now generates an OPML feed unique for each individual staff member to track their students’ blog posts. The slidecast below outlines how they can use it.

The indicators project is seeking to mine usage logs of the LMS to generate information that is useful to staff. I think there is value in this project looking at generating RSS feeds for staff based on the information it generates. Why depends on the difference between lag and lead indicators.

I've always thought that too much of the data generated at universities is lag indicators: indicators that tell you how good or bad things went. For example, “oh dear, course X had an 80% failure rate”. While having this information is useful, it's too late to do anything. You can't (well, you shouldn't be able to) change the failure rate after it has happened.

What is much more useful are lead indicators. Indicators that offer you some insight into what is likely to happen. For example, “oh dear, the students all failed that pop quiz about topic X”. If you have some indication that something is starting to go wrong, you may be able to do something about it.

Aside: Of course this brings up the problematic way most courses are designed, especially the assessment. They are designed in ways such that there are almost no lead indicators. The staff have no real insight into how the students are going until they hand in an assignment or take an exam, by which time it is too late to do anything.

Having the indicators project generating RSS posts summarising important lead indicators for a course might encourage and help academics take action to prevent problems developing into outright failure.
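
A minimal sketch of one such lead indicator: students who have not yet accessed the course site, grouped by the staff member responsible for them so the result can be dropped into each tutor's feed. The data structures are assumptions for illustration only.

```python
# Hypothetical data: week in which each student first accessed the course
# site, and which tutor is responsible for each student.
COURSE_ACCESS = {"s0011223": 1, "s0011225": 3}
ENROLMENTS = {
    "s0011223": "tutor_a",
    "s0011224": "tutor_a",
    "s0011225": "tutor_b",
    "s0011226": "tutor_b",
}

def non_starters(enrolments, course_access):
    """Lead indicator: students with no recorded access to the course site,
    grouped by the responsible staff member."""
    flagged = {}
    for student, tutor in enrolments.items():
        if student not in course_access:
            flagged.setdefault(tutor, []).append(student)
    return flagged

# tutor_a would be alerted about s0011224, tutor_b about s0011226
print(non_starters(ENROLMENTS, COURSE_ACCESS))
```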

This is also encompassed in the idea of BAM generating feeds and the very idea of BAM in the first place. It allows staff to see which students are or are not progressing (lead indicator) and then take action they deem appropriate.

It’s also a part of the ideas behind reflective alignment. That post also has some suggestions about how to implement this sort of thing.

Getting feeds out of BAM – the first steps

The Blog Aggregation Management (BAM) project is an attempt to be a bit more Web 2.0/SaaS in the implementation of e-learning within a University. BAM works by students creating and using their own blog on one of a number of freely available external blogging services and registering it with BAM. BAM provides some management infrastructure that integrates these external services with university information and also offers support for staff to mark and track student posts. The staff interface is primarily via a web interface.

Recently, I’ve been thinking about changes to enable teaching staff to use RSS readers as their primary interface. This post details some initial steps towards achieving this.

What’s been done

A fairly simple addition has been made to BAM: the ability for an individual staff member to download an OPML file that contains pointers to the blogs of that staff member's students. Many CQUni courses have large numbers of staff teaching large numbers of students.

The OPML file can then be imported into a newsreader and used to track the posts students are making to their blogs, without a need to visit the university supplied web interface.
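
Generating such a file is straightforward. Here's a minimal sketch that builds an OPML document from a hypothetical mapping of a staff member's students to their registered feed URLs (names and URLs are made up; this is not the actual BAM implementation).

```python
from xml.sax.saxutils import quoteattr

# Hypothetical mapping of one staff member's students to registered blog feeds.
STUDENT_FEEDS = {
    "s0011223": "https://example.wordpress.com/feed/",
    "s0011224": "https://example.blogspot.com/feeds/posts/default?alt=rss",
}

def build_opml(staff_name, student_feeds):
    """Build an OPML document with one outline element per student blog,
    grouped under the staff member's name."""
    outlines = "\n".join(
        f'      <outline text={quoteattr(student)} type="rss" xmlUrl={quoteattr(url)}/>'
        for student, url in sorted(student_feeds.items())
    )
    return (
        '<?xml version="1.0" encoding="UTF-8"?>\n'
        '<opml version="2.0">\n'
        f"  <head><title>Student blogs for {staff_name}</title></head>\n"
        "  <body>\n"
        f"    <outline text={quoteattr(staff_name)}>\n"
        f"{outlines}\n"
        "    </outline>\n"
        "  </body>\n"
        "</opml>\n"
    )

print(build_opml("Staff Member", STUDENT_FEEDS))
```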

The following screencast is intended to help CQUniversity staff make use of this feature. It also illustrates the why and what of the process.

The next step

The feeds within the OPML file are the “raw” feeds from the student blogs, i.e. they only contain the students' raw blog posts. There is no additional “cooking” of the feeds from the individual student blogs to add value. Some examples of “cooking” could include:

  • Addition to each post of links back to the BAM web interface (e.g. a link to the interface that staff use to record a mark for each post).
  • Addition of information about the student – their name, student number etc.

The aim is to get this initial “raw” feed feature out to the staff and see how they go with it. If all goes well, the “cooked” feed feature will come out later.

How to improve L&T and e-learning at universities

Over the last week or so I’ve been criticising essentially all current practice used to improve learning and teaching. There are probably two main prongs to my current cynicism:

  1. Worse than useless evaluation of learning and teaching; and
    Universities are using evaluation methods that are known to be worthless and/or can't get significant numbers of folk to agree on some definition of “good” learning and teaching.
  2. A focus on what management do.
    Where, given the difficulty of getting individual academics (let alone a significant number of them) to change and/or improve their learning and teaching (often because of the problems with point #1), the management/leadership/committee/support hierarchy within universities embarks on a bit of task corruption and starts to focus on what they do, rather than on what the teaching staff do.

    For example, the university has improved learning and teaching if the academic board has successfully mandated the introduction of generic attributes into all courses, had the staff development center run appropriate staff development events, and introduced “generic attributes” sections within course outlines. They’ve done lots of things, hence success. Regardless of what the academics are really doing and what impacts it is having on the quality of learning and teaching (i.e. see point #1).

So do you just give up?

So does this mean you can't do anything? What can you do to improve learning and teaching? Does the fact that learning and teaching (and improving learning and teaching) are wicked problems mean that you can't do anything? This is part of the problem Col is asking about with his indicators project. This post is mostly aimed at trying to explain some principles and approaches that might work. As well as attempting to help Col, it's attempting to make concrete some of my own thoughts. It's all a work in progress.

In this section I’m going to try and propose some generic principles that might help inform how you might plan something. In the next section I’m going to try and apply these principles to Col’s problem. Important: I don’t think this is a recipe. The principles are going to be very broad and leave a lot of room for the application of individual knowledge. Knowledge of both generic theories of teaching, learning, people etc. and also of the specific contexts.

The principles I’m going to suggest are drawn from:

  • Reflective alignment – a focus on what the teachers do.
  • Adopter-based development processes.
  • A model for evaluating innovations informed by diffusion theory.
  • Emergent/ateleological design.
  • The Cynefin framework.

Reflective alignment

In proposing reflective alignment I believe it is possible to make a difference. But only if

The focus is on what the teacher does to design and deliver their course. The aim is to ensure that the learning and teaching system, its processes, rewards and constraints are aiming to ensure that the teacher is engaging in those activities which ensure quality learning and teaching. In a way that makes sense for the teacher, their course and their students.

The last sentence is important. It is what makes sense for the teacher. It is not what some senior manager thinks should work, or what the academic board thinks is important or good. Any attempt to introduce something that doesn't engage with the individual teacher, and doesn't encourage them to reflect on what they are doing and hopefully make a small improvement, will fail.

Adopter-based development

This has strong connections with the idea of adopter-based development processes, which are talked about in this paper (Jones and Lynch, 1999)

places additional emphasis on being adopter-based and concentrating on the needs of the individuals and the social system in which the final system will be used.

Forget about the literature, forget about the latest fad (mostly) and concentrate first and foremost on developing a deep understanding of the local context, the social system and its mores and the people within it. What they experience, what their problems are, what their strengths are and what they’d like to do. Use these as the focus for deciding what you do next, not the latest, greatest fad.

How do you decide?

In this paper (Jones, Jamieson and Clark, 2003) we drew on Rogers’ diffusion theory (Rogers, 1995) to develop a model that might help folk make these sorts of decisions. The idea was to evaluate a potential innovation against the model in order to

increase their awareness of potential implementation issues, estimate the likelihood of reinvention, and predict the amount and type of effort required to achieve successful implementation of specific … innovations.

[Figure: Variables influencing rate of adoption]

The model consists of five characteristics of an innovation diffusion process that will directly influence the rate of adoption of the innovation. These characteristics, through the work of Rogers and others, also help identify potential problems facing adoption and potential solutions.

This model can be misused. It can be used as an attempt to encourage adoption of Level 2 approaches to improving learning and teaching. i.e. someone centrally decides on what to do and tries to package it in a way to encourage adoption. IMHO, this is the worst thing that can happen. Application of the model has to be driven by a deep understanding of the needs of the people within the local context. In terms of reflective alignment, driven by a desire to help encourage academics to reflect more on their learning and teaching.

Emergent/ateleological design

Traditional developer-based approaches to information systems are based on a broadly accepted and unquestioned set of principles that are completely and utterly inappropriate for learning and teaching in universities. Since at least this paper (Jones, 2000) I’ve been arguing for different design processes based on emergent development (Truex, Baskerville and Klein, 1999) and ateleological design (Introna, 1996).

Truex, Baskerville and Klein (1999) suggest the following principles for emergent development:

  • Continual analysis;
  • Dynamic requirements negotiation;
  • Useful, incomplete specifications;
  • Continuous redevelopment; and
  • The ability to adapt.

They are expanded in more detail in the paper. There have been many similar discussions about processes. This paper talks about Introna's ateleological design process and its principles. Kurtz and Snowden (2007) talk about idealistic versus naturalistic approaches, summarised below.

  • Idealistic: achieve ideal state. Naturalistic: understand a sufficiency of the present in order to stimulate evolution.
  • Idealistic: privilege expert knowledge, analysis and interpretation. Naturalistic: favour enabling emergent meaning at the ground level.
  • Idealistic: separate diagnosis from intervention. Naturalistic: diagnosis and intervention to be intertwined with practice.

No surprises for guessing that I believe that a naturalistic process is much more appropriate.

Protean technologies

Most software packages are severely constraining. I'm thinking mostly of enterprise systems here, whose design tends to embody the assumption that controlling what users do is necessary to ensure efficiency. I believe this just constrains what people can do and limits innovation, and in an environment like learning and teaching this is a huge problem.

Truex et al (1999) make this point about systems and include the “ability to adapt” as a prime requirement for emergent development. The software/systems in play have to be adaptable. As many people as possible, as quickly as possible, need to be able to modify the software to enable new functionality as the need becomes apparent. The technology has to enable, in Kurtz and Snowden's (2007) words, “emergent meaning at the ground level”. It also has to allow “diagnosis and intervention to be intertwined with practice”.

That is, the software has to be protean. As much as possible, the users of the system need to be able to play with the system and try new things, and where appropriate there have to be developers who can help these things happen more quickly. This implies that the software has to enable and support discussion amongst many different people, to help share perspectives and ideas. The mixing of ideas helps generate new and interesting ideas for changes to the software.

Cynefin framework

[Figure: The Cynefin framework]

Which brings us to the Cynefin framework. As a wicked problem, I place teaching and attempting to improve teaching into the Complex domain of the Cynefin framework. This means that the most appropriate approach is to “Probe – Sense – Respond”. i.e. do something small, see how it works and then encourage the stuff that works and cease/change the stuff that doesn’t.

Some ideas for a way forward

So to quickly finish this off, some off the cuff ideas for the indicators project:

  • Get the data from the indicators into a form that provides some information to real academics in a form that is easy to access and preferably as a part of a process or system they already use.
  • Make sure the form is perceived by the academics to provide some value.
  • Especially useful if the information/services provided by the indicators project enables/encourages reflection on the part of the academics.
    For example, giving a clear, simple, regular update on some information about student activity that is currently unknown. Perhaps couched with advice that helps provide options for a way to solve any potential problems.
  • Use a process and/or part of the product that encourages a lot of people talking about/contributing to ideas about how to improve what information/services the indicators provides.
  • Adopt the “open source” development ethos “release early, release often”
  • Perhaps try and create a community of academics around the project that are interested and want to use the services.
  • Pick people that are likely to be good change agents. Keep in mind Moore's chasm and Geoghegan's identification of the technologists' alliance.

References

Introna, L. (1996). “Notes on ateleological information systems development.” Information Technology & People 9(4): 20-39.

Jones, D. and Lynch, T. (1999). A Model for the Design of Web-based Systems that Supports Adoption, Appropriation, and Evolution. Proceedings of the 1st ICSE Workshop on Web Engineering, Murugesan, S. & Deshpande, Y. (eds), Los Angeles, pp. 47-56.

Jones, D., Jamieson, K. and Clark, D. (2003). A Model for Evaluating Potential WBE Innovations. Proceedings of the 36th Annual Hawaii International Conference on System Sciences (HICSS'03), Track 5, vol. 5, p. 154a.

Kurtz, C. and D. Snowden (2007). Bramble Bushes in a Thicket: Narrative and the intangibles of learning networks. Strategic Networks: Learning to Compete. Gibbert, Michel, Durand and Thomas, Blackwell.

Rogers, E. (1995). Diffusion of Innovations. New York, The Free Press.

Truex, D., R. Baskerville, et al. (1999). “Growing systems in emergent organizations.” Communications of the ACM 42(8): 117-123.

The biggest flaw in university L&T/e-learning?

Welcome folk from UHI. Hope you find this interesting. Your e-learning portal is here. Good luck with it all.

Over recent years I’ve been employed in a position to help improve the quality of learning and teaching (and e-learning) at a university. If all goes according to plan, I might well have a related position for the next few years, at least. This post is intended to identify and provide some early insights into what I think has been the biggest flaw about my practice and in the practice at most universities when it comes to learning and teaching and how they attempt to improve learning and teaching.

The “biggest” adjective is not intended to indicate certainty; there may be bigger flaws, and there are certainly other flaws. But at this point in time, given my current thinking, this is what I think is the biggest flaw. The term “flaw” could also be replaced by “hurdle” and/or “barrier”.

The biggest flaw?

Over the last couple of years, as I’ve had a less than positive experience, I’ve increasingly become convinced that the biggest flaw in individual and organisational attempts to improve learning and teaching is quite simply that there is no widely accepted measure of what is good or bad learning and teaching. There are two main problems with the approaches that are used:

  1. They don’t work.
  2. There is no wide acceptance of their value.

This absence of an effective measure leads to what I’ve talked about in a recent post – task corruption – and the observation that task corruption occurs most frequently with tasks where it is difficult to define or measure the quality of service. Learning and teaching within a university, for me at least and especially at the institutions I’m familiar with, suffers from just this flaw.

Most, if not all, of the problems, debates, struggles and political fire-storms around learning and teaching within universities can be traced back to the uncertainty about what is quality learning and teaching.

They don’t work

At this point in time I am pretty certain that the following methods don’t work (at least not by themselves, and probably not even when complemented by other methods):

  • Student results – given the realities of university learning and teaching I don’t believe (a belief backed by the published research of others) that these are a good indication of student learning. Certainly not for comparison purposes between offerings of courses, especially if taken by different staff or across disciplines.
  • Level 1 smile sheets – i.e. the majority of what passes for learning and teaching “evaluation” at universities in Australia. Surveys of students at the end of courses or programs asking how they felt. This is broken.

Absence of wide acceptance

Now there may be methods to measure the quality of learning and teaching that do work. You may know of some, feel free to share them. But the point is that when it comes to the complexity and diversity inherent in the organisational practice of learning and teaching within higher education, there is no method that is broadly accepted.

The absence of this broad acceptance, and of the subsequent widespread, disciplined use, totally voids any validity the evaluation method may have. Unless senior management, middle management, coal-face practitioners and all other stakeholders see the value of the measure, it doesn’t matter whether it works.

Teaching is not rocket science

This lack of acceptance is not unexpected, as teaching is a wicked design problem, a point made by the quote from Prof Richard Elmore illustrated by the attached photo. Many of the defining characteristics of a wicked design problem make it very difficult to get wide acceptance of a solution. For example, from the Wikipedia page:

  • There is no definitive formulation of a wicked problem.
    i.e. everyone will have their own understanding of the problem, which implies their own beliefs about what the solution should be.
  • There is no immediate and no ultimate test of a solution to a wicked problem.
    “No ultimate test of a solution”, makes it somewhat hard to evaluate and measure.

Impacts on improving learning and teaching?

In the absence of any measure of quality learning and teaching, I can’t see how you can possibly implement any improvements to learning and teaching within a university in any meaningful way. If you can’t measure it and get broad acceptance of its value, then whatever you do is likely not to be accepted and will eventually be replaced.

Over the 19 years I’ve been involved with learning and teaching at Universities I’ve seen the negative ramifications of this again and again. Some examples include:

  • Resting on their laurels (a foundation built of sand).
    I’ve heard any number of academics proudly claim that they are brilliant teachers or that their courses are fantastic, only to take those courses as a student, hear from other students, or take over the courses and discover the reality. In the absence of any effective and accepted measures of teaching quality it’s possible to defend any practice, including doing nothing to improve.
  • Fearing change and reverting to past practice.
    People hate change. When there are different measures of value/outcome, it’s possible to ignore something good, especially when it is different. I’ve seen courses re-designed by talented teachers or instructional designers get thrown in the bin.
  • Task corruption.
    In some cases the “good design” hasn’t been trashed, it has been “corrupted” – as in task corruption. For example, an approach based on reflective journals has its questions modified so they don’t encourage deeper reflection (such questions are easier to come up with and easier to mark), and the steps necessary to support and encourage students to reflect are dropped. So the reflective journal is still “there”, but its use has been corrupted.

Disclaimer and request for insights and case studies

Perhaps all of the above is due to the limitations of my experience and knowledge. If you know better, please feel free to share.

What next?

This is not new. So why talk about it? Well, it is a problem that will have to be addressed in some way. So this post is an attempt to think about the problem, identify its outcomes and start me thinking about how/if it can be solved.

The weak second album (PhD update)

Last week I started a new PhD tradition – weekly updates. Traditionally the second album for a successful pop group is somewhat less than successful, and I feel some connection with that tradition this week. Hopefully it will be significant motivation for next week.

27th Feb to 6th Mar

Most of what I’ve done, and there feels to have been little enough of it, has been to continue reading literature and expanding the information for chapter 2 and the Ps Framework. As well as thesis content, that reading has sparked a number of blog posts. I have started similar work with chapter 3 – even crossed one of the “to-dos” off the list.

The highlight of the past week has been the featuring of the PhD presentation at ANU on the Slideshare home page. That has seen the number of views go from 105 to 1373 in a week.

I did strike one to-do off the list – getting a copy of Shirley’s DESRIST paper.

The next week

In coming weeks I’m trying to focus on writing in the thesis rather than blog posts. The posts have been useful to get the brain and words flowing, but time for some concrete outcomes.

Specific outcomes I’m aiming for (and added to the to-do list):

  • Complete a first draft of at least one Ps component section for chapter 2 – let’s start with “Past Experience”.
  • Complete reading and give feedback on Shirley’s DESRIST paper.
  • Finalise a structure with rough content for chapter 3.

The IRIS model of Technology Adoption – neat and incomplete?

George Siemens has a post introducing the IRIS model of technology adoption – image shown below.

IRIS model of technology adoption

I always start off having a vague disquiet about these types of models. I think the main reason is the point George makes at the start of the post

In many instances, it’s a matter of misunderstanding (determining the context from which different speakers are arguing)

i.e. some of my disquiet arises from bringing a different context/perspective to this. The following is my attempt to clearly identify the source of my vague sense of disquiet about this model.

At the moment, I think I’m going to identify three sources:

  1. It’s too neat.
  2. Misinterpretation based on different definitions.
  3. It misses the most important part.

It’s too neat

I’ve argued before that frameworks and their graphical representations tend to make the inherently messy, too neat.

One of the things I don’t like about frameworks is that they have (for very good reasons) to be tidy. This certainly helps understanding, a key purpose of frameworks, but it also can give the false impression of tidiness, of simplicity of a tame problem. My interest is currently in e-learning within universities, which I consider to be extremely messy. To me it is an example of a wicked problem.

Innovation in any reasonably complex social system is also a wicked problem.

Part of my disquiet about the neatness is how I’ve seen models, frameworks and taxonomies used within organisations. They’ve been used as a replacement for recognising, understanding and dealing with the complexities and messiness of the real situation. Some of the sentiment expressed by Jim Groom about leadership captures some of this. I’ve seen this problem lead too often to faddish and fashionable adoption of innovations. I tend to think much of the organisational implementation of e-learning is based on fads and fashions.

That said, a neat graphical representation is a good way to start understanding, but it’s not the end game.

Misinterpretation based on definitions

Perhaps getting back to the point about misunderstandings arising from context. In my current context fads and fashions are something I see regularly and am thinking about. Hence when I see “How do we duplicate it?” under the systematization component of the IRIS model, I immediately think of fads and fashions. I wonder if “How do we scale it?” or “How do we encourage widespread appropriate adoption?” might better capture George’s intent.

It doesn’t go far enough

The IRIS model, as it stands, appears (based on my interpretation) to suffer from the same problem as almost all models of this type. It focuses mostly on the development of an innovation and pays little or no attention to its long term use, adaptation and evolution. I hesitate to label it as such, but the IRIS model seems to have a very strong basis in teleological design (Jones et al, 2005; Jones and Muldoon, 2007). Again, this could be the impact of perspective and context. It could be me falling into the hole provided by Kaplan’s law of the instrument.

Going back to a major component of my information systems design theory for e-learning and a quote from an old paper (Jones, Lynch and Jamieson, 2003)

The world in which systems are developed is not static and entirely predictable – systems will need to be altered and maintained. Maintenance typically consumes about 40 to 80 percent of software costs and 60% of that maintenance cost is due to enhancement (Glass, 2001). That is, adding new capabilities that were not known of during the analysis phase. If maintenance is such a large part of system development the assumption of a period of low-cost maintenance to recoup costs from the analysis and development phases seems less valid. If an organization is operating in a continually changing context then a large investment in up front analysis is a poor investment as requirements change before the end of the analysis stage (Truex & Klein, 1999).
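A rough bit of back-of-the-envelope arithmetic (mine, not the paper’s) makes the scale of those figures concrete; the 60% maintenance share below is simply an assumed midpoint of Glass’s 40–80% range.

    # Back-of-the-envelope arithmetic on the Glass (2001) figures quoted above.
    # The 0.6 maintenance share is an assumed midpoint of the 40-80% range.
    maintenance_share = 0.6   # maintenance as a share of total software cost
    enhancement_share = 0.6   # share of maintenance that is enhancement
    enhancement_of_total = maintenance_share * enhancement_share
    print(f"Enhancement alone: {enhancement_of_total:.0%} of total cost")  # 36%

On those assumptions, adding capabilities that weren’t known during up-front analysis accounts for roughly a third of total cost, which is the point the quote is making.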

I think there is another step in the IRIS model – let’s call it Evolution. It’s a step that comes after Systematization and has a cyclical relationship with both Systematization and Innovation. No innovation survives in its original form once it starts to be used. A whole range of limitations and unexpected affordances of the innovation are discovered as it is used in new and complex settings. This is especially true if the innovation makes use of some form of protean technology that enables and even encourages the modification of the innovation by people within a given context.

Time for the hobby horse

The lack of attention paid to this Evolution phase is perhaps the aspect of university-based e-learning that annoys me most. The processes used and the products selected generally don’t pay sufficient attention to the need for, and benefits of, actively enabling this evolution. It’s the type of thinking that leads to systems and practices that aren’t moving with the times and requires organisations to enter into large-scale replacement projects – e.g. the selection of a new LMS.

This approach is based on the idea of “big up-front design”, as shown in the following model from Truex et al (1999). There is a period of analysis and design which is expensive. Then, to recoup costs, there is a period of stable use until the system is no longer suitable, and hence the need for replacement.

Big up front design

The major problem with this process becomes evident if you look at universities and their use of learning management systems and expand the timeline out to 10 years or so; you get the following.

The long term effects of big up front design

That is, because they don’t pay enough attention to Evolution and because they don’t treat their “Product” (i.e. the LMS) as protean, every 5 years or so they have to go through the expensive and onerous process of replacing their systems.
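To illustrate the shape of that argument, here is a small sketch comparing the two spending patterns. Every number in it is invented purely for illustration; none of it is data from any institution.

    # Purely illustrative comparison of two spending patterns over ten years.
    # Every figure here is invented; only the shape of the comparison matters.
    YEARS = 10

    def big_up_front(redesign_cost=1000, cycle_years=5, upkeep=50):
        """Pay a large redesign/replacement cost every cycle, minimal upkeep otherwise."""
        return sum(redesign_cost if year % cycle_years == 0 else upkeep
                   for year in range(YEARS))

    def continuous_evolution(yearly_evolution=150):
        """Spend a steady, smaller amount every year adapting the system."""
        return yearly_evolution * YEARS

    print("Repeated big up-front design:", big_up_front())        # 2400
    print("Continuous evolution:        ", continuous_evolution())  # 1500

The actual totals will obviously depend on real costs; the point is simply that the expensive analysis/design spike recurs with every replacement, while an evolutionary approach spreads smaller costs over time.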

References

Glass, R. (2001). Frequently Forgotten Fundamental Facts about Software Engineering. IEEE Software, 110-112.

Jones, D., T. Lynch and K. Jamieson (2003). Emergent development of web-based education. Proceedings of the Informing Science + IT Education Conference, Pori, Finland.

Jones, D., J. Luck, J. McConachie and P. A. Danaher (2005). The teleological brake on ICTs in open and distance learning. Proceedings of ODLAA’2005.

Jones, D. and N. Muldoon (2007). The teleological reason why ICTs limit choice for university learners and learning. In ICT: Providing Choices for Learners and Learning, Proceedings ASCILITE Singapore, pp. 450-459.

Truex, D., & Klein, B. (1999). Growing Systems in Emergent Organizations. Communications of the ACM, 42(8),
117-123.

Place – the industrial society's impact on schools – and universities?

The Ps Framework: a messy version

The intent of the Ps Framework is to help identify and lightly organise the different components, and the different perspectives of those components, that impact the practice of e-learning within universities. Arguably it could be applied to learning and teaching in general, but the focus of this work is e-learning, so I limit it to that.

The underlying/all encompassing Ps component is “place”. Place includes all the contextual and environmental factors, the place, within which e-learning will occur. Consideration of these factors is important and cannot be ignored because contextual factors shape the decision making process and may amplify, moderate or suppress certain factors (Jamieson, Hyland and Soosay, 2007). Higher education institutions are tied to their local context.

In the thesis, I’m currently planning to organise the multitude of factors within “Place” into the following groupings:

  • societal;
    A long list of common factors fits in here: globalisation, the massification of higher education, the knowledge society, increasing managerialism, reduced government funding, etc.
  • sector; and
    The characteristics and accepted norms of the higher education industry limit what is acceptable and what is not. They tend to define an orthodoxy.
  • institutional.
    The type of institution, how an institution (and its members) sees itself, and how this “vision” informs policy, practice and actual decision making all impact upon the practice of e-learning. For example, Simpson (2003) identifies two types of institutions:
    1. survivalist; and
      Where higher education is perceived as a competition where the survival of the fittest reigns.
    2. remedialist.
      Where the perspective of higher education results in a more inclusive culture.

The impacts of Place are not always that obvious. White (2006) makes the observation that organisational and societal characteristics also help create the emotional context in which learning and teaching occurs and frame how individuals and organisations respond.

It’s the way we do things around here

Then there are also the unspoken assumptions that “the way we do things around here” is the best and only way of “doing things around here”. This starts to overlap with another component of the Ps – “Past Experience” (the overlapping nature of these components is one reason the framework image is so messy – real life is messy).

The “way we do things around here” is historical. It’s influenced, sometimes directly, by the nature of the “Place” as it was some time ago.

The Industrial Revolution and the design of schools

In the following video, Alvin Toffler and his wife talk about how much of the “way we do things around here” in education, specifically schools, is directly influenced by the nature of the “Place” a couple of hundred years ago. Not surprisingly, he’s made a number of calls for the “way we do things around here” in education to be replaced by approaches more suitable to the current “Place”.

There have been changes in the “Place” in which universities operate that bring into serious question the validity of many of their existing practices.

Other changes in place impacting upon universities

From my perspective, a lot of the same influences the Tofflers discuss in the context of schools also affect universities. Perhaps not to the same extreme, but there is certainly a major influence, as many others have pointed out.

Some other changes in Place:

  • Scarcity to abundance.
    Information, technology and many other things that were once scarce are now abundant. Practices and approaches that worked when things were scarce don’t make sense any more.
  • Abundance to scarcity.
    In the current financial climate you could perhaps also add a change that goes the other way. However, in my “Place” – the Australian higher education sector – while that may be somewhat true, there are indications that it won’t be all that bad; perhaps nowhere near as bad as it may be for universities in the USA. This makes the point that universities in different societies are impacted differently.
  • Increasing change.
    The cliché applies: the only thing that is constant is change, and the change is accelerating. Approaches that assume a stable state are not appropriate.

…and there are others.

References

Jamieson, K., P. Hyland, et al. (2007). “An exploration of a proposed balanced decision model for the selection of enterprise resource planning systems.” International Journal of Integrated Supply Management 3(4): 345-363.

Simpson, O.P. (2003, November 5–7). Mature student retention – the case of the UK Open University. Paper presented at the International Student Retention Conference, Amsterdam.

White, N. (2006). “Tertiary education in the Noughties: the student perspective.” Higher Education Research & Development 25(3): 231-246.

The insanity of changing LMSes/VLEs

There is a definition of insanity that I’ve seen attributed to either Einstein or Benjamin Franklin:

“The definition of insanity is doing the same thing over and over again and expecting different results”

That quote, at least for me, has connections with one of more certain origin:

Those who cannot remember the past are condemned to repeat it.

which comes from George Santayana

The connection with LMSes and e-learning

There is an orthodoxy in e-learning at universities: implement a learning management system like Blackboard, Moodle, Sakai… Different names, slightly different features, but essentially the same type of tool – a big, integrated “one ring to rule them all”.

At least going by the literature I read and the experience I have, the success of LMSes has been far from good. Either the LMS is rarely used, or the use it is put to is at a very low level in terms of quality learning and teaching.

Given this is known, then why are many universities up to their second, third and even fourth learning management system? Why are they doing the same thing over and over again and expecting different results?

My answer

In the following presentation I give my answer, which is essentially

  • Implementation of e-learning is really complex and requires a mix of skills and knowledge.
  • It’s easier to adopt a fad – the LMS – than engage with the complexity.

To some extent, this might have some connection with the idea of task corruption.

Those who disagree with the definition of insanity

There is not universal agreement on the validity or source of the “Einstein” quote. George Sanger has a post titled “The definition of Insanity is, perhaps, using that quote”. Of course, I and a number of the folk commenting on the post disagree.

One of the most credible seeming points made against this quote is

It contradicts the notions of experimentation and practice.

Which, on reflection, doesn’t apply. For me at least, experimentation and practice mean that you will not be doing the same thing again and again. You will be trying slightly different things. Each time you practice you will be working to improve what you are doing, to learn from your mistakes.

The source of the insanity quote

Wikiquote attributes the insanity quote to Rita Mae Brown and her book “Sudden Death”.

Task corruption in teaching @ university – negative impact of Place?

Busy being a good boy working on the thesis, currently reading a collection of literature to flesh out Chapter 2, which draws on the Ps Framework to illustrate the current state of e-learning within universities. As the last post illustrates, the most recent paper I’m reading is White (2006).

The Ps Framework: a messy version

In her concluding remarks, White draws on the idea of task corruption suggested by Jan Chapman (1996) to describe some of the negative impacts of broader societal issues on learning and teaching at Universities. I’m attracted to this idea for two reasons:

  1. Increasingly I’ve thought that most learning and teaching at universities is of less than stellar quality, and “task corruption” provides an interesting (and, at first glance, appropriate) perspective on why.
  2. It reinforces the potentially negative impacts that “Place” (one of the components of the Ps Framework) can have on the practice of learning and teaching (again one of many).

In the following, I’m trying to explain what task corruption is and explore what impact it might have on learning and teaching, and particularly e-learning (topic of the thesis), within universities.

What is “task corruption”

Task corruption is where either an institution or an individual, consciously or unconsciously, adopts a process or an approach to a primary task that either avoids or destroys the task. Yesterday’s Dilbert cartoon – see below – is a great example.

Dilbert.com

White (2006) identifies two types of task corruption:

  1. amputation; and
    Where parts of the task are no longer performed or are ‘starved’ of attention in favour of other parts of the task. White (2006) uses the following quote from one of the students she talked with as an example:

    I personally believe that the way universities are run today is not necessarily in the best interests of students, but rather in securing numbers to generate a wealthy university and to establish research programs and post graduate programs rather than focusing on the majority of student who come to study in undergraduate degrees.

    I don’t think it would be too hard at some institutions to find a similar student quote in relation to full-fee paying overseas students at commercial campuses.

  2. simulation.
    Where the system or the individual is seen to comply with the task. i.e. they adopt the appearance of task engagement with the aim of avoiding real engagement.

    Perhaps an example of this is what happens in response to a rule at one organisation that states a course shall have no more than two assignments if there is an exam worth more than a certain percentage. If you check course profiles, this rule has essentially been followed. But scratch the surface and you find multi-part assignments, including sub-parts that have different due dates from other sub-parts of the same assignment.

Drawing further on Chapman, there is the observation that task corruption occurs most frequently with tasks where it is difficult to define or measure the quality of service (learning and teaching anyone?). Consequently, incentives (or punishments) are based on quantity rather than quality. This certainly resonates with personal experience and the almost exclusive end-of-term concern with failure rates, rather than with the actual quality of learning.

Sadly, the most recent discussion of this work (Chapman, 2003) is in a journal to which I don’t currently have access. But it is interesting enough to follow up.

Implications for universities

My interest is in how you improve learning and teaching at universities, and one in particular. What implications do these ideas have for that?

Perhaps the most important one I can think of at the moment is to increase awareness of task corruption.

My feeling is that task corruption is the dirty little secret of learning and teaching. Most people are aware it goes on and can probably point to some examples, but professional pride (and perhaps other reasons) will prevent them from admitting this in a broader sense. In my experience management, especially those at a senior level, have developed the ability to ignore task corruption.

A certain sense of abstraction at the senior management level is good, otherwise you’d never get anything done. But perhaps it’s been taken too far. Looking for and talking about the forms of task corruption around learning and teaching within a university could be a good first step in identifying those factors within the organisational and social setting that are contributing to the task corruption, and hopefully a first step in addressing those problems (and who says I can’t be wildly optimistic).

The problem isn’t limited to senior management. In an organisation that places emphasis on top-down, teleological design processes the problem is (I believe) also likely to occur within instructional design groups, information technology “support” groups etc.

References

Chapman, J. (1996). Hatred and corruption of task (Australian Institute of Social Analysis Working Paper No. 3). Carlton: AISA.

Chapman, J. (2003). Hatred and corruption of task. Organisational and Social Dynamics, 3(1): 40-60.

White, N. (2006). “Tertiary education in the Noughties: the student perspective.” Higher Education Research & Development 25(3): 231-246.
