Assembling the heterogeneous elements for (digital) learning

Month: March 2009

Implementing an institution-wide learning and teaching strategy: lessons in managing change

The following is a summary of, and some reflection on, Newton (2003). I’m still trying to decide whether, as I read literature associated with the PhD, I should take the time to produce these summaries. I wonder if, instead, I should concentrate on writing the thesis…

Essentially it illustrates that academics have a different perspective on strategy than management – the SNAFU principle, perhaps. It suggests the need for a better understanding of change and policy implementation. It reinforces much of what I think and offers some nice references, but doesn’t necessarily indicate an appreciation of whether ateleological approaches might be more appropriate.


Examines an attempt to implement a “learning and teaching strategy” within the UK HE context. Describes the background through the 90s – very corporate, quality based. It is an institutional case study of how strategy and policy are “responded to” by academic managers, academic staff and students. Identifies a range of factors that can undermine successful policy implementation, and offers lessons which can inform management, in particular the management of change.

Of course, there appears to be an immediate assumption that top-down/strategic change is appropriate. Given my previous post, I’d suggest the possibility that prescribing something may not be the best way to go.


This is a follow-on to a previous article (Newton, 1999) about the same institution. The earlier article has more detail on the setting, approach etc. The method/approach is described as a “systematic experiment in reflective practice taking the form of an extended conversation with a developing organisational situation”. A “range of methods and sources has been used to provide a basis for stabilizing the views of key actors and groups, including the results of a questionnaire survey and data from a series of typed, semi-structured interviews”.

Does mention following the precepts of an ‘appreciative’ approach (Matza, 1969) – is this linked with appreciative inquiry, about which I have a sense of disquiet and of which Dave Snowden is quite scathing?

Pressures on higher education institutions

Launching into the, by now, fairly traditional setting of the higher education sector – massive change, competition for students, impact of ICTs forcing change in the delivery of education, low funding, demands for efficiency gains…

Some nice quotes/references

What hasn’t changed is the perception that higher education is beset by what one Vice-Chancellor described as ‘grotesque turbulence’ (Webb, 1994, p. 43). All who work in higher education today continue to have to deal with the ‘complex interaction between the planned and the serendipitous’ (Webb, 1994, p. 43).


The growth of external and internal regulation and monitoring became associated with academic deprofessionalisation. This increased accountability, expressed in various areas of policy and strategy, has been characterised by the rise of ‘audit culture’ (Power, 1994), and by what Shore and Wright (2000, p. 57) have termed the ‘rapid and relentless spread of coercive technologies into higher education’.

Not to mention the rise of problems associated with this sort of prescription

By the end of the 1990s, many academics had grown resistant to the ‘intrusion’ associated with the growth of the ‘quality industry’ in UK higher education.

Positions the gap for this research: in the post-Dearing era in the UK there is some caution amongst institutions. Suggests that the profound impact on, and transformation of, academics and teaching has gone under-researched and under-theorised.

Development of a L&T strategy – institutional case study

Sets out the context for this case study,

Institution: a “non-elite” institution, teaching-led rather than research-led, which expanded during the 90s, was underfunded, and experienced organisational turbulence and change, leading to unresolved tensions that undermine attempts at improvement; it is yet to resolve the challenges raised by external changes.

External regulatory context. Calls for L&T strategies only arose in 1997 with the Dearing report, resulting in two agencies – the Institute for Learning and Teaching and the Quality Assurance Agency. References and descriptions for this “new managerialism”. The Welsh funding agency pushed for strategies. Most institutions had underdeveloped strategies – few the product of extensive or open consultation.

Institutional policy context. Management saw the need for an L&T strategy before the external requirement. Version 1.0 was implemented by 1997. Implementation issues led to version 2.0 being developed during 1999/2000.

Version 1.0 – perspectives of stakeholders

Version 1.0 included a wide-ranging general policy document with specific recommendations, targets and a costed implementation schedule.

Senior management. Raised the profile of L&T and assessment issues, generated debate and critical comment. Investment in staff development sessions – with high levels of involvement. Production of web-based materials involved only a small number of staff. But overall quality improved, with a measurable extension of teaching packs and directed learning materials.

Staff. Front-line academics are covered later. Academic managers thought success was not easily visible. Implementation was patchy. Some deadlines/targets were not met, with a subsequent decrease in perceived value. Some saw the aims as idealistic, some as too techno-centric; there was no definition of “good teaching and learning”. Most innovators were enthusiasts who may have innovated regardless of the strategy. Insufficient ownership and a lack of bottom-up commitment.

Students. Student focus groups were not included. Some conclusions about the “thin veneer of student-centredness”.

Academics and implementation of strategy

Front-line academics and the policy process. The above suggests the policy process is contested; those at the coal-face adapt and shape policy. Various references about this.

Policy reception – factors influencing implementation. From quantitative data and observation arise five concepts or barriers to implementation:

  1. Loss of ‘front-line’ academics’ autonomy.
    Corporatisation increasing institutional requirements/impingement on teaching. The need to demonstrate compliance takes emphasis away from teaching and innovation.
  2. Policy and strategy overload.
    This one certainly resonates with my local context at the moment. The shifting, growing nature of policy and requirements – “the goalposts keep moving”. Uncertainty over expectation.
  3. Bureaucratisation of teaching.
    The rise of “task corruption”. More important to fill in the forms and plans, than actually be a good teacher. Some good quotes here.
  4. Local practices and local culture.
    Seen both negatively and as a source of information: negatively, through “game playing”; positively, as illustrating the weaknesses of top-down policy.
  5. The ‘shift from teaching to learning’.
    This forms part of the prescription embedded within the strategy. Quotes from staff about students wanting to be taught. Disconnect from reality, limited impact on staff.

Lessons learned

  • Centralised consultation processes lead to a lack of ownership and of the effort required to support implementation.

    Indeed, as has been argued earlier, strategies do not implement themselves or lead automatically to improvement—even where there may be consensus amongst academic managers and front-line academics regarding the ‘desirability’ of a strategy. Even where general principles are agreed, implementation has to be localised and quality enhancement planned for. As Gibbs argues, ‘implementing learning and teaching strategies requires more than a statement of policy’ (HEFCE, 1999b, p. 4).

  • Implementation must engage with the tensions that arise.
    Implementation reveals tensions as things change, knowledge increases etc. These need to be responded to.
  • There is no blue print for an L&T strategy.
  • There is a need for a greater degree of sophistication in institutional thinking about strategic planning and policy implementation.


Suggests the ethnographic approach is useful in highlighting certain perspectives – I agree. But there’s also the issue of a single person doing the interpretation.

A strategy driven mostly by external needs is less likely to succeed.

The nature of universities – characterised by turbulence and uncertainty – requires a better understanding of change, wariness of planned-change perspectives, and greater sensitivity to the diverse views and practices of the academic community. Policy needs constant evaluation…


Newton, J. (2003). “Implementing an institution-wide learning and teaching strategy: lessons in managing change.” Studies in Higher Education 28(4): 427-441.

Prescription, adaptation and failure around improving university teaching

The following post and its content have been shaped by (at least) three separate influences:

  1. My on-going attempt to establish some ways of thinking about how you effectively support the improvement of teaching within universities – currently going under the label of “reflective alignment”
  2. A post by Damien Clark that attempts to integrate some of my ramblings into his own thoughts.
  3. The article by Knight and Trowler (2000) that I’m currently reading entitled “Department-level cultures and the improvement of learning and teaching”.

Lightning McQueen

I’ve found the Knight and Trowler (2000) article particularly good because it expresses and explains quite effectively a number of points that I believe currently make most institutional attempts to improve teaching less than successful (yes, there’s a good chance that confirmation bias plays a significant role here, but then I think I’m right 😉 ). In this post, I’m hoping/planning to focus on the following points:

  • Prescription – why most institutional approaches to improving teaching generally rely on prescription and why this is always destined to fail.
  • Adaptation – how any “innovation” introduced into a social setting, especially one like a university and the practice of teaching, will be adapted by the participants, both negatively and positively. Importantly, a suggestion that institutional leaders need to forget about proscribing the negative effects and instead focus on encouraging the positive; not to mention the need to engage more effectively with context and ignore “best” practice.
  • Improvement is a journey, not a blueprint – where I’ll try and outline the foundations of an alternative approach to improving teaching.

In the last section, I’ll also explain why I’ve used a photo of a Pixar movie character at the start of this post.


Damien writes in his post

It occurs to me that prescribing any particular learning theory (such as constructive alignment) is not the answer

Absolutely. This is the problem I have with most of what is practised around improving teaching at universities: it seeks to make prescriptions. This is one example of what I label, within the reflective alignment idea, as “level 2” knowledge, which is defined as:

  1. What the management does.
    This is the horribly simplistic approach taken by most managers and typically takes the form of fads, i.e. where they think X (where X might be generic skills, quality assurance, problem-based learning or even, if they are really silly, a new bit of technology) will make all the difference and proceed to take on the heroic task of making sure everyone is doing X. The task is heroic because it usually involves a large project and radical change. It requires the leadership to be “leaders”: to wield power, to re-organise, i.e. complex change that is destined to fail.

When applying “level 2” knowledge about improving teaching, it is typical for a small group of folk to go away, identify (based on their expertise and perspectives) what the solution is, and then prescribe it for everyone else – where everyone might be the program, the department or the institution. You can see this quite often when there are headlines like “All students will complete at least one online course”, “All courses in our medical program use problem-based learning”, “All courses will have an online presence”, or even worse “All courses will have an online presence that consists of A, B, C and E with an option of F”.

Paul Ramsden – an example of “level 2” knowledge

One of the interesting aspects of the Knight and Trowler (2000) paper is that they offer a criticism of Paul Ramsden’s work. This is the first criticism of that work I’ve heard (which may say more about the breadth and depth of my reading) and one that resonates strongly with the point I’m trying to make here. It also appears to criticise the idea of “transformational leadership”, of which I’m also not a fan – two birds, one stone, perhaps.

Knight and Trowler (2000) argue that Ramsden’s (1998) suggestions for improving teaching illustrate the perspective of a leader who prescribes a solution with little focus on how it will be received by the academics required to adopt it. They give an example to illustrate this:

Ramsden suggests that departmental leaders establish a student liaison forum where students can meet staff over lunch to canvass ideas and creative options for better teaching and learning. Such an event would be a desirable effect, rather than an achievable cause, of departmental change. In practice, in the departments most in need of change such a proposal would be met with a mixture of resistance, avoidance, coping or reconstructing strategies related to staff and students’ interpretation and reception of such an idea and its underpinning assumptions. The same is true of most of the rest of Ramsden’s proposals, such as forming groups of staff interested in working through key texts on teaching during their lunchtimes or encouraging peer observation of teaching by being the first to be observed.

This resonates strongly with me and my experience. Just last year I saw an attempt at “forming groups of staff” fail after a couple of meetings. And I see this all the time with the “prescriptions” that are rolled out by institutions.

The prescription approach ignores the findings from work on workarounds (Ferneley and Sobreperez, 2006), shadow systems (Jones et al, 2004) and task corruption. It ignores the nature of academics and the teaching process.

Most importantly and pragmatically, it does NOT work. Knight and Trowler (2000)

Likewise, attempts to improve teaching by coercion run the risk of producing compliance cultures, in which there is ‘change without change’, while simultaneously compounding negative feelings about academic work

Of course, there’s a neat research project in finding empirical evidence to back that claim up. It might go something like this:

  • Take a look at all of the attempts to improve teaching at an institution (or two, or three…) over a certain time period.
  • Categorise those approaches based on the level of prescription.
    e.g. how far removed from the coal-face academics was the prescription decision made? What type of participation did coal-face academics have in preparing the prescription? Were they “consulted” (and then ignored) about what they thought of the idea? Were they involved heavily from the start? How different is the prescription from current practice?
  • Determine how successful those prescriptions have been.
    The first criterion would be: “is it still being used?”. The second criterion could be: “how is it being used?”, i.e. finding out whether or not academics are working around the prescription. Lastly: “what impact has the prescription had?”.

Adaptation – why prescription fails

Why do I think this approach fails? Well, there are the empirical results arising from my observations: observations of prescription after prescription failing, either through lack of use or through task corruption. There are, however, also theoretical reasons and/or beliefs about the nature of teaching, academics, universities and how to effectively enable change. The following covers one particular area: the importance and inevitability of adaptation.

The importance and ignorance of place

The Ps Framework: a messy version

In the Ps Framework I have identified “Place” as the environment in which it all takes place. It is the foundation. The nature of the “Place” (or context) in which teaching takes place is an essential influence on what is possible and what happens. Importantly, there is also the idea that “Place” is unique. The institution I work for is different from others. The departmental culture you belong to is different from the one I belong to.

Knight and Trowler (2000) suggest

Yet how this is done will vary from context to context. Case studies of actual innovations such as the Rand Change Agent Study (1974-78) have confirmed that the need to achieve mutual adaptation of the innovation and the context is one important component of successful innovations

There are many related perspectives, including Gonzalez (2009)

Factors arising from the context within which the staff member is teaching also proved to influence the approach finally adopted

Not to mention the Trigwell framework (2001) I’ve used repeatedly.

So what has this got to do with the failure of the prescription approach to improving teaching? Knight and Trowler (2000) quote Fullan:

… one of the basic reasons why planning fails is that the planners or decision makers of change are unaware of the situations that potential implementers are facing. They introduce changes without providing a means to identify and confront the situational constraints and without attempting to understand the values, ideas and experiences of those who are essential for implementing any changes. (Fullan, 1991, p. 96)

Academics are knowledge workers

How do you think academics react when a prescription is made that illustrates little or no understanding of the constraints within which they operate? Let’s take a little test of interactivity and have a poll. Go on, interact.


Perhaps it’s no surprise which of the poll options I believe to be somewhat unlikely. One reason is that I believe academics are knowledge workers. As knowledge workers, academics have considerable autonomy over how they perform tasks and often can and do resist the imposition of new technology and changes to routine. This links to, and is informed by, Drucker’s view of knowledge workers: “Knowledge workers own the means of production. It is the knowledge between their ears. And it is a totally portable and enormous capital asset.”

Knight and Trowler (2000) suggest

Creating an environment in which lecturers feel that they have control over their teaching, that teaching is valued and that they have room to take chances, has been found to assist in the move towards a student-focused approach which leads them towards deep learning and significant conceptual change.

Senge (1999) offers a view on the impacts of a prescriptive approach to change

Top driven change…do(es) not reduce fear and distrust, nor unleash imagination and creativity, nor enhance the quality of thinking in the organization

Inevitability of adaptation

Arising from the view of academics as knowledge workers, the importance of context and, more generally, the social shaping of technology literature, it is inevitable that any innovation or prescription will be adapted as it is adopted. Responses to change in academic contexts always produce unintended results (Meister-Scheytt and Scheytt, 2005); outcomes are unpredictable and fuzzy (Knight and Trowler, 2000). In part this is because “human agency means that there is choice and that actions can be taken to maximise work satisfaction in the face of structural changes” (Knight and Trowler, 2000).

People, particularly academics when it comes to teaching, will modify how a prescription operates. Partly this is an attempt to “handle” the prescription, but also, importantly, it happens because introducing a change into a context generates new experiences and new insight that will shape the system, its culture and expectations – perhaps for good and perhaps for bad.

Improvement is a journey, not a blueprint

So what’s the solution? Knight and Trowler (2000)

We suggest that learning organisations require learning managers: managers who are reflective practitioners and who apply their analytical skills to the important activity systems with which they are engaged, and develop with other staff appropriate, contextualised, strategies for change. Fullan (1993) reminds us that change is a journey, not a blueprint. Journeys are usually engaged in with a specific destination in mind, but the one reached may be significantly different from that originally envisaged and there are usually as many reasons for going as there are travellers.

Making great time, rather than having a great time

My eldest son has a growing fascination, along with many kids his age, with animation, and in particular movies from Pixar. Over the weekend, after much badgering, he received a copy of Cars. In that movie one of the characters has the line

Cars didn’t drive on it to make great time. They drove on it to have a great time.

The prescription approach is an example of teleological design. Teleological design places an emphasis on the destination, not the journey. Ateleological design reverses that.

I’m suggesting that improving teaching requires a much more ateleological approach. In attempting to explain the difference my co-authors (Jones, Luck et al, 2005) and I came up with the following

An analogy involving how to plan an overseas trip can provide a more concrete example of the differences between teleological and ateleological design. The extreme teleological approach to such a trip involves taking a package tour. Such a tour has a fixed, upfront plan designed by a group of experts, with little or no knowledge of the individual traveller, to appeal to a broad cross section of people. The extreme ateleological approach involves the traveller not having a fixed plan. Instead the traveller combines deep knowledge of her personal interests with a growing contextual knowledge of the destination to make unique choices that best suit her preferences and quickly modify her journey in response to unexpected events.

Knight and Trowler (2000) combine Weick and Fullan to arrive at

As Weick (1995) has observed in his analysis of organisational sense-making, aims are often elucidated after action, which suggests that the progress of change is more likely to be successful when it follows the path of ‘ready, fire, aim’ rather than the more usual ‘ready, aim, fire’ (Fullan, 1993, p. 31).

It’s more than that

So, do you just let each individual academic embark on their own back-packer journey of teaching, doing what they want, when they want? No, that’s not what I’m arguing. Even with the back-packer analogy above, being an effective back-packer requires, or is at least improved by, an infrastructure that:

  • Improves/expands the traveler’s knowledge of the potential places to visit.
  • Provides the necessary resources for the traveler to reach those places.

At this stage, I’m going to stop trying to extend this into a description of a solution. I’ve hit writer’s block and this post is already too long. I’ll pick this up later.

Departmental leadership?

Knight and Trowler (2000) argue that

cultural change for the better can occur when the focus of leadership attention is at the level of the natural activity system of universities: the department or a subunit of it. However, cultural change has to be collaborative and is therefore unpredictable. Managers work in rather than on cultural contexts and their most important skills revolve around perceptiveness towards and analysis of these contexts

They build on this to suggest that middle managers – department heads – and how they lead are an important contributor to the quality of teaching; in particular, their use of approaches that “support the backpacker”.

I’m not convinced that the department-based approach is all that effective. I’m not sure there is an appropriate level of diversity within such groups to ensure a broad enough selection of destinations for travel.

I’m also not convinced that Knight and Trowler’s (2000) emphasis on leadership, especially that of middle management, is the full story. It continues the emphasis on Level 2 knowledge about improving learning and teaching (an emphasis on what management does) and also assumes that what a single individual does (the leader) is the complete story.

For me, the entire system, its processes and policies, has to be focused on what the teacher does: on providing the infrastructure that gives the teacher (at least) the two things introduced in the last section.


Ferneley, E. and P. Sobreperez (2006). “Resist, comply or workaround? An examination of different facets of user engagement with information systems.” European Journal of Information Systems 15(4): 345-356.

Fullan, M. (1991). The New Meaning of Educational Change. London, Cassell.

Gonzalez, C. (2009). “Conceptions of, and approaches to, teaching online: a study of lecturers teaching postgraduate distance courses.” Higher Education 57(3): 299-314

Jones, D., S. Behrens, et al. (2004). The rise and fall of a shadow system: Lessons for enterprise system implementation. Managing New Wave Information Systems: Enterprise, Government and Society, Proceedings of the 15th Australasian Conference on Information Systems, Hobart, Tasmania.

Jones, D., J. Luck, et al. (2005). The teleological brake on ICTs in open and distance learning. Conference of the Open and Distance Learning Association of Australia’2005, Adelaide.

Knight, P. and P. Trowler (2000). “Department-level Cultures and the Improvement of Learning and Teaching.” Studies in Higher Education 25(1): 69-83.

Meister-Scheytt, C. and T. Scheytt (2005). “The complexity of change in universities.” Higher Education Quarterly 59(1): 76-99.

Ramsden, P. (1998). Learning to Lead in Higher Education. London, Routledge.

Trigwell, K. (2001). “Judging university teaching.” The International Journal for Academic Development 6(1): 65-73.

PhD Update #5 – a new low

Well, this week has been the worst yet in terms of progress on the PhD, at least of the last five weeks of updates. Most of the problem has been work related: issues and events that have taken away the time, motivation and peace of mind necessary to effectively engage with PhD work.

On the upside, today’s been pretty effective, perhaps the best for weeks. Hopefully this trend can continue.

What I’ve done

Last week I wanted to try and

  • Complete at least 2 sections of the Ps Framework for Chapter 2 – probably “Past Experience” and “People”. If I’m motivated, perhaps add “Product”.
    At best I’ve made some small movement on “Past Experience” and a fairly big step with part of “Product”.
  • Clean up a lot of the literature I’ve found in the last week.
    Have only done a modicum of this.

In terms of PhD related blog posts, this week has produced:

  • one ring to rule them all;
    Fairly good start on one section of the “Product” part of chapter 2, some good references and points starting to be developed. (at least I’m happy with them).
  • Myth of rationality;
    Some good components of what will go into the “Process” part of the Ps Framework. Including some literature to suggest that the supposedly rational process is far from it.
  • Poor craftsman;
    More related to “Past experience” and “People” to do with the technology not improving L&T.
  • Making the LMS mythic;
    More criticisms of the LMS approach to e-learning, drawing on some literature and Postman’s ideas about 5 things to know about technological change.
  • Postman’s 5 things about technology change; and
    Came across a speech by Postman in which he outlines 5 things to know about technology change. Definite resonances/application in the Ps Framework.
  • Cognition – we’re not rational.
    Early steps, sparked by another post, on developing some ideas for the “People” component of the Ps Framework.

What to do next week?

Essentially finish what I said I would do last week and do more of the Ps Framework. Don’t let current events get me down.

"One ring to rule them all": Limitations and implications of the LMS/VLE product model

As part of the PhD I’m developing the Ps Framework as a theory for analysing/understanding the factors that impact the organisational implementation of e-learning. Essentially, I argue that the current institutional practice of e-learning within universities demonstrates an orthodoxy. Further, I argue that this orthodoxy has a number of flaws that limit, some significantly, potential outcomes.

In this post, and a few following, I’m going to develop a description of what I see as the orthodoxy associated with the “Product” component of the Ps Framework, the flaws associated with that orthodoxy, and the impacts it has on the institutional implementation of e-learning. The “Product” component of the Ps Framework is concerned with:

What system has been chosen or designed to implement e-learning? Here “system” is used in the broadest possible definition, including the hardware, software and support roles.

One ring to rule them all

The emphasis in this post is on the “one ring to rule them all” approach characteristic of an enterprise system like a learning management system (LMS)/virtual learning environment (VLE).

The product is almost always a LMS

It is broadly accepted that the almost universal response to e-learning within universities has been the selection of a Learning Management System (LMS), aka Virtual Learning Environment (VLE) or Course Management System (CMS). By 2005 there was an almost universal adoption of just two commercial LMSs (Coates, James, & Baldwin, 2005). The 2003 Campus Computing project reports that more than 80% of United States universities and colleges utilize an LMS (Morgan 2003). Elgort (2005) cites work indicating that 86% of 102 UK universities were using an LMS, and that all 18 surveyed New Zealand based institutions used an LMS. Smissen and Sims (2002) found that 34 of the 37 Australian universities were using one of two LMSs – Blackboard or WebCT. Where an LMS has not already been adopted, Salmon (2005) suggests that almost every university is planning to make use of one. Indeed, the speed with which the LMS strategy has spread through universities is surprising (West, Waddoups, & Graham, 2006).

The trend in recent years has been a move away from commercial systems towards open or community source systems such as Moodle or Sakai. Whether your LMS is open source or commercial doesn’t change the underlying product model: all LMSs are based on the enterprise, or “one ring to rule them all”, approach.

In terms of the limitations this brings to e-learning and its implications for practice, there is no significant difference between open source and commercial LMSes.

What is an LMS?

LMSes are software systems that are specifically designed and marketed to educational institutions to support teaching and learning and that typically provide tools for communication, student assessment, presentation of study material and organisation of student activities. A university’s LMS forms the academic system equivalent of enterprise resource planning (ERP) systems in terms of pedagogical impact and institutional resource consumption (Morgan, 2003).

There are more similarities than differences between individual learning management systems. Each LMS consists of a standard set of tools for communication, assessment, information distribution and management. Beyond these standard features, LMSs distinguish themselves through micro-detailed features (Black, Beck, et al. 2007).

An LMS is an integrated system: a unified collection of different services or tools, produced by a single vendor, that can be managed through a single interface. While the interface, abstractions and some of the tools will differ between LMSs, the underlying model of an integrated system remains the same.
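The integrated-system model can be sketched in a few lines of code. This is my own toy illustration, not any LMS’s actual API: the class, the vendor names and the tool list are all hypothetical, chosen only to show the structural point that every tool comes from the one vendor and is reached through the one interface.

```python
# A toy sketch of the "one ring to rule them all" product model.
# Hypothetical names throughout; no real LMS exposes this interface.

class IntegratedLMS:
    """One vendor supplies a fixed, tightly coupled set of tools."""

    # The standard feature set noted above: the same core tools appear
    # in every LMS, differing only in micro-detail.
    STANDARD_TOOLS = ("quiz", "forum", "gradebook", "calendar")

    def __init__(self, vendor):
        self.vendor = vendor
        self.tools = {name: f"{vendor} {name} tool" for name in self.STANDARD_TOOLS}

    def use(self, tool_name):
        # All access is mediated by the single integrated interface;
        # there is no supported way to swap in an outside tool.
        if tool_name not in self.tools:
            raise KeyError(f"{self.vendor} does not ship a '{tool_name}' tool")
        return self.tools[tool_name]


# Moving between LMSs changes the colour and shape of the tools,
# but not the underlying integrated model: the feature sets match.
blackboard = IntegratedLMS("Blackboard")
moodle = IntegratedLMS("Moodle")
assert set(blackboard.tools) == set(moodle.tools)
```

The sketch deliberately offers no way to add or replace a tool from outside the vendor’s bundle; that closed-world assumption is the product model being described here.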

Based on experience (I’ve been trying to explain this since around 2000), the points I’m trying to make are somewhat easier to make if the argument is accompanied by graphical representations. So let’s start with the next two images. These are intended to represent, at a very high level, two different LMS. Each has a different colour and a slightly different shape; however, there is some commonality in the structure. Both are a collection of services (the slightly different sized/shaped squares) and both have an overall shape, which is meant to represent the functionality perceived by the organisation and its users. There is some commonality in shape between the two systems, but moving from one to another does involve some negotiation, translation and change.

Abstraction of an LMS

Abstraction of an LMS

LMS design largely focuses on satisfying certain functional requirements, such as the creation and distribution of on-line learning material and the communication and collaboration between the various actors (Avgeriou, Retalis et al. 2003). There is a list of common functions that are now expected of an LMS (quiz tool, discussion forum, calendar, collaborative work space, grade book etc); consequently there are more similarities than differences between different LMS (Black, Beck et al, 2007). The only real difference between LMS lies in their marketing approaches (Carriere, Challborn and Moore, 2005).

The following image provides an expanded view of one LMS, where the individual features have been identified. This will be revisited later.

Expanded LMS abstraction

What’s the product model of an LMS? – one ring to rule them all

The following sentence was used above: a university’s LMS forms the academic system equivalent of enterprise resource planning (ERP) systems in terms of pedagogical impact and institutional resource consumption (Morgan, 2003). The LMS product model is essentially equivalent to that of the ERP system. It is an integrated system, i.e. it contains lots of different modules which are all tightly integrated, generally because they are provided by a single vendor (or, in the case of open source, by a single community). The ERP model has become “the dominant strategic platform for supporting enterprise-wide business processes” and has generally been implemented to overcome issues arising from custom development (Light, Holland et al, 2001).

What are the limitations of this model?

The focus of this post is not to talk about the limitations and implications of the design decisions that have gone into most LMS, such as the decision to use the “course” as the major approach for organising content and interactions. This particular topic has received broad coverage within the e-learning literature. For example, and in an obvious case of self-citation, Beer and Jones (2008) provide a brief discussion of these limitations and pointers to relevant literature. I will cover this topic in the thesis, but not in this post.

The focus here is on the limitations associated with the “product model” – the one ring to rule them all approach. The list of limitations of this approach suggested here includes:

  • You can’t change the system.
  • The organisation and its people are forced to adapt to the system.
  • You are limited to a single vendor or community.

These limitations have some significant ramifications for the practice of e-learning within a university. These limitations:

  • increase risk;
  • reduce quality;
  • increase complexity of implementation and support;
  • reduce flexibility and competitiveness; and
  • significantly constrain innovation and differentiation.

Can’t change the system

The nature of enterprise systems and the main reasons for adopting them mean that they cannot be changed. Modifying enterprise systems will: increase development time, increase required staff resources during and after implementation, reduce the capability to upgrade the system by making it more difficult, and go against the reasons for adopting an enterprise system in the first place (Light, Holland and Wills, 2001). Modifying such a complex system leads to expensive maintenance requirements (Dodds, 2007).

Hence the phrase “vanilla implementation”. When you implement an enterprise system, you cannot change it. You must/should implement it as is. This is supposed to be a good thing as enterprise systems are expected to implement “best practice”.

This creates a mismatch between the LMS and the nature of the context and activity it is meant to be supporting. e-learning or learning and teaching more generally takes place within a context that is rapidly changing in terms of technology, understandings of how to use the technology and a broad array of other societal trends and influences.

Let’s take the simplest of these, technology. Both the hardware and the software technologies underlying online education are undergoing a continuing process of change and growth (Huynh, Umesh, & Valacich, 2003). Any frozen definition of ‘best’ technology is likely to be temporary (Haywood, 2002). Increasing consumer technological sophistication adds to demand for sustained technological and pedagogical innovations (Huynh et al., 2003).

With an enterprise system the institution can’t change the system; it has to rely on the vendor or community to change it. This might work for broader societal trends, but it certainly doesn’t work for organisational requirements: commercial vendors aren’t interested in the unique customisation needs of an individual organisational client.

Forced to adapt to the system

Over the last 10 to 15 years many universities have implemented enterprise systems for a range of tasks. All too often these systems make the organisation conform to the system; the system forces teaching and research to conform to the IT system (Duderstadt, Atkins et al, 2002). Such systems lack the end-user’s view of business processes and require the institution to modify its practices to accommodate the system (Dodds, 2007). Theoretically, not doing so forgoes an opportunity for positive change, as such systems are meant to embody best practice (Dodds, 2007).

However, the ‘best practice’ view embodied in the LMS may not be a match for the institution’s interests (Jones, 2004). Such systems impose their own logic on a company’s strategy, structure and culture and push the company towards generic processes even when customised processes may be a source of competitive advantage (Davenport, 1998). Technology is not, of itself, liberating or empowering, but serves the goals of those who guide its design and use (Lian, 2000). The tools themselves are never value-neutral but are replete with values and potentialities which may cause unexpected responses (Westera, 2004).

For an LMS, this implies some level of standardisation of teaching and learning processes towards those supported by the LMS. As two of the most highly personalised sets of processes within institutions of higher education, any attempt at standardising teaching and learning is likely to be radical, painful and problematic (Morgan, 2003). It will increase the difficulty of implementation and most likely cause resentment amongst academics and students due to the imposition of a change of uncertain value.

The standardisation of, and the values embedded in, CMS design can create a number of operational conditions for the client institution that push teaching and learning in a particular direction. For example, most CMS vendors assume a self-paced learner and so these systems are not rich in interaction or collaboration tools (Bonk, 2002) beyond simple chat rooms, email and discussion forums. CMSs are by nature structured and have limited capability for customisation (Morgan, 2003). A choice for enterprise CMS made for administrative reasons can result in students having access to different pools of electronic resources, thus affecting the quality of their educational experiences (Dutton & Loader, 2002).

Let’s illustrate this graphically. I’m going to reuse these graphical representations in later posts as I develop and suggest a much better alternative.

First, let’s look at the organisation in its pre-enterprise system state. For these purposes I’ll suggest that there are two main components:

  • the social system; and
    This is the collection of practices, expectations and beliefs about how learning and teaching and related activities are performed. It can range from how big a course is, what a course is called (my institution used to call a “course” a “unit”), how teaching responsibilities are allocated, what happens at the start of term, teaching preferences etc.
  • the infrastructure.
    The hardware, software and other support systems and processes that support the social system and how it works.


The above representation is very optimistic. It assumes that the infrastructure and the social system integrate very well. There are few gaps between the infrastructure and the social system. The reality I believe is much worse, there are usually significant gaps that require people within the social system to take on busy work to overcome the limitations.

If the institution is looking at adopting an LMS, then chances are the gaps (or at least the gaps as perceived by some people) are quite large and require the implementation of an LMS to fill them.

Post LMS implementation the representation looks something like the following.

Post LMS

Note the insertion of one of the LMS representations from above. Also note that the LMS has overridden aspects of the structure of the social system. This represents the necessity for the social system to be changed in order to fit with the unchanging nature of the LMS.

I should point out that the similar over-riding of infrastructure by the LMS is probably not 100% accurate. There may be a bit of over-riding, where the infrastructure is changed to suit the LMS. For example, I know of one institution that had to buy completely new server hardware because the new LMS didn’t like the existing hardware. However, in many cases the gap between existing infrastructure and the LMS will be bridged by “middleware”: an odds-and-sods collection of technical approaches used to manipulate the structure/output of the infrastructure to suit the LMS.
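To illustrate what this kind of “middleware” glue often looks like in practice, here is a hypothetical Python sketch. The field names and formats are entirely invented for illustration; real integrations are usually messier, but the shape of the work (reshaping records from an existing system into whatever a new LMS import expects) is much the same.

```python
# Hypothetical middleware sketch: reshape the output of an existing
# student-records system into the format an LMS import expects.
# All field names and formats here are invented for illustration.

def adapt_enrolment(record):
    """Translate one student-records row into an LMS-style dict."""
    surname, given = record["name"].split(",")
    return {
        "username": record["student_id"].lower(),
        "firstname": given.strip(),
        "lastname": surname.strip(),
        # "unit" in the local social system, "course" in the LMS
        "course": record["unit_code"],
    }

legacy = {"student_id": "S0123456", "name": "Jones, David",
          "unit_code": "COIS20025"}
print(adapt_enrolment(legacy))
```

The adapter bends the infrastructure’s output towards the LMS; nothing bends the LMS towards the infrastructure.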

Important: This type of “modification” is deemed to be appropriate and efficient. However, any “modification” that helps bridge the gap between the LMS and the social system, is deemed to be “bad”.

Single vendor or community

Any changes that are made to the enterprise system must be made by the vendor (commercial) or community (open source). As the previous two limitations point out, the institution cannot change the system. They must rely on the vendor/community to do so.

Motivations. In many cases this can cause a problem because the motivations of the vendor/community (more so the vendor) don’t always match those of the organisation. For example, Avgeriou, Retalis et al (2003) suggest

…the quality requirements of LMSs are usually overlooked and underestimated. This naturally results in inefficient systems of poor software, pedagogical and business quality. Problems that typically occur in these cases are: bad performance which is usually frustrating for the users; poor usability, that adds a cognitive overload to the user; increased cost for purchasing and maintaining the systems; poor customizability and modifiability; limited portability and reusability of learning resources and components; restricted interoperability between LMSs.

A question of scale. The single vendor or community now becomes the development bottleneck. Very early on in my involvement with e-learning I suggested that it would be impossible for a single institution (in this case a university) to provide all of the services and functionality required of e-learning within a university (Jones and Buchanan, 1996). My argument here is that the same thing applies to a single vendor or open source community.

At its simplest the single vendor or community (no matter how large the community) is always going to be smaller than the broader community. As a simple example I performed a number of Google searches, the following list shows what I searched for and the number of “hits” Google found:

  • moodle discussion forum – 167,000 hits;
  • sakai discussion forum – 234,000 hits;
  • blackboard discussion forum – 238,000 hits; and
  • web discussion forum – 45,100,000 hits.

The following graph of the above figures reinforces that we’re talking about a difference of roughly two orders of magnitude.

Comparison of search results for "discussion forum"
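Taking the hit counts above at face value (rough, dated figures, not a rigorous measure of community size), a few lines of Python make the scale of the difference explicit:

```python
# Rough comparison of the Google hit counts quoted above.
hits = {
    "moodle discussion forum": 167_000,
    "sakai discussion forum": 234_000,
    "blackboard discussion forum": 238_000,
    "web discussion forum": 45_100_000,
}

# Largest of the three LMS-specific result sets.
lms_max = max(v for k, v in hits.items() if k != "web discussion forum")
ratio = hits["web discussion forum"] / lms_max
print(f"general web result set is ~{ratio:.0f}x the largest LMS-specific one")
```

On these numbers the general web result set is getting on for two hundred times the size of the largest LMS-specific one.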

Given the relative sizes of these communities, where do you think the best discussion forum is going to come from: one of the minute communities around a specific LMS, or the much larger general web-based community? This is one aspect of the idea of “worldware” developed by Steve Ehrmann and defined as

Let’s define worldware to be hardware or software that is used for education but that was not developed or marketed primarily for education.

Is there an alternative?

So, if the enterprise system approach has so many problems, is there an alternative? The simple answer is yes: best of breed. Though, as with any wicked problem, it’s not necessarily the answer. In fact, as I’ll argue in a later post, I don’t believe the traditional best of breed approach is appropriate for e-learning. Until then, I’ll finish with the following table from Light, Holland and Wills (2001) that seeks to compare best of breed with the ERP approach.

Best of breed | Enterprise
Organisation requirements and accommodations determine functionality | The vendor of the ERP system determines functionality
A context-sympathetic approach to BPR is taken | A clean-slate approach to BPR is taken
Good flexibility in process re-design due to a variety in component availability | Limited flexibility in process re-design, as only one business process map is available as a starting point
Reliance on numerous vendors distributes risk as provision is made to accommodate change | Reliance on one vendor may increase risk
The IT department may require multiple skill sets due to the presence of applications, and possibly platforms, from different sources | A single skill set is required by the IT department as applications and platforms are common
Detrimental impact of IT on competitiveness can be dealt with, as individualism is possible through the use of unique combinations of packages and custom components | Single-vendor approaches are common and result in common business process maps throughout industries. Distinctive capabilities may be impacted
The need for flexibility and competitiveness is acknowledged at the beginning of the implementation. Best-in-class applications aim to ensure quality | Flexibility and competitiveness may be constrained due to the absence or tardiness of upgrades and the quality of these when they arrive
Integration of applications is time consuming and needs to be managed when changes are made to components | Integration of applications is pre-coded into the system and is maintained via upgrades


Avgeriou, P., S. Retalis, et al. (2003). An Architecture for Open Learning Management Systems. Advances in Informatics. Berlin, Springer-Verlag. 2563: 183-200.

Beer, C. and D. Jones (2008). Learning networks: harnessing the power of online communities for discipline and lifelong learning. Lifelong Learning: reflecting on successes and framing futures. Keynote and refereed papers from the 5th International Lifelong Learning Conference, Rockhampton, Central Queensland University Press.

Black, E., D. Beck, et al. (2007). “The other side of the LMS: Considering implementation and use in the adoption of an LMS in online and blended learning environments.” Tech Trends 51(2): 35-39.

Bonk, C. (2002). Collaborative tools for e-learning. Chief Learning Officer: 22-24, 26-27.

Carriere, B., C. Challborn, et al. (2005). “Contrasting LMS Marketing Approaches.” International Review of Research in Open and Distance Learning 6(1): 1492-3831.

Coates, H., R. James, et al. (2005). “A Critical Examination of the Effects of Learning Management Systems on University Teaching and Learning.” Tertiary Education and Management 11(1): 19-36.

Davenport, T. (1998). “Putting the Enterprise into the Enterprise System.” Harvard Business Review 76(4): 121-131.

Dodds, T. (2007). “Information Technology: A Contributor to Innovation in Higher Education.” New Directions for Higher Education 2007(137): 85-95.

Duderstadt, J., D. Atkins, et al. (2002). Higher education in the digital age: Technology issues and strategies for American colleges and universities. Westport, Conn, Praeger Publishers.

Dutton, W. and B. Loader (2002). Introduction. Digital Academe: The New Media and Institutions of Higher Education and Learning. W. Dutton and B. Loader. London, Routledge: 1-32.

Elgort, I. (2005). E-learning adoption: Bridging the chasm. Proceedings of ASCILITE’2005, Brisbane, Australia.

Haywood, T. (2002). Defining moments: Tension between richness and reach. Digital Academe: The New Media and Institutions of Higher Education and Learning. W. Dutton and B. Loader. London, Routledge: 39-49.

Huynh, M., U. N. Umesh, et al. (2003). “E-Learning as an emerging entrepreneurial enterprise in universities and firms.” Communications of the AIS 12: 48-68.

Jones, D. (2004). “The conceptualisation of e-learning: Lessons and implications.” Best practice in university learning and teaching: Learning from our Challenges. Theme issue of Studies in Learning, Evaluation, Innovation and Development 1(1): 47-55.

Lian, A. (2000). “Knowledge transfer and technology in education: Toward a complete learning environment.” Educational Technology & Society 3(3): 13-26.

Light, B., C. Holland, et al. (2001). “ERP and best of breed: a comparative analysis.” Business Process Management Journal 7(3): 216-224.

Salmon, G. (2005). “Flying not flapping: a strategic framework for e-learning and pedagogical innovation in higher education institutions.” ALT-J, Research in Learning Technology 13(3): 201-218.

Smissen, I. and R. Sims (2002). Requirements for online teaching and learning at Deakin University: A case study. Eighth Australian World Wide Web Conference, Noosa, Australia.

West, R., G. Waddoups, et al. (2006). “Understanding the experience of instructors as they adopt a course management system.” Educational Technology Research and Development.

Westera, W. (2004). “On strategies of educational innovation: between substitution and transformation.” Higher Education 47(4): 501-517.

The myth of rationality in the selection of learning management systems/VLEs


Over the last 10 to 15 years I’ve been able to observe at reasonably close quarters at least 3 processes to select a learning management system/virtual learning environment (LMS/VLE) for a university. During the same time I’ve had the opportunity to sit through presentations and read papers provided by people who had led their organisation through the same process.

One feature that the vast majority of these processes have reportedly had was objectivity. They were supposedly rational processes where all available data was closely analysed and a consensus decision was made.

Of course, given what I think about people and rationality it is of little surprise that I very much doubt that any of these processes could ever be rational. I think most of the folk claiming that it was rational are simply trying to dress it up, mainly because society and potentially their “competitors” within the organisation expect them to be, or at least appear to be, rational.

I don’t blame them. The vast majority, if not all, of what is taught in information systems/technology, software development and management automatically assumes that people are rational. It’s much easier to give the appearance of rationality. This really is a form of task corruption, in this case the simulation “type” of task corruption.

The reality?

So, if it isn’t rational and neat, what is it? Well, it’s messy, contingent and highly dependent on the people involved, their agendas and their relative ability to influence the process. And I’ve just come across probably the first paper (Jones, 2008 – and no, I’m not the author) that attempts to engage with and describe the messiness of the process.

It’s also somewhat appropriate as it provides one description of the process used by the Open University in the UK to adopt Moodle, the same LMS my current institution has selected.

The paper concludes with the following

There is no one authoritative voice in this process and whilst the process of infrastructural development and renewal can seem to be the outcome of a plan the process is one that is negotiated between powerful institutional interests that have their roots in different roles within the university. Negotiation is not only between units and the process of decision making is also affected by the sequence of time in taking decisions, for example by who is in post when key decisions are taken. Decisions taken in terms of the technological solutions for infrastructural development have definite consequences in terms of the affordances and constraints that deployed technologies have in relation to local practices. The strengths and weaknesses of an infrastructure seem to reside in a complex interaction of time, artefacts and practices.


If we know that, even in the best of situations, human beings are not rational, and we know that in situations involving complex problems with multiple perspectives the chances of a rational, objective decision are almost non-existent, then:

  • Why do we insist on this veneer of rationality?
  • Why do we enter into processes like an LMS evaluation and selection using processes that assume everyone is rational?
  • Are there not processes that we can use that recognise that we’re not rational and that work within those confines?

Comment on Moodle

The paper includes the following quotes from a couple of senior managers at the Open University. When asked about the weaknesses of the approach the OU was taking, one senior manager responded

Weakness ? …the real weakness is probably in the underlying platform that we’ve chosen to use, Moodle. That’s probably the biggest weakness, and I think we made the right decision to adopt Moodle when we did. There wasn’t another way of doing it.

Then a senior manager in learning and teaching had this to say, continuing the trend.

Where Moodle was deficient was in the actual tools within it, as the functionalities of the tools were very basic. It was also very much designed for – in effect – classroom online. It’s a single academic teaching to a cohort of students. Everything’s based around the course rather than the individual student. So it’s teaching to a cohort rather than to an individual, so a lot of the work has gone in developing, for example, a much more sophisticated roles and permissions capability. There really are only 3 roles administrator, instructor, and student, but we have multiple roles…

This is particularly interesting as my current institution has some similarities with the OU in terms of multiple likely roles.

Of course, given that organisations are rational, I’ll be able to point out this flaw to the project team handling the migration to the new LMS. They will investigate the matter (if they don’t already know about it), and if it’s a major problem incorporate a plan to address it before the pilot, or at least the final migration.

Of course, that’s forgetting the SNAFU principle and the tension between innovation and accountability and its effects on rationality.


It has been pointed out to me that the penultimate paragraph in the previous section, while making the point about my theoretical views of organisations and projects, does not necessarily represent a collegial, or at least vaguely positive, engagement with what is a hugely difficult process.

To that end, I have used formal channels to make the LMS implementation team aware of the issue raised in Jones (2008).

I have also thought about whether or not I should delete/modify the offending paragraph and have decided against it. There will always be ways to retrieve the original content and leaving both the paragraph and the addendum seems a more honest approach to dealing with it.

I also believe it can make a point about organisations, information systems projects and the information flows between users, developers and project boards. The SNAFU principle and various other issues such as task corruption do apply in these instances. Participants in such projects always bring very different perspectives and experiences, both historically and of the project and its evolution.

Too often, in the push to appear rational, the concerns and perspectives of some participants will be sidelined. Often this creates a sense of powerlessness and other feelings that don’t necessarily increase the sense of inclusion and ownership of the project that is typically wanted. Often the emphasis becomes “shoot the messenger” rather than dealing with the fundamental issues and limitations of the approaches being used.

The push to be a team player is often code for “toe the company line”, a practice that only further increases task corruption.

I have always taken the approach of being open and transparent in my views. I generally attempt to retain a respectful note when expressing those views, but sometimes, especially in the current context, that level may not meet the requirements of some. For that I apologise.

However, can you also see how, even now, I’m struggling with the same issues as summarised in the SNAFU principle? Should I take more care with what I post, to the extent of avoiding any comments that might be troubling for some? Since, if I’m too troubling, it might come back and bite me.

Or is it simply a case of me being rude and disrespectful and deserving of a bit of “bite me”?

What do you think? Have your say.


Jones, C. (2008). Infrastructures, institutions and networked learning. 6th International Conference on Networked Learning, Halkidiki, Greece.

"Blame the teacher" isn't new to technology-mediated learning

I’ve been banging on about the tendency for educational technology folk, especially those in the technologists alliance to “blame the teacher” as the reason why technology-mediated learning hasn’t achieved all of its promise.

I came across a paper that illustrates just how long this tendency has been around. Petrina (2004) passes a critical eye over Pressey’s early work on the first teaching machines in the early 1920s and considers what broader lessons there might be. It also includes a couple of quotes that illustrate Pressey’s tendency to “blame the teacher” for the failure of his utopian dreams.

First, his dream

Within the next twenty years special mechanical aids will make mass psychological experimentation commonplace and bring about in education something analogous to the Industrial Revolution. There must be an ‘industrial revolution’ in education in which educational science and the ingenuity of educational technology combine to modernize the grossly inefficient and clumsy procedures of conventional education.

And of course now, “blame the teacher”

the intellectual inertia and conservatism of educators who regard such ideas as freakish or absurd, or rant about the mechanization of education when the real purpose of such a development is to free teachers from mechanical tasks.

Pressey anticipated that his machine might

provoke some sentimentalists to an outcry against ‘education by machine’

Later in the paper Petrina tells the story of one teacher who was enthusiastic about Pressey’s teaching machine. This teacher was already in the habit of giving a true/false test at the beginning of every class by reading the true/false statements to the class while his sister did the scoring. Pressey’s machine appeared to match his current practice and, at the same time, do away with the need for his sister’s help.


Petrina, S. (2004). “Sidney Pressey and the Automation of Education, 1924-1934.” Technology and Culture 45(2): 305-330.

Poor craftsman – or the "blame the teachers" excuse


I strongly believe in the notion that both learning and teaching, and attempting to improve learning and teaching, are wicked design problems to which there is no single answer; there are no right or easy answers. The better answers lie in a broad recognition, understanding and synthesis of the diverse perspectives that exist; an in-depth understanding of the local context in which you are trying to operate; and a broad (usually much broader than people assume) set of knowledge of concepts and fields that might help.

The following is an attempt to describe one perspective and to explain why I think there may be other perspectives that highlight more fruitful ways forward. In an attempt to enliven the discussion, some of the terms I use may be seen as denigrating. That is not my intent. I’m simply trying to keep people awake and encourage them to read and ponder the following.

A poor craftsman

Anyone who has known me for any length of time will have heard me use the phrase “a poor workman blames his tools”. Generally this is in relation to someone blaming vi or the UNIX command line for being difficult, or someone at cricket blaming their bat for getting them out.

A recent post of mine entitled Technology will not change the way we teach sparked a reaction from Ray Tolley. I think this was initially because I used eportfolios as the specific case for a broader point, i.e. that the introduction of a new technology will not, by itself, change the way academics within universities teach.

In the subsequent discussion within the comments on the post, Ray used the same quote in different words.

A bad craftsman blames his tools but a good craftsman always uses the right tools for the job.

In addition, his post in response to mine certainly includes some descriptions of some “poor craftsmen”.

I do believe there is something in this quote when applied to learning and teaching in a university context (I’m limiting myself to the context of which I have some experience; learning in other contexts may be another matter, but there might be connections). There are a number of academics who are “poor craftsmen” and should be dealt with as such.

However, it is very common to hear university management and university staff who are employed to support/enable the work of teaching academic staff extend the “poor craftsman” assumption too far. When this happens it becomes what I’ve called the “blame the teacher” approach to university management. (This earlier post explains the origins of the “blame the teacher” idea and how it is borrowed from Biggs’ constructive alignment work.)

The technologists alliance

The “blame the teacher” approach is also used by technology innovators to explain why their brilliant innovation hasn’t been adopted by more than a handful of other folk. I know, I’ve used this line in the past myself. However, for a while now I have believed that this sort of approach is not productive and illustrates a developer focus, rather than an adopter focus (Surry and Farquhar, 1997).

“Blame the teacher” allows the innovator/manager to avoid responsibility, or at least avoid the more difficult task of understanding what it is about the context within which the learning and teaching is occurring that allows and encourages teaching academic staff to be “poor teachers”.

It is this avoidance which, I believe, contributes to the problems that Geoghegan (1994) identifies with the “technologists alliance”. I’ve talked about this before, but the point is in this quote from Geoghegan

Ironically, while this alliance has fostered development of many instructional applications that clearly illustrate the benefits that technology can bring to teaching and learning, it has also unknowingly worked to prevent the dissemination of these benefits into the much larger mainstream population.

By ignoring the context, the “alliance” is, apparently unknowingly, working towards preventing the adoption of their innovation or the achievement of their stated goal.

Or, put another way, by ignoring the perspectives and context of the “poor craftsmen”, the alliance turns them off its ideas, especially if the local context is particularly troubling.


Surry, D. and J. Farquhar (1997). “Diffusion Theory and Instructional Technology.” Journal of Instructional Science and Technology 2(1): 269-278.

It is true that those who learn under difficult conditions are better students, but are they better because they have surmounted difficulties or do they surmount them because they are better? In the guise of teaching thinking we set difficult and confusing situations and claim credit for the students who deal with them successfully.


Skinner, B. F. (1958). “Teaching Machines.” Science 128: 969-977.

Making the LMS/VLE "mythic"

In my last post I pointed to a talk by Postman that outlined five things we should know about technological change. This list has resonated with me due to my involvement with elearning within universities and my feeling that it is failing, often due to naive views of how technology can be implemented and what effects it will have on teaching and learning. This post continues/starts an attempt to make connections between Postman’s list and elearning.

Has elearning failed? Zemsky/Massey versus Sloan-C

Back in 2004 a report came out entitled “Thwarted Innovation: What happened to e-learning and why” (Zemsky & Massey, 2004). It caused quite a furore because it basically claimed that elearning had failed. Claiming a major flaw in something that a lot of people hold near and dear is, to the cynical amongst us, a well-known and quite effective publishing strategy for increasing citations (150+ on Google Scholar) – an important measure of academic quality. But there can be more to it than that. That’s one of the points this post tries to make in general, without making any final claim about the Zemsky and Massey (2004) report.

Their claim that elearning had “failed” was always going to get a rise out of Geoghegan’s (1994) technologists alliance.

Aside: note the date on the Geoghegan quote – 1994. This is not an idea arising from the last 10-15 years of elearning. It’s from a previous period in the history of technology-mediated learning. But I feel it still applies to today’s practice of e-learning.

Geoghegan (1994) identifies the technologists alliance as including

faculty innovators and early adopters, campus IT (IT here is instructional technology – US phrase that includes instructional designers and information technology folk) support organizations, and information technology vendors with products for the instructional market.

I’m guessing that the Sloan Consortium (Sloan-C) could quite easily be included as a member of the technologists alliance.

Not surprisingly, Sloan-C and its members formulated a response to Zemsky and Massey. You can find it here. I’m currently working through their response, but I was struck by a particular quote that seems to connect with one of Postman’s five things.

Making elearning mythic

Sloan-C’s response to Zemsky and Massey includes the following:

That is, until technology becomes used without being noticed and, more importantly, without interfering with the mission of online education—i.e., delivering knowledge to anyone anywhere, articles such as TI may continue to be produced, claiming that eLearning has failed. Sloan-C is proud to be a part of the world-wide movement to insure that eLearning does not fail!

The “technology becomes used without being noticed” immediately made me think of Postman’s fifth idea about technological change, i.e. it becomes mythic. Postman describes it as

a common tendency to think of our technological creations as if they were God-given, as if they were a part of the natural order of things.

Now, it appears that the Sloan-C folk were trying to suggest that the difficulty and unreliability of the type of technology mentioned in Zemsky and Massey (2004) is a major problem and explanation for their findings. That is, problems with the implementation meant that it wasn’t transparent, but when those problems are fixed, all will be good.

Postman points out the problem when a technology becomes mythic

When a technology becomes mythic, it is always dangerous because it is then accepted as it is, and is therefore not easily susceptible to modification or control.

Problems with mythic technology

Back in the late 80s and early 90s, my current institution had quite a large and, in some areas of activity, very powerful distance education centre. A centre responsible for helping the university develop and deliver print-based distance education materials. For that centre, print-based technology had become mythic, i.e. if you did distance education, you used print.

The entire centre, its workflows, practices and structures, was set up for print-based technology. Such technology required a lot of money and resources, and hence the inertia of that thinking was huge. It is a good 10 years since it became obvious that print-based distance education materials would become only a small part of a much broader collection of experiences enabled by e-learning. However, until very recently, print-based materials were still the focus of most of the money, people and processes within that organisation.

Even now, many long-term academics within the institution are still expecting the old ways of print-based education to continue, even though organisational change has made that next to impossible.

Print-based education had become mythic. It became impossible to modify it, even to question it, even though some of us had been questioning it for well over 10 years.

LMS/VLE – the mythic practice of institutional e-learning

When it comes to the current practice of elearning within higher education, I’ve seen this again and again, especially around the question of learning management systems/virtual learning environments. The selection and implementation of an LMS/VLE has become the standard, accepted and often unquestioned approach to elearning within universities. Surprisingly, that unquestioning approach remains strongly held, even in the face of a growing body of literature and the personal experience of folk who have had to use and support these systems within universities.

Arguably, what has become mythic is the assumption that it is the responsibility of the institution to provide the infrastructure. So even when institutions get a small idea and think “we’ll have to do something with blogs or wikis”, the immediate assumption is that the institution must provide the blog or wiki. Only the institution, it is assumed, can reliably provide what is required by students.

Only a small step from there is the idea of out-sourcing, i.e. the institution can save itself some money and resources by paying an external company to provide the infrastructure. The trouble is that this simply inserts a proxy into the equation. It still assumes that it is necessary for the institution to provide the infrastructure, the system. In this case the institution simply pays someone else to do it, but it is still providing it.

It’s re-arranging the deck chairs on the Titanic.

The same thing applies to open source learning management systems/virtual learning environments. It’s still the same approach that has become mythic.


The above has travelled further afield than I had intended, and I have to get onto other things. So, a quick summary of what I was thinking I might have covered in the above:

  • The adoption of an LMS within universities has become mythic amongst a number of folk; current extensions (open source or out-source) retain the old fundamentals and just add a few wrinkles.
  • The type of response provided by Sloan-C might be explained by folk for whom elearning, or some definition of it, has become mythic, and consequently their response might be, at least partly, faulty. (As I read their response more I’ll form an opinion on that one.)
    Actually, I just returned to that document and found I had reached the end. Sorry, but I don’t find it a convincing response. The major aspect of the response is to point to 20 million online learners. Quantity doesn’t tell us anything about quality of the learning experience – much of the literature suggests it is very poor. There are also other measures such as cost, return on investment etc.

    It then points to the National Centre for Academic Transformation projects which spend a lot of money and resources performing some radical transformations. I don’t see this practice scaling well.

  • Equally, Zemsky and Massey’s view might be flawed, either through their lack of knowledge of e-learning, possibly through flawed methods (which appear to be there), or by being too stuck in their own patterns.

Point 1 above raises the question of what the alternative is. Or perhaps, what the alternatives are. Some will say personal learning environments. For me, these are only a small aspect of the alternative. As the thesis work gets finished, I’ll share more of what I think the alternative is here.

Extension to the summary

Anyone who knows my work or has skimmed my publications will assume that the alternative answer I will propose will be Webfuse. This is the instantiation of the design theory I’m formulating for my PhD.

Most people will be wrong. This is because most people are still stuck with the mythic nature of the LMS. They think Webfuse is an LMS. This can be seen in the rhetoric being circulated within the institution.

This perception is best illustrated by the comment I’ve been getting for 10 years: “Why don’t you sell Webfuse?”

Comments of this type assume Webfuse is an LMS. That it can be sold just like Blackboard, Desire2Learn or it can be made open source like Moodle or Sakai. They are wrong.

Webfuse is a different kettle of fish altogether. The “mythic nature” of the LMS is perhaps one of the biggest hurdles to overcome in explaining the difference. Something I still need to work on.


Geoghegan, W. (1994). Whatever happened to instructional technology? 22nd Annual Conferences of the International Business Schools Computing Association, Baltimore, MD, IBM.

Zemsky, R. and W. F. Massey. (2004). “Thwarted innovation: What happened to e-learning and why.” Retrieved 1st July, 2004.

Postman’s – 5 things to know about technological change and e-learning

In doing a quick search for references to help out in the last post, I came across this page, which appears to be a transcript of a speech given by Neil Postman titled “Five Things We Need to Know About Technological Change”. According to this post (that page has gone away, so a new link to a transcript), it “was delivered by Postman in 1998 to a gathering of theologians and religious leaders in Denver, Colorado.”

Given my current and recent fascination with “Past Experience” and e-learning, I particularly like the following couple of quotes from Postman’s address.

Experiencing technological change as sleep-walkers

In the past, we experienced technological change in the manner of sleep-walkers. Our unspoken slogan has been “technology über alles,” and we have been willing to shape our lives to fit the requirements of technology, not the requirements of culture. This is a form of stupidity, especially in an age of vast technological change. We need to proceed with our eyes wide open so that we may use technology rather than be used by it.

And this one on who should be allowed to talk about new information technologies.

One might say, then, that a sophisticated perspective on technological change includes one’s being skeptical of Utopian and Messianic visions drawn by those who have no sense of history or of the precarious balances on which culture depends. In fact, if it were up to me, I would forbid anyone from talking about the new information technologies unless the person can demonstrate that he or she knows something about the social and psychic effects of the alphabet, the mechanical clock, the printing press, and telegraphy. In other words, knows something about the costs of great technologies.

I do believe that Postman is often thought of as a simple Luddite, as being against technology entirely. There are almost certainly other limitations to his work; however, the following quote suggests he’s not a Luddite:

We must not delude ourselves with preposterous notions such as the straight Luddite position.

The 5 things

You really should read the address, but here’s a summary.

  1. Culture always pays a price for technology.
    e.g. cars and pollution (and many other less obvious examples).
  2. There are always winners and losers in a technological change.
  3. Every technology embodies a philosophy, an epistemological, political or social prejudice.
    The printing press de-values the oral tradition.
  4. Technological change is not additive, it is ecological.
    The invention of the printing press in Europe, did not create “old Europe + the printing press”. It created a new and different Europe.
  5. Technology becomes mythic, it becomes seen as part of the natural order of things.

Application to e-learning

How might this apply to e-learning? I don’t have time right now, but you might wish to take a look at this post, which leverages Postman’s points into a series of questions about the use of ICTs in schools.

One quick example before I go, in terms of technology being mythic, see what happens when you suggest to a university that they get rid of their learning management system. Even more mythic, what do you think would happen if you suggested getting rid of lecture theatres?

Cognition – we're not rational and how it impacts e-learning

It’s a small world. I work in Rockhampton at a university and last year travelled to Canberra for a Cognitive Edge workshop (which I recommend). One of the other participants was Cory Banks who, a few years ago, was a student at the university I work at. He’s obviously moved on to bigger and better things.

Our joint Cognitive Edge experience indicates some similar interests, which brings me to this post on cognition on Cory’s blog. In the post he suggests a number of aspects of cognition that impact upon problem solving. He’s asking for help in validating and sourcing these aspects.

If you can help, please comment on his post.

My particular interest in cognition is that most information systems processes (e.g. governance, software development) are based on the assumption of rational people making objective decisions drawing on all available evidence. My experience suggests that this is neither possible nor true. For me, this observation explains most of the limitations and failures associated with the design and support of information systems for e-learning (and information systems more generally).

I’ve written about aspects of this before and again.

So, as time progresses I’m hoping to add to this list in terms of references, examples and additional aspects.

Cory’s cognition list

Cory’s cognition list includes the following (a little paraphrasing)

  • We evolved as ‘first fit’ pattern matchers.
    A quote from Snowden (2005)

    This builds on naturalistic decision theory in particular the experimental and observational work of Gary Klein (1944) now validated by neuro-science, that the basis of human decision is a first fit pattern matching with past experience or extrapolated possible experience. Humans see the world both visually and conceptually as a series of spot observations and they fill in the gaps from previous experience, either personal or narrative in nature. Interviewed they will rationalize the decision in whatever is acceptable to the society to which they belong: “a tree spirit spoke to me” and “I made a rational decision having considered all the available facts” have the same relationship to reality

    I’m guessing that Kaplan’s law of the instrument is somewhat related.

  • The fight or flight reaction.
  • We make assumptions.
  • We’re not analytical
    I wonder if this and most of the above points fit under “first fit pattern matchers”?
  • Failure imprints better than success.
  • Serendipitous recall (we only know what we need to know, when we need to know it).
  • We seek symmetry (attractiveness).


Snowden, D. (2005). Multi-ontology sense making: A new simplicity in decision making. Management Today, Yearbook 2005. R. Havenga.

PhD Update – Week #4 – Frustration and progress

This week is turning out to be perhaps the most frustrating, not due to lack of progress, but instead due to connections between what I’m reading/writing and what I’m seeing in my local context. As per last week’s update, the aim this week was to complete sections of chapter 2 related to the Ps Framework. The first section I targeted was “Past Experience”, and this has been the source of the frustration.

The more I read, synthesize and write about the history of learning and teaching in universities, especially e-learning, the more I get frustrated. Mainly due to seeing the same mistakes being made again and again. Especially locally.

The frustration means I’ve bitten the bullet and am writing this update a bit earlier than normal.

What I’ve done this week

Here’s a summary of what I said I’d do last week and what has actually happened:

  • Complete as many sections of the Ps Framework (chapter 2) as possible and have most put onto the blog.
    I posted a first draft of the “introduction to the Ps framework” section. I’ve made some significant progress in structuring most of the 7 sections associated with components of the Ps Framework. Most progress has been made on the “Past Experience” section.
  • Need to complete reading the theory building paper and provide feedback.
    I’ve done nothing on this one. Sorry Shirley.
  • Need to tidy up a bit of the other outstanding literature I have gathered.
    This has been the other task I’ve done this week. Trouble is that there has been a minimum of tidying and a maximum of finding more literature that needs tidying up. That said, the new literature is good stuff and will help – but it’s still frustrating (and always will be) to find new insights that help inform what you’re doing.

In terms of PhD related blog posts, I’ve done the following this week

  • First draft of the “introduction to the Ps framework” section.
  • A post railing against the “technology will change teaching” mantra that I’m seeing all the time these days, even though past experience suggests it’s nowhere near that simple.
  • A post drawing on some insights from Alavi and Leidner (2001) about organisational implementation of e-learning.
  • A first post to take a couple of lessons from history and apply it to LMS implementation.
  • A summary of a paper that applies some insights from information systems to e-learning implementation.

What’s the aim for next week?

I’m hoping by this time next week that I’ll have:

  • Completed at least 2 sections of the Ps Framework for Chapter 2 – probably “Past Experience” and “People”. If I’m motivated, perhaps add “Product”.
  • Cleaned up a lot of the literature I’ve found in the last week.

Of course, next week is shaping up to be a particularly frustrating week from other perspectives, so it will be interesting to see if any of the above gets done.

Technology will *not* change the way we teach – an example why we're an amnesiac field

I’m currently working on chapter 2 of my thesis, which is an explication of some of what is known about “e-learning” through the lens of the Ps Framework.

Last night I posted a draft of the section of the chapter that introduces the Ps Framework. Today, I’ve been working on getting a first draft of the “Past Experience” section of the chapter onto the blog.

I’ve already mined some of this work for a previous post about how the lessons of the past can inform the present. That post even used the “doomed to repeat” quote.

Today, I’ve come across a new quote, specifically about the learning technology field, that was going to serve as a basis for this post. Then, as fate would have it (pause for breath), I came across this post from George Siemens, which mentions an article in “The Wired Campus”, which in turn references this essay from the current issue of Academic Commons.

The topic is e-portfolios. Something of which I am a self-confessed skeptic.

In this post I’m going to try to justify a link between the comments in the article from “The Wired Campus” and an important lesson that we haven’t learned from history. In particular, I’d like to create another couple of data points to support this claim from Oliver (2003)

Learning technology often seems an amnesiac field

What’s claimed for e-portfolios

The article from “The Wired Campus” starts off with the claim

If we truly want to advance from a focus on teaching to a focus on student learning, then a strategy involving something like electronic student portfolios, or ePortfolios, is essential.

The article ends with

At the moment, ePortfolios represent perhaps the most promising strategy for responding to calls for accountability and at the same time nurturing a culture of experimentation with new forms of learning.

In between it suggests four fundamental features of ePortfolios:

  1. Integrate student learning in an expanded range of media, literacies and viable intellectual work.
  2. Enable students to link together diverse parts of their learning including the formal and informal curriculum.
  3. Engage students with their learning.
  4. Offer colleges a meaningful mechanism for accessing and organising the evidence of student learning.

Wow! The solution has been found

What’s been forgotten?

Zemsky and Massey (2004) claim

One of the more hopeful assumptions guiding the push for e-learning was the belief that the use of electronic technologies would force a change in how university students are taught.

Along similar lines, Littlejohn and Peacock (2003) say

There was, in many, a false assumption that exposure to computers and CAL packages was sufficient to drive the development of new forms of teaching with technology

Conole (2003) weighs in from a different tack, but still somewhat related (at least in my PhD-ravaged mind; my emphasis added):

Politics is a very strong theme that runs across all learning technology research. This in part relates to the over hyping which occurs, leading to an over expectation of what is possible. It is also partly due to different local agendas and associated in-fighting as well as the major impact that technologies can have.

One example of this over-hyping is Suppes (1966) – a Stanford professor writing in Scientific American in the 60s about computer-assisted learning.

the processing and the uses of information are undergoing an unprecedented technological revolution… One can predict that in a few more years millions of school-children will have access to what Philip of Macedon’s son Alexander enjoyed as a royal prerogative: the personal services of a tutor as well-informed and responsive as Aristotle.

Well, it’s 43 years later, have you got your personal Aristotle?

Do you get any sense that this has a connection with what’s happening with e-portfolios (or in some contexts open source learning management systems) at the moment?

How do you change teaching?

I don’t think technology is going to change teaching. If anything, the history of e-learning (and other innovations) offers strong evidence that new technology will be used as a “horseless carriage”: doing the old stuff with the new tools.

If you want to change teaching, and subsequently student learning, I currently subscribe to some of the work of Trigwell (2001) (and others), as represented by the following figure. That is, if you want to change teaching, you have to change the strategies used by teachers.

Zemsky and Massey (2004) again

Elearning will become pervasive only when faculty change how they teach—not before.

I believe the same applies for e-portfolios.

Trigwell's model of teaching

At best, a new technology will offer a small change in the outer onion skin in the above figure, the teaching and learning context. A new technology, like eportfolios, with some affordances that actively support better teaching strategies will certainly make it possible for teaching to change. I just don’t think the addition of a new technology will make it likely (or certain) that teaching will change.

There are too many other complicating factors within the teaching and learning context (I’m assuming a university context) that are likely to overwhelm the addition of the new technology. Not to mention the complexity of the interactions between the changes in the teaching and learning context and all the other onion skins. For example, the change in teaching will, to some extent, rely on students seeing the value in the change and adapting to it. This is not always a given.

Why do we continue to focus on the technology?

(Where I’m defining technology as more than just the eportfolio system; it also includes all the learning designs and other resources that exist to help staff use the system.)

Back to Littlejohn and Peacock (2003)

This is because technological issues have in the main been easier to solve than the more complex, social, cultural and organisational issues involved in mainstreaming technology in learning and teaching.

It’s easy to introduce an e-portfolio system and offer training sessions in how to use it. This is all an example of “level 2” knowledge about how to improve learning and teaching (see this post introducing reflective alignment).

One of the related reasons (at least for me) is the “technologists alliance” (Geoghegan, 1994) that is talked about in my original comments on eportfolios.


No form of technology will, by itself, improve learning and teaching. Actually, I think the authors of the Wired Campus article, and most of the people who will read this, know this. So am I simply creating a strawman argument? Am I simply stating the obvious?

The reason I bother pointing out the obvious is that I continue to see this happening in universities. It’s happening in my current university right now. While the folk who are deeply into learning technology understand that technology will not change learning and teaching, many of the folk who take on the task of improving learning and teaching use this rhetoric to justify their task. In many cases, some of these folk do believe it.

For example, I’d love to do a survey of all the universities in the world and find out how many started the process of adopting an e-portfolio after someone in the institution’s senior leadership read the Wired Campus article or the Academic Commons paper on eportfolios and thought it sounded like a really good idea. Regardless of the current state or requirements of the local institution. And, almost certainly, without a detailed knowledge of the factors at play in the Trigwell figure above.


Conole, G. (2003). Understanding enthusiasm and implementation: E-learning research questions and methodological issues. Learning Technology in Transition: From Individual Enthusiasm to Institutional Implementation. J. K. Seale. Lisse, Netherlands, Swets & Zeitlinger: 129-146.

Geoghegan, W. (1994). Whatever happened to instructional technology? 22nd Annual Conferences of the International Business Schools Computing Association, Baltimore, MD, IBM.

Littlejohn, A. and S. Peacock (2003). From pioneers to partners: The changing voices of staff developers. Learning Technology in Transition: From Individual Enthusiasm to Institutional Implementation. J. K. Seale. Lisse, Netherlands, Swets & Zeitlinger: 77-89.

Suppes, P. (1966). The Uses of Computers in Education. Scientific American: 207-220.

Trigwell, K. (2001). “Judging university teaching.” The International Journal for Academic Development 6(1): 65-73.

Zemsky, R. and W. F. Massey. (2004). “Thwarted innovation: What happened to e-learning and why.” Retrieved 1st July, 2004.

Coordination, support and knowledge sharing associated with e-learning – where does your organisation fit?

A recent post summarised a paper that took some insights from the information systems discipline and applied them to the implementation of a LMS/VLE. This post draws on some insights from Alavi and Leidner (2001), an influential paper (208 citations on Google Scholar) from the information systems discipline. A paper that calls for IS researchers to focus more on technology-mediated learning – i.e. e-learning.

Of the many ways the paper suggests IS researchers can make a contribution, the following is the focus of this post:

Lastly, at the organizational level of analysis, IS scholars might have insights to provide to the question of how to encourage instructors (in the role of end-users) to incorporate IT tools to improve their instructional design, development, and delivery.

Based on a review of the literature the authors suggest a simple matrix – the figure below – to summarise four common approaches universities take to the coordination, support and knowledge sharing around e-learning at the organisational level.

Coordination, support and knowledge sharing

The four quadrants can be described as:

  1. Acquisition of technology and their support is uncoordinated. Sharing of knowledge is random – generally limited to ad hoc social networks.
  2. Technology acquisition and support remains uncoordinated, but some facilitation of knowledge sharing occurs.
  3. Technology acquisition and support is now coordinated across the institution, however, knowledge sharing is random.
  4. Technology acquisition and support is coordinated and knowledge sharing is facilitated.
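The 2×2 structure behind the four quadrants can be sketched as a simple lookup. This is my own illustrative sketch, not code from Alavi and Leidner; the function name and axis labels are mine, derived from the list above.

```python
# Hypothetical sketch of Alavi and Leidner's (2001) 2x2 matrix.
# Axes: technology acquisition/support (uncoordinated vs coordinated)
# and knowledge sharing (random vs facilitated).

def quadrant(coordination: str, sharing: str) -> int:
    """Map the two axis values to the quadrant number used in the list above."""
    axes = {
        ("uncoordinated", "random"): 1,
        ("uncoordinated", "facilitated"): 2,
        ("coordinated", "random"): 3,
        ("coordinated", "facilitated"): 4,
    }
    return axes[(coordination, sharing)]

# e.g. an institution with a single enterprise-wide LMS but only ad hoc
# sharing of teaching knowledge sits in quadrant 3
print(quadrant("coordinated", "random"))  # → 3
```

The point of writing it this way is that the two axes are independent: coordinating technology acquisition says nothing, by itself, about whether knowledge sharing is facilitated.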

Alavi and Leidner suggest that most universities are in quadrant #1. That may have been true in North America in 2000/2001, and it might continue to be true there. At this point in time, in Australia, I believe most universities could probably be said to be in quadrants 3 and 4. Though, in reality, there are probably aspects of practice that dip into the other quadrants. Where is your university?

The quadrant I’m most familiar with is probably quadrant 3 though at times we might have touched on 4. Facilitation of knowledge sharing is by far the most difficult of the three tasks. One I’m not sure anyone has really grappled with effectively. Especially because facilitation of knowledge sharing is not separate from acquisition and support. Though it is often treated as separate.

Acquisition and support can easily be allocated to the information technology folk, which means that knowledge sharing can be facilitated around how to use the technology. But leveraging that knowledge sharing to inform the modification of existing, or the adoption of new, technology has to battle across the disciplinary gulf that separates learning and teaching from IT. Not only that, it also has to battle the problem of resource starvation, where funding/resourcing for L&T IT gets starved of attention due to the “size” of the problem outside of L&T.


Alavi, M. and D. E. Leidner (2001). “Research commentary: technology-mediated learning – a call for greater depth and breadth of research.” Information Systems Research 12(1): 1-10.

The Ps framework

The following is a section of my thesis – chapter 2. As I get first drafts of this stuff done, I’m going to post it to the blog – where appropriate. This is the first.

This section is the first major part of chapter 2 – the literature review. It explains the background of the Ps Framework which will be used to structure the rest of the chapter.

The Ps Framework

This chapter aims to illustrate knowledge of the extant literature and associated worthy research issues around the problem of designing and implementing e-learning and the supporting information systems within universities. The development of this understanding and its description in the remainder of this chapter has been achieved through the formulation of the Ps Framework as a theory to enable analysis, understanding and description of that extant literature. Elsewhere (Jones 2008; Jones, Vallack et al. 2008) the Ps Framework has been used to illustrate how the framework can be a useful tool for helping diverse stakeholders to effectively share and negotiate their various perspectives and, consequently, to make sound and pragmatic decisions around e-learning. In this chapter, the Ps Framework helps illustrate that the literature survey is “constructively analytical rather than merely descriptive” (Perry 1998), and its components form the main section headings of this chapter.

The first part of this section (Why the Ps Framework?) provides a brief description of why the Ps Framework is necessary. Next (Components of the Ps framework), the individual components of the Ps Framework and their graphical representation are explained. The next section (hopefully online later this week) begins the use of the components of the Ps Framework, in this case “Past Experience”, to describe one aspect of what is currently known about e-learning.

Why the Ps Framework?

The focus of this work is the development of an Information Systems Design Theory (ISDT) for e-learning. The aim is to develop insight into appropriate approaches to the design and implementation of e-learning. Consequently, the research in this thesis can be seen as a design problem. There is growing interest in design, design research and design theory in fields such as management (Boland 2002; van Aken 2004; van Aken 2005), information systems (Walls, Widmeyer et al. 1992; Hevner, March et al. 2004; Walls, Widmeyer et al. 2004; Gregor and Jones 2007), and education (Brown 1992; Collins 1992; Shavelson, Phillips et al. 2003). Design is the core of all professional training (Simon 1996). Design can be seen as a transformation from some known situation (the initial state), which is deemed to be problematic by some interested parties, into a target state (Jarvinen 2001). The formulation of the initial state into an effective representation is crucial to finding an effective design solution (Weber 2003). Representation has a profound impact on design work (Hevner, March et al. 2004), particularly on the way in which tasks and problems are conceived (Boland 2002).

The organisational selection, adoption and use of educational technology by universities is increasingly seen as an information systems implementation project (Jones, Vallack et al. 2008). How such projects are conceptualised significantly influences the design of the resulting system. Jamieson and Hyland (2006) suggest that there are relationships between decisions made in the pre-implementation phase of an information systems project, the factors considered in those decisions, and the degree of success of the project outcomes. During the pre-implementation phase, decisions involve a high volume of information, are incredibly complex, and are associated with a high degree of uncertainty (Jamieson and Hyland 2006). A complete understanding of the complexity of innovation and change around university implementation of e-learning remains some distance away (Cousin, Deepwell et al. 2004).

Bannister and Remenyi (1999) contend that, given such difficult decisions, both individual and corporate decision makers will more than likely base their decisions on instinct. The non-linear nature of e-learning implementation makes it more complex to handle and creates a need for meaning-makers or mediators between the different elements of the “implementation ecology” (Cousin, Deepwell et al. 2004). How a design problem is conceptualised by the members of an organization influences what they see as valid solutions to that problem and impacts directly on the quality of the decisions they make about projects. Different members of an organization will, as a result of their different experiences, have varying perspectives on a design problem. Too often, the full diversity of this experience is so difficult to capture, compare and contrast that decision-making processes, both consciously and unconsciously, avoid the attempt.

Frameworks offer new ways of looking at phenomena and provide information on which to base sound, pragmatic decisions (Mishra and Koehler 2006). Gregor (2006) defines taxonomies, models, classification schema and frameworks as theories for analysing, understanding and describing the salient attributes of phenomena and the relationships therein. The development of taxonomies, models and frameworks to aid understanding is common in most disciplines. Examples from the educational technology field include:

  • the 4Es conceptual model (Collis, Peters, & Pals, 2001);
    This is a model to predict the acceptance of ICT innovations by an individual within an educational context. It proposes that an individual’s acceptance of educational ICT innovations is based upon four concepts: environment, effectiveness, ease of use and engagement.
  • the ACTIONS model (Bates, 2005); and
    This framework provides guidance to the process of selecting a particular educational technology by drawing on 7 components: Access, Costs, Teaching and learning, Interactivity and user-friendliness, Organisational issues, Novelty and Speed.
  • E-learning ecology elements (Cousin, Deepwell et al. 2004).
    Four elements or domains are identified as requiring consideration during the implementation of e-learning within universities: pedagogical, technological, cultural and organisational.

Components of the Ps Framework

The Information Technology (IT) artifact is often taken for granted or assumed to be unproblematic, which often results in narrow conceptualisations of what technology is, how it has effects, and how and why it is implicated in social change (Orlikowski and Iacono 2001). Such limited conceptualisations often view IT as fixed, neutral and independent of its context of use. The position taken in this thesis, and demonstrated in this chapter through the use of the Ps Framework, is that IT is one of a number of components of an emergent process of change where the outcomes are indeterminate because they are situationally and dynamically contingent (Markus and Robey 1988). On-going change is not solely “technology led” or solely “organisational/agency driven”; instead, change arises from a complex interaction between technology, people and the organization (Marshall and Gregor 2002). This view of the Ps Framework and the IT artifact connects with Orlikowski and Iacono’s (2001) ensemble view of the IT artifact, where technology is seen to be embedded within the conditions of its use.

The Ps Framework consists of seven components, only one of which – Product – specifically encompasses technology. The remaining six seek to describe and understand the parts of the complex and dynamic social context within which e-learning is applied. The seven components of the Ps Framework are (as work progresses and I post additional sections of the chapter, I’ll link to them from the following):

  1. The problem and purpose;
    What is the purpose or reason for the organization in adopting e-learning or changing how it currently implements e-learning? What does the organization hope to achieve? How does the organization conceptualise e-learning?
  2. Place;
    What is the nature of the organization in which e-learning will be implemented? What is the social and political context within which it is placed?
  3. People;
What types of people and roles exist within the organization? Management, professional and academic staff, students. What are their beliefs, biases and cultures?
  4. Pedagogy;
    What are the conceptualisations about learning and teaching which the people within the place bring to e-learning? What practices are being used to learn and teach? What practices might they like to adopt?
  5. Past experience;
    What has gone on before with e-learning, both within and outside of this particular place? What worked and what didn’t?
  6. Product; and
What system has been chosen or designed to implement e-learning? Here “system” is used in the broadest possible sense to include the hardware, software and support roles.
  7. Process.
    What are the characteristics of the process used to choose how or what will be implemented and what process will be used to implement the chosen approach?

One of potentially many explanations of the relationship between the seven components starts with purpose. Some event, problem or factor arises that requires the organization to change the way in which it supports e-learning. This becomes the purpose underlying a process used by the organization to determine how (process) and what (product) it will change. This change will be influenced by a range of factors including: characteristics of the organization and its context (place); the nature and conceptions of the individuals and cultures within it (people); the conceptualisations of learning and teaching (pedagogy) held by the people and the organization; and the historical precedents within and outside the organisation (past experience).

This is not to suggest that there exists a simple linear, or even hierarchical, relationship between the components of the Ps Framework. The context of implementing educational technology within a university is too complex for such a simple reductionist view (Jones, Vallack et al. 2008). As stated above, the perspective underpinning the Ps Framework is one where the technology is one of seven components of an emergent process of change where the outcomes are indeterminate because they are situationally and dynamically contingent (Markus and Robey 1988).

Figure 1 provides a representation of the seven components of the Ps Framework for e-learning. The situationally contingent nature of these components is represented by the Place component encapsulating the remaining six. The dynamically contingent nature of these components is represented by the messiness of their representation. It is also intended that each component be connected in some way to every other component, as a representation that each component can influence, and be influenced by, every other.

The Ps Framework: a messy version

"Blame the teacher" and its negative impact on learning and e-learning

The following post is sparked by reading Findlow (2008) as part of my PhD work. I’m about halfway through it and finding it very interesting. In particular, this post is sparked by the following paragraph from the paper

The institutional counterpoint to this was the feeling expressed, implicitly or explicitly, by all the administrative staff I talked to that, in the words of one mid-ranking administrator, ‘No offence, but some academics need a kick up the bum’. Only five survey respondents cited constraints not explicitly linked to aspects of ‘the system’, singling out academics’ ‘attitudes’, which they elaborated as: ‘lack of imagination’ and/or ‘a reluctance to take risks’

Blame the teacher – level 1

In promulgating the idea of reflective alignment I borrowed and reworked Biggs’ (Biggs and Tang, 2007) idea of constructive alignment, taking it from looking at how individual academics could design teaching to looking at how a university could improve the design of the teaching performed by its academics.

The sentiments outlined in the above quote from Findlow (2008) are a perfect example of what I framed as level 1 knowledge about improving teaching: the “blame the teacher” level. The level at which management can feel consoled that the on-going problems with the quality of teaching are not the fault of the system they manage. It’s the fault of those horrible academics. It would all be better if the academics simply got a “kick up the bum”.

Escalation into “accountability” – level 2

The immediate problem that arises from level 1 knowledge about improving teaching is that management very quickly want to provide that “kick up the bum”. This is typically done by introducing “accountability”. As Findlow (2008) writes

Key to the general mismatch seemed to be the ways in which the environment – both institution and scheme – demanded subscription to a view of accountability that impeded real innovation; that is, the sort of accountability that is modelled on classic audit: ‘conducted by remote agencies of control’ (Power 1994, 43), presuming an absence of trust, and valuing standardisation according to a priori standards.

This approach fits nicely into level 2 knowledge about improving teaching – i.e. a focus on what management does. The solution here is that management spend their time setting up strategic directions against which all must be evaluated. They then set up “accountability courts” (i.e. “remote agencies of control”) to evaluate everything that is being done, to ensure that it contributes to the achievement of those strategic directions.

This can be seen in such examples as IT governance or panels that evaluate applications for learning and teaching innovation grants. A small select group sits in positions of power as accountability judges to ensure that all is okay.

Once the directions are set and the “accountability courts” are established, management play their role within those courts, especially in terms of “kicking butt” when appropriate.

Mismatched and inappropriate

There is an argument to be made that such approaches are anathema to higher education. For example, Findlow (2008) makes this point

New managerialism approaches knowledge as a finished product, packaged, positive, objective, externally verifiable and therefore located outside the knower. By contrast, an ‘academic exceptionalist’ (Kogan and Hanney 2000, 242) view of knowledge places it in the minds of knowledgeable individuals, with the holder of the knowledge also the main agent in its transmission (Brew 2003). This kind of expert or ‘professional knowing’, closely related to conventionally acquired ‘wisdom’ (Clegg 2005, 418), is produced through an organic process between people in a culture of nurturing new ideas. The process is allowed to take as long as it takes, and knowledge is not seen as a finished product.

There are arguments back and forth here. I’m going to ignore them as beyond scope for this post.

I will say that I have no time for many of the academics who, at this stage, will generally trot out the “academic freedom” defense to “accountability courts”. Accountability, of an appropriate sort, is a key component of being an academic, peer review anyone? Findlow (2008) has this to say

accountability is intrinsic to academia: the sort of accountability that is about honesty and responsibility, about making decisions on the basis of sound rationales, on the understanding that you may be called to account at any point. Strathern (2000a, 3) suggests that ‘audit is almost impossible to criticise in principle – after all, it advances values that academics generally hold dear, such as responsibility, openness about outcomes’.

Academics should be open and clear about what tasks they perform and why. Hiding behind “academic freedom” is too often an excuse to avoid being “called to account”. (That said, there are always power issues that complicate this.)

My argument against “accountability courts” is not on the grounds of principle, but on pragmatic grounds. It doesn’t work.

It doesn’t work

Remember, we’re talking here about improving the design of courses across an institution. To some extent this involves innovation – the topic of Findlow (2008), who makes the following point about it (emphasis added)

The nature of innovation … is change via problematisation and risk. In order to push the boundaries of what we know, and break down dogma, problems have to be identified and resolved (McLean and Blackwell 1997, 96). Entering uncharted territory implies risk, which requires acceptance by all stakeholders.

This last point is where the problems with “accountability courts” arise. It starts with the SNAFU principle which in turn leads to task corruption.

SNAFU principle

Believed to have arisen in the US Army during World War II, the phrase SNAFU is commonly known as an acronym expanded to Situation Normal, All Fouled Up – where “Fouled” is generally replaced with a more colloquial term. Interestingly, and as a pause in this diatribe, here’s a YouTube video of Private Snafu – a cartoon series made by the US armed services during World War II to educate the troops about important issues. You may recognise Mel Blanc’s voice.

However, the SNAFU principle gets closer to the problem. The principle is defined as

“True communication is possible only between equals, because inferiors are more consistently rewarded for telling their superiors pleasant lies than for telling the truth.”

This is illustrated nicely by the fable on this SNAFU principle page.

Can this be applied to innovation in higher education? Surely it wouldn’t happen? Findlow (2008) again

My own experience as a funded innovator, and the prevailing experience of my respondents, was that participation in a funded ‘scheme’ made authentic problematisation, and honest description of risk, difficult. Problematisation was inhibited by the necessary consideration given to funding body and institutional agendas in defining parameters for approval. Audit can be seen as a response to fear of risk, and audit-managerially governed schemes require parameters pre-determined, expected outcomes and costs known in advance. Respondents in this case related the reluctance of the scheme to provide for unanticipated needs as they arose, without which effective innovation was much harder.

Task corruption

Task corruption can be defined as

where either an institution or individual, consciously or unconsciously, adopts a process or an approach to a primary task that either avoids or destroys the task.

It can arise when the nature of the system encourages people to comply through a number of different mechanisms. Findlow (2008) reports on one as applied to innovation

The discussion groups of new academics unanimously recounted a feeling of implicit pressure not to acknowledge problems. They all said they had quickly learned to avoid mention of ‘problems’, that if necessary the word ‘issues’ was preferable, but that these ‘issues’ should be presented concisely and as if they had already been dealt with. While their formal institutional training programme emphasised the importance of honestly addressing shortcomings, their informal exposure to management culture conveyed a very different message. They had learned, they said, that to get on in academia you had to protect yourself and the institution, separate rhetoric from reality, strategy from truth – that authentic problematisation was non-productive and potentially dangerous.

Findlow goes on to reference Trowler’s (1998) term “coping strategies” and the phrase “work to rule”. Findlow gives examples, such as innovators having to lie about a particular aspect of their innovation in the formal documents required by an “accountability court” in order to fulfil requirements – even though the rationale had been accepted by senior administrators.

Academics start to work the system. This creates less than stellar confidence in the nature of the system and subsequently reduces the chances of innovation. Findlow (2008) again

Allen’s (2003) study of institutional change found that innovation was facilitated by the confidence that comes with secure working environments. Where change was judged by staff to be successful, it tended to emerge from university environments where holistic and humanistic views of scholarship and systems of implicit trust were embedded. These gave academics the confidence to take risks. Allen found that insecure environments created what Power (1994, 13) describes as ‘the very distrust that [they were] meant to address’, removed the expectation and obligation for genuinely responsible academic accountability (Giri 2000, 174), and made staff reluctant to devote time, signpost problems or try something that might not work and could reflect negatively on their career portfolios.

A solution?

The last quote from Findlow (2008) seems to provide a possible suggestion in “university environments where holistic and humanistic views of scholarship and systems of implicit trust were embedded”. Perhaps such an environment would embody level 3 knowledge of how to improve the design of courses. Such an environment might allow academics to engage in Reflective Problematisation.

Such an environment might focus on some of the features of this process.


Biggs, J. and C. Tang (2007). Teaching for Quality Learning at University. Maidenhead, England, Open University Press.

Findlow, S. (2008). “Accountability and innovation in higher education: a disabling tension?” Studies in Higher Education 33(3): 313-329.

Comparing VLEs/LMS to the past: flaws and implications for development models

George Santayana, a Spanish American philosopher and writer

I’m working on chapter 2 of the thesis and, in particular, on the “Past Experience” section. As part of the Ps Framework, “Past Experience” is meant to talk about

What has gone on before with e-learning, both within and outside of this particular place? What worked and what didn’t? What other aspects of previous experience at this particular institution will impact upon current plans?

So it’s fairly obvious that at some stage I’m going to use the following quote from George Santayana

Progress, far from consisting in change, depends on retentiveness. When change is absolute there remains no being to improve and no direction is set for possible improvement: and when experience is not retained, as among savages, infancy is perpetual. Those who cannot remember the past are condemned to repeat it.

Early and new insights

The thesis is aimed at e-learning and, in particular, web-based education. Based on my experience and resulting perspectives I’ve had quite a few ideas about what might come out in this section (remember the aim of chapter 2 of the thesis is to illustrate I know the area and also to identify some problems). The point of this post is to summarise a couple of new perspectives that have been brought to bear by my first reading in the area.

The reading is Jesse Heines’ 2004 chapter, “Technology for Teaching: Past Masters Versus Present Practices” (Heines, 2004). This chapter goes back into the history of technology use for learning and compares what was known and possible with systems from last century with what is possible with more modern technology. Given the tone of this post, I doubt it’s any surprise that the comparison is not favourable for the modern systems.

The two insights that have been highlighted for me are:

  1. VLEs/LMSes/CMSes are not informed by best practice.
  2. The commercial model of these systems constrains the ability to be informed by best practice.

Consideration of these points raised a question for me about the open source systems and whether they suffer from the same problem – which I kind of think they do. More on this in the last section of the post.

VLEs/LMSes/CMSes are not informed by best practice

In the early noughties the vendors of course management systems (CMSes) caught on to the growing adoption of enterprise resource planning systems within universities. They knew that this trend created all sorts of advantages for them in convincing universities to fork out big money on their systems, so they started labeling their systems as “enterprise” systems.

Now, one of the assumed features of “enterprise” systems is that their design is meant to be informed by, and encapsulate, “best practice” (Jones et al, 2004). This is used as one excuse for why the organisation should adapt its processes to suit the new system: because it encapsulates “best practice”.

One of the more common features of a CMS that academics use is the quiz facility. Heines (2004) describes much of the history of work around programmed instruction – i.e. automated testing using technology. He relates this story

In a recent conversation with a representative of one of the leading CMS vendors about their testing subsystem, I asked about the system’s capability to analyze the data it stored and present it to teachers. [CMS stands for “course management system,” another new term applied to a capability that’s been around for years.] I was told that the system can show the teacher each student’s response to every question. OK, I responded, but can a busy teacher see a summary of that data so that s/he can see trends and identify widespread class misunderstandings? The representative didn’t know. He said something about computing an average, but he was not familiar with the terms “item analysis,” “difficulty index,” “discrimination index,” and “standard deviation.” (Sigh.)

He then proceeds to highlight some additional limitations

  • Many CMSes don’t even store the data necessary to do item analysis and other features available in much earlier systems.
  • Facilities to construct test banks don’t enforce “even the most basic, long-established rules of good test construction”.


This raises some questions:

  • Is this still true of more recent versions of these systems?
  • Is this true of the open source alternatives – e.g. Moodle, Sakai, etc.?
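As an aside, the statistics the vendor representative had never heard of are not exotic. Here’s a minimal Python sketch of classical item analysis – difficulty index, discrimination index and the standard deviation of total scores – assuming a hypothetical storage layout where each student’s quiz attempt is a list of 0/1 item scores. The point is how little data and code is actually required for what Heines says many CMS quiz engines still cannot do.

```python
# Classical item analysis over stored quiz responses.
# Assumed (hypothetical) layout: one list of 0/1 item scores per student.

from statistics import mean, pstdev

def item_analysis(scores, top_fraction=0.27):
    """Return per-item difficulty and discrimination indices.

    scores: list of per-student lists of 0/1 item scores (1 = correct).
    """
    n_items = len(scores[0])
    # Rank students by total score; compare upper and lower groups
    # (the traditional upper/lower 27% split).
    ranked = sorted(scores, key=sum, reverse=True)
    k = max(1, round(len(ranked) * top_fraction))
    upper, lower = ranked[:k], ranked[-k:]
    results = []
    for i in range(n_items):
        # Difficulty index: proportion of all students answering correctly.
        difficulty = mean(s[i] for s in scores)
        # Discrimination index: upper-group minus lower-group proportion.
        discrimination = mean(s[i] for s in upper) - mean(s[i] for s in lower)
        results.append({"difficulty": round(difficulty, 2),
                        "discrimination": round(discrimination, 2)})
    return results

def total_score_sd(scores):
    """Population standard deviation of students' total scores."""
    return pstdev(sum(s) for s in scores)
```

With the per-item responses stored, a class-wide summary showing “trends and widespread misunderstandings” is a loop over `item_analysis`, which makes the representative’s inability to answer the question all the more telling.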

The commercial model causes this

Exterior of Pressey Testing Machine, patent dates 1928 and 1930.

Heines (2004) then makes the point that the economic and commercial system used to produce these systems may be somewhat to blame. He starts by offering this quote from Sidney Pressey (who developed the “teaching machine” shown in the image to the left in 1928)

The writer has found from bitter experience that one person alone can accomplish very little.

i.e. he funded much of the development of his machine himself and, without commercial support, had difficulty.

Heines (2004) then suggests that you need “product commercialization” to have a real impact on the education system. But he also suggests a flaw in this approach

the cost of developing and marketing commercial products today is so huge that they must often cater to the lowest common denominator in an effort to appeal to the widest possible audience

If this is true, and I do think it is fairly well accepted, then what does it say for the assumption that “enterprise” systems embody “best practice”?

Is open source the solution?

Heines (2004) suggests that showing the vendors how to expand their product capabilities is the solution. Funny that. Just last week I saw someone from an Australian university asking about a basic function within the most recent, commercial version of Blackboard. Apparently, Blackboard had been told that this basic function was necessary some time ago but still hadn’t included it.

This basic function was a simple “reporting” problem, i.e. a question of how information was being displayed. It wasn’t something as difficult as storing additional data about student performance on quizzes and implementing known algorithms for item analysis. But even that hadn’t been done yet. And this is for a function that was reported through vendor-initiated “user dialogue”.

So, of course, open source must be the answer. That seems to be what the latest fad sweeping higher education might suggest. As that previous post suggests, I have my doubts.

One simple empirical test might be to look at the testing engines within existing open source CMSes and see if they suffer the same flaw. My quick look at Moodle suggests that it does.

Do you know better?


Okay, complicated quiz reporting systems may not be the best example of modern pedagogy. Consequently, it may not be the best test. But I’m sure you could find similar things in terms of discussion forums, student/staff interaction etc. There’s probably an interesting paper in this.

How do you solve it?

So, if both the commercial and the open source “enterprise” systems suffer this same flaw, how do you solve this problem?

Heines (2004) suggests that a “plug-in” approach might be possible. The reality of this, however, may be a little more complex. Some of the features that need changing may be “core” to the system, something a plug-in couldn’t change. Being able to change the “core” also raises some problems.
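The “core” versus “plug-in” distinction can be made concrete with a hypothetical sketch (not any real CMS’s architecture): a plug-in can only act at the hooks the core chooses to expose, while behaviour the core performs outside those hooks – such as what quiz data gets stored in the first place – stays out of reach without modifying the core itself.

```python
# Hypothetical sketch of why a plug-in approach only goes so far.
# Plug-ins extend behaviour at explicit extension points ("hooks");
# anything the core does outside a hook cannot be changed by them.

class QuizEngine:
    def __init__(self):
        self._report_hooks = []

    def register_report_hook(self, fn):
        """Extension point: plug-ins may append extra report lines."""
        self._report_hooks.append(fn)

    def grade(self, responses, answer_key):
        # "Core" behaviour with no hook: a plug-in cannot alter what is
        # stored here - e.g. keeping per-item responses for later item
        # analysis would require changing the core, not adding a plug-in.
        return sum(r == k for r, k in zip(responses, answer_key))

    def report(self, total_scores):
        lines = [f"average: {sum(total_scores) / len(total_scores):.2f}"]
        for hook in self._report_hooks:  # plug-ins extend the report
            lines.append(hook(total_scores))
        return lines

engine = QuizEngine()
engine.register_report_hook(lambda scores: f"max: {max(scores)}")
```

Here a plug-in can add report lines, but it cannot make `grade` retain the per-item data that richer analysis needs, which is exactly the kind of “core” limitation Heines identifies.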

While I can’t give an answer to how you would do it, I can at least describe an approach that would not solve it. That’s the old “implement vanilla” approach to enterprise systems – where the local organisation actively decides not to make any changes.

For me this approach ignores the messiness of information systems.


Jones, D., S. Behrens, et al. (2004). The rise and fall of a shadow system: Lessons for enterprise system implementation. Managing New Wave Information Systems: Enterprise, Government and Society, Proceedings of the 15th Australasian Conference on Information Systems, Hobart, Tasmania.

Heines, J. (2004). Technology for Teaching: Past Masters Versus Present Practices. Online Learning: Personal Reflections on the Transformation of Education. G. Kearsley, Educational Technology Publications: 144-162.

