Assembling the heterogeneous elements for (digital) learning


Leadership as defining what's successful

After spending a few days visiting friends and family in Central Queensland – not to mention enjoying the beach – a long 7+ hour drive home provided an opportunity for some thinking. I've long had significant qualms about the notion of leadership, especially as it is increasingly understood and defined by the ongoing corporatisation of universities and schools. The rhetoric is particularly strong around schools, with the current fashion for assuming that principals can be the saviours of schools that have broken free from the evils of bureaucracy. I even work within an institution where a leadership research group is quite active amongst the education faculty.

On the whole, my experience of leadership in organisations has been negative. At best, the institution bumbles along under bad leadership. I'm wondering whether questioning this notion of leadership might form an interesting future research agenda. The following is an attempt to make concrete some thinking from the drive home, spark some comments, and set me up for some more (re-)reading. It's an ill-informed mind dump, sparked somewhat by some early experiences on return from leave.

[Image: Fisherman's beach, by David T Jones, on Flickr]

In the current complex organisational environment, I’m thinking that “leadership” is essentially the power to define what success is, both prior to and after the fact. I wonder whether any apparent success attributed to the “great leader” is solely down to how they have defined success? I’m also wondering how much of that success is due to less than ethical or logical definitions of success?

The definition of success prior to the fact is embodied in the model of process currently assumed by leaders, i.e. teleological processes, where the great leader must define some ideal future state (e.g. adoption of Moodle, Peoplesoft or some other system; an organisational restructure that creates "one university"; or, perhaps even worse, a new 5-year strategic plan) behind which the weight of the institution will then be thrown. All roads and work must lead to the defined point of success.

This is the Dave Snowden idea of giving up the evolutionary potential of the present for the promise of some ideal future state, a point he'll often illustrate with this quote from Seneca:

The greatest loss of time is delay and expectation, which depend upon the future. We let go the present, which we have in our power, and look forward to that which depends upon chance, and so relinquish a certainty for an uncertainty.

Snowden's use of this quote comes from the observation that some systems/situations are examples of Complex Adaptive Systems (CAS): systems where traditional expectations of cause and effect don't hold. When you intervene in such a system you cannot predict what will happen, only observe it in retrospect. In such systems the idea that you can specify up front where you want to go is little more than wishful thinking, so defining success prior to the fact is a little silly. This questions the assumptions behind such leadership, including the assumption that a leader can make that sort of difference.

So when the Executive Dean of a Faculty, one that includes programs in information technology and information systems, is awarded "ICT Educator of the Year" for the state because of the huge growth in student numbers, is it because of the changes he's made? Or is it because he was lucky enough to be in power at (or just after) the peak of the IT boom? The assumption is that this leader (or perhaps his predecessor) made logical contributions and changes to the organisation to achieve this boom in student numbers, or at least made changes that left the organisation better placed to handle and respond to the explosion in demand created by external changes.

But perhaps, rather than this single reason for success (great leadership), there were instead simply a large number of small factors, with no central driving intelligence or purpose, that enabled this particular institution to achieve what it achieved. Similarly, when a few years later the same group of IT-related programs had few if any students, it wasn't because this "ICT Educator of the Year" had failed. Nor was it because of any other single factor, but instead hundreds or thousands of small factors, both internal and external (some larger than others).

The idea that there can be a single cause (or a single leader) for anything in a complex organisational environment seems faulty. But because it is demanded of them, leaders must spend ever more time attempting to define and convince people of their success. In essence, then, successful leadership becomes more about your ability to define success and to promulgate wide acceptance of that definition.

KPIs and accountability galloping to help

This need to define and promulgate success is aided considerably by simple numeric measures: the number of student applications; DFW rates; numeric responses on student evaluations of courses (did you get 4.3?); journal impact factors and article citation metrics; and many, many more. These simple figures make it easy for leaders to define specific perspectives on success. This is problematic, and its many problems are well known. For example,

  • Goodhart’s law – “When a measure becomes a target, it ceases to be a good measure.”
  • Campbell’s law – “The more any quantitative social indicator (or even some qualitative indicator) is used for social decision-making, the more subject it will be to corruption pressures and the more apt it will be to distort and corrupt the social processes it is intended to monitor.”
  • the Lucas critique.
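
To make Goodhart's law concrete, here is a minimal, entirely hypothetical simulation (the metric, weights and numbers are all invented for illustration). Agents split effort between genuine teaching quality and gaming a proxy metric; once the metric becomes a target and effort shifts toward gaming it, the reported number rises while real quality falls.

    import random

    random.seed(1)

    def observed_metric(quality, gaming):
        # The proxy rewards real quality, but gaming inflates it more cheaply.
        return 0.5 * quality + 0.9 * gaming + random.gauss(0, 0.05)

    # Phase 1: the metric is just a measure; most effort goes into quality.
    # Phase 2: the metric becomes a target; effort shifts toward gaming it.
    for phase, gaming_share in (("measure", 0.1), ("target", 0.7)):
        qualities, metrics = [], []
        for _ in range(1000):
            effort = random.random()
            quality = effort * (1 - gaming_share)
            gaming = effort * gaming_share
            qualities.append(quality)
            metrics.append(observed_metric(quality, gaming))
        print(phase,
              "avg quality %.2f" % (sum(qualities) / len(qualities)),
              "avg metric %.2f" % (sum(metrics) / len(metrics)))

Running it, average real quality falls between the two phases while the average reported metric rises, which is the Goodhart/Campbell point in miniature.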

For example, there is the problem identified by Tutty et al (2008) where, rather than improve teaching, institutional quality measures "actually encourage inferior teaching approaches" (p. 182). It's why you have the LMS migration project receiving an institutional award for quality even though, for the first few weeks of the first semester, it was largely unavailable to students due to dumb technical decisions by the project team, and required a large additional investment in consultants to fix.

Would this project have received the award if a senior leader in the institution (and the institution itself) weren't heavily reliant upon the project being seen as a success?

Would the people involved in giving the project the award have had reasonable grounds for thinking it award-worthy? Is the success of the project, and of leadership, all about who gets to define which perspective is important?

Some other quick questions

Some questions for me to consider.

  • Where does this perspective sit within the plethora of literature on leadership and organisational studies? Especially within the education literature? How much of this is influenced by my earlier reading of "Managing without Leadership: Towards a Theory of Organizational Functioning"?
  • Given the limited likelihood of changing how leadership is practiced within the current organisational and societal context, how do you act upon any insights this perspective might provide? i.e. how the hell do I live (and heaven forbid thrive) in such a context?

References

Tutty, J., Sheard, J., & Avram, C. (2008). Teaching in the current higher education environment: perceptions of IT academics. Computer Science Education, 18(3), 171–185.

Alternate ways to get the real story in organisations

I’ve just been to a meeting with a strangely optimistic group of people who are trying to gather “real stories” about what is going on within an organisation through focus groups. They are attempting to present this information to senior management in an attempt to get them to understand what staff are experiencing, to indicate that something different might need to be done.

We were asked to suggest other things they could be doing. For quite some time I've wanted to apply some of Dave Snowden's approaches to tasks like this. The following mp3 audio is an excerpt from this recording of Dave explaining the results of one approach they have used. I recommend the entire recording, or any of the others that are there.

Why do we shit under trees?

Imagine this type of approach applied to students undertaking courses at a university as a real alternative to flawed smile sheets.

Tell a story about your garden – narrative and SenseMaker

There have been a few glimmers on this blog of my undeveloped, long-stalled but slowly growing interest in the use of narrative, metaphor and myth to understand and engage in innovation around learning and teaching. Much, but not all, of this arises from the work of Dave Snowden and attending one of his workshops.

A chance to play with SenseMaker

One of my interests is in the SenseMaker suite as a tool that might be useful for a number of tasks. In particular, I’m interested in seeing if this might provide some interesting alternatives to the evaluation of learning and teaching. However, apart from seeing SenseMaker in action at the workshop I attended and reading about it, I haven’t had a chance to play with it.

In a recent blog post Dave Snowden extends an invitation to use a part of the SenseMaker suite to contribute to an open project about gardens.

I encourage you to go to Dave's post and contribute a story about your garden. The rest of this post contains some reflections on my contribution.

Some reflections

The flash interface has some issues, at least on my combination of hardware and software. The drop-down boxes on the initial set of questions don't behave quite as you'd expect:

  • highlighting options as you hover the mouse while figuring out which one to select;
  • you have to click on the actual down arrow to get the menu of options to appear rather than being able to click anywhere on the box;
  • it only appears to use the first letter typed when jumping to choices
    e.g. when selecting which country you are from I will often type "aust" to bring up those options (I'm in Australia) rather than scroll through a long list, but the flash interface only appears to register the first letter "a".

Finding a story to tell about my garden was interesting and took a while. In fact the story emerged and changed as I was writing it. It took perhaps as long, possibly longer, than a survey might. I wonder how that impacts on the likelihood of people contributing.

In some of the questions asked after contributing the story – used as signifiers – I sometimes found myself wanting a “not applicable” option. I wonder what effect this has on the usefulness of the stories and the signifiers.

Quotes from Snowden and the mismatch between what university e-learning does and what it needs

For the PhD I'm essentially proposing that the current industrial model of e-learning adopted (almost without exception) by universities is a complete and utter mismatch with the nature of the problem. As a consequence of this mismatch, e-learning will continue to have little impact, be of limited quality, and be characterised by 5-yearly projects to replace a software system rather than by an on-going process of improving learning and teaching using the appropriate and available tools.

Dave Snowden has described a recent keynote he gave, and from that description/keynote I take the following two quotes, which illustrate important components of my thesis and its design theory. I share them here.

Tools and fit

The technology in e-learning is a tool, a tool to achieve a certain goal. The trouble is that the Learning Management System (LMS) model (be it open source or not), as implemented within universities, typically sacrifices flexibility. It's too hard to adapt the tool, so the people have to adapt. The following is a favourite quote of mine from Sturgess and Nouwens (2004); it's from a member of the technical group evaluating learning management systems:

“we should change people’s behaviour because information technology systems are difficult to change”

While I recognise that this may actually be the case with existing LMSes, and with the constraints that exist within universities around how they can be supported, I do not agree with it. I believe the tools should adapt to the needs of the people, that a lot more effort needs to be expended doing this, and that if it is, significant benefits flow.

Consequently, it's no surprise that Dave's quote about tools resonates with me:

Technology is a tool and like all tools it should fit your hand when you pick it up, you shouldn’t have to bio-re-engineer your hand to fit the tool.

Seneca the Younger and ateleological design

Dave closes his talk with the following quote from Seneca

The greatest loss of time is delay and expectation, which depend upon the future. We let go the present, which we have in our power, and look forward to that which depends upon chance, and so relinquish a certainty for an uncertainty.

For me this connects back to the fact that (almost) all implementations of e-learning within universities follow a plan-driven approach, a teleological design process. It assumes that they can know what is needed into the future, which, given the context of universities and the rhetoric about "change being the only constant", is just a bit silly.

Teleological design causes problems, ateleological design is a better fit.

Cognition – we're not rational and how it impacts e-learning

It's a small world. I work at a university in Rockhampton and last year travelled to Canberra for a Cognitive Edge workshop (which I recommend). One of the other participants was Cory Banks who, a few years ago, was a student at the university I work at. He's obviously moved on to bigger and better things.

Our joint Cognitive Edge experience indicates some similar interests, which brings me to this post on cognition on Cory's blog. In the post he suggests a number of aspects of cognition that impact upon problem solving. He's asking for help in validating and sourcing these aspects.

If you can help, please comment on his post.

My particular interest in cognition is that most information systems processes (e.g. governance, software development) are based on the assumption of rational people making objective decisions drawing on all available evidence. My experience suggests that this is neither possible nor true. For me, this observation explains most of the limitations and failures associated with the design and support of information systems for e-learning (and information systems more generally).

I’ve written about aspects of this before and again.

So, as time progresses I’m hoping to add to this list in terms of references, examples and additional aspects.

Cory’s cognition list

Cory's cognition list includes the following (with a little paraphrasing):

  • We evolved as ‘first fit’ pattern matchers.
    A quote from Snowden (2005)

    This builds on naturalistic decision theory in particular the experimental and observational work of Gary Klein (1944) now validated by neuro-science, that the basis of human decision is a first fit pattern matching with past experience or extrapolated possible experience. Humans see the world both visually and conceptually as a series of spot observations and they fill in the gaps from previous experience, either personal or narrative in nature. Interviewed they will rationalize the decision in whatever is acceptable to the society to which they belong: “a tree spirit spoke to me” and “I made a rational decision having considered all the available facts” have the same relationship to reality

    I'm guessing that Kaplan's law of the instrument is somewhat related.

  • The fight or flight reaction.
  • We make assumptions.
  • We’re not analytical
    I wonder if this and most of the above points fit under “first fit pattern matchers”?
  • Failure imprints better than success.
  • Serendipitous recall (we only know what we need to know, when we need to know it).
  • We seek symmetry (attractiveness).

References

Snowden, D. (2005). Multi-ontology sense making: A new simplicity in decision making. Management Today, Yearbook 2005. R. Havenga.

How to improve L&T and e-learning at universities

Over the last week or so I’ve been criticising essentially all current practice used to improve learning and teaching. There are probably two main prongs to my current cynicism:

  1. Worse than useless evaluation of learning and teaching; and
    Universities are using evaluation methods that are known to be worthless and/or can't get significant numbers of folk to agree on some definition of "good" learning and teaching.
  2. A focus on what management do.
    Where, given the difficulty of getting individual academics (let alone a significant number of them) to change and/or improve their learning and teaching (often because of the problems with point #1), the management/leadership/committee/support hierarchy within universities embarks on a bit of task corruption and starts to focus on what they do, rather than on what the teaching staff do.

    For example, the university has improved learning and teaching if the academic board has successfully mandated the introduction of generic attributes into all courses, had the staff development centre run appropriate staff development events, and introduced "generic attributes" sections within course outlines. They've done lots of things, hence success, regardless of what the academics are really doing and what impact it is having on the quality of learning and teaching (i.e. see point #1).

So do you just give up?

So does this mean you can't do anything? What can you do to improve learning and teaching? Does the fact that learning and teaching (and improving learning and teaching) are wicked problems mean that you can't do anything? This is part of the problem Col is asking about with his indicators project. This post is mostly aimed at trying to explain some principles and approaches that might work. As well as attempting to help Col, it's attempting to make concrete some of my own thoughts. It's all a work in progress.

In this section I'm going to propose some generic principles that might help inform how you plan something like this. In the next section I'm going to try and apply these principles to Col's problem. Important: I don't think this is a recipe. The principles are very broad and leave a lot of room for the application of individual knowledge, both of generic theories of teaching, learning, people etc. and of the specific contexts.

The principles I’m going to suggest are drawn from:

  • Reflective alignment – a focus on what the teachers do.
  • Adopter-based development processes.
  • A model for evaluating innovations informed by diffusion theory.
  • Emergent/ateleological design.
  • The Cynefin framework.

Reflective alignment

In proposing reflective alignment I believe it is possible to make a difference. But only if

The focus is on what the teacher does to design and deliver their course. The aim is for the learning and teaching system, its processes, rewards and constraints, to ensure that the teacher is engaging in those activities which produce quality learning and teaching, in a way that makes sense for the teacher, their course and their students.

The last sentence is important. It is what makes sense for the teacher that matters. It is not what some senior manager thinks should work, or what the academic board thinks is important or good. Any attempt to introduce something that doesn't engage with the individual teacher, and doesn't encourage them to reflect on what they are doing and hopefully make a small improvement, will fail.

Adopter-based development

This has strong connections with the idea of adopter-based development processes, which are talked about in this paper (Jones and Lynch, 1999):

places additional emphasis on being adopter-based and concentrating on the needs of the individuals and the social system in which the final system will be used.

Forget about the literature, forget about the latest fad (mostly) and concentrate first and foremost on developing a deep understanding of the local context, the social system and its mores and the people within it. What they experience, what their problems are, what their strengths are and what they’d like to do. Use these as the focus for deciding what you do next, not the latest, greatest fad.

How do you decide?

In this paper (Jones, Jamieson and Clark, 2003) we drew on Rogers’ diffusion theory (Rogers, 1995) to develop a model that might help folk make these sorts of decisions. The idea was to evaluate a potential innovation against the model in order to

increase their awareness of potential implementation issues, estimate the likelihood of reinvention, and predict the amount and type of effort required to achieve successful implementation of specific … innovations.

[Image: Variables influencing rate of adoption]

The model consists of five characteristics of an innovation diffusion process that will directly influence the rate of adoption of the innovation. These characteristics, through the work of Rogers and others, also help identify potential problems facing adoption and potential solutions.

This model can be misused. It can be used as an attempt to encourage adoption of Level 2 approaches to improving learning and teaching, i.e. someone centrally decides what to do and tries to package it in a way that encourages adoption. IMHO, this is the worst thing that can happen. Application of the model has to be driven by a deep understanding of the needs of the people within the local context and, in terms of reflective alignment, by a desire to encourage academics to reflect more on their learning and teaching.
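
As a rough illustration of how such a model might work as a checklist, here is a sketch that scores an innovation against Rogers' five classic perceived attributes of an innovation (relative advantage, compatibility, complexity, trialability, observability). The actual model in Jones, Jamieson and Clark (2003) may carve things up differently, and the scores below are invented.

    # Rogers' five perceived attributes of an innovation. Higher scores on
    # four of them, and a LOWER score on complexity, are associated with a
    # faster rate of adoption (Rogers, 1995). Scores are invented, 1 to 5.
    innovation = {
        "relative_advantage": 2,  # is it better than current practice?
        "compatibility": 4,       # does it fit existing values and practices?
        "complexity": 4,          # is it hard to understand and use?
        "trialability": 1,        # can it be tried on a small scale first?
        "observability": 2,       # are its results visible to others?
    }

    def adoption_concerns(attrs, threshold=3):
        """Flag the attributes likely to slow adoption."""
        concerns = []
        for name, score in attrs.items():
            helps = score >= threshold
            if name == "complexity":  # the one attribute where high is bad
                helps = score < threshold
            if not helps:
                concerns.append(name)
        return concerns

    print(adoption_concerns(innovation))
    # ['relative_advantage', 'complexity', 'trialability', 'observability']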

Emergent/ateleological design

Traditional developer-based approaches to information systems are based on a broadly accepted and unquestioned set of principles that are completely and utterly inappropriate for learning and teaching in universities. Since at least this paper (Jones, 2000) I’ve been arguing for different design processes based on emergent development (Truex, Baskerville and Klein, 1999) and ateleological design (Introna, 1996).

Truex, Baskerville and Klein (1999) suggest the following principles for emergent development:

  • Continual analysis;
  • Dynamic requirements negotiation;
  • Useful, incomplete specifications;
  • Continuous redevelopment; and
  • The ability to adapt.

They are expanded in more detail in the paper. There have been many similar discussions about processes; this paper talks about Introna's ateleological design process and its principles. Kurtz and Snowden (2007) talk about idealistic versus naturalistic approaches, which I summarise below.

  • Idealistic: achieve an ideal state. Naturalistic: understand a sufficiency of the present in order to stimulate evolution.
  • Idealistic: privilege expert knowledge, analysis and interpretation. Naturalistic: favour enabling emergent meaning at the ground level.
  • Idealistic: separate diagnosis from intervention. Naturalistic: intertwine diagnosis and intervention with practice.

No prizes for guessing that I believe a naturalistic process is much more appropriate.

Protean technologies

Most software packages are severely constraining. I'm thinking mostly of enterprise systems, which tend to embody the underlying design assumption that controlling what users do is necessary to ensure efficiency. I believe this just constrains what people can do and limits innovation, and in an environment like learning and teaching that is a huge problem.

Truex et al (1999) make this point about systems and include the "ability to adapt" as a prime requirement for emergent development. The software/systems in play have to be adaptable. As many people as possible, as quickly as possible, need to be able to modify the software to enable new functionality as the need becomes apparent. The technology has to enable, in Kurtz and Snowden's (2007) words, "emergent meaning at the ground level". It also has to allow "diagnosis and intervention to be intertwined with practice".

That is, the software has to be protean. As much as possible, the users of the system need to be able to play with it and try new things, and where appropriate there have to be developers who can help these things happen more quickly. This implies that the software has to enable and support discussion amongst many different people, to help share perspectives and ideas. The mixing of ideas helps generate new and interesting ideas for changing the software.

Cynefin framework


Which brings us to the Cynefin framework. As a wicked problem, I place teaching, and attempting to improve teaching, in the Complex domain of the Cynefin framework. This means that the most appropriate approach is to "probe – sense – respond": do something small, see how it works, then encourage the stuff that works and cease/change the stuff that doesn't.
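
In code-as-pseudocode terms, that probe–sense–respond loop might look something like the following sketch. Every name here is an invented placeholder, not a real API; the point is the shape of the loop: several small, cheap, safe-to-fail probes, with amplification or dampening decided only after observing what actually happened.

    from dataclasses import dataclass

    @dataclass
    class Outcome:
        improved: bool

    def probe_sense_respond(candidate_changes, observe, amplify, dampen):
        # Run several small, safe-to-fail probes rather than one big plan.
        for change in candidate_changes:
            outcome = observe(change)   # sense: watch what actually happened
            if outcome.improved:
                amplify(change)         # respond: do more of what works
            else:
                dampen(change)          # respond: wind back what doesn't

    # Toy usage with invented probes.
    probes = ["weekly activity summary to staff", "new forum layout in one course"]
    probe_sense_respond(
        probes,
        observe=lambda c: Outcome(improved="summary" in c),
        amplify=lambda c: print("amplify:", c),
        dampen=lambda c: print("dampen:", c),
    )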

Some ideas for a way forward

So, to quickly finish this off, some off-the-cuff ideas for the indicators project:

  • Get the data from the indicators into a form that provides some information to real academics in a form that is easy to access and preferably as a part of a process or system they already use.
  • Make sure the form is perceived by the academics to provide some value.
  • Especially useful if the information/services provided by the indicators project enables/encourages reflection on the part of the academics.
    For example, giving a clear, simple, regular update on some information about student activity that is currently unknown. Perhaps couched with advice that helps provide options for a way to solve any potential problems.
  • Use a process and/or part of the product that encourages a lot of people talking about/contributing to ideas about how to improve what information/services the indicators provides.
  • Adopt the "open source" development ethos: "release early, release often".
  • Perhaps try and create a community of academics around the project that are interested and want to use the services.
  • Pick people who are likely to be good change agents. Keep in mind Moore's chasm and Geoghegan's identification of the technologists' alliance.

References

Introna, L. (1996). “Notes on ateleological information systems development.” Information Technology & People 9(4): 20-39.

Jones, D. and T. Lynch (1999). A Model for the Design of Web-based Systems that Supports Adoption, Appropriation, and Evolution. Proceedings of the 1st ICSE Workshop on Web Engineering, Murugesan, S. & Deshpande, Y. (eds), Los Angeles, pp. 47-56.

Jones, D., K. Jamieson and D. Clark (2003). A Model for Evaluating Potential WBE Innovations. Proceedings of the 36th Annual Hawaii International Conference on System Sciences (HICSS'03), Track 5, vol. 5, p. 154a.

Kurtz, C. and D. Snowden (2007). Bramble Bushes in a Thicket: Narrative and the intangibles of learning networks. In M. Gibbert and T. Durand (eds), Strategic Networks: Learning to Compete. Blackwell.

Rogers, E. (1995). Diffusion of Innovations. New York, The Free Press.

Truex, D., R. Baskerville, et al. (1999). “Growing systems in emergent organizations.” Communications of the ACM 42(8): 117-123.

Patterns for e-learning – a lost opportunity or destined to fail

In the following I reflect on my aborted and half-baked attempts at harnessing design patterns within the practice of e-learning at universities and wonder whether it was a lost opportunity and/or a project that was destined to fail. This is written in the light shed by the work of a number of other folk (Google “patterns for e-learning”), including the current JISC-emerge project and, I believe, the related Pattern Language Network.

I think I’ll end up contending that it was destined to fail and hope I can provide some justification for that. Or at least that’s what I currently think, before writing the following. Any such suggestion will be very tentative.

Context

Way back in 1999 I was a young, naive guy at the crossroads of software development and e-learning, wondering why more academics weren't being innovative. Actually, the biggest and most troubling question was much simpler: "Why were they repeating the same mistakes I and others had made previously?". For example, I lost count of the number of folk who tried to use email for online assignment submission in courses with more than 10 or 20 students, even though many had tried it before them, had problems, and talked about the additional workload it creates.

At the same time I was looking at how to improve the design of Webfuse, the e-learning system I was working on, and object-oriented programming seemed like a good answer (it was). Adopting OOP also brought me into contact with the design patterns community within the broader OOP community. Design patterns within OOP were aimed at solving many of the same problems I was facing with e-learning.

Or perhaps this was an example of Kaplan's law of the instrument, i.e. patterns were the hammer and the issues around e-learning looked like a nail.

Whatever the reason, some colleagues and I tried to start up a patterns project for online learning (I'm somewhat amazed that the website is still operating). The "why page" for the project explains the rationale. We wrote a couple of papers explaining the project (Jones and Stewart, 1999; Jones, Stewart and Power, 1999), gave a presentation (the audio for the presentation is there in RealAudio format, which shows how old this stuff is) and ran an initial workshop with some folk at CQU. One of the publications also got featured in ERIC and on OLDaily.

The project did produce a few patterns before dying out.

There's also one that was proposed but for which nothing concrete was produced, "The Disneyland Approach", based on the idea of adapting the way Disney designs its theme parks to online learning.

I can't even remember what all the reasons for the project dying were. Though I did get married a few months afterwards, and that probably impacted my interest in doing additional work. Not to mention that my chief partner in crime left the university for the paradise of private enterprise around the same time. That was a big loss.

One explanation and a “warning” for other patterns projects?

At the moment I have a feeling (it needs to be discussed and tested to become more than that) that these types of patterns projects are likely to be very difficult to get to work within the e-learning environment, especially if the aim is to get a broad array of academics to, at least, read and use the patterns. If the aim is to get a broad array of academics to contribute to patterns, then I think it becomes almost impossible. This feeling/belief is based on three "perspectives" that I've come to draw upon recently:

  1. Seven principles for knowledge management that suggest pattern mining will be difficult;
  2. the limitations of using the Technologists' Alliance to bridge the gap; and
  3. people (and academics) aren't rational, which is why they won't use patterns when designing e-learning.

7 Principles – difficulty of mining patterns

Developing patterns is essentially an attempt at knowledge management. Pattern mining is an attempt to capture what is known about a solution and its implementation and distill it into a form that is suitable for others to access and read. To abstract that knowledge.

Consequently, I think the 7 principles for knowledge management proposed by Dave Snowden apply directly to pattern mining. To illustrate the potential barriers here’s my quick summary of the connection between these 7 principles and pattern mining.

  1. Knowledge can only be volunteered, it cannot be conscripted.
    The first barrier to engaging academics in sharing knowledge to aid pattern mining is getting them engaged: getting them to volunteer. By nature, people don't share complex knowledge unless they know and trust you. Even then, if they're busy…. This has been known about for a while.
  2. We only know what we know when we need to know it.
    Even if you get them to volunteer, chances are they won't be able to give you everything you need to know. You'll be asking them outside the context in which they designed or implemented the good practice you're trying to abstract into a pattern.
  3. In the context of real need few people will withhold their knowledge.
    Pattern mining is almost certainly not going to be in a situation of real need. i.e. those asking aren’t going to need to apply the provided knowledge to solve an immediate problem. We’re talking about abstracting this knowledge into a form someone may need to use at some stage in the future.
  4. Everything is fragmented.
    Patterns may actually be a good match here, depending on the granularity of the pattern and the form used to express it. Patterns are generally fairly small documents.
  5. Tolerated failure imprints learning better than success.
    Patterns attempt to capture good practice, which violates this adage. The idea of anti-patterns may be more useful, though not without its own problems.
  6. The way we know things is not the way we report we know things.
    Even if you are given a very nice, structured explanation as part of pattern mining, chances are that's not how the design decisions were made. This principle has interesting implications for how/if academics might harness patterns to design e-learning. If the patterns became "embedded" in academics' own pattern-matching processes, it might just succeed. But that's a big if.
  7. We always know more than we can say, and we will always say more than we can write down.
    The processes used to pattern mine would have to be well designed to get around this limitation.

Limitations of the technologists’ alliance

[Image: Technology adoption life-cycle – Moore's chasm]

Given that pattern mining directly with coal-face academics is difficult for the above reasons, a common solution is to use the "Technologists' Alliance" (Geoghegan, 1994), i.e. the collection of really keen and innovative academics, and the associated learning designers and other folk, who fit into the left-hand two categories of the technology adoption life cycle: those to the left of Moore's chasm.

The problem with this is that the folk on the left of Moore’s chasm are very different to the folk on the right (the majority of academic staff). What the lefties think appropriate is not likely to match what the righties are interested in.

Geoghegan (1994) goes so far as to claim that the "alliance", and the difference between it and the righties, has been the major negative influence on the adoption of instructional technology.

Patterns developed by the lefties are likely to be in a language not understood by the righties, and to solve problems that the righties aren't interested in and probably weren't even aware existed. Which isn't going to positively contribute to adoption.

People aren’t rational decision makers

The basic idea of gathering patterns is that coal-face academics will be so attracted to design patterns as an easy and effective way to design their courses that they will actually use the resulting pattern language. This ignores the way the human mind makes decisions.

People aren't rational. Most academics are not going to follow a structured approach to the design of their courses. Most aren't going to quickly adopt a radically different approach to learning and teaching. Not because they're recalcitrant mongrels more interested in research (or doing nothing), but because they have the same biases and ways of thinking as the rest of us.

I've talked about some of the cognitive biases and limitations on how we think in previous posts.

In this audio snippet (mp3) Dave Snowden argues that any assumption of rational, objective decision making that entails examining all available data and examining all possible alternate solutions is fighting against thousands of years of evolution.

Much of the above applies directly to learning and teaching, where the experience of most academics is that they aren't valued or promoted on the quality of their teaching. It's their research that is of prime concern to the organisation, as long as they can demonstrate a modicum of acceptable teaching ability (i.e. there aren't great numbers of complaints or other events out of the ordinary).

In this environment with these objectives, is it any surprise that they aren’t all that interested in spending vast amounts of time to overcome their cognitive biases and limitations to adopt radically different approaches to learning and teaching?

Design patterns anyone?

It’s just a theory

[Image: Gravity, just a theory]

Remember what I said above: this is just a theory, a thought, a proposition. Your mileage may vary. One of these days, when I have the time and if I have the inclination, I'd love to read some more and maybe do some research around this "theory".

I have another feeling that some of the above has significant negative implications for much of the practice of e-learning and for attempts to improve learning and teaching in general. In particular, for other approaches that attempt to improve the design processes used by academics by coming up with new abstractions, for example learning design and tools like LAMS. To some extent some of the above might partially explain why learning objects (in the formal sense) never took off.

Please, prove me wrong. Can you point to an institution of higher education where the vast majority of teaching staff have adopted an innovative approach to the design or implementation of learning? I’m talking at least 60/70%.

If I were setting the bar really high, I would ask for proof that they weren't simply being seen to comply with the innovative approach, and that they were actively engaging with it and embedding it into their everyday thinking about teaching.

What are the solutions?

Based on my current limited understanding and the prejudices I’ve formed during my PhD, I believe that what I currently understand about TPACK offers some promise. Once I read some more I’ll be more certain. There is a chance that it may suffer many of the same problems, but my initial impressions are positive.

References

Geoghegan, W. (1994). Whatever happened to instructional technology? 22nd Annual Conferences of the International Business Schools Computing Association, Baltimore, MD, IBM.

Jones, D. and S. Stewart (1999). The case for patterns in online learning. Proceedings of WebNet'99, De Bra, P. & Leggett, J. (eds), Association for the Advancement of Computing in Education, Honolulu, Hawaii, Oct 24-30, pp. 592-597.

Jones, D., S. Stewart and L. Power (1999). Patterns: using proven experience to develop online learning. Proceedings of ASCILITE'99 – Responding to Diversity, Brisbane, QUT, pp. 155-162.

Getting half-baked ideas out there: improving research and the academy

In a previous post examining one reason folk don’t take to e-learning I included the following quote from a book by Carolyn Marvin

the introduction of new media is a special historical occasion when patterns anchored in older media that have provided the stable currency for social exchange are reexamined, challenged, and defended.

In that previous post I applied this idea to e-learning. In this post I’d like to apply this idea to academic research.

Half-baked ideas

In this post Jon Udell talks about the dissonance between the nature of blogs, the narrative form he recommends for blogs, and the practices of academics. In it he quotes an academic's response to his ideas for writing blogs:

I wouldn’t want to publish a half-baked idea.

Jon closes the blog post with the following paragraph

That outcome left me wondering again about the tradeoffs between academia’s longer cycles and the blogosphere’s shorter ones. Granting that these are complementary modes, does blogging exemplify agile methods — advance in small increments, test continuously, release early and often — that academia could use more of? That’s my half-baked thought for today.

I think this perspective sums it up nicely. The patterns of use around the old/current media for academic research (conference and journal papers) are similar to heavyweight software development methodologies: they rely on a lot of up-front analysis and design to ensure that the solution is 100% okay. The patterns of use of the blogosphere are much more like those of agile development methods: small changes, get it working, get it out, and learn from that experience to inform the next small change.

Update: This post talks a bit more about Udell’s views in light of a talk he gave at an EDUCAUSE conference. There is a podcast of the presentation.

There are many other examples of this.

Essentially, the standard practices associated with research projects in academia prevent many folk from getting "half-baked ideas" out into the blogosphere. There are a number of reasons, but most come back to not wanting to look like a fool. I've seen this many times with colleagues wanting to spend vast amounts of time polishing a blog post.

As a strong proponent and promoter of ateleological design processes, I’m interested in how this could be incorporated into research. Yesterday, in discussions with a colleague, I think we decided to give it a go.

What we’re doing and what is the problem?

For varying reasons, Col and I are involved, in different ways, with a project going under the title of the indicators project. However, at the core of our interest is the question:

How do you data mine/evaluate usage statistics from the logs and databases of a learning management system to draw useful conclusions about student learning, or about the success or otherwise of these systems?

This is not a new set of questions. The data mining of such logs is quite a common practice, with an established collection of approaches and publications. So, the questions for us become:

  • How can we contribute or do something different than what already exists?
  • How can we ensure that what we do is interesting and correct?
  • How do we effectively identify the limitations and holes underpinning existing work and our own work?

The traditional approach would be for us (or at least Col) to go away, read all the literature, do a lot of thinking and come up with some ideas that are tested. The drawback of this approach is that there is limited input from other people with different perspectives. A few friends and colleagues of Col’s might get involved during the process, however, most of the feedback comes at the end when he’s published (or trying to publish) the work.

This might be too late. Is there a way to get more feedback earlier? To implement Udell’s idea of release early and release often?

Safe-fail probes as a basis for research

The nature of the indicators project is that there will be a lot of exploration to see if there are interesting metrics/analyses that can be run over the logs to establish useful KPIs, measurements etc. Some will work, some won't, and some will be fundamentally flawed from a statistical, learning or some other perspective.
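
For instance, one of the simplest indicators of this type (my invented example, with a made-up log layout rather than anything Col has necessarily used) is average LMS activity per student, broken down by final grade:

    from collections import defaultdict

    # Hypothetical extracts: (student_id, action) log rows and final grades.
    hits = [(1, "view"), (1, "post"), (2, "view"),
            (2, "view"), (2, "post"), (3, "view")]
    grades = {1: "HD", 2: "C", 3: "F"}

    # Count logged actions per student.
    counts = defaultdict(int)
    for student_id, _action in hits:
        counts[student_id] += 1

    # Indicator: average number of logged actions per student, by grade.
    by_grade = defaultdict(list)
    for student_id, grade in grades.items():
        by_grade[grade].append(counts[student_id])

    for grade, values in sorted(by_grade.items()):
        print(grade, sum(values) / len(values))

Whether such a number means anything about learning is exactly the sort of question that blogging each indicator is meant to surface.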

So rather than do all this "internally", I suggested to Col that we blog any and all of the indicators we try, and then encourage a broad array of folk to examine and discuss what we find, hopefully generating input that will take the project in new and interesting directions.

Col’s already started this process with the latest post on his blog.

In thinking about this I can come up with at least two major problems to overcome:

  • How to encourage a sufficient number and diversity of people to read the blog posts and contribute?
    People are busy. Especially where we are. My initial suggestion is that it would be best if the people commenting on these posts included expertise in: statistics; instructional design (or associated areas); a couple of “coal-face” academics of varying backgrounds, approaches and disciplines; a senior manager or two; and some other researchers within this area. Not an easy group to get together!
  • How to enable that diversity of folk to understand what we’re doing and for us to understand what they’re getting at?
    By its nature this type of work draws on a range of different expertise. Each expert will bring a different set of perspectives and will typically assume everyone is aware of them. We won’t be. How do you keep all this at a level that everyone can effectively share their perspectives?

    For example, I’m not sure I fully understand all of the details of the couple of metrics Col has talked about in his recent post. This makes it very difficult to comment on the metrics and re-create them.

Overcoming these problems, in itself, is probably a worthwhile activity. It could establish a broader network of contacts that may prove useful in the longer term. It would also require that the people sharing perspectives on the indicators would gain experience in crafting their writing in a way that maximises understandability by others.

If we're able to overcome these two problems, it should produce a lot of discussion and ideas that contribute to new approaches to this type of work, and also to publications.

Questions

Outstanding questions include:

  • What are the potential drawbacks of this idea?
    The main fear, I guess, is that someone not directly involved in the discussion steals the ideas and publishes them, unattributed, before we can publish. There's probably also a chance that we'll look like fools.
  • How do you attribute ideas and handle authorship of publications?
    If a bunch of folk contribute good ideas which we incorporate and then publish, should they be co-authors, simply referenced appropriately, or something else? Should it be a case by case basis with a lot of up-front discussion?
  • How should it be done?
    Should we simply post to our blogs and invite people to participate and comment? Should we make use of some of the ideas Col has identified around learning networks? For example, agree on common tags for blog posts and del.icio.us etc., and provide a central point to bring all this together? (A sketch of such an aggregator follows this list.)
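
On that last point, the central point could be as simple as a small aggregator that merges the feeds for an agreed tag. A minimal sketch, assuming the third-party feedparser library and using placeholder feed URLs:

    import feedparser  # third-party library: pip install feedparser

    # Placeholder URLs for an agreed tag's feeds; the real ones would differ.
    FEEDS = [
        "http://example.com/davids-blog/tag/indicators/feed",
        "http://example.com/cols-blog/tag/indicators/feed",
    ]

    entries = []
    for url in FEEDS:
        for e in feedparser.parse(url).entries:
            entries.append((e.get("published", ""),
                            e.get("title", ""),
                            e.get("link", "")))

    # Crude newest-first sort on the date string; good enough for a sketch.
    for published, title, link in sorted(entries, reverse=True):
        print(published, title, link)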

References

Lucas Introna. (1996) Notes on ateleological information systems development, Information Technology & People. 9(4): 20-39

On the silliness of "best practice" – or why you shouldn't (just) copy successful organisations

The very idea of "best practice" is silly. In any meaningfully complex activity, simply copying what someone else did is destined to fail, because it doesn't seek to understand the reasons why that practice worked for them, or the differences between "them" and "us".

This post over at 37 Signals expounds a bit more on this and references an article titled Why your startup shouldn’t copy 37signals or Fog Creek. The article gives one of the explanations of why best practices are silly.

Dave Snowden has an article called Managing for Serendipity: why we should lay off “best practice” in Knowledge Management that takes the discussion even further. Some of the reasons he gives include:

  • Human beings naturally learn more effectively from failure than success.
  • There is only a very limited set of circumstances in which you are able to identify some “best way” of doing something (see wicked problems).
  • It’s very unlikely that we can codify this “best way” in a way that makes it possible for others to fully understand and adopt the practice.
  • People are unlikely to actually follow the best practice.

My favourite reason, from a number of sources, is that "best" practice, "good" practice and even "bad" practice from somewhere else tends to be adopted because doing so is easier than attempting to really understand the local context and drawing on expertise and knowledge to develop solutions appropriate to that context.

Doing that is hard. Much easier to see what “important organisation X” has done and copy them. This is where fads come from.

This is a small part of the argument made in the book Management Fads in Higher Education: Where They Come From, What They Do, Why They Fail by Robert Birnbaum that I’m currently reading. More on this soon.

Seven principles of knowledge management and applications to e-learning, curriculum design and L&T in universities

I've been a fan of Dave Snowden and his work for a couple of years. In this blog post from last year, Dave shares 7 principles for "rendering knowledge". For me, these 7 principles connect directly with the tasks I'm currently involved with: e-learning, curriculum design and helping improve the quality of learning and teaching.

If I had the time, and weren't concentrating on another task, I'd take some time to expound upon the connections I see between Snowden's principles and those tasks. I don't, so I will leave it as an exercise for you. Perhaps I'll get a chance at some stage.

Your considerations would be greatly improved by taking a look at the keynote presentation on social computing at the Knowledge Management Asia conference given by Dave based on these 7 principles. I listened to the podcast yesterday and slides are also available.

I strongly recommend that anyone working in fields around e-learning, curriculum design etc. listen to this podcast.

For example

Let’s take #2

  • We only know what we know when we need to know it.
    Human knowledge is deeply contextual and requires stimulus for recall. Unlike computers we do not have a list-all function. Small verbal or nonverbal clues can provide those ah-ha moments when a memory or series of memories are suddenly recalled, in context to enable us to act. When we sleep on things we are engaged in a complex organic form of knowledge recall and creation; in contrast a computer would need to be rebooted.

The design of both e-learning software and learning and teaching currently relies a great deal on traditional design processes built around analysis, design, implementation and evaluation. For example, at the start of the process people are asked to reflect and share insights and requirements about the software/learning design, divorced from the reality of actually using the software or learning design. Based on the knowledge generated by that reflection, decisions are made about change.

The trouble is that asking people these questions divorced from the context is never going to get to the real story.

Some alternate foundations for leadership in L&T at CQUniversity

On Monday the 25th of August I am meant to be giving a talk that attempts to link complexity theory (and related topics) to the practice of leadership of learning and teaching within a university setting. The talk is part of a broader seminar series occurring this year at CQUniversity as part of the institution’s learning and teaching seminars. The leadership in L&T series is being pushed/encouraged by Dr Peter Reaburn.

This, and perhaps a couple of other blog posts, is meant to be part of a small experiment in the use of social software. The abstract of the talk that goes out to CQUniversity staff will mention this blog post and some related del.icio.us bookmarks. I actually don't expect it to work all that well, as I don't have the energy to do the necessary preparation.

Enough guff; what follows is the current abstract that will get sent out.

Title

Some alternate foundations for leadership in L&T at CQUniversity

Abstract

Over recent years an increasing interest in improving the quality of university learning and teaching has driven a number of initiatives such as the ALTC, LTPF and AUQA. One of the more recent areas of interest has been the question of learning and teaching leaders. In 2006 and 2007 the ALTC funded 20 projects, worth about $3.4M, around leadership in learning and teaching. Locally, there has been a series of CQUniversity L&T seminars focusing on the question of leadership in L&T.

This presentation arises from a long-term sense of disquiet about the foundations of much of this work, an on-going attempt to identify the source of this disquiet and find alternate, hopefully better, foundations. The presentation will attempt to illustrate the disquiet and explain how insights from a number of sources (see some references below) might help provide alternate foundations. It will briefly discuss the implications these alternate foundations may have for the practice of L&T at CQUniversity.

This presentation is very much a work in progress and is aimed at generating an on-going discussion about this topic and its application at CQUniversity. Some parts of that discussion, and a gathering of related resources, are already occurring online at
http://cq-pan.cqu.edu.au/david-jones/blog/?p=202
Feel free to join in.

References and Resources

Snowden, D. and M. Boone (2007). A leader’s framework for decision making. Harvard Business Review 85(11): 68-76

Lakomski, G. (2005). Managing without Leadership: Towards a Theory of Organizational Functioning, Elsevier Science.

Davis, B. and D. Sumara (2006). Complexity and education: Inquiries into learning, teaching, and research. Mahwah, New Jersey, Lawrence Erlbaum Associates

Initial thoughts from CogEdge accreditation course

As I've mentioned before, Myers-Briggs puts me into the INTP box, a Keirsey Architect-Rational, which amongst many other things means I have an interest in figuring out the structure of things.

As part of that interest in "figuring out the structure" I spent three days last week in Canberra at a Cognitive Edge accreditation course. Primarily run by Dave Snowden (you know that a man with his own Wikipedia page must be important) who, along with others, has significant criticisms of the Myers-Briggs stuff, the course aims to bring people up to speed with Cognitive Edge's approach, methods and tools for management and the social sciences.

Since this paper in 2000, like many software people who found a resonance with agile software development, I've been struggling to incorporate ideas connected to complex adaptive systems into my practice. Through that interest I've been reading Dave's blog and publications, and listening to his presentations, for some time. When the opportunity to attend one of his courses arose, I jumped at the chance.

This post serves two main roles:

  1. The trip report I need to generate to explain my absence from CQU for a week.
  2. Forcing me to write down some immediate thoughts about how it might be applied at CQU before I forget.

Over the coming weeks on this blog I will attempt to engage with, reflect on, and integrate into my context the huge amount of information that was funnelled my way during the week. Some of that starts here, but I'm likely to spend years engaging with some of these ideas.

What's the summary?

In essence the Cognitive Edge approach is to take insights from science, in particular complex adaptive systems theory, cognitive science and techniques from other disciplines and apply them to social science, in particular management.

That's not particularly insightful or original; it's essentially a rephrasing of the session blurb. In my defence, I don't think I can come up with a better description, and it is important to state it because the Cognitive Edge approach seriously questions many of the fundamental assumptions of current practice in management and the social sciences.

It’s also important to note that the CogEdge approach only questions these assumptions in certain contexts. The approach does not claim universality, nor does it accept claims of universality from other approaches.

That said, the CogEdge approach does provide a number of theoretical foundations from which to question much of what passes for practice within the Australian higher education sector and within organisations more broadly. I'll attempt to give some examples in a later section. The next few sub-sections provide a brief overview of some of these theoretical foundations; I'll try to pick up these foundations and their implications for practice at CQU and within higher education at a later date.

The Cynefin Framework

At the centre of the CogEdge approach is the Cynefin framework.

The Wikipedia page describes it as a decision making framework. Throughout the course we were shown a range of contexts in which it can be used to guide people in making decisions. The Wikipedia page lists knowledge management, conflict resolution and leadership. During the course there were others mentioned including software development.

My summary (see the Wikipedia page for a better one) is that the framework is based on the idea that there are five different types of systems, the fifth being disorder: when you don’t know which of the four other systems you’re dealing with. Most existing management principles are based on the idea of there being just one type of system: an ordered system, where causality is straightforward and where the right leader(ship group) can fully understand the system and design (or, more likely, adopt from elsewhere) interventions that will achieve some desired outcome.

If the intervention happens to fail, then it is seen as a problem with the implementation of the intervention: someone failed, there wasn’t enough communication, not enough attention was paid to the appropriate culture and values, etc.

The Cynefin framework suggests that there are five different contexts, which offers an alternate perspective on failure: the nature of the approach was not appropriate for the type of system.
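
For fellow programmers, here is how I might sketch the framework’s decision models in code. The domain names and response sequences are from the course material; the code itself is purely my own illustration, not anything from CogEdge.

    # The five Cynefin domains and the decision model suggested for each,
    # as I noted them during the course (my illustration, not CogEdge code).
    DECISION_MODELS = {
        "simple": "sense -> categorise -> respond (apply best practice)",
        "complicated": "sense -> analyse -> respond (bring in expertise)",
        "complex": "probe -> sense -> respond (safe-to-fail experiments)",
        "chaotic": "act -> sense -> respond (stabilise first, ask later)",
        "disorder": "work out which of the other four domains you are in",
    }

    def recommended_approach(domain: str) -> str:
        # an unrecognised domain is, by definition, disorder
        return DECISION_MODELS.get(domain.lower(), DECISION_MODELS["disorder"])

    print(recommended_approach("complex"))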

A good example of this mismatch is the story Dave regularly tells about the children’s birthday party. Some examples include an MP3 audio description (taken from this presentation) and a blog post that points to a video offering a much more detailed description.

The kids’ birthday party is an example of what the Cynefin framework calls a complex system. The traditional management-by-objectives approach initially suggested in the story is appropriate for the complicated and simple sectors of the Cynefin framework, but not the complex.

Everything is fragmented

“Everything is fragmented” was a common refrain during the course. It draws on what cognitive science has found out about human cognition. The idealised view is that human beings are rational decision makers: we gather all the data, consider the problem from all angles, perhaps consult some experts and then make the best decision (we optimise).

In reality, the human brain only gets access to small fragments of the information presented to it. We compare those fragments against the known patterns in our brain (our past experience) and then choose the first match (we satisfice). The argument is that we take fragments of information and assemble them into something somewhat meaningful.
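
In programming terms the difference is roughly first-match versus best-match. A toy sketch of satisficing follows; the patterns and cues are invented purely for illustration.

    # A toy sketch of satisficing: take the FIRST stored pattern that fits
    # the fragments we happened to notice, not the best of all possibilities.
    class Pattern:
        def __init__(self, name, cues):
            self.name, self.cues = name, set(cues)

        def matches(self, fragments):
            # a partial match on a couple of fragments is enough to trigger
            return len(self.cues & set(fragments)) >= 2

    experience = [Pattern("deadline panic", {"email", "late", "boss"}),
                  Pattern("budget cut", {"meeting", "finance", "late"})]

    def satisfice(fragments):
        for pattern in experience:         # ordered by familiarity
            if pattern.matches(fragments):
                return pattern.name        # first fit wins, the search stops

    print(satisfice(["late", "email", "corridor gossip"]))  # deadline panic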

The CogEdge approach recognises this and its methods and software are designed to build on this strength.

Approach, methods and software

The CogEdge approach is called “naturalising sensemaking”. Dave offers a simple definition of sensemaking here:

the way in which we make sense of the world so that we can act in it

Kurtz and Snowden provide a comparison between what passes for the traditional approach within organisations (idealistic) and their approach (naturalistic). I’ve tried to summarise that comparison below.

  • Idealistic: identify the future state and implement approaches to achieve that state.
    Naturalistic: gain sufficient understanding of the present context and choose projects to stimulate the evolution of the system, then monitor that evolution and intervene as necessary.
  • Idealistic: emphasis is on expert knowledge and its analysis and interpretation.
    Naturalistic: emphasis is on the inherent unknowability of a complex system, which means affording no privilege to expert interpretation and instead favouring the meaning that emerges at the coal-face.
  • Idealistic: diagnosis precedes and is separate from intervention; diagnosis/research identifies best practice and informs interventions to close the gap between now and the identified future state.
    Naturalistic: all diagnoses are also interventions, and all interventions provide an opportunity for diagnosis.

As well as providing the theoretical basis for these views, the CogEdge approach also provides a collection of methods that help management actually act within a naturalistic, sense-making approach. It isn’t an approach that says step back and let it all happen.
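
If I were to sketch that naturalistic cycle as code (my own illustration of the comparison above, not one of the actual CogEdge methods, and every interface here is invented), it might look something like this:

    # My illustration only: probe the present with several small
    # safe-to-fail experiments, monitor the system's evolution and
    # intervene as necessary. All names and interfaces are invented.
    def naturalistic_cycle(context, probes):
        for probe in probes:               # several cheap parallel probes,
            probe.start(context)           # not one grand future-state plan
        while not context.settled():
            for probe in probes:
                outcome = probe.observe()  # every diagnosis is an intervention
                if outcome.beneficial:
                    probe.amplify()        # do more of what is working
                else:
                    probe.dampen()         # safe to fail, cheap to stop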

There is also the SenseMaker Suite: software that supports (and is supported by) the methods and is informed by the same theoretical insights.

Things to question

Based on the theoretical perspective taken by CogEdge it is possible to raise a range of questions (many of a very serious nature) against practices currently common within the Australian higher education sector. The following list is a collection of suggestions; I need to work more on these.

The content of this list is based on my assumption that learning and teaching within a current Australian university is a complex system and fits into the complex sector of the Cynefin framework. I believe all of the following practices only work within the simple or complicated sectors of the framework.

My initial list includes the following, and where possible I’ve attempted to note what some of the flaws of each practice might be within the complex sector of the Cynefin framework:

  • Quality assurance.
    QA assumes you can document all your processes; as practiced, the written-down processes are rarely complete. It assumes you can predict the future. As practiced by AUQA, it assumes that a small collection of auditors from outside the organisational context can come in, look around for a few days and make informed comments on the validity of what is being done. It assumes that these auditors are experts making rational decisions, not pattern-matchers fitting what they see against their past experience.
  • Carrick grants emphasising cross-institutional projects to encourage adoption.
    Still thinking about this one, but my current unease is based on a belief in the uniqueness of each context and the difficulty of moving the same innovation, as is, across different institutional contexts.
  • Requiring teaching qualifications from new academic staff.
    There is an assumption that the quality of university learning and teaching can be increased by requiring all new academic staff to complete a graduate certificate in learning and teaching. This assumes that folk won’t game the requirement, i.e. complete the grad cert and then ignore the majority of what they “learnt” when they return to a context which does not value or reward good teaching. It also assumes that academics will gain access to the knowledge they need to improve from such a grad cert, a situation in which they are normally not going to develop a great deal of TPCK, i.e. the knowledge they get won’t be contextualised to their unique situation.
  • The application of traditional, plan-driven technology governance and management models to the practice of e-learning.
    Such models are inherently idealistic and simply do not work well for a practice that is inherently complex.
  • Current evaluation of learning and teaching.
    The current surveys given to students at the end of term are generally out of context (i.e. applied after the student has had the positive/negative experience). The use of surveys also limits the breadth of the information students can provide to whatever is enshrined in the questions. The course barometer idea we’ve been playing with for a long time is a small step in the right direction.

There are many more, but it’s getting past time to post this.

Possible projects

Throughout the course there were all sorts of ideas about how aspects of the CogEdge approach could be applied to improve learning and teaching at CQU. Of course, many of these have been lost or are still in my notebooks waiting to be saved.

A first step would be to fix the practices outlined in the previous section, which I believe are now highly questionable. Some others include:

  • Implement a learning and teaching innovation scheme based on some of the ideas of the Grameen Bank.
    e.g. if at least 3 academics from different disciplines can develop an idea for a particular L&T innovation and agree to help each other implement it in each of their courses, then it gets supported immediately, with no evaluation by an “expert panel” (see the sketch after this list).
  • Expand/integrate the course barometer idea to collect stories from students (and staff?) during the term and have those stories placed into the SenseMaker software.
    This could significantly increase CQU’s ability to pick up weak signals of trouble (and of things that are working) and to intervene, not to mention generating a strong collection of evidence to use with AUQA etc.
  • Use a number of the different CogEdge methods to help create a context in which quality learning and teaching arise more naturally.
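
To make the first of those ideas concrete, the Grameen-style support rule is simple enough to express directly. This is a minimal sketch with invented names; only the threshold of three academics from different disciplines comes from the idea above.

    # Hypothetical sketch of the support rule: peer commitment from three
    # different disciplines replaces evaluation by an "expert panel".
    def qualifies_for_support(backers):
        disciplines = {b["discipline"] for b in backers}
        return len(backers) >= 3 and len(disciplines) >= 3

    backers = [{"name": "A", "discipline": "nursing"},
               {"name": "B", "discipline": "maths"},
               {"name": "C", "discipline": "law"}]
    print(qualifies_for_support(backers))  # True -> supported immediately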

There are many others, but it’s time to get this post, posted.

Disclaimers

I’ve been a believer in complexity-informed, bottom-up approaches for a long time. My mind has a collection of patterns about this stuff to which I am positively inclined. Hence it is no great surprise that the CogEdge approach resonates very strongly with me.

Your mileage may vary.

In fact, I’d imagine that most hard-core, plan-driven IT folk, those in the business process re-engineering and quality assurance worlds, and others from a traditional top-down management school probably disagree strongly with all of the above.

If so, please feel free to comment. Let’s get a dialectic going.

I’m also still processing all of the material covered in the three-day course and in the additional readings. This post was done over a few days in different locations, so there are certain to be inconsistencies, typos, poor grammar and basic mistakes.

If so, please feel free to correct.

From scarcity to over abundance – paradigm change for IT departments (and others)

Nothing all that new in this post, at least not that others haven’t talked about previously. But writing this helps me think about a few things.

Paradigms, good and bad

A paradigm can be (and has been) defined as a particular collection of beliefs and ways of seeing the world; perhaps as the series of high-level abstractions which a particular community creates to enable very quick communication. For this purpose a common paradigm/collection of abstractions is incredibly useful, especially within a discipline. It provides members of a community spread throughout a wide geographic area with a shared language.

It also has a downside: paradigm paralysis. The high-level abstractions, the ways of seeing the world, become so ingrained that members of that community are unable to see outside the paradigm. A good example is the longitude problem, where established experts ignored an innovation from a non-expert because it fell outside of their paradigm, their way of looking at the world.

Based on my previous posts it is no great surprise to find out that I think that there is currently a similar problem going on with the practice of IT provision within organisations.

What’s changed

The paradigm around organisational IT provision arose within a very different context. That context existed for quite some time, but it is now undergoing a significant shift caused by (at least) three factors:

  1. The rise of really cheap, almost ubiquitous computer hardware.
  2. The rise of cheap (sometimes free), easy to use software.
  3. The spread of computer literacy beyond the high priests of ITD.

The major change is that what was once scarce and had to be managed as a scarce resource (hardware, software and expertise) is now available in abundance.

Hardware

From the 50s until recently, hardware was really, really expensive, generally under-powered and consequently had to be protected and managed. For example, in the late 1960s in the USA there weren’t too many human endeavours that would have had more available computing power than the Apollo 11 moon landing. And yet, in modern terms, it was a pitifully under-resourced enterprise.

Mission control, the folk on Earth responsible for controlling/supporting the flight, had access to computing power equivalent to (probably less than) the MacBook Pro I’m writing this blog entry with. The lunar module, the bit that took the astronauts from lunar orbit down to the surface and back again, is said to have had less computing power than the digital watch I am currently wearing.

Moore’s law means that available computing power has increased exponentially, with a correspondingly exponential fall in the price of any given amount of computing power.
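
As a back-of-the-envelope illustration (assuming the common “doubling every two years” formulation of the law, and using 2009 purely as a stand-in for “now”):

    # Back-of-the-envelope only: a doubling roughly every two years between
    # Apollo 11 (1969) and now (~2009, an assumed date for illustration).
    years = 2009 - 1969
    doublings = years // 2
    print(2 ** doublings)  # 1048576 -> a roughly million-fold improvement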

Software

Software has traditionally been something you had to purchase, originally only from the manufacturer of the hardware you used. Then, as hardware became more prevalent, independent software vendors arose. Then came public domain software, open source software and, recently, Web 2.0 software.

Not only is there more software available through these alternate approaches, this software has become easier to use. There are at least half a dozen free blog services and a similar number of email services available on the Web, all offering a better user experience than the similar services provided internally by organisations.

Knowledge and literacy

The primitive nature of the “old” computers meant that they were very difficult to program and support, but since their introduction the ability to maintain and manipulate computers in order to achieve something useful has become steadily easier. Originally, only the academics, scientists and engineers who were designing computers could maintain and manipulate them. Eventually a profession arose around the maintenance and manipulation of computers. As the evolution continued, teenage boys of a certain social grouping became extremely proficient, through to today, when increasing numbers of people (but still not the majority) are able to maintain and manipulate computers to achieve their ends.

At the same time the spread of computers meant that more and more children grew up with computers. A number of the “uber-nerds” who grew up in the 60s and 70s had parents who worked in industries that enabled the nascent uber-nerds to access computers, to grow up with them. Today it is increasingly rare for anyone not to grow up with some familiarity with technology.

For example, Africa has the fastest-growing adoption rate of mobile phones in the world. I recently read that the diffusion of mobile phones in South Africa was put at 98%.

Yes, there is still a place for professionals. But the increasing power and ease of use of computers means that their place is increasingly not about providing specialised services for a particular organisation, but about providing generalised platforms which the increasingly informed general public can manipulate and use without the need for the IT department.

For example, there’s an increasingly limited need (not quite no need) for an organisation to provide an email service when there are numerous free email services that are generally more reliable, more accessible and provide greater functionality than internal organisational services.

From scarcity to abundance

The paradigm of traditional IT governance etc. is based around the idea that hardware, software and literacy are scarce. This is no longer the case; all are abundant. This implies that new approaches are possible, perhaps even desirable and necessary.

This isn’t something that just applies to IT departments. The line of work I’m in, broadly speaking “e-learning”, is also influenced by this idea. The requirement for universities to provide learning management systems is becoming increasingly questionable, especially if you believe this change from scarcity to abundance suggests the need for a paradigm change.

The question for me is: what will the new paradigm be? What problems will it create that need to be addressed? Not just the problems caused by an old paradigm battling a new one, but the problems the new paradigm itself will have. What shape will the new paradigm take? How can organisations make use of this change?

Some initial thoughts from others – better than free.

A related question is what impact will this have on the design of learning and teaching?
