Assembling the heterogeneous elements for (digital) learning

“All models are wrong, but some are useful” and its application to e-learning

In a section with the heading “ALL MODELS ARE WRONG BUT SOME ARE USEFUL”, Box (1979) wrote

Now it would be very remarkable if any system existing in the real world could be exactly represented by any simple model. (p. 202)

Over recent weeks I’ve been increasingly interested in the application of this aphorism to the practice of institutional e-learning, and in why that practice is so bad.

Everything in e-learning is a model

For definition’s sake, the OECD (2005) defines e-learning as the use of information and communications technology (ICT) to support and enhance learning and teaching.

As the heading suggests, I’d like to propose that everything in institutional e-learning is a model. Borrowing from the Wikipedia page on this aphorism, a model is “a simplification or approximation of reality and hence will not reflect all of reality” (Burnham & Anderson, 2002).

The software that enables e-learning is a model. At some stage, some smart people sat down, generated and analysed a set of requirements, and then developed a model (in the form of the software) that aims to fulfill those requirements.

Instructional design and teaching are essentially the creation of models intended to enable learning. At some stage, some smart people sat down, generated and analysed a set of requirements, and then developed a model to achieve some learning outcome.

Organisational structures are models. At some stage, some smart people sat down, generated and analysed a set of requirements, and then developed a model to achieve some operational and strategic requirements. That same set of smart people probably also worked on developing a range of models in the form of organisational policies and processes, some of which may have been influenced by the software models that are available.

The theories, tools, and schemas used in the generation of the above models are, in turn, models.

And following Box, all models are wrong.

But it gets worse.

In e-learning, everyone is an expert model builder

E-learning within an institution – by its nature – must bring together a range of different disciplines, including (but not limited to): senior leadership; middle management; quality assurance (boo) and related; researchers; librarians; instructional designers, staff developers and related learning and teaching experts; various forms of technology experts (software developers, network and systems administrators, user support, etc.); various forms of content development experts (editors, illustrators, video and other multimedia developers); and, of course, the teachers/subject matter experts. I’ll make special mention of the folk from marketing, who are the experts on the institutional brand.

All of these people are – or at least should be – expert model builders. Experts at building and maintaining the types of models mentioned above. Even the institutional brand is a type of model.

This brings problems.

Each of these expert model builders suffers from expertise bias.

What do you mean you can’t traverse the byzantine mess of links from the staff intranet and find the support documentation? Here, you just click here, here, here, here, here, here, here, and here. See, obvious…

And each of these experts thinks that the key to improving the quality of e-learning at the institution can be found in the institution doing a much better job at their model. Can you guess which group of experts is most likely to suggest the following?

The quality of learning and teaching at our institution can be improved by:

  • requiring every academic to have a teaching qualification.
  • ensuring we only employ quality researchers who are leaders in their field.
  • adopting the latest version of ITIL, i.e. ITIL (the full straight-jacket).
  • requiring all courses to meet the 30-page checklist of quality criteria.
  • redesigning all courses using constructive alignment.
  • re-writing all our systems using an API-centric architecture.
  • adopting my latest theory of situated cognitive, self-regulated learning and maturation.

What’s common to most of these suggestions is the idea that it will all be better if we just adopt this new, better model. All of the problems we’ve faced previously are due to the fact that we’ve used the wrong model. This model is better. It will solve it.

Some recent examples

I’ve seen a few examples of this recently.

Ben Werdmuller had an article on Medium titled “What would it take to save #EdTech?” Ben’s suggested model solution was an open startup.

Mark Smithers blogged recently reflecting on 20 years in e-learning. In it Mark suggests a new model for course development teams as one solution.

Then there is this post on Medium titled “Is Slack the new LMS?”. As the title suggests, the new model here is that embodied by Slack.

Tomorrow I’ll be attending a panel session titled “The role of Openness in Creating New Futures in higher education” (being streamed live), which is indicative of how the “open” model is seen as yet another solution to the problem of institutional e-learning.

And going back a bit further, Holt et al. (2011) report on the strategic contributions of teaching and learning centres in Australian higher education and observe that

These centres remain in a state of flux, with seemingly endless reconfiguration. The drivers for such change appear to lie in decision makers’ search for their centres to add more strategic value to organisational teaching, learning and the student experience (p. 5)

i.e. every senior manager worth their salt does the same stupid thing that senior managers have always done: change the model that underpins the structure of the organisation.

Changing the model like this is seen as suggesting you know what you are doing and it can sometimes be made to appear logical.

And of course in the complex adaptive system that is institutional e-learning it is also completely and utterly wrong and destined to fail.

A new model is not a solution

This is because any model is “a simplification or approximation of reality and hence will not reflect all of reality” (Burnham & Anderson, 2002) and “it would be very remarkable if any system existing in the real world could be exactly represented by any simple model” (Box, 1979, p. 202).

As Box suggested, this is not to say you should ignore all models. After all, all models are wrong, but some are useful. You can achieve some benefits from moving to a new model.

But a new model can never be “the” solution. Especially as the size of the impact of the model grows. A new organisational structure for the entire university is never going to be the solution, it will only be really, really costly.

There are always problems

This is my 25th year working in Universities. I’ve spent my entire 25 years identifying and fixing the problems that exist with whatever model the institution has used. Almost my entire research career has been built around this. A selection of the titles from my publications illustrates the point:

  1. Computing by Distance Education: Problems and Solutions
  2. Solving some problems of University Education: A Case Study
  3. Solving some problems with University Education: Part II
  4. How to live with ERP systems and thrive.
  5. The rise and fall of a shadow system: Lessons for Enterprise System Implementation
  6. Limits in developing innovative pedagogy with Moodle: The story of BIM
  7. The life and death of Webfuse: principles for learning and learning into the future
  8. Breaking BAD to bridge the reality/rhetoric chasm.

And I’m not alone. Scratch the surface at any University and you will find numerous examples of individuals or small groups of academics identifying and fixing problems with whatever models the institution has adopted. For example, a workshop at CSU earlier this year included academics from CSU presenting a raft of systems they’ve had to develop to solve problems with the institutional models.

The problem is knowing how to combine the multitude of models

The TPACK (Technological Pedagogical Content Knowledge) framework provides one way to conceptualise what is required for quality learning and teaching with technology. In proposing the TPACK framework, Mishra and Koehler (2006) argue that

Quality teaching requires developing a nuanced understanding of the complex relationships between technology, content, and pedagogy, and using this understanding to develop appropriate, context-specific strategies and representations. Productive technology integration in teaching needs to consider all three issues not in isolation, but rather within the complex relationships in the system defined by the three key elements (p. 1029).

i.e. good quality teaching requires the development of “appropriate, context-specific” combinations of all of the models involved with e-learning.
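To make that idea of “combinations” slightly more concrete, here is a minimal toy sketch (my own illustration, not anything from Mishra and Koehler, with invented strategy names) that treats the technology, pedagogy, and content dimensions of a single course as sets of candidate teaching strategies. The “appropriate, context-specific” options are the ones that survive the intersection of all three.

```python
# Toy sketch only: TPACK-style thinking as the intersection of what fits the
# technology, the pedagogy, and the content of one particular course.
# The strategy names below are invented for illustration.

# Strategies the available technology (e.g. the institutional LMS) supports well
technology_fit = {"auto-marked quiz", "forum-based peer review", "video walkthrough"}

# Strategies that suit how these particular students learn
pedagogy_fit = {"forum-based peer review", "worked examples", "video walkthrough"}

# Strategies that suit the content being taught
content_fit = {"live coding demo", "worked examples", "video walkthrough"}

# The context-specific "sweet spot": options that work on all three
# dimensions at once, for this course and these students.
appropriate = technology_fit & pedagogy_fit & content_fit
print(appropriate)  # {'video walkthrough'}
```

Change any one of those sets – a new LMS, a new cohort, a new curriculum – and the intersection shifts, which is the point made below: a new model means the mix has to be re-done.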

The reason why “all models are wrong” is that when you get down to the individual course (remember, I’m focusing on university e-learning) you get much closer to the reality of learning. That reality is hidden from the senior manager developing policy, the QA person deciding on standards for the entire institution, the software developer working on a system (open source or not), and so on. They are all removed from the context. They are all removed from the reality.

The task of the teacher (or the course design team depending on your model) is captured somewhat by Shulman (1987)

to transform the content knowledge he or she possesses into forms that are pedagogically powerful and yet adaptive to the variations in ability and background presented by the students (p. 15)

The task is to mix all those models together and produce the most effective learning experience for these particular students in this particular context. The better you can do that, the more pedagogical value. The better the learning.

All of the work outlined in my publications listed above has been an attempt to mix the various models available into a form that has greater pedagogical value within the context in which I was teaching.

A new model means a need to create a new mix

When a new LMS, a new organisational structure, a new QA process, or some other new model replaces the old model, it doesn’t automatically bring an enhancement in the overall experience of e-learning. That enhancement is only maximised when each of the teachers/course design teams goes back and re-does all the work they’d previously done to get the mix of models right for their context.

This is where (I think) the “technology dip” described by Underwood and Dillon (2011) comes from:

Introducing new technologies into the classroom does not automatically bring about new forms of teaching and learning. There is a significant discontinuity between the introduction of ICT into any educational setting and the emergence of measurable impacts on pedagogy and learning outcomes (p. 320)

Instead the quality of learning and teaching dips after the introduction of new technologies (new models) as teachers struggle to work out the new mix of models that are most appropriate for their context.

It’s not how bad you start, it’s how quickly you get better

In reply to my comment on his post, Mark asks the obvious question

What other model is there?

Given the argument that “all models are wrong”, how do I propose a model that is correct?

I’m not going to expand on this very much, but I will point you to Dave Snowden’s recent series of posts, including one titled “Towards a new theory of change”, and his general argument

that we need to stop talking about how things should be, and start changing things in the here and now

For me this means: stop focusing on your new model of the ideal future (e.g. “if only we used Slack for the LMS”). Instead:

  • develop an on-going capacity to know in detail what is going on now (learner experience design is one enabler here);
  • enable anyone and everyone in the organisation to remix all of the models (the fact that most universities do a horrendously poor job of using network technology to promote connections between people currently prevents this);
  • make it easy for people to know about and re-use the mixtures developed by others (too much of the re-mixing that is currently done is manual);
  • find out what works and promote it (this relies on doing a really good job on the first point, not on course evaluation questionnaires); and
  • find out what doesn’t work and kill it off.

This doesn’t mean doing away with strategic projects, it just means scaling them back a bit and focusing more on helping all the members of the organisation learn more about the unique collection of model mixtures that work best in the multitude of contexts that make up the organisation.

My suggestion is that there needs to be a more fruitful combination of the BAD and SET frameworks, and a particular focus on developing the organisation’s distributed capacity to develop its TPACK.

References

Box, G. E. P. (1979). Robustness in the strategy of scientific model building. In R. Launer & G. Wilkinson (Eds.), Robustness in Statistics (pp. 201–236). Academic Press.

Holt, D., Palmer, S., & Challis, D. (2011). Changing perspectives: Teaching and Learning Centres’ strategic contributions to academic development in Australian higher education. International Journal for Academic Development, 16(1), 5–17. Retrieved from http://www.tandfonline.com/doi/abs/10.1080/1360144X.2011.546211

OECD. (2005). E-Learning in Tertiary Education: Where do we stand? Paris, France: Centre for Educational Research and Innovation, Organisation for Economic Co-operation and Development. Retrieved from http://www.oecd-ilibrary.org/education/e-learning-in-tertiary-education_9789264009219-en

Underwood, J., & Dillon, G. (2011). Chasing dreams and recognising realities: teachers’ responses to ICT. Technology, Pedagogy and Education, 20(3), 317–330. doi:10.1080/1475939X.2011.610932


4 Comments

  1. There is an interesting qualifier throughout your post here, David, suggesting to me that you know (or at least suspect) unexplored territory for ‘new’ or alternative models. You use the phrase “institutional e-learning”, which to me signals a massive subtext you’re using to qualify everything else you say in this post. I do the same. Might you point me to where you elaborate on non-, un-, or de-institutionalised elearning?

  2. My direct involvement in e-learning is within the context of institutional e-learning. As I have to figure out how to do something meaningful within that context, that’s where my focus is. There really isn’t much (if anything) that I write which is focused beyond the confines of institutional e-learning.

  3. What an excellent memory you have, to return to that discussion from 5 years ago. I thought it was a very relevant connection and I’ve left a comment there too. I find it most interesting that we both work inside an institution, in similar job descriptions, yet each interpret the scope and limitations quite differently. Admittedly, my “outside in” principle is a lonely existence, despite it gradually becoming the mainstream mode of practice (if you accept my comment at the post you link to). At RMIT there appears to be quite a distinct and vocal group of teachers struggling with inside/outside paradoxes.
