Assembling the heterogeneous elements for (digital) learning

Category: academicdevelopment

The Productivity Commission recommended growing access to higher education, containing fiscal costs, and improving teaching quality.

Gatherers, Weavers and Augmenters: Three principles for dynamic and sustainable delivery of quality learning and teaching

Henry Cook, Steven Booten and I gave the following presentation at the THETA conference in Brisbane in April 2023.

Below you will find

  • Summary – a few paragraphs summarising the presentation.
  • Slides – copies of the slides used.
  • Software – some of the software produced/used as part of the work.
  • References – used in the summary and the slides.
  • Abstract – the original conference abstract.


The presentation used our experience as part of a team migrating 1500+ course sites from Blackboard to Canvas to explore a broader challenge, one recently expressed in the Productivity Commission’s “Advancing Prosperity” report and its recommendations to grow access to tertiary education while containing cost and improving quality. This challenge of maximising cost efficiency, quality, and access (diversity & scale) all at once is seen as a key issue for higher education (Ryan et al., 2021). It has even been labelled the “Iron Triangle” because – unless you change the circumstances and conditions – improving one indicator will almost inevitably lead to deterioration in the others (Mulder, 2013). The pandemic emergency response is the most recent example: necessarily rapid changes to access (moving from face-to-face to online) required significant costs (staff workload) to produce outcomes that are perceived to be of questionable quality.

Leading to the question we wanted to answer:

How do you stretch the iron triangle (i.e. maximise cost efficiency, quality, and accessibility)?

In the presentation, we demonstrated that the fundamental tasks (gather and weave) of an LMS migration are manual and repetitive, making it impossible to stretch the iron triangle. We illustrated why this is the case, demonstrated how we addressed this limitation, and proposed three principles for broader application. We argue that these principles can be usefully applied beyond LMS migration to business as usual.

Gatherers and weavers – what we do

Our job is to help academic staff design, implement, and maintain quality learning tasks and environments. We suggest that the core tasks required to do this are to gather and weave disparate strands of knowledge, ways of knowing (especially various forms of design and contextual knowledge and knowing), and technologies (broadly defined). For example, a course site is the result of gathering and weaving together such disparate strands as: content knowledge (e.g. learning materials); administrative information (e.g. due dates, timetables, etc.); design knowledge (e.g. pedagogical, presentation, visual, etc.); and information & functionality from various technologies (e.g. course profiles, Echo360, various components of the LMS, etc.).

An LMS migration is a variation on this work. It has a larger scope (all courses) and a more focused purpose (migrate from one LMS to another), but it still involves the same core tasks of gathering and weaving. Our argument is that to maximise the cost efficiency, accessibility, and quality of this work you must do the same for the core tasks of gathering and weaving. Early in our LMS migration it was obvious that this was not the case. The presentation included a few illustrative examples; there were many more that could have been used, both from the migration and from business as usual, all illustrating the overly manual and repetitive nature of gathering and weaving required by contemporary institutional learning environments.

Three principles for automating & augmenting gathering & weaving – what we did

Digital technology has long been seen as a key enabler for improving productivity through its ability to automate processes and augment human capabilities. Digital technology is increasingly pervasive in the learning and teaching environment, especially in the context of an LMS migration. But none of the available technologies were actively helping automate or augment gathering and weaving. The presentation included numerous examples of how we changed this. From this work we identified three principles.

  1. On-going activity focused (re-)entanglement.
    Our work was focused on high-level activities (e.g. analysis, migration, quality assurance, course design of 100s of course sites). These activities were not supported by any single technology, hence the manual gathering and weaving. By starting small and continually responding to changes and lessons learned, we stretched the iron triangle by digitally gathering and weaving disparate component technologies into assemblages that were fit for the activities.
  2. Contextual digital augmentation.
    Little to none of the specific contextual and design knowledge required for these activities was available digitally. We focused on usefully capturing this knowledge digitally so it could be integrated into the activity-based assemblages.
  3. Meso-level focus.
    Existing component technologies generally provide universal solutions for the institution or all users of the technology, requiring manual gathering and weaving to fit the contextual needs of each individual variation. By leveraging the previous two principles we were able to provide technologies that were fit for meso-level solutions: for example, all courses for a program or a school, or all courses that use a complex learning activity like interactive orals.

Connections with other work

Much of the above is informed by or echoes research and practice in related fields. It’s not just the three of us. The presentation made explicit connections with the following:

  • Learning and teaching;
    Fawns’ (2022) work on entangled pedagogy as encapsulating the mutual shaping of technology, teaching methods, purposes, values and context (gathering and weaving). Dron’s (2022) re-definition of educational technology drawing on Arthur’s (2009) definition of technology. Work on activity-centred design – which understands teaching as a distributed activity – as key to both good learning and teaching (Markauskaite et al., 2023), but also key to institutional management (Ellis & Goodyear, 2019). Lastly – at least in the presentation – the nature of and need for epistemic fluency (Markauskaite et al., 2023).
  • Digital technology; and,
    Drawing on numerous contemporary practices within digital technology that break the false dilemma of “buy or build”, such as the project-to-product movement (Philip & Thirion, 2021), Robotic Process Automation, citizen development, and the idea of lightweight IT development (Bygstad, 2017).
  • Leadership/strategy.
    Briefly linking the underlying assumptions of all of the above as examples of the move away from corporate and reductionist strategies that reduce people to “smooth users” toward possible futures that see us as more “collective agents” (Macgilchrist et al., 2020). A shift seen as necessary to more likely lead – as argued by Markauskaite et al. (2023) – to the “even richer convergence of ‘natural’, ‘human’ and ‘digital’” required to respond effectively to global challenges.

There’s much more.


The presentation does include three videos that are available if you download the slides.

Related Software

Canvas QA is a Python script that will perform Quality Assurance checks on numerous Canvas courses and create a QA Report web page in each course’s Files area. The QA Report lists all the issues discovered and provides some scaffolding to address the issues.
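As a sketch of the kind of rule such a QA script might implement (illustrative only, not the actual Canvas QA source; the class and function names are invented), here is a single check – flagging images that lack alt text – built on Python’s standard html.parser:

```python
# Illustrative sketch only -- not the actual Canvas QA script.
# One example QA rule: flag <img> elements that have no alt attribute.
from html.parser import HTMLParser


class MissingAltChecker(HTMLParser):
    """Collect the src of every <img> tag missing an alt attribute."""

    def __init__(self):
        super().__init__()
        self.issues = []

    def handle_starttag(self, tag, attrs):
        if tag == "img":
            attrs = dict(attrs)
            if "alt" not in attrs:
                self.issues.append(attrs.get("src", "(unknown src)"))


def check_page(html: str) -> list:
    """Return the src of each image in the HTML that lacks alt text."""
    checker = MissingAltChecker()
    checker.feed(html)
    return checker.issues
```

A full QA run would fetch each course page via the Canvas API, apply a battery of such checks, and write the results into the QA Report page described above.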

Canvas Collections helps improve the visual design and usability/findability of the Canvas modules page. It is JavaScript that can be installed by institutions into Canvas or by individuals as a userscript. It enables the injection of design- and context-specific information into the vanilla Canvas modules page.

Word2Canvas converts a Word document into a Canvas module to offer improvements to the authoring process in some contexts. At Griffith University, it was used as part of the migration process: Blackboard course site content was automatically converted into appropriate Word documents which, with a slight edit, could be loaded directly into Canvas.
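To give a feel for the mechanics of such a conversion (a rough sketch assuming a standard .docx file; this is not word2canvas’s actual code, and the function names are invented), a Word document is a zip archive whose word/document.xml holds the paragraph text a converter has to work from:

```python
# Sketch only: pull paragraph text out of the WordprocessingML inside a .docx.
# (Invented for illustration; not the actual word2canvas implementation.)
import xml.etree.ElementTree as ET
import zipfile

# Namespace used by word/document.xml in the Office Open XML format
NS = {"w": "http://schemas.openxmlformats.org/wordprocessingml/2006/main"}


def paragraph_texts(document_xml: bytes) -> list:
    """Return the text of each w:p paragraph in a document.xml payload."""
    root = ET.fromstring(document_xml)
    return ["".join(t.text or "" for t in p.findall(".//w:t", NS))
            for p in root.findall(".//w:p", NS)]


def docx_paragraphs(path: str) -> list:
    """Read word/document.xml out of the .docx zip archive and parse it."""
    with zipfile.ZipFile(path) as z:
        return paragraph_texts(z.read("word/document.xml"))
```

The real tool then has to weave these strands into Canvas module items via the Canvas API, which is where the design and contextual knowledge comes in.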


Arthur, W. B. (2009). The Nature of Technology: What it is and how it evolves. Free Press.

Bessant, S. E. F., Robinson, Z. P., & Ormerod, R. M. (2015). Neoliberalism, new public management and the sustainable development agenda of higher education: History, contradictions and synergies. Environmental Education Research, 21(3), 417–432.

Bygstad, B. (2017). Generative Innovation: A Comparison of Lightweight and Heavyweight IT. Journal of Information Technology, 32(2), 180–193.

Cassidy, C. (2023, April 10). ‘Appallingly unethical’: Why Australian universities are at breaking point. The Guardian.

Ellis, R. A., & Goodyear, P. (2019). The Education Ecology of Universities: Integrating Learning, Strategy and the Academy. Routledge.

Fawns, T. (2022). An Entangled Pedagogy: Looking Beyond the Pedagogy—Technology Dichotomy. Postdigital Science and Education, 4(3), 711–728.

Hagler, B. (2020). Council Post: Build Vs. Buy: Why Most Businesses Should Buy Their Next Software Solution. Forbes. Retrieved April 15, 2023, from

Inside Track Staff. (2022, October 19). Citizen developers use Microsoft Power Apps to build an intelligent launch assistant. Inside Track Blog.

Lodge, J., Matthews, K., Kubler, M., & Johnstone, M. (2022). Modes of Delivery in Higher Education (p. 159).

Macgilchrist, F., Allert, H., & Bruch, A. (2020). Students and society in the 2020s. Three future ‘histories’ of education and technology. Learning, Media and Technology, 45(0), 76–89.

Markauskaite, L., Carvalho, L., & Fawns, T. (2023). The role of teachers in a sustainable university: From digital competencies to postdigital capabilities. Educational Technology Research and Development, 71(1), 181–198.

Mulder, F. (2013). The LOGIC of National Policies and Strategies for Open Educational Resources. International Review of Research in Open and Distributed Learning, 14(2), 96–105.

Philip, M., & Thirion, Y. (2021). From Project to Product. In P. Gregory & P. Kruchten (Eds.), Agile Processes in Software Engineering and Extreme Programming – Workshops (pp. 207–212). Springer International Publishing.

Ryan, T., French, S., & Kennedy, G. (2021). Beyond the Iron Triangle: Improving the quality of teaching and learning at scale. Studies in Higher Education, 46(7), 1383–1394.

Schmidt, A. (2017). Augmenting Human Intellect and Amplifying Perception and Cognition. IEEE Pervasive Computing, 16(1), 6–10.

Smee, B. (2023, March 6). ‘No actual teaching’: Alarm bells over online courses outsourced by Australian universities. The Guardian.


The pandemic reinforced higher education’s difficulty responding to the long-observed challenge of how to sustainably, and at scale, fulfill diverse requirements for quality learning and teaching (Bennett et al., 2018; Ellis & Goodyear, 2019). The difficulty has increased due to many issues, including: competition with the private sector for digital talent; battling concerns over the casualisation and perceived importance of teaching; and growing expectations around ethics, diversity, and sustainability. That this challenge is unresolved and becoming increasingly difficult suggests a need for innovative practices in both learning and teaching, and in how learning and teaching is enabled. Starting in 2019, and accelerated by a Learning Management System (LMS) migration starting in 2021, a small group have been refining and using an alternate set of principles and practices to respond to this challenge by developing reusable orchestrations – organised arrangements of actions, tools, methods, and processes (Dron, 2022) – to sustainably, and at scale, fulfill diverse requirements for quality learning and teaching. This leads to a process where requirements are informed through collegial networks of learning and teaching stakeholders that weigh their objective strategic and contextual concerns to inform priority and approach, helping to share knowledge and concerns and to develop institutional capability laterally and in recognition of available educator expertise.

The presentation will be structured around three common tasks: quality assurance of course sites; migrating content between two LMSs; and designing effective course sites. For each task a comparison will be made between the group’s innovative orchestrations and standard institutional/vendor orchestrations. These comparisons will: demonstrate the benefits of the innovative orchestrations; outline the development process; and explain the three principles informing this work – 1) contextual digital augmentation, 2) meso-level automation, and 3) generativity and adaptive reuse. The comparisons will also be used to establish the practical and theoretical inspirations for the approach, including: Robotic Process Automation (RPA) and citizen development; and convivial technologies (Illich, 1973), lightweight IT development (Bygstad, 2017), and socio-material understandings of educational technology (Dron, 2022). The breadth of the work will be illustrated through an overview of the growing catalogue of orchestrations using a gatherers, weavers, and augmenters taxonomy.


Bennett, S., Lockyer, L., & Agostinho, S. (2018). Towards sustainable technology-enhanced innovation in higher education: Advancing learning design by understanding and supporting teacher design practice. British Journal of Educational Technology, 49(6), 1014–1026.

Bygstad, B. (2017). Generative Innovation: A Comparison of Lightweight and Heavyweight IT. Journal of Information Technology, 32(2), 180–193.

Dron, J. (2022). Educational technology: What it is and how it works. AI & SOCIETY, 37, 155–166.

Ellis, R. A., & Goodyear, P. (2019). The Education Ecology of Universities: Integrating Learning, Strategy and the Academy. Routledge.

Illich, I. (1973). Tools for Conviviality. Harper and Row.

A proposal for fixing what's broken with ed tech support in some universities

This paper analyses the outcomes of what a small group of academics (myself included) attempted in order to develop the knowledge/capability to create effective learning for hundreds of pre-service teachers via e-learning. That experience is analysed using a distributive view of knowledge and learning, and it illustrates just how broken what passes for ed tech support/academic staff development is in some universities. Picking up on yesterday’s post, the paper reports on academics harnessing their digital fluency to address the almost complete lack of usefulness of the institutionally developed attempts at supporting academic staff in developing the knowledge necessary for effective e-learning.

The distributive view of knowledge and learning used in the paper drew on three conceptual themes from Putnam and Borko (2000) and one theme we’ve added. Those themes suggest that knowledge and learning is/should be

  1. situated;
    Context matters. An inappropriate context can limit transfer of learning into different contexts. The entire system/context in which learning takes place is fundamental to what is learned.
  2. social;
    How we think and what we know arises from on-going interactions with groups of people over time.
  3. distributed; and,
    The knowledge required to perform a task does not exist solely within an individual person or even groups of people. It also resides in artifacts. Appropriate tools can enhance, transform and distribute cognition and expand “a system’s capacity for innovation and invention” (Putnam & Borko, 2000, p. 10).
  4. protean.
    The computer is “the first metamedium, and as such has degrees of freedom and expression never before encountered” (Kay, 1984, p. 59); it has a “protean nature”. I.e. digital technology can be flexible and should be open to manipulation in response to needs.

This post is an attempt to propose one way in which institutional attempts at ed tech support could be transformed to actually support these four themes. i.e. to actually be made useful and appropriate for the task.

Analysing existing practice

When I first started on this post the plan was to analyse my existing institution’s attempts at ed tech support. So I logged into the Moodle site for the course I’ll be teaching next semester and asked the question, “If I had a problem with X (whatever that is), how would I find an answer?”. The answer to that question was summarised in this post. A post that is password protected because of the embarrassing difficulty I had in answering that question.

Using the four themes, the following criticisms might be made:

  1. situated;
    The support resources were not situated in the context. I could not find any help with the course site from within the course site. I had to go to another website and waste time figuring out which labyrinth of links to follow to get to the support resources. The first time I tried, I failed.

    I just wanted to check some aspect of my earlier analysis. Initially I had difficulties finding my way through the labyrinth.

  2. social;
    Almost all of the support resources were centrally produced and approved, some with heavy production values. Pure information distribution. There was a small collection of Moodle discussion forums intended, I imagine, to encourage social interaction. Half of those forums had no posts; the other half had single posts, all from the same author.

    Apart from the discussion forums, the only way to add to these resources was via the small number of people from central support.

    This also means that the “message” shared via these resources can be controlled by the institution, raising the question of whether differing views can be expressed. For example, there is a section on using the Mahara e-portfolio that extols the educational virtues of Mahara. There’s no way I can contribute the reasons I don’t use Mahara and use something different instead. The point isn’t that Mahara is a bad tool, but that there are some issues with using it, and there are alternatives. More importantly, there’s no way to share this alternate view.

  3. distributed; and
    Any content added to the site had to be manually added by a small, select group of people. There was no integration between the resources and other systems. For example, the IT Help Desk system was in no way integrated. So if a known problem with the “discussion forums” was raised through the IT help desk, there was no way for that information to appear in the support resources on “discussion forums”.
  4. protean.
    The support resources were implemented in an ad hoc collection of Moodle-based course sites. The resources were all static and professionally designed, with little or no way to repurpose those resources or to add to them.

    Moodle discussion forums can generate RSS feeds and also have the option of subscribing to a forum via email. These are methods that allow a user to modify how they interact with the discussion forum. If I’m interested in a forum I can integrate new activity on the forum into my daily routine either through my feed reader or email.

    The ability to grab the RSS feed or subscribe via email to the discussion forums in these support resources is not visible via the interface that has been used.
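As a minimal sketch of what this protean quality enables (the RSS structure below is a generic sample and the function is invented for illustration, not part of Moodle), folding forum activity into a daily routine needs nothing more than the feed and a few lines of standard-library Python:

```python
# Sketch: pull the latest items out of a forum's RSS feed so forum
# activity can be folded into a daily routine. Illustrative only.
import xml.etree.ElementTree as ET


def latest_posts(feed_xml: str, limit: int = 5) -> list:
    """Return (title, link) pairs for the most recent items in an RSS feed."""
    root = ET.fromstring(feed_xml)
    items = root.findall(".//item")[:limit]
    return [(item.findtext("title"), item.findtext("link")) for item in items]
```

In practice the feed XML would be fetched from the forum’s RSS URL (via urllib or a feed reader); the point is that the forum’s openness to this kind of manipulation is exactly what the support resources’ interface hides.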

Some design principles

Beyond critiquing what exists, the four conceptual themes above might also be useful in terms of developing guidelines for what might be. Here’s an initial brainstorm of potential guidelines. Feel free to add and argue.

  1. The support resources should be situated within the context of the academics.
    Some of what this might suggest includes

    • If you want to learn about using the discussion forums better, you should be able to do this from within the discussion forum.
    • If you want to know how to use the discussion forum for an introductory/warm-up activity at the start of semester, this should be possible.
    • The support you receive should be tailored(able) to the type of course or discipline you are teaching.
    • The support system should know who you are, what you’re teaching, what you’ve done before, what groups you belong to, what time of semester (e.g. before semester starts, first couple of weeks, assessment due, end of semester etc) it is etc.
    • The support system should use the tools that people use (not the institution).
      i.e. not this from Dutton

      Organizations aren’t thinking about the ‘networked individual’ – the networking choices and patterns of individual Internet users. They’re still focused on their own organizational information systems and traditional institutional networks.

  2. The support area should encourage/enable participation in various discourse communities.
    Which might suggest approaches such as

    • The system should make you aware of the communities/individuals that are using the tool you are currently (thinking of) using.
      e.g. if you’re looking at using the BIM Moodle module the support system should help you become aware of who else has used the tool and perhaps how (leading into..)
    • The system should help capture and make available for on-going use the “residue of experience” (Riel & Polin, 2004) of other members of the community.
      The discussion, reflections and analysis of prior use of tools and methods should be available. At a simple level, this might be ensuring that any and all questions about the discussion forum (including those from the helpdesk) be visible/searchable from the support site about the discussion forum.
  3. The support area should integrate with and integrate into it all of the appropriate organisational and external systems and processes.
    This might include such things as

    • Knowledge from other systems offering support appear automatically.
      For example, any known issues information about tools are integrated appropriately into the environment.
    • Organisational information sources such as student records systems, teaching responsibilities databases, results of course evaluation surveys etc should be integrated into the support and used to situate and modify resources appropriately.
    • Knowledge from the support area should be openly available (as appropriate) for integration into other systems.
      Might be as simple as generating an RSS/OPML feed (or two) or allowing email subscription. Perhaps publish an API.
    • The “how to do” advice in the support area should actually help you do it.
      I.e. rather than a sequence of steps describing what you do, there’s actually a link that will take you back to the actual system and help you do it. Linked to the idea of Context Appropriate Scaffolding Assemblages (CASA).
  4. The support area should support manipulation and change by the users and their actions (protean).
    This might mean

    • Something as simple as having decent customisation options.
    • Something more radical like Smallest Federated Wiki.
      i.e. where each individual or group could fork the support resources and make their own changes. Changes that might be potentially integrated back into the original institutional version.

One illustration

So how might that work in action? Here’s one possible illustration.


You start by logging in to one of the institutional systems (e.g. the LMS).

Straight away I have a qualm about whether or not a login is required. In order for the system to know about you (see the situated principle above) some form of identification is required. But requiring a login means that the system isn’t open. So perhaps there’s an avenue that doesn’t require a login.

The Mini-map appears

Not only do you log in to the LMS but you also log in to the “support system”, and the mini-map appears.

The mini-map is a small icon (or three) that appears in the browser, perhaps in the top right-hand corner of the page. From now on, wherever you go the mini-map is there. But as you move around to different systems it will likely change, because it knows your situation and responds accordingly.

This is based on the mini-map concept from games occurring in immersive 3D worlds. The suggestion isn’t that this mini-map be represented as an actual map (though perhaps it might be); the point is that its purpose is to help orient you within the e-learning space.

What the mini-map might do

Nesbitt et al. (2009) suggest that a

mini-map might also display the position of key landmarks along with the position of the player’s avatar and any other relevant actors in the game

which gives some idea of what the “mini-map” in this context might do.

Specific functionality might include

  1. You are here.
    Provide a summary of what it knows about your current location within the teaching and learning environment. This might include insight into the time of term; common or required tasks you may need to complete soon (or have completed at similar times in the past); and updates and announcements on what’s been going on in the environment since you were last here.

    E.g. new problems that have arisen around where you are, such as the lecture capture system being down and being worked upon. This also suggests that the support system should be independent (distributed) from the various services, so it can keep working if they are down.

  2. Who else is here.
    Let you know who else is on this particular page, or who else is using this particular service in another course or at another time. e.g. other people who have used this particular service. Provide some functionality to allow you to control and organise who you want to know about.
  3. What have they done.
    Access to the residue of experience: what have these people actually done within your current location? What worked? What didn’t? This might also include links to literature etc.
  4. How to do stuff.
    Advice on how to perform various tasks: pedagogical patterns, learning designs, etc., including potential CASAs that would help, or do stuff for you.

And on a non-institutional system

The mini-map would appear when you visit any online location that has been used for learning and teaching. For example, if you went to Google Drive you would have access to (almost) the same functionality described above.

If the mini-map didn’t appear, because you’ve visited a tool that no-one else has used before, you could choose to add the tool to the mini-map and that addition would then be visible to others.


I see the mini-map being implemented with something like a Greasemonkey script. This is how it would be possible for it to appear regardless of whether you’re viewing an institutional or non-institutional system.

It might work something like the following

  1. You’ve installed the Greasemonkey script on your browser.
  2. You can choose to enable or disable the script at any time.
  3. Then, whenever you visit a web page the mini-map grabs the URL for that page and sends it to a server.
  4. The server checks to see if that URL matches anything supported by the mini-map.
  5. The version of the mini-map that is displayed depends on whether the URL is currently supported:
    • Supported – then show the full mini-map.
    • Not supported – then show the minimal mini-map with just the option to “add this page”
  6. If viewing the supported mini-map you then have access to a range of functionality.
  7. Some functionality will pop up new information.
    e.g. click on the “People” icon and the mini-map might show a list of people you “know” that are/have been here.
  8. Some functionality will take you into a different system.
    e.g. click on one of the people in the list and you might get taken to a web page that shows what they were doing, when, and also provides access to details of what others have done.
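The server-side check in steps 4 and 5 could be sketched as follows (a sketch under stated assumptions: the registry contents, host names, and function name are all hypothetical, and a real system would back the registry with a database):

```python
# Hypothetical sketch of the mini-map server's URL check (steps 4-5 above).
from urllib.parse import urlparse

# Assumed registry of supported locations; in a real system this would live
# in a database and grow as users "add this page" for new tools.
SUPPORTED = {
    "lms.example.edu": ["/course/", "/mod/forum/"],
    "drive.google.com": ["/"],
}


def minimap_mode(url: str) -> str:
    """Return 'full' for a supported location, else 'minimal' (add-this-page)."""
    parsed = urlparse(url)
    for prefix in SUPPORTED.get(parsed.netloc, []):
        if parsed.path.startswith(prefix):
            return "full"
    return "minimal"
```

The Greasemonkey script would then render either the full mini-map or the minimal “add this page” version based on the server’s response.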

The different systems used to provide the various support services should be whatever makes sense, with a focus on tools people use all the time rather than being limited to institutional tools. You might use Slack for some functions; Smallest Federated Wiki might be good for others.

An interesting and challenging extension to this would be to allow the “mini-map” to be extensible by just about anyone at anytime.

Time for lunch.


Dutton, W. (2010). Networking distributed public expertise: strategies for citizen sourcing advice to government. One of a Series of Occasional Papers in Science and Technology Policy. Retrieved from

Nesbitt, K., Sutton, K., Wilson, J., & Hookham, G. (2009). Improving player spatial abilities for 3D challenges. Proceedings of the Sixth …, 1–3. doi:10.1145/1746050.1746056

Putnam, R., & Borko, H. (2000). What do new views of knowledge and thinking have to say about research on teacher learning? Educational Researcher, 29(1), 4–15. Retrieved from

Riel, M., & Polin, L. (2004). Online learning communities: Common ground and critical differences in designing technical environments. In S. A. Barab, R. Kling, & J. Gray (Eds.), Designing for Virtual Communities in the Service of Learning (pp. 16–50). Cambridge: Cambridge University Press.

Some stories from teaching awards

This particular post tells some personal stories about teaching awards within Australian higher education. It’s inspired by a tweet or two from @jonpowles.

Some personal success

For my sins, I was the “recipient of the Vice-Chancellor’s Award for Quality Teaching for the Year 2000”. The citation includes

in recognition of demonstrated outstanding practices in teaching and learning at…., and in recognition of his contribution to the development of online learning and web-based teaching within the University and beyond

I remain a little annoyed that this was pre-ALTC. The potential extra funds from a national citation would have helped the professional development fund. But the real problem with this award was the message I received from this experience about the value of teaching to the institution. Here’s a quick summary. (BTW, the institutional teaching awards had been going for at least 2 or 3 years before 2000; this was not the first time they’d done this.)

Jumping through hoops

As part of the application process, I had to create evidence to justify that my teaching was good quality. That’s a fairly standard process.

What troubled me then, and troubles me to this day, is that the institution had no way of knowing. Its core business is learning and teaching, and it had no mechanisms in place that could identify the good and the bad teachers.

In fact, at that stage the institution didn’t have a teaching evaluation system. One of my “contributions to the development of online learning” was developing a web-based survey mechanism that I used in my own publications. This publication reports response rates of between 29% and 41% in one of my courses.

It is my understanding that the 2010 institutional evaluation system still dreams about reaching percentages that high.

Copy editing as a notification mechanism

Want to know how I found out I’d won the award? It was when an admin assistant from the L&T division rang me up and asked me to approve the wording of the citation.

Apparently, the Vice-Chancellor had been busy and/or away and hadn’t yet officially signed off on the result, so I couldn’t be officially notified. However, the date for the graduation ceremony at which the award was to be given was fast approaching. In order to get the citation printed, framed, and physically available at the ceremony, the folk responsible for implementation had to go ahead and ask me to check the copy.

Seeing the other applications

I actually don’t remember exactly how this happened. I believe it was part of checking the copy of the citation; however it happened, I ended up with a package that contained the submissions from all of the other applicants.

Double dipping

The award brought with it some financial reward, both at the faculty level (winning the faculty award was the first step) and the university level. The trouble was that even this part of the process was flawed. Though it was flawed in my favour. I got paid twice!

The money went into a professional development fund that was used for conference travel, equipment etc. Imagine my surprise and delight when my professional development fund received the reward, twice.

You didn’t make a difference

A significant part of the reason for the reward was my work in online learning and, in particular, the development of the Webfuse e-learning system. Parts of which are still in use at the institution and the story is told in more detail in my thesis.

About 4 years after receiving this award, recognising the contribution, a new Dean told me not to worry about working on Webfuse anymore; it had made no significant difference to learning and teaching within the faculty.

Mixed messages and other errors

Can you see how the above experience might make someone a touch cynical about the value of teaching awards? It certainly didn’t appear to me that the recognition of quality teaching was so essential to the institution’s operations that they had efficient and effective processes. Instead, it felt like the teaching award was just some add-on. Not to mention a very subjective add-on at that.

But the mixed messages didn’t stop there. They continued on with the rise of the ALTC. Some additional observed “errors”.

Invest at the tail end

With the rise of the ALTC it became increasingly important that an institution and its staff be seen to receive teaching citations. The number of ALTC teaching citations received became a KPI on management plans. Resources started to be assigned to ensuring the awarding of ALTC citations.

Obviously those resources were invested at the input stage of the process, into the teaching environment, to encourage and enable university staff to engage in quality learning and teaching. No.

Instead it was invested in hiring part-time staff to assist in the writing of the ALTC citation applications. It was invested in performing additional teaching evaluations for the institutional teaching award winners to cover up the shortcomings (read: absence) of an effective broad-scale teaching evaluation system. It was invested in bringing ALTC winners onto campus to give “rah-rah” speeches about the value of teaching quality and “how I did it” pointers.

Reward the individual, not the team

Later in my career I was briefly – in-between organisational restructures – responsible for the curriculum design and development unit at the institution. During that time, a very talented curriculum designer worked very hard and very well with a keen and talented accounting academic to entirely re-design an Accounting course. The re-design incorporated all the right educational buzzwords (“cognitive apprenticeship”) and the current ed tech fads (Second Life), and was a great success. Within a year or two the accounting academic received an institutional award and then an ALTC citation.

The problem was that the work the citation was for could never have been completed by the academic alone. Without the curriculum designer involved, and the sheer amount of effort she invested in the project, the work would never have happened. Not surprisingly, the curriculum designer was somewhat miffed.

But it goes deeper than that. The work would also not have been possible without the efforts of a range of staff within the curriculum design unit, not to mention a whole range of other teaching staff (this course often has tens of teaching staff at multiple campuses).

I know there are some ALTC citations that have been awarded to teams, but most ALTC citations are to individuals and this is certainly one example where a team missed out.

Attempt to repeat the success and fail to recognise diversity

And it goes deeper still. The work for this course was not planned. It did not result from senior management developing a strategic plan that was translated into a management plan that informed the decision making of some group that decided to invest X resources in Y projects to achieve Z goals.

It was all happenstance. The right people were in the right place at the right time, and they were encouraged and enabled to run with their ideas. Some of the ideas were a bit silly; they had to be worked around, manipulated and cut back. But it was through a messy process of context-sensitive collaboration between talented people that this good work arose.

Ignoring this, some folk then mistakenly tried to transplant the approach taken in this course into other courses. They failed to recognise that “lightning doesn’t strike twice”. You can’t simply transplant a successful approach from one course context into another. What you really have to do is start another messy process of context-sensitive collaboration between talented people.

Quality teaching has to be embedded

This brings me back to some of the points that I made about the demise of the ALTC. Quality teaching doesn’t arise from external bodies and their actions; it arises from conditions within a university that enable and encourage messy processes of context-sensitive collaboration between talented people.

Becoming aware of the existence of different perceptions

One of the ideas proposed, or at least reportedly proposed, in Shekerjian (1990) is that the act of becoming aware that other people hold different perceptions of some task helps you think about your own strategy. I like this idea and tend to believe that being aware of diversity of opinion can help.

My question, then, is why does the implementation of most Learning Management Systems in universities preclude the ability to become aware of a diversity of perceptions? Access to most course websites is generally limited to the staff and students associated with the specific course. Most of the tools and services within an LMS are designed so that they make no mention of how the service is being used by other folk.

Actually, the why is probably a combination of factors. It’s easier to program this way. Complete transparency between courses would worry some folk and could potentially create problems. Not to mention that being aware of different perceptions, and being able to accept them, is not always that easy.

I do think, however, that modifying the design and implementation of LMS is one interesting avenue into interventions that can help modify the teaching environment into one that enables and encourages improvement.

The Moodle 2.0 Community Hub approach is one concrete example of this happening already. However, I think there are at least two limitations to this approach. First, it appears to operate at the course level. I think this might end up being too coarse-grained, and not as useful as some form of transparency below the course level. Second, I think it will be interesting to see how many universities configure their installations of Moodle to participate in the community hub approach. Not to mention the reasons why they make this decision and how such a decision is implemented in terms of academic buy-in, support etc.

A lack of awareness of the different perceptions of teaching and associated processes is one of the limitations I think may be holding back improvements in university teaching. Especially different perceptions that are represented as concrete strategies currently being implemented within a specific institutional context. It’s this concreteness, and its specific connection to the institutional context, which makes it more likely to have impact than more abstract approaches such as learning designs and staff development presentations from outside experts or “good” teachers.

Situated shared practice, curriculum design and academic development

Am currently reading Faegri et al (2010) as part of developing the justificatory knowledge for the final ISDT for e-learning that is meant to be the contribution of the thesis. The principle from the ISDT that this paper connects with is the idea of a “Multi-skilled, integrated development and support team” (the name is a work in progress). The following is simply a placeholder for a quote from the paper and a brief connection with the ISDT and what I think it means for curriculum design and academic development.

The quote

The paper itself reports an action research project in which job rotation was introduced into a software development firm with the aim of increasing the quality of the knowledge held by software developers. The basic finding was that, in this case, there were some benefits; however, the problems outweighed them. I haven’t read all the way through; I’m currently working through the literature review. The following quote is from the review.

Key enabling factors for knowledge creation is knowledge sharing and integration [36,54]. Research in organizational learning has emphasized the value of practice; people acquire and share knowledge in socially situated work. Learning in the organization occurs in the interplay between tacit and explicit knowledge while it crosses boundaries of groups, departments, and organizations as people participate in work [17,54]. The process should be situated in shared practice with a joint, collective purpose [12,14,15].

Another related quote

The following is from a bit more related reading, in particular Seely Brown & Duguid (1991) – emphasis added

The source of the oppositions perceived between working, learning, and innovating lies primarily in the gulf between precepts and practice. Formal descriptions of work (e.g., “office procedures”) and of learning (e.g., “subject matter”) are abstracted from actual practice. They inevitably and intentionally omit the details. In a society that attaches particular value to “abstract knowledge,” the details of practice have come to be seen as nonessential, unimportant, and easily developed once the relevant abstractions have been grasped. Thus education, training, and technology design generally focus on abstract representations to the detriment, if not exclusion of actual practice. We, by contrast, suggest that practice is central to understanding work. Abstractions detached from practice distort or obscure intricacies of that practice. Without a clear understanding of those intricacies and the role they play, the practice itself cannot be well understood, engendered (through training), or enhanced (through innovation).


I see this as highly relevant to the question of how to improve learning and teaching in universities, especially in terms of the practice of e-learning, curriculum design and academic development. It’s my suggestion that the common approaches to these tasks in most universities ignore the key enabling factors mentioned in the above quote.

For example, the e-learning designers/developers, curriculum designers and academic developers are generally not directly involved with the everyday practice of learning and teaching within the institution. As a result the teaching academics and these other support staff don’t get the benefit of shared practice.

A further impediment to shared practice is the divisions between e-learning support staff, curriculum designers and academic developers that are introduced by organisational hierarchies. At one stage, I worked at a university where the e-learning support people reported to the IT division, the academic staff developers reported to the HR division, the curriculum designers reported to the library, and teaching academics were organised into faculties. There wasn’t a common shared practice amongst these folk.

Instead, any sharing that did occur was either at high-level project or management boards and committees, or in design projects prior to implementation. The separation reduced the ability to combine, share and create new knowledge about what was possible.

The resulting problem

The following quote is from Seely Brown and Duguid (1991):

Because this corporation’s training programs follow a similar downskilling approach, the reps regard them as generally unhelpful. As a result, a wedge is driven between the corporation and its reps: the corporation assumes the reps are untrainable, uncooperative, and unskilled; whereas the reps view the overly simplistic training programs as a reflection of the corporation’s low estimation of their worth and skills. In fact, their valuation is a testament to the depth of the rep’s insight. They recognize the superficiality of the training because they are conscious of the full complexity of the technology and what it takes to keep it running. The corporation, on the other hand, blinkered by its implicit faith in formal training and canonical practice and its misinterpretation of the rep’s behavior, is unable to appreciate either aspect of their insight.

It resonates strongly with some recent experience of mine at an institution rolling out a new LMS. The training programs around the new LMS, the view of management, and the subsequent response from the academics showed some very strong resemblances to the situation described above.

An alternative

One alternative is what I’m proposing in the ISDT for e-learning. The following is an initial description of the roles/purpose of the “Multi-skilled, integrated development and support team”. Without too much effort you could probably translate this into broader learning and teaching, not just e-learning. Heaven forbid, you could even use it for “blended learning”.

An emergent university e-learning information system should have a team of people that:

  • is responsible for performing the necessary training, development, helpdesk, and other support tasks required by system use within the institution;
  • contains an appropriate combination of technical, training, media design and production, institutional, and learning and teaching skills and knowledge;
  • through the performance of its allocated tasks the team is integrated into the everyday practice of learning and teaching within the institution and cultivates relationships with system users, especially teaching staff;
  • is integrated into the one organisational unit, and as much as possible, co-located;
  • can perform small scale changes to the system in response to problems, observations, and lessons learned during system support and training tasks rapidly without needing formal governance approval;
  • actively examines and reflects on system use and non-use – with a particular emphasis on identifying and examining what early innovators are doing – to identify areas for system improvement and extension;
  • is able to identify and to raise the need for large scale changes to the system with an appropriate governance process; and
  • is trusted by organisational leadership to translate organisational goals into changes within the system, its support and use.


Faegri, T. E., Dyba, T., & Dingsoyr, T. (2010). Introducing knowledge redundancy practice in software development: Experiences with job rotation in support work. Information and Software Technology, 52(10), 1118-1132.

Seely Brown, J., & Duguid, P. (1991). Organizational learning and communities-of-practice: Toward a unified view of working, learning, and innovation. Organization Science, 2(1), 40-57.

The rider, elephant, and shaping the path

Listened to this interview with Chip Heath, a Stanford Professor in Organizational Behaviour, about his co-authored book Switch: How to change things when change is hard. My particular interest in this arises from figuring out how to improve learning and teaching in universities. From the interview, this seems to be another in a line of “popular science” books aimed at making clear what science/research knows about the topic.

The basic summary of the findings seems to be this: if you wish to make change more likely, then your approach has to (metaphorically):

  • direct the rider;
    The rider represents the rational/analytical decision making capability of an individual. This capability needs to be appropriately directed.
  • engage the elephant; and
    The elephant represents the individual’s emotional/instinctive decision making approach. From the interview, the elephant/rider metaphor has the express purpose of showing that the elephant is far stronger than the rider. In typical situations, the elephant is going to win, unless there’s some engagement.
  • shape the path.
    This represents the physical and related environment in which the change is going to take place. My recollection is that the shaping has to support the first two components, but also be designed to make it easier to traverse the path and get to the goal.

There are two parts of the discussion that stuck with me as I think they connect with the task of improving learning and teaching within universities.

  1. The over-rationalisation of experts.
  2. Small scale wins.

Over-rationalisation of experts

The connection between organisational change and losing weight seems increasingly common; it’s one I used, and it’s mentioned in the interview. One example from the interview shows how a major problem with change is that it is driven by experts: experts who have a significantly larger “rider” (i.e. rational/analytical knowledge of the problem area/target of change) than the people they are trying to change. This overly large rider leads to change mechanisms that over-complicate things.

The example they use is the recently modified food pyramid from the United States, which makes suggestions something like, “For a balanced diet you should consume X tablespoons of Y a day”. While this makes sense to the experts, a normal person has no idea how many tablespoons of Y are in their daily diet. In order to achieve the desired change, the individual needs to develop all sorts of additional knowledge and expertise. Which is just not likely.

They compare this with some US-based populariser of weight loss who proposes much simpler suggestions e.g. “Don’t eat anything that comes through your car window”. It’s a simpler, more evocative suggestion that appears to be easier for the rider to understand and helps engage the elephant somewhat.

I can see the equivalent of this within learning and teaching in higher education. Change processes are typically conceived and managed by experts. Experts who over rationalise.

Small scale wins

Related to the above is the idea that change always involves barriers or steps that have to be got over. Change is difficult. The suggestion is that when shaping the path you want to design it in such a way that the elephant can almost just walk over the barrier. The interviewer gives the example of never being able to get her teenage sons to stop taking towels from the bathroom into their bedroom. Eventually what worked was “shaping the path”: storing the sons’ underwear in the bathroom, not their bedroom.

When it comes to improving learning and teaching in universities, I don’t think enough attention is paid to “shaping the path” like this. I think this is in part due to the process being driven by the experts, so they simply don’t see the need. But it is also, increasingly, due to the fact that the people involved can’t shape the path. Some of the reasons the path can’t be shaped include:

  • Changing the “research is what gets me promoted” culture in higher education is very, very difficult and not likely to happen effectively if just one institution does it.
  • The L&T path itself (e.g. the LMS product model or the physical infrastructure of a campus) is not exactly set up to enable “shaping”.
  • The people involved at a university, especially in e-learning, don’t have the skills or the organisational structure to enable “shaping”.

Possible uses of academic analytics

The following is a slightly edited copy of a message I’ve just sent off to the Learning Analytics GoogleGroup set up by George Siemens. I’m into reuse. Essentially it tries to highlight a small subset of the uses of learning analytics that I see as most interesting.

Some colleagues and I have been taking some baby steps in this area. In terms of trying to understand where we might end up, we’ve started talking about three aspects. All very tentative, but they can highlight a small subset of potential uses.

I have perhaps used a broad definition for analytics.

1. What?

This is the visualisation of what has happened in learning.

Lots of work to be done here in terms of finding out what patterns to look for and how to represent them.

For example, we took factors identified by Fresen (2007) as promoting quality as a guide to look for particular patterns.

Use #1 – Analytics can be used to test theories/propositions around learning and teaching. Perhaps by supplementing existing methods.

Fresen, J. (2007). A taxonomy of factors to promote quality web-supported learning. International Journal on E-Learning, 6(3), 351-362.

2. Why?

Once we’ve seen certain patterns, the obvious question is why did that pattern arise?

e.g. A colleague found a pattern where, on average, the older a distance education student was, the more they used the LMS.
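A pattern like this can be surfaced with a very simple aggregation over LMS activity logs. The sketch below is illustrative only, not anything we actually ran; the per-student records, field names and numbers are all invented:

```python
from statistics import mean

# Invented per-student records for one distance education course:
# student age and a count of their actions in the LMS.
records = [
    {"age": 19, "actions": 120},
    {"age": 22, "actions": 150},
    {"age": 31, "actions": 210},
    {"age": 38, "actions": 260},
    {"age": 45, "actions": 310},
    {"age": 52, "actions": 340},
]

def mean_actions_by_age_band(records, band_size=10):
    """Group students into age bands and average their LMS action counts."""
    bands = {}
    for r in records:
        band = (r["age"] // band_size) * band_size  # e.g. 38 -> the 30s band
        bands.setdefault(band, []).append(r["actions"])
    return {band: mean(actions) for band, actions in sorted(bands.items())}

print(mean_actions_by_age_band(records))
```

With real data, a steadily rising average across the age bands is the “what”; it says nothing at all about the “why”.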

The obvious question is why? Various theories are possible, but which apply? The second person to comment on the above post is a psychology research masters student who is just completing some research attempting to answer that why question.

Use #2 – Analytics can be used to identify areas for future research.

3. How?

How can you harness analytics to improve learning and teaching?

Rowan made the point about using analytics to encourage changes in policy. I’ve seen this happen. Some early analysis at one institution showed that very few course websites had a discussion forum and even fewer used it effectively. Policy changed.

Use #3 – Analytics can inform policy change.
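Findings like the discussion forum one are cheap to compute once you have the data. A minimal sketch, assuming a hypothetical snapshot of course site data; the course codes, post counts, and the threshold for “effective” use are all invented:

```python
# Hypothetical snapshot of course sites: does the site have a discussion
# forum, and how many posts did it accumulate over the term?
courses = [
    {"code": "ACCT11059", "has_forum": True,  "posts": 482},
    {"code": "MGMT20001", "has_forum": True,  "posts": 3},
    {"code": "MATH11218", "has_forum": False, "posts": 0},
    {"code": "EDED20491", "has_forum": False, "posts": 0},
    {"code": "INFT13100", "has_forum": True,  "posts": 57},
]

EFFECTIVE_POSTS = 50  # arbitrary cut-off for "effective" forum use

with_forum = [c for c in courses if c["has_forum"]]
effective = [c for c in with_forum if c["posts"] >= EFFECTIVE_POSTS]

print(f"{len(with_forum)} of {len(courses)} course sites have a forum; "
      f"{len(effective)} use it effectively")
```

The analysis is the easy part; whether such a number actually changes policy is the interesting question.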

As we work/worked in areas supporting university academics in their teaching, we were most interested in this question. In particular, we were interested in how analytics could be used to improve academics’ teaching.

Use #4 – Analytics can be used to encourage academics to reflect on their teaching.

e.g. Another colleague used analytics to reflect on how his conceptions of teaching matched what analytics showed about what happened within his courses.

Use #5 – Analytics can be used to encourage students to think differently about their learning.

Presenting students with different visualisations of what they are doing (or not doing) around learning can also encourage them to change practice.

I’ve heard reports that use of the SNAPP tool has achieved this. I’ve heard similar reports about the use of the Progress Bar block for Moodle.
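The idea behind a tool like the Progress Bar block is simple enough to sketch: compare each student’s completed activities against the set the course expects. The activity names and completion data below are invented for illustration:

```python
# Activities the course expects students to complete (invented names).
expected = {"week1_quiz", "week1_forum", "week2_quiz", "assignment1"}

# Hypothetical completion records per student.
completions = {
    "alice": {"week1_quiz", "week1_forum", "week2_quiz"},
    "bob": {"week1_quiz"},
}

def progress(done, expected):
    """Percentage of the expected activities a student has completed."""
    return 100 * len(done & expected) // len(expected)

for student, done in completions.items():
    print(f"{student}: {progress(done, expected)}% complete")
```

Showing students (or teachers) this number is the easy part; as argued below, what they then do with the improved knowledge is the open question.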

It’s possible to see a common trend in the last few uses. To some extent analytics is being used to improve “distributed cognition” in terms of putting some more smarts into the environment, which in turn becomes more likely to be seen and acted upon by policy makers, students or teachers.

However, what these people do in response to the improved knowledge they have, is still fairly open. I have a particular interest in how to encourage and enable these folk to use their improved knowledge in useful and interesting ways.

Which in turn will generate changed behaviour and, hopefully, changed use of the system. This takes us back to the what question at the start.

Use #6: Analytics can be an important component to on-going learning about what works and what doesn’t in learning and teaching.

Nobody likes a do-gooder – another reason for e-learning not mainstreaming?

Came across the article “Nobody likes a do-gooder: Study confirms selfless behaviour is alienating” from the Daily Mail via Morgaine’s amplify. I’m wondering if there’s a connection between this and the chasm in the adoption of instructional technology identified by Geoghegan (1994).

The chasm

Back in 1994, Geoghegan drew on Moore’s Crossing the Chasm to explain why instructional technology wasn’t being adopted by the majority of university academics. The suggestion is that there is a significant difference between the early adopters of instructional technology and the early majority: what works for one group doesn’t work for the other. There is a chasm. Geoghegan (1994) also suggested that the “technologists’ alliance” – vendors of instructional technology and the university folk charged with supporting instructional technology – adopts approaches that work for the early adopters, not the early majority.

Nobody likes do-gooders

The Daily Mail article reports on some psychological research that draws conclusions about how “do-gooders” are seen by the majority:

Researchers say do-gooders come to be resented because they ‘raise the bar’ for what is expected of everyone.

This resonates with my experience as an early adopter and, more broadly, with observations of higher education. The early adopters, those really keen on learning and teaching, are seen a bit differently by those who aren’t keen. I wonder if the “raise the bar” issue applies? I would imagine this could be quite common in a higher education environment where research retains its primacy, but universities are under increasing pressure to improve their learning and teaching. And, more importantly, to show everyone that they have improved.

The complete study is outlined in a journal article.


Geoghegan, W. (1994). Whatever happened to instructional technology? Paper presented at the 22nd Annual Conferences of the International Business Schools Computing Association, Baltimore, MD.

Oil sheiks, Lucifer and university learning and teaching

The following arises from a combination of factors.

Old wine in new bottles

Perhaps the key quote from Mark’s post is

This post is simply to try and say what many people don’t want to say and that is, that most universities really don’t care about educational technology or elearning.

My related perspective is that the vast majority of university learning and teaching is, at best (trying to be very positive), just ok. There’s a small bit that is really, really bad, and a small bit that is really, really good. In addition, most interventions to improve learning and teaching are not doing anything to change this distribution. At best, they might change the media, but the overall distribution stays the same.

There’s a quote from Dutton and Loader (2002) that goes something like

without new learning paradigms educators are likely to use technology to do things the way they have always done; but with new and more expensive technology.

I am currently of the opinion that without new management/leadership paradigms to inform how universities improve learning and teaching, the distribution is going to remain the same just with new and more expensive organisational structures. This article from the Goldwater Institute about administrative bloat at American universities might be an indicator of that.

Don’t blame the academics

The “When good people turn bad” radio program is an interview with Philip Zimbardo, the guy responsible for the Stanford prison experiment, an example of where good people turned really bad because of the situation in which they were placed. The interview includes the following from Prof Zimbardo:

You no longer can focus only on individual freedom of will, individual rationality. People are always behaving in a context, in a situation, and those situations are always created and maintained by powerful systems, political systems, cultural, religious ones. And so we have to take a more complex view of human nature because human beings are complex.

This resonates somewhat with a point that Mark makes:

the problem of adoption is primarily not a technical one but one of organisational culture

I agree. It’s the culture, the systems, the processes and the policies within universities that are encouraging/enshrining this distribution where most university learning and teaching is, at best, just ok.

The culture/system doesn’t encourage or enable this to change. When management do seek to do something about it, their existing “management paradigm” encourages an emphasis on requiring change without doing anything effective to change the culture/system.

The proposition and the interest

Which is why I am interested in, and propose, the following:

If you really wish to improve the majority of learning and teaching within a university, then you have to focus on changing the culture/system so that academics staff are encouraged and enabled to engage in learning about how to teach.

In addition, I would suggest that requiring learning (e.g. through requiring all new academic staff to obtain a formal qualification in learning) without aligning the entire culture/system to enable academic staff to learn and experiment (some of these characteristics are summarised here) is doomed to failure.

I’d also suggest that there is no way you can “align” the culture/system of a university to enable and encourage academic staff learning about teaching. At best you can engage in a continual process of “aligning” the culture/system as that process of “aligning” is itself a learning process.

Easy to say

I can imagine some university leaders saying, “No shit Sherlock, what do you think we’re doing?”. My response: you aren’t really doing this. Your paradigm is fundamentally inappropriate, regardless of what you claim.

However, actually achieving this is not simple and I don’t claim to have all the answers. This is why this is phrased as a proposition, it’s an area requiring more work.

I am hoping that within a few days, I might have a small subset of an answer in the next, and hopefully final, iteration of the design theory for e-learning that is meant to be the contribution of my thesis.


Dutton, W. and B. Loader (2002). Introduction. Digital Academe: The New Media and Institutions of Higher Education and Learning. W. Dutton and B. Loader. London, Routledge: 1-32.

How people learn and implications for academic development

While I’m traveling this week I am reading How people learn, a fairly well known book that arose out of a US National Academy of Sciences project to look at recent insights from research about how people learn, and then generate insights for teaching. I’ll be reading it through the lens of my thesis and some broader thinking about “academic development” (one of the terms applied to trying to help improve the teaching and learning of universities).

Increasingly, I’ve been thinking that “academic development” is essentially “teaching the teachers”, though it would be better phrased as creating an environment in which academics can learn how to be better at enabling student learning. Hand in hand with this thought is the observation, and increasing worry, that much of what passes for academic development and management action around improving learning and teaching is not conducive to creating this learning environment. The aim of reading this book is to think about ways in which this situation might be improved.

The last part of this summary of the first chapter connects with the point I’m trying to make about academic development within universities.

(As it turns out, I only read the first chapter while traveling; the remaining chapters come now.)

Key findings for learning

The first chapter of the book provides three key (but not exhaustive) findings about learning:

  1. Learners arrive with their own preconceptions about how the world works.
    As part of this, if the early stages of learning do not engage with the learner’s understanding of the world, then the learner will either not get it, or will get it well enough to pass the test but then revert to their existing understanding.
  2. Competence in a field of inquiry arises from three building blocks:
    1. a deep foundation of factual knowledge;
    2. an understanding of these facts and ideas within a conceptual framework;
    3. the organisation of knowledge in ways that enable retrieval and application.

    A primary idea here is that experts aren’t simply “smart” people. Rather, they have conceptual frameworks that help them understand and apply knowledge much more quickly than others.

  3. An approach to teaching that enables students to implement meta-cognitive strategies can help them take control of their learning and monitor their progress.
    Meta-cognitive strategies aren’t context or subject independent.

Implications for teaching

The suggestion is that the above findings around learning have significant implications for teaching, these are:

  1. Teachers have to draw out and work with pre-existing student understandings.
    This implies lots more formative assessment that focuses on demonstrating understanding.
  2. In teaching a subject area, important concepts must be taught in-depth.
    The superficial coverage of concepts (in order to fit it all in) needs to be avoided, with more of a focus on those important subject concepts.
  3. The teaching of meta-cognitive skills needs to be integrated into the curriculum of a variety of subjects.

Four attributes of learning environments

A later chapter expands on a framework for designing and evaluating learning environments. It includes four interrelated attributes of these environments:

  1. They must be learner centered;
    i.e. a focus on the understandings and progress of individual students.
  2. The environment should be knowledge centered, with attention given to what is taught, why it is taught, and what competence or mastery looks like.
    Suggests too many curricula fail to support learning because the knowledge is disconnected, assessment encourages memorisation rather than learning. A knowledge-centered environment “provides the necessary depth of study, assessing student understanding rather than factual memory and incorporates the teaching of meta-cognitive strategies”.

    There’s an interesting point here about engagement, that I’ll save for another time.

  3. Formative assessments
    The aim is for assessments that help both students and teachers monitor progress.
  4. Develop norms within the course, and connection with the outside world, that support core learning values.
    i.e. pay attention to activities, assessments etc within the course that promote collaboration and camaraderie.

Application to professional learning

In the final section of the chapter, the authors state that these principles apply equally well to adults as they do to children. They explain that

This point is particularly important because incorporating the principles in this volume into educational practice will require a good deal of adult learning.

i.e. if you want to improve learning and teaching within a university based on these principles, then the teaching staff will have to undergo a fair bit of learning. This is very troubling because the authors argue that “approaches to teaching adults consistently violate principles for optimizing learning”. In particular, they suggest that professional development programs for teachers frequently:

  • Are not learner centered.
    Rather than being asked what help is required, teachers are expected to attend pre-arranged workshops.
  • Are not knowledge centered.
    i.e. these workshops introduce the principles of a new technique with little time spent on the more complex integration of the new technique with the other “knowledge” (e.g. the TPACK framework) associated with the course.
  • Are not assessment centered.
    i.e. when learning these new techniques, the “learners” (teaching staff) aren’t given opportunities to try the technique out, receive feedback, or develop the skills to judge whether or not they’ve implemented it effectively.
  • Are not community centered.
    Professional development consists more of ad hoc, separate events with little opportunity for a community of teachers to develop connections for on-going support.

Here’s a challenge. Is there any university out there where academic development doesn’t suffer from these flaws? How has that been judged?
