Assembling the heterogeneous elements for (digital) learning


Higher ed L&T’s scale problem?

Contemporary higher education appears to have a scale problem.

Ellis and Goodyear (2019) explain in some detail Bain and Zundans-Fraser’s (2017) diagnosis of why attempts by universities to improve learning and teaching rarely scale, including the observation that L&T centers try to “influence learning and teaching through elective, selective, and exemplary approaches that are incompatible with whole-organizational change” (Bain & Zundans-Fraser, 2017, p. 12). While most universities offer design support services, the combination of high demand and limited resources means that many academics are left to their own devices (Bennett, Agostinho & Lockyer, 2017). Moving from institution-wide change to teaching itself, Ryan et al. (2021) suggest that maintaining the quality of L&T while teaching at scale is a key issue for higher education. Massification brings both increased numbers and greater diversity of learners, creating practical and pedagogical challenges for educators who have to teach at scale.

Attempts to address the challenge of scale (e.g. certain types of MOOC, course site templates) tend to strike me as limited. Why?

Perhaps it is because…

A Typology of Scale

Morel et al (2019) argue that there is a lack of conceptual clarity around scale. In response, they offer a typology of scale, very briefly summarised in the following table.

Concept of scale:

  • Adoption – Widespread use of an innovation (market share). Limited conceptualisation of expected use.
  • Replication – Widespread implementation with fidelity will produce expected outcomes.
  • Adaptation – Widespread use of an innovation that is modified in response to local needs.
  • Reinvention – Intentional and systematic experimentation with an innovation. Innovation as a catalyst for further innovation.

The practice of scale

Most institutional attempts at scale I’ve observed appear to fall into the first two conceptualisations.

MOOCs – excluding connectivist MOOCs – aimed to scale content delivery through scale as replication. Institutional practice around the use of an LMS is increasingly driven by consistency in the form of templates, leading to exchanges like the one shared by Macfarlan and Hook (2022):

‘Can I do X?’ or ‘How would I do Y?’, until the ED said, ‘You can do anything you like, as long as you use the template.’ With a shrug the educator indicated their compliance. The ironic surrender was palpable.

At best, templates fall into the replication conception of scale. Experts produce something which they think will be an effective solution to a known problem. A solution that – if only everyone would use it as intended – will generate positive outcomes for learners. Arguments could be made that it quickly devolves into the adoption category. Others may claim their templates support adaptation, but only “as long as you use the template”?

Where do other institutional attempts fit on this typology?

Institutional learning and teaching frameworks, standards, plans and other abstract approaches? More adoption/replication?

The institutional LMS and the associated ecosystem of tools? The assumption is probably adaptation. The argument would be that the tools can be creatively adapted to suit whatever the design intent is. However, for adaptation to work (see below) the relationship between the users and the tools needs to offer the affordance for customisation. I don’t think the current tools help enough with that.

Which perhaps explains why use of the LMS and associated tools is so limited/time consuming. But the current answer appears to be templates and consistency.

Education’s diversity problem

The folk who conceive of scale as adaptation, like Clarke and Dede (2009), argue that:

One-size-fits-all educational innovations do not work because they ignore contextual factors that determine an intervention’s efficacy in a particular local situation (p. 353)

Morel et al. (2019) identify that this adaptation does assume/require the capacity of users to make modifications in response to contextual requirements. This will likely require more work from both the designers and the users. Which, for me, raises the following questions:

  1. Does the deficit model of educators (they aren’t trained L&T professionals) held by some L&T professionals limit the ability to conceive of/adopt this type of scale?
  2. Does the difficulty institutions face in customising contemporary digital learning environments (i.e. the LMS) – let alone enabling learners and teachers to do that customisation – limit the ability to conceive of/adopt this type of scale?
  3. For me, this also brings in the challenge of the iron triangle. How to (cost) efficiently scale learning and teaching in ways that respond effectively to the growing diversity of learners, teachers, and contexts?

How do you answer those questions at scale?


Bain, A., & Zundans-Fraser, L. (2017). The Self-organizing University. Springer.

Bennett, S., Agostinho, S., & Lockyer, L. (2017). The process of designing for learning: Understanding university teachers’ design work. Educational Technology Research & Development, 65(1), 125–145.

Clarke, J., & Dede, C. (2009). Design for Scalability: A Case Study of the River City Curriculum. Journal of Science Education and Technology, 18(4), 353–365.

Ellis, R. A., & Goodyear, P. (2019). The Education Ecology of Universities: Integrating Learning, Strategy and the Academy. Routledge.

Ryan, T., French, S., & Kennedy, G. (2021). Beyond the Iron Triangle: Improving the quality of teaching and learning at scale. Studies in Higher Education, 46(7), 1383–1394.

Macfarlan, B., & Hook, J. (2022). ‘As long as you use the template’: Fostering creativity in a pedagogic model. Proceedings of ASCILITE 2022 in Sydney. ASCILITE Publications.

Morel, R. P., Coburn, C., Catterson, A. K., & Higgs, J. (2019). The Multiple Meanings of Scale: Implications for Researchers and Practitioners. Educational Researcher, 48(6), 369–377.

The Productivity Commission recommended a need to grow access to higher education, contain fiscal costs, and improve teaching quality.

Gatherers, Weavers and Augmenters: Three principles for dynamic and sustainable delivery of quality learning and teaching

Henry Cook, Steven Booten and I gave the following presentation at the THETA conference in Brisbane in April 2023.

Below you will find

  • Summary – a few paragraphs summarising the presentation.
  • Slides – copies of the slides used.
  • Software – some of the software produced/used as part of the work.
  • References – used in the summary and the slides.
  • Abstract – the original conference abstract.


The presentation used our experience as part of a team migrating 1500+ course sites from Blackboard to Canvas to explore a broader challenge. A challenge recently expressed in the Productivity Commission’s “Advancing Prosperity” report with its recommendations to grow access to tertiary education while containing cost and improving quality. This challenge of simultaneously maximising cost efficiency, quality, and access (diversity & scale) is seen as a key issue for higher education (Ryan et al., 2021). It has even been labelled the “Iron Triangle” because – unless you change the circumstances and conditions – improving one indicator will almost inevitably lead to deterioration in the other indicators (Mulder, 2013). The pandemic emergency response is the most recent example of this. Necessarily rapid changes to access (moving from face-to-face to online) required significant costs (staff workload) to produce outcomes that are perceived to be of questionable quality.

Leading to the question we wanted to answer:

How do you stretch the iron triangle? (i.e. maximise cost efficiency, quality, and accessibility)?

In the presentation, we demonstrated that the fundamental tasks (gather and weave) of an LMS migration are manual and repetitive, making it impossible to stretch the iron triangle. We illustrated why this is the case, demonstrated how we addressed this limitation, and proposed three principles for broader application. We argue that the three principles can be usefully applied beyond LMS migration to business as usual.

Gatherers and weavers – what we do

Our job is to help academic staff design, implement, and maintain quality learning tasks and environments. We suggest that the core tasks required to do this are to gather and weave disparate strands of knowledge, ways of knowing (especially various forms of design and contextual knowledge and knowing), and technologies (broadly defined). For example, a course site is the result of gathering and weaving together such disparate strands as: content knowledge (e.g. learning materials); administrative information (e.g. due dates, timetables etc); design knowledge (e.g. pedagogical, presentation, visual etc); and information & functionality from various technologies (e.g. course profiles, echo360, various components of the LMS etc).

An LMS migration is a variation on this work. It has a larger scope (all courses) and a more focused purpose (migrate from one LMS to another). But it still involves the same core tasks of gathering and weaving. Our argument is that to maximise the cost efficiency, accessibility, and quality of this work you must do the same for the core tasks of gathering and weaving. Early in our LMS migration it was obvious that this was not the case. The presentation included a few illustrative examples. There were many more that could’ve been used. Both from the migration and business as usual. All illustrating the overly manual and repetitive nature of gathering and weaving required by contemporary institutional learning environments.

Three principles for automating & augmenting gathering & weaving – what we did

Digital technology has long been seen as a key enabler for improving productivity through its ability to automate processes and augment human capabilities. Digital technology is increasingly pervasive in the learning and teaching environment, especially in the context of an LMS migration. But none of the available technologies were actively helping automate or augment gathering and weaving. The presentation included numerous examples of how we changed this. From this work we identified three principles.

  1. On-going activity focused (re-)entanglement.
    Our work was focused on high level activities (e.g. analysis, migration, quality assurance, course design of 100s of course sites). Activities not supported by any single technology, hence the manual gathering and weaving. By starting small and continually responding to changes and lessons learned, we stretched the iron triangle by digitally gathering and weaving disparate component technologies into assemblages that were fit for the activities.
  2. Contextual digital augmentation.
    Little to none of the specific contextual and design knowledge required for these activities was available digitally. We focused on usefully capturing this knowledge digitally so it could be integrated into the activity-based assemblages.
  3. Meso-level focus.
    Existing component technologies generally provide universal solutions for the institution or all users of the technology, requiring manual gathering and weaving to fit the contextual needs of each individual variation. By leveraging the previous two principles we were able to provide technologies that were fit for meso-level solutions. For example, all courses for a program or a school, or all courses that use a complex learning activity like interactive orals.
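As a concrete illustration of the third principle, the selection logic can be sketched in a few lines of Python. Everything here (the course-to-context mapping, the function names) is hypothetical; it simply shows how a meso-level slice – a program or school – rather than a single course or the whole institution, becomes the unit of automation:

```python
# Sketch of meso-level automation: apply one operation to a meso-level
# slice of courses (e.g. a school or program), rather than course by
# course or institution-wide. All names and data are hypothetical.

from typing import Callable, Dict, List

# Hypothetical contextual knowledge, captured digitally (principle 2):
# course id -> school and program
COURSE_CONTEXT: Dict[str, Dict[str, str]] = {
    "1001": {"school": "Education", "program": "BEd"},
    "1002": {"school": "Education", "program": "MTeach"},
    "1003": {"school": "Business", "program": "MBA"},
}

def courses_for(level: str, value: str) -> List[str]:
    """Select the meso-level slice, e.g. all courses in a school."""
    return [cid for cid, ctx in COURSE_CONTEXT.items() if ctx.get(level) == value]

def apply_to_slice(course_ids: List[str], operation: Callable[[str], str]) -> List[str]:
    """Run the same operation (QA check, redesign, migration step) on each course."""
    return [operation(cid) for cid in course_ids]

if __name__ == "__main__":
    education = courses_for("school", "Education")
    print(apply_to_slice(education, lambda cid: f"QA report generated for {cid}"))
```

The design point is the middle layer: neither a one-off manual task per course, nor a single universal template, but the same operation applied across a contextually defined group of courses.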

Connections with other work

Much of the above is informed by or echoes research and practice in related fields. It’s not just the three of us. The presentation made explicit connections with the following:

  • Learning and teaching;
    Fawns’ (2022) work on entangled pedagogy as encapsulating the mutual shaping of technology, teaching methods, purposes, values and context (gathering and weaving). Dron’s (2022) re-definition of educational technology drawing on Arthur’s (2009) definition of technology. Work on activity-centred design – which understands teaching as a distributed activity – as key to both good learning and teaching (Markauskaite et al, 2023), and also to institutional management (Ellis & Goodyear, 2019). Lastly – at least in the presentation – the nature of and need for epistemic fluency (Markauskaite et al, 2023).
  • Digital technology; and,
    Drawing on numerous contemporary practices within digital technology that break the false dilemma of “buy or build”, such as: the project to product movement (Philip & Thirion, 2021); Robotic Process Automation; citizen development; and the idea of lightweight IT development (Bygstad, 2017).
  • Leadership/strategy.
    Briefly linking the underlying assumptions of all of the above as examples of the move away from corporate and reductionist strategies that reduce people to “smooth users” toward possible futures that see us as more “collective agents” (Macgilchrist et al, 2020). A shift seen as necessary to more likely lead – as argued by Markauskaite et al (2023) – to the “even richer convergence of ‘natural’, ‘human’ and ‘digital’” required to respond effectively to global challenges.

There’s much more.


The presentation does include three videos that are available if you download the slides.

Related Software

Canvas QA is a Python script that will perform Quality Assurance checks on numerous Canvas courses and create a QA Report web page in each course’s Files area. The QA Report lists all the issues discovered and provides some scaffolding to address the issues.
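As an illustrative sketch only (not the actual Canvas QA script), one such check can be built on the documented Canvas REST API, which lists a course’s pages and exposes each page’s HTML body. The insecure-link check and all names here are assumptions; the real script runs many more checks and also writes its report into the course’s Files area, which is omitted:

```python
# Sketch of one Canvas QA-style check over a course's pages, using the
# documented Canvas REST API (GET /api/v1/courses/:id/pages). This is an
# illustrative reconstruction, not the actual Canvas QA script.

import json
import re
from urllib.request import Request, urlopen

def find_http_links(html):
    """QA check: flag insecure http:// links in a page's HTML body."""
    return re.findall(r'href="(http://[^"]+)"', html or "")

def fetch_json(url, token):
    """Fetch one Canvas REST API resource (pagination omitted for brevity)."""
    req = Request(url, headers={"Authorization": f"Bearer {token}"})
    with urlopen(req) as response:
        return json.load(response)

def qa_course_pages(base_url, token, course_id):
    """Return {page_url: [insecure links]} for every page in a Canvas course."""
    issues = {}
    pages = fetch_json(f"{base_url}/api/v1/courses/{course_id}/pages", token)
    for page in pages:
        full = fetch_json(
            f"{base_url}/api/v1/courses/{course_id}/pages/{page['url']}", token)
        bad = find_http_links(full.get("body"))
        if bad:
            issues[page["url"]] = bad
    return issues
```

Keeping each check a small pure function over page HTML is what makes it cheap to add new checks and to run them across many courses at once.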

Canvas Collections helps improve the visual design and usability/findability of the Canvas modules page. It is Javascript that can be installed by institutions into Canvas or by individuals as a userscript. It enables the injection of design and context specific information into the vanilla Canvas modules page.

Word2Canvas converts a Word document into a Canvas module to offer improvements to the authoring process in some contexts. At Griffith University, it was used as part of the migration process, where Blackboard course site content was automatically converted into appropriate Word documents. With a slight edit, these Word documents could be loaded directly into Canvas.
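The conversion idea can be modelled simply: treat a specially formatted Word document as a sequence of styled blocks, with a top-level heading naming the module and second-level headings starting each module item. This is a hypothetical sketch of the mapping, not the actual word2canvas code:

```python
# Hypothetical model of a word2canvas-style conversion: a "specially
# formatted" document is a sequence of (style, text) blocks; Heading 1
# names the module, Heading 2 starts a new item, everything else is body.

def doc_to_module(blocks):
    """blocks: list of (style, text) tuples extracted from a Word document."""
    module = {"name": None, "items": []}
    for style, text in blocks:
        if style == "Heading 1":
            module["name"] = text
        elif style == "Heading 2":
            module["items"].append({"title": text, "body": ""})
        elif module["items"]:
            # Body text accumulates under the most recent item.
            module["items"][-1]["body"] += text
    return module

blocks = [
    ("Heading 1", "Week 1: Getting started"),
    ("Heading 2", "Welcome"),
    ("Normal", "Read the course profile."),
]
module = doc_to_module(blocks)
# module["name"] is "Week 1: Getting started", with one item titled "Welcome"
```

The value of the intermediate Word format is that it is human-editable: migration becomes "tweak the document, then convert", rather than rebuilding content inside the LMS editor.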


Arthur, W. B. (2009). The Nature of Technology: What it is and how it evolves. Free Press.

Bessant, S. E. F., Robinson, Z. P., & Ormerod, R. M. (2015). Neoliberalism, new public management and the sustainable development agenda of higher education: History, contradictions and synergies. Environmental Education Research, 21(3), 417–432.

Bygstad, B. (2017). Generative Innovation: A Comparison of Lightweight and Heavyweight IT. Journal of Information Technology, 32(2), 180–193.

Cassidy, C. (2023, April 10). ‘Appallingly unethical’: Why Australian universities are at breaking point. The Guardian.

Ellis, R. A., & Goodyear, P. (2019). The Education Ecology of Universities: Integrating Learning, Strategy and the Academy. Routledge.

Fawns, T. (2022). An Entangled Pedagogy: Looking Beyond the Pedagogy—Technology Dichotomy. Postdigital Science and Education, 4(3), 711–728.

Hagler, B. (2020). Build vs. Buy: Why Most Businesses Should Buy Their Next Software Solution. Forbes.

Inside Track Staff. (2022, October 19). Citizen developers use Microsoft Power Apps to build an intelligent launch assistant. Inside Track Blog.

Lodge, J., Matthews, K., Kubler, M., & Johnstone, M. (2022). Modes of Delivery in Higher Education (p. 159).

Macgilchrist, F., Allert, H., & Bruch, A. (2020). Students and society in the 2020s. Three future ‘histories’ of education and technology. Learning, Media and Technology, 45(0), 76–89.

Markauskaite, L., Carvalho, L., & Fawns, T. (2023). The role of teachers in a sustainable university: From digital competencies to postdigital capabilities. Educational Technology Research and Development, 71(1), 181–198.

Mulder, F. (2013). The LOGIC of National Policies and Strategies for Open Educational Resources. International Review of Research in Open and Distributed Learning, 14(2), 96–105.

Philip, M., & Thirion, Y. (2021). From Project to Product. In P. Gregory & P. Kruchten (Eds.), Agile Processes in Software Engineering and Extreme Programming – Workshops (pp. 207–212). Springer International Publishing.

Ryan, T., French, S., & Kennedy, G. (2021). Beyond the Iron Triangle: Improving the quality of teaching and learning at scale. Studies in Higher Education, 46(7), 1383–1394.

Schmidt, A. (2017). Augmenting Human Intellect and Amplifying Perception and Cognition. IEEE Pervasive Computing, 16(1), 6–10.

Smee, B. (2023, March 6). ‘No actual teaching’: Alarm bells over online courses outsourced by Australian universities. The Guardian.


The pandemic reinforced higher education’s difficulty in responding to the long-observed challenge of how to sustainably, and at scale, fulfill diverse requirements for quality learning and teaching (Bennett et al., 2018; Ellis & Goodyear, 2019). The difficulty has increased due to many issues, including: competition with the private sector for digital talent; battling concerns over the casualisation and perceived importance of teaching; and growing expectations around ethics, diversity, and sustainability. That this challenge is unresolved and becoming increasingly difficult suggests a need for innovative practices in both learning and teaching, and in how learning and teaching is enabled. Starting in 2019, and accelerated by a Learning Management System (LMS) migration starting in 2021, a small group has been refining and using an alternate set of principles and practices to respond to this challenge by developing reusable orchestrations – organised arrangements of actions, tools, methods, and processes (Dron, 2022) – to sustainably, and at scale, fulfill diverse requirements for quality learning and teaching. This has led to a process where requirements are informed through collegial networks of learning and teaching stakeholders that weigh their strategic and contextual concerns to inform priority and approach, helping to share knowledge and concerns and to develop institutional capability laterally and in recognition of available educator expertise.

The presentation will be structured around three common tasks: quality assurance of course sites; migrating content between two LMSs; and designing effective course sites. For each task, a comparison will be made between the group’s innovative orchestrations and the standard institutional/vendor orchestrations. These comparisons will: demonstrate the benefits of the innovative orchestrations; outline the development process; and explain the three principles informing this work – 1) contextual digital augmentation, 2) meso-level automation, and 3) generativity and adaptive reuse. The comparisons will also be used to establish the practical and theoretical inspirations for the approach, including: Robotic Process Automation and citizen development; convivial technologies (Illich, 1973); lightweight IT development (Bygstad, 2017); and socio-material understandings of educational technology (Dron, 2022). The breadth of the work will be illustrated through an overview of the growing catalogue of orchestrations, using a gatherers, weavers, and augmenters taxonomy.


Bennett, S., Lockyer, L., & Agostinho, S. (2018). Towards sustainable technology-enhanced innovation in higher education: Advancing learning design by understanding and supporting teacher design practice. British Journal of Educational Technology, 49(6), 1014–1026.

Bygstad, B. (2017). Generative Innovation: A Comparison of Lightweight and Heavyweight IT. Journal of Information Technology, 32(2), 180–193.

Dron, J. (2022). Educational technology: What it is and how it works. AI & SOCIETY, 37, 155–166.

Ellis, R. A., & Goodyear, P. (2019). The Education Ecology of Universities: Integrating Learning, Strategy and the Academy. Routledge.

Illich, I. (1973). Tools for Conviviality. Harper and Row.


Orchestrating entangled relations to stretch the iron triangle: Observations from an LMS migration


This work arose from the depths of an institutional LMS migration (Blackboard Learn to Canvas). In particular, the observation that the default migration processes required an awful lot of low-level manual labour – methods that appeared to reduce the quality of the migration process and increase its cost. Hence we started developing different methods. As the migration project unfolded we kept developing and refining. Building on what we’d done before and further decreasing the cost of migration, increasing the quality of the end result, and increasing the scale and diversity of what we could migrate.

We were stretching the iron triangle. Since stretching the iron triangle is a key strategic issue for higher education (Ryan et al., 2021), questions arose, including:

  1. What was different between the two sets of orchestrations? Why are our orchestrations better than the default at stretching the iron triangle?
  2. Might those differences help stretch the iron triangle post-migration (i.e. business as usual – BAU)?
  3. Can we refine and improve those differences?

The work here is an initial exploration into answering the first question.


A key strategic issue for higher education is how to maximise the accessibility, quality, and cost efficiency of learning and teaching (Ryan et al., 2021). Higher education’s iron triangle literature (Daniel et al, 2009; Mulder, 2013; Ryan et al, 2021) argues that effectively addressing this challenge is difficult, if not impossible, due to the “iron” connections between the three qualities. These iron connections mean maximising one quality will inevitably result in reductions in the other qualities. For example, the rapid maximisation of accessibility required by the COVID-19 pandemic resulted in a reduction in cost efficiency (increased staff costs) and a reduction in the perceived quality of learning experiences (Martin, 2020). These experiences illustrate higher education’s on-going difficulties in creating orchestrations that stretch the iron triangle by sustainably and at scale fulfilling diverse requirements for quality learning (Bennett et al., 2018; Ellis & Goodyear, 2019). This exploratory case study aims to help reduce this difficulty by answering the question: What characteristics of orchestrations help to stretch the iron triangle?

An LMS migration is an effective exploratory case for this research question since it is one of the most labour-intensive and complex projects undertaken by universities (Cottam, 2021). It is a project commonly undertaken with the aim of stretching the iron triangle. Using a socio-material perspective (Ellis & Goodyear, 2019; Fawns, 2022) and drawing on Dron’s (2022) definition of educational technology, the poster examines three specific migration tasks: migrating lecture recordings; designing quality course sites; and performing quality assurance checks. For each task, two different orchestrations – organised arrangements of actions, tools, methods, and processes (Dron, 2022) – are described and analysed: the institutional orchestrations, developed by the central project organising the migration of the institution’s 4500+ courses; and the group orchestrations, developed – due to perceived limitations of the institutional orchestrations – by a sub-group directly migrating 1700+ courses.

Descriptions of the orchestrations are used to identify their effectiveness in sustainably and at scale satisfying diverse quality requirements – stretching the iron triangle. Analysis of these orchestrations identified three characteristics that make stretching the iron triangle more likely: contextual digital augmentation; meso-level automation; and generativity and adaptive reuse. For each characteristic, its presence in each orchestration, its relationships with the other characteristics, linkages with existing literature and practice, and its observed impact on the iron triangle qualities are described. These descriptions are used to illustrate the very different assumptions underpinning the two sets of orchestrations – differences that mirror the distinctions between ‘smooth users’ and ‘collective agency’ (Macgilchrist et al., 2020), and between industrial and convivial tools (Illich, 1973). The characteristics identified by this exploratory case study suggest that an approach that is less atomistic and industrial, and more collective and convivial, may help reconnect people with educational technology more meaningfully and sustainably. Consequently, this shift may also help increase higher education’s ability to maximise the accessibility, quality, and cost efficiency of learning and teaching.


The poster is embedded below and is also available directly from Google Slides. The “Enter full screen” option, available from the “three dots” button at the bottom of the poster embed, is useful for viewing the poster.

Comparing orchestrations

The core of this exploratory case study is the comparison of two sets of orchestrations and how they seek to fulfill the same three tasks.

echo360 migration

Course site QA

Course site usability

About the orchestrations

The orchestrations discussed typically rely on software that we’ve developed by standing on the shoulders of open source giants. Software that we’re happy to share with others.

Course Analysis Report (CAR) process

The CAR process started as an attempt to make it easier for migration staff to understand what was in a Blackboard course site. It began with a gathering step that extracts the contents of each Blackboard course site into an offline data structure – a data structure that provided a foundation for much, much more.

The echo360 migration video offers some more detail. The following image is from that video. It shows the CAR folder for a sample Blackboard course. Generated by the CAR script, this folder contains:

  • A folder (contentCollection) containing copies of all the files uploaded to the Blackboard course site.
    The files are organised in two ways to help the migration:

    1. Don’t migrate files that are no longer used in the course site: files are placed into an attached or unattached folder depending on whether they are still used by the Blackboard course site; and,
    2. Don’t migrate all the files in one single unorganised folder.
  • A folder (coursePages) containing individual Word documents containing the content of course site pages.
  • A CAR report.
    A Word document that summarises the content, structure and features used in a course site.
  • A pickle file.
    Contains a copy of all the course site details and content in a machine readable format.

While the CAR code is not currently publicly available, we are happy to share it.

[Image: a slide showing a CAR folder structure, with pointers to the contentCollection and coursePages folders, the CAR Word document, and the pickle file]


Word2Canvas

Word2Canvas is JavaScript which modifies the modules page on a Canvas course site. It provides a button that allows you to convert a specially formatted Word document into a Canvas module.

The coursePages folder produced by the CAR process contains these specially formatted Word documents, enabling migration to consist largely of making minor edits to a Word document and using word2canvas to create a Canvas module.

The echo360 migration video offers some more detail, including an example of using the CAR. The Word2Canvas site provides more detail again, including how to install and use word2canvas.

Canvas Collections

Canvas Collections is also JavaScript that modifies the Canvas modules page. However, Canvas Collections’ modifications seek to improve the usability and visual design of the modules page. In doing so it addresses long-known limitations of the modules page, as the following table summarises.

Limitation of Canvas modules → Collections functionality:

  • Lots of modules leads to a long list to search → Group modules into collections that are viewed separately.
  • An overly linear and underwhelming visual design → Ability to select from, change between, and create new representations of collections and their modules.
  • No way to add narrative or additional contextual information about modules to the modules page → Ability to transform vanilla Canvas modules into contextual objects by adding additional properties (information) that are used in representations and other functionality.
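The grouping behind the first limitation can be sketched briefly. Canvas Collections itself is JavaScript running on the modules page; this Python sketch (with hypothetical module data) just shows how extra contextual properties added to each vanilla module drive the grouping into separately viewed collections:

```python
# Sketch of the Canvas Collections grouping idea, in Python for brevity
# (the real tool is JavaScript). Each vanilla Canvas module gains extra
# contextual properties (here: collection, label) that drive alternative
# representations of the modules page. Data is hypothetical.

from collections import defaultdict

modules = [  # vanilla Canvas modules, in page order, plus added properties
    {"name": "Introduction", "collection": "Content", "label": "Week 1"},
    {"name": "Assignment 1", "collection": "Assessment", "label": "Due week 5"},
    {"name": "Ecosystems", "collection": "Content", "label": "Week 2"},
]

def group_into_collections(modules):
    """Group modules into named collections that are viewed separately,
    instead of one long linear list."""
    collections = defaultdict(list)
    for module in modules:
        collections[module["collection"]].append(module)
    return dict(collections)

grouped = group_into_collections(modules)
# "Content" holds two modules; "Assessment" holds one
```

This is the "contextual digital augmentation" principle in miniature: the extra properties are contextual knowledge captured digitally so the tool can present the same modules in more useful ways.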

The course site usability video provides more detail on Canvas Collections, as does the Canvas Collections site. Canvas Collections is available for use now, but is continually being developed.

References – Poster

Arthur, W. B. (2009). The Nature of Technology: What it is and how it evolves. Free Press.

Bygstad, B. (2017). Generative Innovation: A Comparison of Lightweight and Heavyweight IT. Journal of Information Technology, 32(2), 180–193.

Cottam, M. E. (2021). An Agile Approach to LMS Migration. Journal of Online Learning Research and Practice, 8(1).

Dron, J. (2022). Educational technology: What it is and how it works. AI & SOCIETY, 37, 155–166.

Ellis, R. A., & Goodyear, P. (2019). The Education Ecology of Universities: Integrating Learning, Strategy and the Academy. Routledge.

Fawns, T. (2022). An Entangled Pedagogy: Looking Beyond the Pedagogy—Technology Dichotomy. Postdigital Science and Education, 4(3), 711–728.

Goodhue, D., & Thompson, R. (1995). Task-technology fit and individual performance. MIS Quarterly, 19(2), 213–236.

Illich, I. (1973). Tools for Conviviality. Harper and Row.

Jones, D., & Clark, D. (2014). Breaking BAD to bridge the reality/rhetoric chasm. In B. Hegarty, J. McDonald, & S. Loke (Eds.), Rhetoric and Reality: Critical perspectives on educational technology. Proceedings ascilite Dunedin 2014 (pp. 262–272).

Macgilchrist, F., Allert, H., & Bruch, A. (2020). Students and society in the 2020s. Three future ‘histories’ of education and technology. Learning, Media and Technology, 45(0), 76–89.

Mulder, F. (2013). The LOGIC of National Policies and Strategies for Open Educational Resources. International Review of Research in Open and Distributed Learning, 14(2), 96–105.

Ryan, T., French, S., & Kennedy, G. (2021). Beyond the Iron Triangle: Improving the quality of teaching and learning at scale. Studies in Higher Education, 46(7), 1383–1394.

References – Abstract

Bennett, S., Lockyer, L., & Agostinho, S. (2018). Towards sustainable technology-enhanced innovation in higher education: Advancing learning design by understanding and supporting teacher design practice. British Journal of Educational Technology, 49(6), 1014–1026.

Cottam, M. E. (2021). An Agile Approach to LMS Migration. Journal of Online Learning Research and Practice, 8(1).

Dron, J. (2022). Educational technology: What it is and how it works. AI & SOCIETY, 37, 155–166.

Ellis, R. A., & Goodyear, P. (2019). The Education Ecology of Universities: Integrating Learning, Strategy and the Academy. Routledge.

Fawns, T. (2022). An Entangled Pedagogy: Looking Beyond the Pedagogy—Technology Dichotomy. Postdigital Science and Education.

Illich, I. (1973). Tools for Conviviality. Harper and Row.

Macgilchrist, F., Allert, H., & Bruch, A. (2020). Students and society in the 2020s. Three future ‘histories’ of education and technology. Learning, Media and Technology, 45(0), 76–89.

Martin, L. (2020). Foundations for good practice: The student experience of online learning in Australian higher education during the COVID-19 pandemic. Tertiary Education Quality and Standards Agency.

Ryan, T., French, S., & Kennedy, G. (2021). Beyond the Iron Triangle: Improving the quality of teaching and learning at scale. Studies in Higher Education, 46(7), 1383–1394.

Entangled Japanese power lines

Orchestrating entangled relations to break the iron triangle: examples from an LMS migration


All university strategies for learning and teaching seek to maximise: accessibility (as many people as possible can participate – at scale – in as many ways as possible); quality (it’s good); and cost effectiveness (it’s cheap to produce and offer). Ryan et al (2021) argue that this is a “key issue for contemporary higher education” (p. 1383) due to inevitable cost constraints, the benefits of increased access to higher education, and requirements to maintain quality standards. However, the literature on the “iron triangle” in higher education (e.g. Daniel et al, 2009; Mulder, 2013; Ryan et al, 2021) suggests that maximising all three is difficult, if not impossible. As illustrated in Figure 1 (adapted from Mulder, 2013, p. 100), the iron triangle suggests that changing one quality (e.g. shifting accessibility from on-campus to online due to COVID) will negatively impact at least one, and probably both, of the others (e.g. the COVID response increasing staff workload and leaving participants less than happy).

Figure 1: Illustrating the iron triangle (adapted from Mulder, 2013, p. 100)
Illustration of the iron triangle

Much of the iron triangle literature identifies different strategies that promise to break the iron triangle. Mulder (2013) suggests open educational resources (OER). Daniel et al (2009) suggest open and distance eLearning. Ryan et al (2021) suggest high-quality large group teaching and learning; alternative curriculum structures; and automation of assessment and feedback.

I’m not convinced that any of these will break the iron triangle. Not because the specific solutions are inherently invalid (though there are questions). Instead, my doubts arise from how such suggestions would be implemented in contemporary higher education. Each would be implemented via variations on common methods. My suspicion is that these methods are likely to limit any attempt to break the iron triangle because they are incapable of effectively and efficiently orchestrating the entangled relations that are inherent to learning and teaching.

Largely because existing methods are based on atomistic and deterministic understandings of education, technology, and organisations. The standard methods – based on practices like stepwise refinement and loose coupling – may be necessary but aren’t sufficient for breaking the iron triangle. These methods decompose problems into smaller black boxes (e.g. pedagogy before technology; learning and teaching versus requirements and implementation; enrolment, finance, and HR; learning objects etc.), making it easier to solve the smaller problem within the confines of its black box. The assumption is that solving larger problems (e.g. designing a quality learning experience or migrating to a new LMS) is simply a matter of combining different black boxes like Lego blocks to provide a solution. The following examples illustrate how this isn’t reality.

Entangled views of pedagogy (Fawns, 2022), educational technology (Dron, 2022), and associated “distributed” views (Jones & Clark, 2014) argue that atomistic views are naive and simply don’t match the reality of learning and teaching. As Parrish (2004) argued almost two decades ago in the context of learning objects, decontextualised black boxes place an increased burden on others to add the appropriate context back in – to orchestrate the entangled relations between and betwixt the black boxes and the context in which they are used. As illustrated in the examples below, current practice relies on this orchestration being manual and time consuming. I don’t see how this foundation enables the iron triangle to be broken.

Three examples from an LMS migration

We’re in the process of migrating from Blackboard Learn to Canvas. I work with one part of an institution and we’re responsible for migrating some 1400 courses (some with multiple course sites) over 18 months. An LMS migration “is one of the most complex and labor-intensive initiatives that a university might undertake” (Cottam, 2021, p. 66). Hence much of the organisation is expending effort to make sure it succeeds. This includes enterprise information technology players such as the new LMS vendor, our organisational IT division, and various other enterprise systems and practices. i.e. there are lots of enterprise black boxes available. The following seeks to illustrate the mismatch between these “enterprise” practices and what we have to actually do as part of an LMS migration.

In particular, three standard LMS migration tasks are used as examples; these are:

  1. Connect the LMS with an ecosystem of tools using the Learning Tools Interoperability (LTI) standard.

  2. Moving content from one LMS to another using the common cartridge standard.

  3. “to make teaching and learning easier” using a vanilla LMS.

The sections below describe the challenges we faced as each of these standardised black boxes fell short. Each was so disconnected from our context and purpose that significant manual re-entanglement was required for it to even approach being fit-for-purpose. Rather than persevere with an inefficient, manual approach to re-entanglement, we did what many, many project teams have done before: we leveraged digital technologies to help automate the re-entanglement of these context-free and purposeless black boxes into fit-for-purpose assemblages that were more efficient, more effective, and provided a foundation for ongoing improvement and practice. Importantly, a key part of this re-entanglement was injecting some knowledge of learning design. Our improved assemblages are described below.

1. Connect the LMS with an ecosystem of tools using the LTI standard

Right now we’re working on migrating ~500 Blackboard course sites. Echo360 is used in these course sites for lecture capture and for recording and embedding other videos. Echo360 is an external tool; it’s not part of the LMS (Blackboard or Canvas). Instead, the Learning Tools Interoperability (LTI) standard is used to embed and link echo360 videos into the LMS. LTI is a way to provide loose coupling between the separate black boxes of the LMS and other tools. It makes it easy for the individual vendors – both LMS and external tools – to develop their own software. They focus on writing software to meet the LTI standard without needing to understand (much of) the internal detail of each other’s software. Once done, their software can interconnect (via a very narrow connection). For institutional information technology folk, the presence of LTI support in a tool promises to make it easy to connect one piece of software to another, i.e. to connect the Blackboard LMS and Echo360, or to connect the Canvas LMS and Echo360.

From the teacher perspective, one practice LTI enables is a way for an Echo360 button to appear in the LMS content editor. Press that button and you access your Echo360 library of videos from which you select the one you wish to embed. From the student perspective, the echo360 video is embedded in your course content within the LMS. All fairly seamless.

Wrong purpose, no relationship, manual assemblage

Of the ~500 course sites we’re currently working on, there are 2162 echo360 embeds, spread across 98 of the course sites – an average of 22 echo360 videos per site. 62 of those course sites have 10 or more echo360 embeds; one course has 142. The ability to provide those statistics is not common. We can do that because of the orchestration we’ve done in the next example.
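For the curious, deriving those numbers is trivial once embed counts are queryable. The following is a sketch with hypothetical data; the real numbers come out of the course analysis database described in the next example.

```python
from statistics import mean

def embed_stats(embeds_per_course):
    """Summarise echo360 embed counts keyed by course site id."""
    with_embeds = {c: n for c, n in embeds_per_course.items() if n > 0}
    return {
        "total_embeds": sum(with_embeds.values()),
        "courses_with_embeds": len(with_embeds),
        "average_per_course": round(mean(with_embeds.values())) if with_embeds else 0,
        "ten_or_more": sum(1 for n in with_embeds.values() if n >= 10),
        "max_embeds": max(with_embeds.values(), default=0),
    }

# Hypothetical counts for five course sites (two with no echo360 embeds)
sample = {"1001_ABC": 142, "1002_DEF": 12, "1003_GHI": 0,
          "1004_JKL": 2, "1005_MNO": 0}
print(embed_stats(sample))
```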

The problem we face in migrating these videos to Canvas is that our purpose falls outside the purpose of LTI. Our purpose is not connecting an individual LMS to echo360. We’re moving from one LMS to another LMS. LTI is not designed to help with that purpose. LTI’s purpose (one LMS to echo360) and how it’s been implemented in Blackboard creates a problem for us. The code to embed an echo360 video in Blackboard (via LTI) is different to the code to embed the same video in Canvas (via LTI). If I use Blackboard’s Echo360 LTI plugin to embed an echo360 video into Blackboard, the id will be f34e8a01-4f72-46e1-XXXX-105XXXXXf75f. If I use the Canvas Echo360 LTI plugin to embed the very same video into Canvas, it will use a very different id (49dbc576-XXXX-4eb0-b0d6-6bXXXXX0707). This means that to migrate from Blackboard to Canvas we will need to regenerate/identify a new id for each of the 2162 echo360 videos in our 500+ courses.

The initial solution to this problem was:

  1. A migration person manually searches a course site and generates a list of names for all the echo360 videos.

  2. A central helpdesk uses that list to manually use the echo360 search mechanism to find and generate a new id for each video, and updates the list.

    Necessary because in echo360 only the owner of the video or the echo360 “root” user can access/see the video. So either the video owner (typically an academic) or the “root” user has to generate the new ids. From a risk perspective, only a very small number of people should have root access; it can’t be given to all the migration people.

  3. The migration person receives the list of new video ids and manually updates the new Canvas course site.

…and repeat that for thousands of echo360 videos.

It’s evident that this process involves a great deal of manual work and a bottleneck in terms of “root” user access to echo360.

Orchestrating the relationships into a semi-automated assemblage

A simple improvement to this approach would be to automate step #2 using something like Robotic Process Automation. With RPA the software (i.e. the “robot”) could step through a list of video names, login to the echo360 web interface, search for the video, find it, generate a new echo360 id for Canvas, and write that id back to the original list. Ready for handing back to the migration person.

A better solution would be to automate the whole process. i.e. have software that will

  1. Search through an entire Blackboard course site and identify all the echo360 embeds.

  2. Use the echo360 search mechanism to find and generate a new id for each video.

  3. Update the Canvas course site with the new video ids.

That’s basically what we did with some Python code. The Python code helps orchestrate the relationship between Blackboard, Canvas, and Echo360. It helps improve the cost effectiveness of the process though doesn’t shift the dial much on access or quality.
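A minimal sketch of the id-swapping heart of that code follows. The function names and the simplified embed markup are illustrative: the actual code deals with full LTI launch markup and builds the id lookup by querying echo360, and the ids below are made up (the real ones are redacted above).

```python
import re

# Echo360 ids are UUIDs. The Blackboard and Canvas LTI plugins use different
# ids for the same video, so each id found in the course HTML must be swapped.
UUID_RE = re.compile(
    r"[0-9a-f]{8}-[0-9a-f]{4}-[0-9a-f]{4}-[0-9a-f]{4}-[0-9a-f]{12}", re.I)

def rewrite_echo360_ids(html, id_map):
    """Swap Blackboard-era echo360 ids for their Canvas equivalents.

    id_map is a hypothetical {old_id: new_id} lookup, built beforehand by
    searching echo360 for each video's name. Unknown ids are left untouched.
    """
    return UUID_RE.sub(lambda m: id_map.get(m.group(0), m.group(0)), html)

# Hypothetical ids for one video
id_map = {"f34e8a01-4f72-46e1-abcd-105abcdef75f":
          "49dbc576-abcd-4eb0-b0d6-6babcdef0707"}
old = '<iframe src="https://echo360.net/lti/f34e8a01-4f72-46e1-abcd-105abcdef75f">'
print(rewrite_echo360_ids(old, id_map))
```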

But there’s more to this better solution than echo360. Our Python code needs to know what’s in the Blackboard course site and how to design content for Canvas. The software has to be more broadly connected. As explained in the next example.

2. Moving content from one LMS to another using the common cartridge standard

Common Cartridge provides “a standard way to represent digital course materials”. Within the context of an LMS migration, common cartridge (and some similar approaches) provides the main way to migrate content from one LMS to another. It provides the black box encapsulation of LMS content. Go to Blackboard and use it to produce a common cartridge export. Head over to Canvas and use its import feature to bring the content in. Hey presto, migration complete.

If only it were that simple.
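Some context helps here. A common cartridge export (.imscc) is just a zip archive whose imsmanifest.xml declares the course’s resources – everything the new LMS learns about the course has to fit through that manifest. A minimal, illustrative sketch of peeking inside one (real manifests are heavily namespaced and far more complex):

```python
import zipfile
import xml.etree.ElementTree as ET

def cartridge_resources(imscc):
    """List (identifier, type) pairs declared in a cartridge's manifest.

    imscc: path to (or file-like object for) a .imscc zip archive.
    """
    with zipfile.ZipFile(imscc) as cc:
        manifest = ET.fromstring(cc.read("imsmanifest.xml"))
    # Resource elements are usually namespaced; match on the local tag name.
    return [
        (res.get("identifier"), res.get("type"))
        for res in manifest.iter()
        if res.tag.rsplit("}", 1)[-1] == "resource"
    ]
```

Nothing in that manifest says which resources embody which parts of the learning design, which is the root of the problems below.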

Migrating content without knowing anything about it or how it should end up

Of course it’s not as simple as that. There are known problems, including:

  1. Not all systems are the same so not all content can be “standardised”.

    Vendors of different LMS seek to differentiate themselves from their competitors. Hence they tend to offer different functionality, or implement/label the same functionality differently. Either way there’s a limit to how standardised digital content can be, and not all LMS support the same functionality (e.g. quizzes). Hence a lot of manual workarounds are needed to identify and remedy issues (orchestrating entangled relations).

  2. Imports are ignorant of learning design in both source and destination LMS.

    Depending on the specific learning design in a course, the structure and nature of the course site can be very different. Standardised export formats – like common cartridge – are, by design, generic. They are ignorant of the specifics of the course learning design as embodied in the old LMS. They are also ignorant of how best to adapt that learning design to the requirements of the new LMS.

  3. Migrating information specific to the old LMS.

    Since common cartridge just packages up what is in the old LMS, detail specific to the old LMS gets ported to the new and has to be manually changed – e.g. the echo360 embeds outlined above, but also language specific to the old LMS (e.g. “Blackboard”) that is inappropriate in the new.

  4. Migrating bad practice.

    E.g. it’s quite common for the “content collection” area of Blackboard courses to accumulate a large number of files. Many of these files are no longer used: some are mistaken leftovers, others are superseded versions. Most of the time the content collection is one long list of files with names like lecture 1.pptx, lecture 1-2019.pptx, lectures 1a.pptx. The common cartridge approach to migration packages up all that bad practice and ports it to the new LMS.

All these problems contribute to the initial migration outcome not being all that good. For example, consider the following images. Figure 2 is the original Blackboard course site. A common cartridge export of that Blackboard course site was created and imported into Canvas; Figure 3 is the result.

It’s a mess and that’s just the visible structure. What were separate bits of content are now all combined together, because common cartridge is ignorant of that design. Some elements that were not needed in Canvas have been imported. Some information (Staff Information) was lost. And did you notice the default “scroll of death” in Canvas (Figure 3)?

Figure 2: Source LMS
Student view of a simple Blackboard course
Figure 3: Destination LMS
Student view of Canvas course created by importing a Common Cartridge export of the Blackboard course

The Canvas Files area is even worse off. Figure 4 shows the files area of this same course after common cartridge import. Only the first four or five files were in the Blackboard course. All the web_content0000X folders are added by the common cartridge import.

Figure 4: Canvas files area – common cartridge import
Canvas files area after Common Cartridge import

You can’t leave the course in that state. The next step is to manually modify and reorganise the Canvas site into a design that works in Canvas. This modification relies on the Canvas web interface – not the most effective or efficient interface for that purpose (e.g. the Canvas interface still does not provide a way to delete all the pages in a course). Importantly, remember that this manual tidy-up process has to be performed for each of the 1400+ course sites we’re migrating.
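The Canvas REST API does expose the pages collection, so a light-weight script can do what the web interface won’t. The following is a sketch only: it assumes an already-authenticated HTTP client (e.g. a requests.Session carrying an API token) and ignores pagination.

```python
def delete_all_pages(session, base_url, course_id):
    """Delete every wiki page in a Canvas course via the REST API.

    session: an authenticated HTTP client, e.g. a requests.Session with an
    "Authorization: Bearer <token>" header set. Returns the deleted page urls.
    """
    pages_url = f"{base_url}/api/v1/courses/{course_id}/pages"
    deleted = []
    # List the course's pages, then delete each one by its url slug.
    for page in session.get(pages_url, params={"per_page": 100}).json():
        session.delete(f"{pages_url}/{page['url']}")
        deleted.append(page["url"])
    return deleted
```

Passing the client in keeps authentication concerns separate and makes the sketch testable with a fake client.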

The issue here is that common cartridge is a generic standard. Its purpose (in part) is to take content from any LMS (or other tool) and enable it to be imported into another LMS/tool. It has no contextual knowledge. We have to manually orchestrate that back in.

Driving the CAR: Migration scaffolded by re-entangling knowledge of source and destination structure

On the other hand, our purpose is different and specific. We know we are migrating from a specific version of Blackboard to a specific version of Canvas. We know the common approaches used in Blackboard by our courses. We eventually developed the knowledge of how what was common in Blackboard must be modified to work in Canvas. Rather than engage in the manual, de-contextualised process above, a better approach would leverage this additional knowledge to increase the efficiency and effectiveness of the migration.

To do this we developed the Course Analysis Report (CAR) approach. Broadly this approach automates the majority of the following steps:

  1. Pickle the Blackboard course site.

    Details of the structure, make-up, and HTML content of the Blackboard course site are extracted out of Blackboard and stored in a file: a single data structure (residing in a shared network folder) that contains a snapshot of the Blackboard course site.

  2. Analyse the pickle and generate a CAR.

    Perform various analyses and modifications of the pickle file (e.g. look for Blackboard-specific language, modify echo360 embeds, identify which content collection files are actually attached to course content etc.), stick that analysis into a database, and generate a Word document providing a summary of the course site.

  3. Download the course files and generate specially formatted Word documents representing course site content.

    Using our knowledge of how our Blackboard courses are structured, and of the modifications necessary for an effective Canvas course embodying a similar design intent, create a couple of folders in the shared course folder containing all of the course’s files and Word documents containing the web content of the Blackboard course. These files, folders, and documents are formatted to scaffold modification (using traditional desktop tools). For example, the course files are separated into those that are actually used in the current course site and those that aren’t, making it easy to decide not to migrate unnecessary content.

  4. Upload the modified files and Word documents directly into Canvas as mostly completed course content.

    Step #3 is where almost all the necessary design knowledge gets applied to migrate the course. All that’s left is to upload it into Canvas. Uploading the files is easy and supported by Canvas. Uploading the Word documents into Canvas as modules is done via word2Canvas, a semi-automated tool.
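The four steps can be sketched as a small pipeline. The data structure and function names here are illustrative, not the production code:

```python
import pickle
from dataclasses import dataclass, field

@dataclass
class CourseSnapshot:
    """Step 1: a pickled snapshot of a Blackboard course site."""
    course_id: str
    content_items: list = field(default_factory=list)  # HTML of each content item
    files: list = field(default_factory=list)          # content collection file names

def save_snapshot(snapshot, path):
    """Write the snapshot to the shared network folder."""
    with open(path, "wb") as f:
        pickle.dump(snapshot, f)

def load_snapshot(path):
    """Read a snapshot back for analysis (step 2) or generation (step 3)."""
    with open(path, "rb") as f:
        return pickle.load(f)

def used_files(snapshot):
    """Part of step 2's analysis: which content collection files are
    actually linked from course content (and so worth migrating)."""
    html = " ".join(snapshot.content_items)
    return [name for name in snapshot.files if name in html]
```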

Steps #1 and #2 are entirely automatic as is the download of course content and generation of the Word documents in step #3. These are stored in shared folders available to the entire migration team (the following table provides some stats on those folders). From there the migration is semi-automated. People leveraging their knowledge to make decisions and changes using common desktop tools.

| Development Window | # course sites | # of files | Disk Usage |
|---|---|---|---|
| 1 | 219 | 15,213 | 1633Gb |
| 2 | 555 | 2531 | 336Gb |

Figures 5 and 6 show the end result of this improved migration process for the same course as Figures 3 and 4. Figure 5 illustrates how the structure of “modules” in the Blackboard site has been recreated using the matching Canvas functionality. What the figures don’t show is that Step 3 of the CAR process has removed or modified Blackboard practices to fit the capabilities of Canvas.

Figure 6 illustrates a much neater Files area compared to Figure 4. All of the unnecessary common cartridge crud is gone. Figure 6 also illustrates Step 3’s addition of structure to the Files area. The three files shown are all within a Learning Module folder. This folder was not present in the Blackboard course site’s content collection. It’s been added by the CAR to indicate where in the course site structure the files are used: these images were all used within the Learning Modules content area of the Blackboard course site (Figure 2). In a more complex course site this additional structure makes it easier to find the relevant files.

Figure 5 still has a pretty significant whiff of the “scroll of death”, in part because the highly visual card interface used in the Blackboard course site is not available in Canvas. This is a “feature” of Canvas and how it organises learning content into a long, visually boring scroll. More on that next.

Figure 5: Canvas site via CAR
Canvas course site created by migrating via CAR
Figure 6: Canvas files via CAR
Canvas files migrated via CAR

3. Making teaching and learning easier/better using a vanilla LMS

There’s quite a bit of literature and other work arguing for the value to learning and the learning experience of the aesthetics, findability, and usability of the LMS and LMS courses. Almost as much as there is literature and work expounding the value of consistency as a method for addressing those concerns (misguided IMHO). Migrating to a new LMS typically includes some promise of making the experience of teaching and learning easier, better, and more engaging. For example, one of the apparent advantages of Canvas is that it reportedly looks prettier than the competitors. People using Canvas generally report the user interface as feeling cleaner. Apparently it “provides students with an accessible and user-friendly interface through which they can access course learning materials”.

Using an overly linear, visually unappealing, context-free, generic tool constrained by the vendor

Of course, beauty is in the eye of the beholder and familiarity can breed contempt. Some think Canvas “plain and ugly”. As illustrated above by Figures 3 and 5, the Canvas Modules view – the core of how students interact with study material – is widely known (e.g. at the University of Oxford) to be overly linear, to involve lots of vertical scrolling, and to not be very visually appealing. Years of experience have also shown that the course navigation experience is less than stellar, for a variety of reasons.

There are common manual workarounds that are widely recommended to teaching staff. There is also a community of third-party design tools intended to improve the Canvas interface and navigation experience, as well as requests to Canvas to respond to these observations and improve the system. Some examples include: a 2015 request; a suggestion from 2016 to allow modules within modules; and another grouping-modules request in 2019. The last of these includes a comment touching on the shortcomings of most of the existing workarounds; the second includes a comment from the vendor explaining there are no plans to provide this functionality.

As Figure 2 demonstrates, we’ve been able to do aspects of this since 2019 in Blackboard Learn, but we can’t in the wonderful new system we’re migrating to. We’ll be losing functionality used in hundreds of courses.

Canvas Collections: Injecting context, visual design, and alternatives into the Canvas’ modules page

Canvas Collections is a work-in-progress designed to address the shortcomings of the current Canvas modules page. We’re working through the prevailing heavyweight umwelt in an attempt to move it into production. For now, it’s working as a userscript. Illustrating the flexibility of the light-weight approach, it’s recently been updated to semi-automate the typical Canvas workaround for creating visual home pages. Canvas Collections is inspired by related approaches within the Canvas community, including CSS-based approaches to creating interactive cards and Javascript methods for inserting cards into Canvas, which appear to have gone into production at the University of Oxford. It also draws upon the experience of developing and supporting the use of the Card Interface in Blackboard.

Canvas Collections is Javascript that modifies the Canvas modules view by adding support for three new abstractions. Each abstraction represents a different way to orchestrate entangled relations. The three abstractions are:

  1. Collections;

    Rather than a single, long list of modules, modules can be grouped into collections that align with the design intent of the course. Figures 7 and 8 illustrate a common use of two collections: course content and assessment. A navigation bar is provided to switch between the two collections. When viewing a collection you only see the modules that belong to that collection.

  2. Representations; and,

    Each collection can be represented in different ways – no longer limited to a text-based list of modules and their contents. Figures 7 and 8 demonstrate a representation that borrows heavily from the Card Interface. Such representations – implemented in code – can perform additional tasks to further embed context and design intent.

  3. Additional module “metadata”.

    Canvas stores a large collection of generic information about modules. However, as you engage in learning design you assign additional meaning and purpose to modules, which can’t be stored in Canvas. Canvas Collections supports additional design-oriented metadata about modules. Figures 7 and 8 demonstrate the addition to each module of: a description or driving question to help learners understand the module’s intent; a date or date period when learners should pay attention to the module; a different label to further refine its purpose; and a picture to provide a visual representation (dual-coding anyone?).
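To make the abstractions concrete, here is an illustrative sketch of that design-oriented metadata as data, plus the core of what a collection does. The shape and field names are hypothetical – the actual Canvas Collections configuration differs:

```python
# Hypothetical per-module design metadata that Canvas itself has no home for:
# the collection a module belongs to, a label, a driving question, a
# study-period-relative date, and an image for its card.
COLLECTIONS_CONFIG = {
    "Introduction": {
        "collection": "Course Content", "label": "Topic 1",
        "question": "What is this course about?", "date": "Week 1",
        "image": "intro.png",
    },
    "Assignment 1": {
        "collection": "Assessment", "label": "Assignment",
        "question": "Can you apply the ideas so far?", "date": "Friday Week 5",
        "image": "a1.png",
    },
}

def modules_in_collection(config, collection):
    """The modules shown when a collection's navigation button is clicked."""
    return [m for m, meta in config.items() if meta["collection"] == collection]
```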

Figures 7 and 8 illustrate each of these abstractions. The modules for this sample course have been divided into two collections: Course Content (Figure 7) and Assessment (Figure 8). Perhaps not very creative, but mirroring common organisational practice. Each Canvas module is represented by a card, which includes the module’s title (from Canvas), a specific image, a description, relevant dates, and a link to the module.

The dates are a further example of injecting context into a generic tool to save time and manual effort. Providing specific dates (e.g. July 18, or Friday September 2) would require manual updating every time a course site was rolled over to a new offering (at a new time). Alternatively, Canvas Collections’ Griffith Cards representation knows both the Griffith University calendar and how Griffith’s approach to Canvas course ids specifies the study period for a course. This means dates can be specified in a generic study period format (e.g. Week 1, or Friday Week 11) and the representation can figure out the actual date.
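A sketch of that date resolution, assuming the Monday of Week 1 is already known (in the real representation it is derived from the institutional calendar and the course id; both the function name and that simplification are assumptions here):

```python
from datetime import date, timedelta

DAYS = ["Monday", "Tuesday", "Wednesday", "Thursday", "Friday",
        "Saturday", "Sunday"]

def resolve_date(spec, week1_monday):
    """Turn 'Week 1' or 'Friday Week 7' into an actual date.

    spec: a study-period-relative date, optionally starting with a day name.
    week1_monday: the Monday of Week 1 for this offering's study period.
    """
    parts = spec.split()
    day = parts[0] if parts[0] in DAYS else "Monday"  # bare 'Week N' -> Monday
    week = int(parts[-1])
    return week1_monday + timedelta(weeks=week - 1, days=DAYS.index(day))

# If Week 1 of the study period starts Monday 18 July 2022:
start = date(2022, 7, 18)
print(resolve_date("Week 1", start))        # 2022-07-18
print(resolve_date("Friday Week 7", start)) # 2022-09-02
```

Because the offering’s Week 1 is the only moving part, rollover to a new study period needs no manual date edits.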

Not only does Canvas Collections improve the aesthetics of a Canvas course site, it improves the findability of information within the course site by making it possible to explicitly represent the information architecture. Research (Simmunich et al, 2015) suggests that course sites with higher findability lead to increases in student-reported self-efficacy and motivation, and a better overall experience. Experience with the Card Interface and early experience with Canvas Collections suggest that it is not just students who benefit. Being able to improve a course site using Canvas Collections appears to encourage teaching staff to think more explicitly about the design of their course sites – to consider questions like: What are the core objects/activities in your course? How should they be explained? Visually represented?

Figure 7: Canvas Collections – content collection

Figure 8: Canvas Collections – assessment collection


The argument here is that more effective orchestration of entangled relations will be a necessary (though not sufficient) enabler for breaking the iron triangle in learning and teaching. Ongoing reliance on manual orchestration of the entangled relations necessary to leverage the black boxes of heavyweight IT will be a barrier to breaking the iron triangle – in terms of efficiency, effectiveness, and novelty. Efficiency, because manual orchestration requires time-consuming human intervention. Effectiveness, because the time requirement will either prevent the work being done or, if it is done, significantly increase the chance of human error. Novelty because – as Arthur (2009) argues – technological evolution comes from combining technologies, where technology is “the orchestration of phenomena for some purpose” (Dron, 2022, p. 155). It’s orchestration all the way down. The ability to creatively orchestrate the entangled relations inherent to learning and teaching will be a key enabler of new learning and teaching practices.

What we’re doing is not new. In the information systems literature it has been labelled light-weight Information Technology (IT) development, defined as “a socio-technical knowledge regime driven by competent users’ need for solutions, enabled by the consumerisation of digital technology, and realized through innovation processes” (Bygstad, 2017, p. 181). Light-weight IT development is increasingly how the people responsible for solving problems with the black boxes of heavyweight IT (a different socio-technical knowledge regime) leverage technology to orchestrate the necessary entangled relations into contextually appropriate assemblages that meet their own needs – in ways that save time and enable new and more effective practice. The three examples above illustrate how we’ve done this in the context of an LMS migration and the benefits that have arisen.

These “light-weight IT” practices aren’t new in universities or learning and teaching. Pre-designed templates for the LMS (Perämäki, 2021) are an increasingly widespread and simple example. The common practice within the Canvas community of developing and sharing userscripts or Python code is another. A more surprising example is the sheer number of universities with significant enterprise projects in the form of Robotic Process Automation (RPA) (e.g. the University of Melbourne, the Australian National University, Griffith University, and the University of Auckland). RPA is a poster-child example of lightweight IT development. These significant enterprise RPA projects are designed to develop the capability to more efficiently and effectively re-entangle the black boxes of heavyweight IT. But to date universities appear to be focusing RPA efforts on administrative processes such as HR, finance, and student enrolment. I’m not aware of any evidence of institutional projects explicitly focused on applying these methods to learning and teaching. In fact, enterprise approaches to the use of digital technology appear more interested in increasing the use of outsourced, vanilla enterprise services. Leaving it to us tinkerers.

A big part of the struggle is that lightweight and heavyweight IT are different socio-technical knowledge regimes (Bygstad, 2017). They have different umwelten, and in L&T practice the heavyweight umwelt reigns supreme. Hence, I’m not sure if I’m more worried about the absence of lightweight approaches to L&T at universities, or about the nature of the “lightweight” approach that universities might develop given their current knowledge regimes. On the plus side, some really smart folk are starting to explore the alternatives.


Arthur, W. B. (2009). The Nature of Technology: What it is and how it evolves. Free Press.

Bygstad, B. (2017). Generative Innovation: A Comparison of Lightweight and Heavyweight IT. Journal of Information Technology.

Cottam, M. E. (2021). An Agile Approach to LMS Migration. Journal of Online Learning Research and Practice, 8(1).

Daniel, J., Kanwar, A., & Uvalić-Trumbić, S. (2009). Breaking Higher Education’s Iron Triangle: Access, Cost, and Quality. Change: The Magazine of Higher Learning, 41(2), 30–35.

Dron, J. (2022). Educational technology: What it is and how it works. AI & SOCIETY, 37, 155–166.

Fawns, T. (2022). An Entangled Pedagogy: Looking Beyond the Pedagogy—Technology Dichotomy. Postdigital Science and Education.

Jones, D., & Clark, D. (2014). Breaking BAD to bridge the reality/rhetoric chasm. In B. Hegarty, J. McDonald, & S. Loke (Eds.), Rhetoric and Reality: Critical perspectives on educational technology. Proceedings ascilite Dunedin 2014 (pp. 262–272).

Mulder, F. (2013). The LOGIC of National Policies and Strategies for Open Educational Resources. International Review of Research in Open and Distributed Learning, 14(2), 96–105.

Perämäki, M. (2021). Predesigned course templates: Helping organizations teach online [Masters, Tampere University of Applied Sciences].

Ryan, T., French, S., & Kennedy, G. (2021). Beyond the Iron Triangle: Improving the quality of teaching and learning at scale. Studies in Higher Education, 46(7), 1383–1394.

Exploring Dron’s definition of educational technology

Pre-COVID, the role of technology in learning and teaching in higher education was important. However, in 2020 it became core to the COVID response. Given the circumstances it is no surprise that chunks of that response were not that great. There was some good work. There was a lot of “good enough for the situation” work. There was quite a bit that really sucked. For example,

Drake Hotline Bling Meme

Arguably, I’m not sure there’s much difference from pre-COVID practice. Yes, COVID meant that the importance and spread of digital technology use was much, much higher. But rapid adoption whilst responding to a pandemic was unlikely to be qualitatively better (or even as good?) as previous practice. There just wasn’t time for many to engage in the work required to question prior assumptions and redesign prior practices to suit the very different context and needs. Let alone harness technology transformatively.

It is even less likely if – as I believe – most pre-COVID individual and organisational assumptions and practices around learning, teaching and technology were built on fairly limited conceptual foundations. Building a COVID response on that sandy foundation was never going to end well. As individuals, institutions, and vendors (thanks Microsoft?) begin to (re-)imagine what’s next for learning and teaching in higher education, it is probably a good time to improve those limited conceptual foundations.

That’s where this post comes in. It is an attempt to explore in more detail Dron’s (2021) definition of educational technology and how it works. There are other conceptual/theoretical framings that could be used. For example, the postdigital (Fawns, 2019). That’s for other posts. The intent here is to consider Dron’s definition of educational technology and if/how it might help improve the conceptual foundations of institutional practices with educational technology.

After writing this post, I’m seeing some interesting possible implications. For example:

  • Another argument for limitations in the “pedagogy before technology” argument (pedagogy is technology, so this is an unhelpful tautology).
  • A possible explanation for why most L&T professional development is attended by the “usual suspects” (it’s about purpose).
  • Thoughts on the problems created by the separation of pedagogy and technology into two separate organisational units (the quality of the learning experience arises from the combination of these two separate units, with separate purposes, each focused on their specific phenomena).
  • One explanation why the “blank canvas” (soft) nature of the LMS (& why the NGDLE only makes this worse) is a big challenge for quality learning and teaching (soft is hard).
  • Why improving digital fluency or the teaching qualifications of teaching staff are unlikely to address this challenge (soft is hard, and solutions focused on individuals don’t address the limitations in the web of institutional technologies – in the broadest Dron sense).

Analysing a tutorial room

Imagine you’re responsible for running a tutorial at some educational institution. You’ve rocked up to the tutorial room for the first time and you’re looking at one of the following room layouts: computer lab, or classroom. How does Dron’s definition of educational technology help understand the learning and teaching activity and experience you and your students are about to embark upon? How might it help students, teachers, and the people from facilities management and your institution’s learning and teaching centre?

Computer lab (Czeva, CC BY-SA 4.0, via Wikimedia Commons) and classroom (Thedofc, public domain, via Wikimedia Commons)

Ask yourself these questions

  1. What technology do you see in the rooms above (imagine you can see a tutorial being run in both)?
  2. What is the nature of the work you and your students do during the tutorial?
  3. Which of the rooms above would be “best” for your tutorial? Why?
  4. How could the rooms above be modified to be better for tutorials? Why?

What is the (educational) technology in the room?

Assume we’re looking at a tutorial being carried out in both images. What would be on your list of technology being used?

A typical list might include chairs, tables, computers, whiteboards (interactive/smart and static), clock, notice boards, doors, windows, walls, floors, cupboards, water bottles, phones, books, notepads etc.

You might add more of the technologies that you and your students brought with you. Laptops, phones, backpacks etc. What else?

How do you delineate between what is and isn’t technology? How would you define technology?

Defining technology

Dron (2021) starts by acknowledging that defining technology is difficult: most definitions are vague, incomplete, and often contradictory, and he goes into some detail about why. Dron’s definition draws on Arthur’s (2009) definition of technology as (emphasis added)

the orchestration of phenomena for some purpose (Dron, 2021, p. 1)

Phenomena include stuff that is “real or imagined, mental or physical, designed or existing in the natural world” (Dron, 2021, p. 2). Phenomena can be seen as belonging to physics (materials science for table tops), biology (human body climate requirements), chemistry etc. Phenomena can be: something you touch (the book you hold); another technology (that same book); a cognitive practice (reading); and, partially or entirely human enacted (think/pair/share, organisational processes etc).

For Arthur, technological evolution comes from combining technologies. The phenomena being orchestrated in a technology can be another technology. Writing (technology) orchestrates language (technology) for another purpose. A purpose Socrates didn’t much care for. Different combinations (assemblies) of technologies can be used for different purposes. New technologies are built using assemblies of existing technologies. There are inter-connected webs of technologies orchestrated by different people for different purposes.

For example, in the classrooms above manufacturers of furniture orchestrated various physical and material phenomena to produce the chairs, desks and other furniture. Some other people – probably from institutional facilities management – orchestrated different combinations of furniture for the purpose of designing cost efficient and useful tutorial rooms. The folk designing the computer lab had a different purpose (provide computer lab with desktop computers) than the folk designing the classroom (provide a room that can be flexibly re-arranged). Those different purposes led to decisions about different approaches to orchestration of both similar and different phenomena.

When the tutorial participants enter the room they start the next stage of orchestration for different, more learning and teaching specific purposes. Both students and teachers will have their own individual purposes in mind. Purposes that may change in response to what happens in the tutorial. Those diverse purposes will drive them to orchestrate different phenomena in different ways. To achieve a particular learning outcome, a teacher will orchestrate different phenomena and technology. They will combine the technologies in the room with certain pedagogies (other technologies) to create specific learning tasks. The students then orchestrate how the learning tasks – purposeful orchestrations of phenomena – are adapted to serve their individual purposes.

Some assemblies of technologies are easier to orchestrate than others (e.g. the computers in a computer lab can be used to play computer games, rather than “learning”). Collaborative small group pedagogies would probably be easier in the classroom, than the computer lab. The design of the furniture technology in the classroom has been orchestrated with the purpose of enabling this type of flexibility. Not so the computer lab.

For Dron, pedagogies are a technology and education is a technology. For some,

Them's fighting words

What is educational technology?

Dron (2021) answers

educational technology, or learning technology, may tentatively be defined as one that, deliberately or not, includes pedagogies among the technologies it orchestrates.

Consequently, both the images above are examples of educational technologies. The inclusion of pedagogies in the empty classroom is more implicit than in the computer lab, which shows people apparently engaged in a learning activity. The empty classroom implicitly illustrates some teacher-driven pedagogical assumptions in how it is laid out, with the chairs and desks essentially in rows facing the front.

The teacher-driven pedagogical assumptions in the computer lab are more explicit and fixed. Not only because you can see the teacher up the front and the students apparently following along. But also because the teacher-driven pedagogical assumptions are enshrined in the computer lab. The rows in the computer lab are not designed to be moved (probably because of the phenomena associated with desktop computers, not the most moveable technologies). The seating positions for students are almost always going to be facing toward the teacher at the front of the room. There are even partitions between each student making collaboration and sharing more difficult.

The classroom, however, is more flexible. It implicitly enables a number of different pedagogical assumptions. A number of different orchestrations of different phenomena. The chairs and tables can be moved. They could be pushed to the sides of the room to open up a space for all sorts of large group and collaborative pedagogies. The shapes of the desks suggest that it would be possible to push four of them together to support small group pedagogies. Pedagogies that seek to assemble or orchestrate a very different set of mental and learning phenomena. The classroom is designed to be assembled in different ways.

But beyond that, both rooms appear embedded in the broader assembly of technology that is formal education. They appear to be classrooms within the buildings of an educational institution. Use of these classrooms is likely scheduled according to a time-table. Scheduled classes are likely led by people employed according to specific position titles and role descriptions. Most of which are likely to make some mention of pedagogies (e.g. lecturer, tutor, teacher).

Technologies mediate all formal education and intentional learning

Dron’s (2021) position is that

All teachers use technologies, and technologies mediate all formal education (p. 2)

Everyone involved in education has to be involved in the orchestration of new assemblies of technology. For example, as you enter one of the rooms above as the teacher, you will orchestrate the available technologies, including your choice of explicit/implicit pedagogical approaches, into a learning experience. If you enter one of the rooms as the learner, you will orchestrate the assembly presented to you by the teacher and institution with your own technologies, for your own purpose.

Dron does distinguish between learning and intentional learning. Learning is natural. It occurs without explicit orchestration of phenomena for a purpose. He suggests that babies and non-human entities engage in this type of learning. But when we start engaging in intentional learning we start orchestrating assemblies of phenomena/technologies for learning. Technologies such as language, writing, concepts, models, theories, and beyond.

Use and participation: hard and soft

For Dron (2021) students and teachers are “not just users but participants in the orchestration of technologies” (p. 3).

The technology that is the tutorial you are running requires participation from both you and the students. For example, to help organise the room for particular activities, use the whiteboard/projector to show relevant task information, use language to share a particular message, and use digital or physical notebooks etc. Individuals perform these tasks in different ways, with lesser or greater success, with different definitions of what is required, and with different preferences. They don’t just use the technology, they participate in the orchestration.

Some technologies heavily pre-determine and restrict what form that participation takes. For example, the rigidity of the seating arrangements in the computer lab image above. There is very limited capacity to creatively orchestrate the seating arrangement in the computer lab. The students’ participation is largely (but not entirely) limited to sitting in rows. The constraints this type of technology places on our behaviour lead Dron to label them as hard technologies. But even hard technologies can be orchestrated in different ways by coparticipants, which in turn leads to different orchestrations.

Other technologies allow, and may require, more active and creative orchestration. As mentioned above, the classroom image includes seating that can be creatively arranged in different ways. It is a soft technology. The additional orchestration that soft technologies allow demands from us additional knowledge, skills, and activities (i.e. additional technologies) to be useful. Dron (2021) identifies “teaching methods, musical instruments and computers” as further examples of soft technologies. Technologies that require more from us in terms of orchestration. Soft technologies are harder to use.

Hard is easy, soft is hard

Hard technologies typically don’t require additional knowledge, processes and techniques to achieve their intended purpose. What participation hard technologies require is constrained and (hopefully) fairly obvious. Hard technologies are typically easy to use (but perhaps not a great fit). However, the intended purpose baked into the hard technology may not align with your purpose.

Soft technologies require additional knowledge and skills to be useful. The more you know the more creatively you can orchestrate them. Soft technologies are hard to use because they require more of you. However, the upside is that there is often more flexibility in the purpose you can achieve with soft technologies.

For example, let’s assume you want to paint a picture. The following images show two technologies that could help you achieve that purpose. One is hard and one is soft.

Hard is easy (Aleksander Fedyanin, CC0, via Wikimedia Commons); soft is hard (small easel with a blank canvas, CC0)

Softness is not universally available. It can only be used if you have the awareness, permission, knowledge, and self-efficacy necessary to make use of it. Since I “know” I “can’t paint”, I’d almost certainly never even think of using a blank canvas. But if I’m painting by numbers, then I’m stuck with producing whatever painting has been embedded in this hard technology. At least as long as I accept the hardness. Nor is hard versus soft a binary categorisation; it’s a spectrum.

As a brand new tutor entering the classroom shown above, you may not feel confident enough to re-arrange the chairs. You may also not be aware of certain beneficial learning activities that require moving the chairs. If you’ve never taught a particular tutorial or topic with a particular collection of students, you may not be aware that different orchestrations of technologies may work better.

Hard technologies are first and structural

Harder technologies are structural. They funnel practice in certain ways. Softer technologies tend to adapt to those funnels; some won’t be able to adapt. The structure baked into the hard technology of the computer lab above makes it difficult to effectively use a circle-of-voices activity. The structure created by hard technologies may mean you have to consider a different soft technology.

This can be difficult because hard technologies become part of the furniture. They become implicit, invisible and even apparently natural parts of education. The hardness of the computer lab above is quite obvious, especially the first time you enter the room for a tutorial. But what about the other invisible hard technologies embedded into the web of technologies that is formal education?

You assemble the tutorial within a web of other technologies. As the number of hard technologies and the interconnections between them increase, the web in which you’re working becomes harder to change. Various policies, requirements and decisions are made before you start assembling the tutorial. You might be a casual academic paid for one hour to take a tutorial in the computer lab shown above on Friday at 5pm. You might be required to use a common, pre-determined set of topics/questions. To ensure a common learning experience for students across all tutorials you might be required to use a specific pedagogical approach.

While perhaps not as physically hard as the furniture in the computer lab, these technologies tend to funnel practice toward certain forms.

Education is a coparticipative technological process

For Dron (2021) education is a coparticipative technological process. Education – as a technology – is a complex orchestration of different nested phenomena for diverse purposes.

How it is orchestrated and for what purposes are inherently situated, socially constructed, and ungeneralizable. While the most obvious coparticipants in education are students and teachers there are many others. Dron (2021) provides a sample, including “timetablers, writers, editors, illustrators of textbooks, creators of regulations, designers of classrooms, whiteboard manufacturers, developers and managers of LMSs, lab technicians”. Some of a never ending list of roles that orchestrate some of the phenomena that make up the technologies that teachers and students then orchestrate to achieve their diverse purposes.

Dron (2021) argues that how the coparticipants orchestrate the technologies is what is important. That the technologies of education – pedagogies, digital technologies, rooms, policies, etc. – “have no value at all without how we creatively and responsively orchestrate them, fuelled by passion for the subject and process, and compassion for our coparticipants” (p. 10). Our coparticipative orchestration is the source of the human, socially constructed, complex and unique processes and outcomes of learning. More than this, Dron (2021) argues that the purpose of education is both to develop our knowledge and skills and to encourage the never-ending development of our ability to assemble our knowledge and skills “to contribute more and gain more from our communities and environments” (p. 10).

Though, as a coparticipant in this technological process, I assume I could orchestrate that particular technology with other phenomena to achieve a different purpose. E.g. if I were a particular type of ed-tech bro, then profit might be my purpose of choice.

Possible questions, applications, and implications

Dron (2021) applies his definition of educational technology to some of the big educational research questions, including: the no significant difference phenomenon; learning styles; and the impossibility of replication studies for educational interventions. This produces some interesting insights. My question is whether or not Dron’s definition can be usefully applied to my practitioner experience with educational technology within Australian Higher Education. This is a start.

At this stage, I’m drawn to how Dron’s definition breaks down the unhelpful duality between technology and pedagogy. Instead, it positions pedagogy and technology as “just” phenomena that the coparticipants in education will orchestrate for their purposes. Echoing the sociomaterial and postdigital turns. The notions of hard and soft technologies and what they mean for orchestration also seem to offer an interesting lens to understand and guide institutional attempts to improve learning and teaching.

Pulling apart Dron’s (2021) definition

the orchestration of phenomena for some purpose (Arthur, 2009, p. 51)
suggests the following questions about L&T as being important:
1. Purpose: whose purpose and what is the purpose?
2. Orchestration: how can orchestration happen and who is able to orchestrate?
3. Phenomena: what phenomena/assemblies are being orchestrated?

Questions that echo Fawns’ (2020) use of a postdigital perspective to argue against the “pedagogy before technology” mantra, landing on the following

(context + purpose) drives (pedagogy [ which includes actual uses of technology])

With this in mind, designing a tutorial in one of the rooms would start with the context and purpose. In this case the context is the web of existing technologies that have led to you and your students being in the room ready for a tutorial. The purpose includes the espoused learning goals of the tutorial, but also the goals of all the other participants, including those that emerge during the orchestration of the tutorial. This context and purpose is then what ought to drive the orchestration of various phenomena (which Fawns labels “pedagogy”) for that diverse and emergent collection of purposes.

This suggests it might be useful if institutional attempts to improve learning and teaching aimed to improve the quality of that orchestration. The challenge is that the quality of that orchestration should be driven by context and purpose, which are inherently diverse and situated. A challenge which I don’t think existing institutional practices are able to effectively deal with. Which is perhaps why discussion of quality learning and teaching in higher education “privileges outcome measures at the expense of understanding the processes that generate those outcomes” (Ellis & Goodyear, 2019, p. 2).

It’s easier to deal with abstract outcomes (very soft, non-specific technologies) than with the situated and contextual diversity of specifics, and with how to help with the orchestration required to achieve those outcomes. In part, because many of the technologies that contribute to institutional L&T are so hard to reassemble. Hence it’s easier to put the blame on teaching staff (e.g. lack of teaching qualifications or digital fluency) than to think about how the assembly of technologies that make up an institution should be rethought (e.g. this thread).

More to come.


Arthur, W. B. (2009). The Nature of Technology: What it is and how it evolves. Free Press.

Dron, J. (2021). Educational technology: What it is and how it works. AI & SOCIETY.

Fawns, T. (2019). Postdigital Education in Design and Practice. Postdigital Science and Education, 1(1), 132–145.

Understanding (digital) education through workarounds and quality indicators

COVID-19 and the subsequent #pivotonline have higher education paying a lot more attention to the use of digital and online technology for learning and teaching (digital education). COVID-19 has made digital education necessary. COVID-19 has made any form of education – and just about anything else – more difficult. For everyone. COVID-19 and its impact are rewriting what higher education will become. COVID-19 is raising hopes and fears that what comes after will be (positively?) transformative. Not beholden to previous conceptions and corporate mores.

Most of that’s beyond me. Too big to consider. Too far beyond my control and my personal limitations. Hence I’ll retreat to my limited experience, practices, and conceptions. Exploring those more familiar and possibly understandable landscapes in order to reveal something that might be useful for the brave new post-COVID-19 world of university digital education. A world that I’m not confident has any hope of being positively transformed. Regardless of what the experts, prognosticators, futurists and vendors are selling. But I’m well-known for being a pessimist.

Echoing Phipps and Lanclos (2019) I believe that making changes in digital education needs to be grounded in “an understanding of the practices that staff undertake and the challenges they face” (p. 68). Some colleagues and I have started identifying our practices and challenges by documenting the workarounds we’ve used and developed. Alter (2014) defines workarounds as

a goal-driven adaptation, improvisation, or other change to one or more aspects of an existing work system in order to overcome, bypass, or minimize the impact of obstacles, exceptions, anomalies, mishaps, established practices, management expectations, or structural constraints that are perceived as preventing that work system or its participants from achieving a desired level of efficiency, effectiveness, or other organizational or personal goals (p. 1044)

Workarounds are a useful lens because they highlight areas of disconnect between what is needed and what is provided. Alter (2014) suggests that this Theory of Workarounds could be used to understand these disconnects and leverage that understanding to drive re-design. Resonating with Biggs’ (2001) notion of quality feasibility, a practice that actively seeks to understand the impediments to quality teaching and to remove them.

The challenge I faced was whether I could remember a reasonable percentage of the workarounds I’ve used in 20+ years.

Enter the following list of eight Online Course Quality Indicators, also available as a PDF download and tested in Joosten, Cusatis & Harness (2019) (HT: @plredmond and OLDaily). My interest here isn’t in the validity/value of this type of approach (of which I have my doubts). Instead, my interest is that the eight indicators offer a prompt for the type of considerations to which a conscientious teacher might pay attention. The type of considerations that will point out limitations within institutional support for (digital) education and generate workarounds.

Initial findings

So far I’ve remembered 53 workarounds. Detail provided below. The following table maps workarounds against the quality indicators. The biggest category is Doesn’t fit, i.e. workarounds that didn’t seem to fit the quality indicators. Perhaps suggesting that the quality indicators were designed to analyse the outcome of teacher work (an online course), rather than provide insight into the practices teachers undertake to produce that outcome.

Peer interaction and content interaction are the indicators with the next highest number of workarounds. Though I have collapsed both content interaction and richness indicators into content interaction.

Quality Indicator                  # of workarounds
Instructor interaction
Peer interaction
Content interaction / Richness
Doesn’t fit
53 is a fair number. But perhaps not surprising given my original discipline is information technology and part of my working life has been spent designing LMS-like functionality.

What’s disappointing is that a number of these workarounds are duplicates solving the same fundamental problem. The only difference being in the institutional and technological context. For example, a number of the workarounds are focused on helping with:

  1. Production and maintenance of well-designed, rich course content.
  2. Increasing the quantity and quality of what teachers know about students’ backgrounds and activity.

What does that say about higher education, digital education, and me?

Proper reflection and analysis will have to wait for another time. But evidence of difficulties in at least two fundamental practices seems important. Or, perhaps it’s just showing how blinkered and obsessive my interest is.

There are some questions about whether the following are actually workarounds. In particular, some of the fairly specific learning activities aren’t actually designed to change an existing part of the institutional context. There was no part of the institutional context that provided for the learning activities. Largely because the learning activities were so specific to the learning intent that the institution would never have been able to provide any support. However, most institutions now have lists of digital tools that have been approved for use in learning and teaching. Typically, the specificity of the learning need means that no appropriate tool has been added to the list.

What does this say about the reusability paradox and institutional approaches to digital education?

Workarounds and quality indicators

The following steps through each of the quality indicators and uses them as inspiration to answer the above question. For each workaround, links to additional detail are provided and initial thoughts on the workaround given.


Systems Emergencies

One attempt at an authentic real world experience was the Systems Emergency assessment item for Systems Administration (Jones, 1995). Each student had to run a program on their computer. A program that would break their computer. Simulating an authentic error. The students had to draw on what they’d learned during the course to diagnose the problem, fix it and complete a report.

Is this a workaround? It’s so specific to a particular course and a particular pedagogical choice there is no institutional system that it is replacing.

Open Learning Computing Platform

A better example from the same course went by the acronym OLCP (open learning computing platform) (Jones, 1994). The recommended computer systems almost all distance education students were using (Windows 3.1/95) were not up to the requirements of the course (Systems Administration). To work around this limitation we distributed a version of Linux (Jones, 1996a), eventually relying on commercial distributions. Without Linux the course couldn’t be taught.

Personal Blogs, not ePortfolios

Arguably, my predilection for requiring students to use their choice of public blogging engines, rather than institutional ePortfolio tools, was also driven by a desire for authenticity. Not to mention my skepticism about the value of institutional ePortfolio systems (which got me in trouble one time). Initially, individual student blogs were an extension of journaling (introduced in Sys Admin) and an encouragement to engage in open reflection and discussion. Intended to mirror good practice for IT professionals and first used in a Web Engineering course in 2002. Later evolving into the BAM and BIM tools to encourage reflection for assessment purposes and to encourage the development of a professional learning network.

Alignment and curriculum mapping

In terms of alignment of assessments and learning activities, I’ve used – and more often seen people use – bespoke Word documents and spreadsheets to map courses and programs. Mainly because institutions had no established practice of encouraging such mapping, let alone systems to do it (e.g. this from 2009). There’s been a lot more attention paid and importance placed on mapping, but generally it remains an area of bespoke documents and spreadsheets. Perceived shortfalls that led to some design work on alternatives.


Moodle Course design

Designing a well-organised course site that is easy to navigate, with manageable sections and a logical and consistent form, is no easy task given the nature of most LMS. My first foray into this (before 2012 I was using an LMS I developed) added the following design features using bits of HTML:

  • A “Jump-to: Topic” navigation bar for my Moodle course sites to avoid the scroll of death.
  • Addition of non-topic based navigation to the top of the Moodle site to provide a sensible grouping of resources (Course background & content) that didn’t fit with the default Moodle design.
  • Addition of topic-based photos to generate visual interest, perhaps a bit of dual coding with the topic, and encourage some further exploration.
  • A “Right now” section at the top, manually updated each week of term (along with the banner image), to orient students to the current focus.
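The original HTML snippets aren’t reproduced here, but the idea behind the “Jump-to: Topic” bar can be sketched in a few lines of JavaScript. Everything below – the function name, the section titles, the `#section-N` anchors – is illustrative guesswork, not the actual code used:

```javascript
// Illustrative sketch only: builds a "Jump-to: Topic" navigation bar as an
// HTML string from a list of section titles. The generated HTML could be
// pasted into a label at the top of a Moodle course page; the anchor ids
// (#section-1, #section-2, ...) assume Moodle's default topic anchors.
function buildJumpToHtml(sectionTitles) {
  const links = sectionTitles.map(
    (title, i) => `<a href="#section-${i + 1}">${title}</a>`
  );
  return `<nav class="jump-to">Jump-to: ${links.join(" | ")}</nav>`;
}

// Example usage with hypothetical topic names:
const nav = buildJumpToHtml(["Orientation", "Design", "Evaluation"]);
// nav holds a single <nav> element linking to #section-1 .. #section-3
```

The point of the sketch is that the workaround is trivial in code terms; the effort lies in noticing that the default Moodle layout funnels students into the scroll of death in the first place.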

Moodle Activity Viewer

Since the in-built Moodle reports aren’t that good, and because I really wanted to understand how students were engaging with my Moodle sites, I designed the Moodle Activity Viewer. It scratched an itch by enabling an analysis of student activity.
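The core idea behind this kind of activity overlay can be sketched as a small function that maps a link’s view count onto a heat-map colour, which a browser-side script could then apply as the background of each course link. The scaling and colour choices below are my assumptions for the sketch, not what the Moodle Activity Viewer actually does:

```javascript
// Sketch of the activity-overlay idea: colour each course link by how
// heavily students used it. Maps a view count onto a green-to-red HSL hue
// (120 = green / low use, 0 = red / high use). Linear scaling is assumed.
function heatColour(views, maxViews) {
  if (maxViews <= 0) return "hsl(120, 70%, 80%)"; // no data: treat as "cold"
  const ratio = Math.min(views / maxViews, 1);    // clamp to [0, 1]
  const hue = Math.round(120 * (1 - ratio));      // 120 (green) -> 0 (red)
  return `hsl(${hue}, 70%, 80%)`;
}

// In a browser extension this might be wired up along these lines
// (selector and counts object are hypothetical):
//   document.querySelectorAll(".activity a").forEach(a => {
//     a.style.backgroundColor = heatColour(counts[a.href] || 0, maxViews);
//   });
```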

Evernote to search a course site?

One of the on-going challenges with using Moodle was the absence of a search engine, a fairly widespread and important part of navigating any website. I considered a number of different options and ended up trying a kludge with Evernote. But only for one offering.

Modifying the Moodle search book block

Hosting course content on a WordPress blog

In 2012 I took over a Masters course titled Network and Global Learning. Given the focus of the course, hosting the learning environment in a closed LMS site didn’t seem appropriate. Instead, I decided to try it as an open course. It ended up as a WordPress site and has since been taken over by another academic…at least for one offering. Looks like it probably ended up back in the LMS.

Diigo for course revision

Given NGL was hosted on a course blog, this raised questions about how to take notes about what wasn’t working and ponder options for re-design. In Word, this could be done with the comments feature. For the Web I used Diigo to produce annotations like the following.

Card Interface

Late 2018 saw me stepping backwards to Blackboard 9.1. A very flexible system for structuring a site, but incredibly hard to make look good without a lot of knowledge. How to enable lots of people to organise their course sites effectively? Enter the Card Interface: easily convert a standard Blackboard content page into a contemporary, visual user interface.
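Purely as an illustration of the kind of transformation involved – not the Card Interface’s actual code – the sketch below turns a flat list of content items into card markup. The item fields and class names are invented for the example:

```javascript
// Illustrative only: converts a flat list of content items (title + link,
// optionally an image) into simple "card" HTML, the kind of transformation
// a card interface applies to a content page. Field and class names are
// invented for this sketch.
function toCards(items) {
  return items
    .map(
      (item) =>
        `<div class="card">` +
        (item.image ? `<img src="${item.image}" alt="">` : "") +
        `<h3><a href="${item.url}">${item.title}</a></h3>` +
        `</div>`
    )
    .join("\n");
}

// Hypothetical content items:
const cards = toCards([
  { title: "Week 1", url: "#week-1", image: "week1.jpg" },
  { title: "Week 2", url: "#week-2" },
]);
// cards holds two card <div>s, the first with an image
```

The design point is that the underlying content page stays as-is; the hardness of Blackboard is softened by layering a presentation transform over it, rather than asking every academic to hand-craft HTML.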