Assembling the heterogeneous elements for (digital) learning


Higher ed L&T’s scale problem?

Contemporary higher education appears to have a scale problem.

Ellis and Goodyear (2019) explain in some detail Bain and Zundans-Fraser’s (2017) diagnosis of why attempts by universities to improve learning and teaching rarely scale, including the observation that L&T centres try to “influence learning and teaching through elective, selective, and exemplary approaches that are incompatible with whole-organizational change” (Bain & Zundans-Fraser, 2017, p. 12). While most universities offer design support services, the combination of high demand and limited resources means that many academics are left to their own devices (Bennett, Agostinho & Lockyer, 2017). Moving from working at scale across an institution to teaching at scale within a course, Ryan et al (2021) suggest that maintaining the quality of L&T while teaching at scale is a key issue for higher education. Massification brings both increased numbers and increased diversity of learners, creating practical and pedagogical challenges for educators having to teach at scale.

Attempts to address the challenge of scale (e.g. certain types of MOOC, course site templates) tend to strike me as limited. Why?

Perhaps it is because…

A Typology of Scale

Morel et al (2019) argue that there is a lack of conceptual clarity around scale. In response, they offer a typology of scale, very briefly summarised in the following table.

  • Adoption: Widespread use of an innovation – market share. Limited conceptualisation of expected use.
  • Replication: Widespread implementation with fidelity will produce expected outcomes.
  • Adaptation: Widespread use of an innovation that is modified in response to local needs.
  • Reinvention: Intentional and systematic experimentation with an innovation. Innovation as catalyst for further innovation.

The practice of scale

Most institutional attempts at scale I’ve observed appear to fall into the first two conceptualisations.

MOOCs – excluding Connectivist MOOCs – aimed to scale content delivery through scale as replication. Institutional practice around the use of an LMS is increasingly driven by consistency in the form of templates, leading to exchanges like that shared by Macfarlan and Hook (2022):

‘Can I do X?’ or ‘How would I do Y?’, until the ED said, ‘You can do anything you like, as long as you use the template.’ With a shrug the educator indicated their compliance. The ironic surrender was palpable.

At best, templates fall into the replication conception of scale. Experts produce something which they think will be an effective solution to a known problem. A solution that – if only everyone would just use it as intended – will generate positive outcomes for learners. Arguments could be made that it quickly devolves into the adoption category. Others may claim their templates support adaptation, but only “as long as you use the template”?

Where do other institutional attempts fit on this typology?

Institutional learning and teaching frameworks, standards, plans and other abstract approaches? More adoption/replication?

The institutional LMS and the associated ecosystem of tools? The assumption is probably adaptation. The argument would be that the tools can be creatively adapted to suit whatever the design intent. However, for adaptation to work (see below) the relationship between the users and the tools needs to offer the affordance for customisation. I don’t think the current tools help enough with that.

Which perhaps explains why use of the LMS and associated tools is so limited/time consuming. But the current answer appears to be templates and consistency.

Education’s diversity problem

The folk who conceive of scale as adaptation, like Clarke and Dede (2009), argue that

One-size-fits-all educational innovations do not work because they ignore contextual factors that determine an intervention’s efficacy in a particular local situation (p. 353)

Morel et al (2019) identify that this adaptation assumes/requires that users have the capacity to make modifications in response to contextual requirements. This will likely require more work from both the designers and the users. Which, for me, raises the following questions:

  1. Does the deficit model of educators (they aren’t trained L&T professionals) held by some L&T professionals limit the ability to conceive of/adopt this type of scale?
  2. Does the difficulty institutions face in customising contemporary digital learning environments (i.e. the LMS) – let alone enabling learners and teachers to do that customisation – limit the ability to conceive of/adopt this type of scale?
  3. For me, this also brings in the challenge of the iron triangle. How to (cost) efficiently scale learning and teaching in ways that respond effectively to the growing diversity of learners, teachers, and contexts?

How do you answer those questions at scale?

References

Bain, A., & Zundans-Fraser, L. (2017). The Self-organizing University. Springer. https://doi.org/10.1007/978-981-10-4917-0

Bennett, S., Agostinho, S., & Lockyer, L. (2017). The process of designing for learning: Understanding university teachers’ design work. Educational Technology Research & Development, 65(1), 125–145. https://doi.org/10.1007/s11423-016-9469-y

Clarke, J., & Dede, C. (2009). Design for Scalability: A Case Study of the River City Curriculum. Journal of Science Education and Technology, 18(4), 353–365. https://doi.org/10.1007/s10956-009-9156-4

Ellis, R. A., & Goodyear, P. (2019). The Education Ecology of Universities: Integrating Learning, Strategy and the Academy. Routledge.

Macfarlan, B., & Hook, J. (2022). ‘As long as you use the template’: Fostering creativity in a pedagogic model. ASCILITE Publications, Proceedings of ASCILITE 2022 in Sydney. https://doi.org/10.14742/apubs.2022.34

Morel, R. P., Coburn, C., Catterson, A. K., & Higgs, J. (2019). The Multiple Meanings of Scale: Implications for Researchers and Practitioners. Educational Researcher, 48(6), 369–377. https://doi.org/10.3102/0013189X19860531

Ryan, T., French, S., & Kennedy, G. (2021). Beyond the Iron Triangle: Improving the quality of teaching and learning at scale. Studies in Higher Education, 46(7), 1383–1394. https://doi.org/10.1080/03075079.2019.1679763


Gatherers, Weavers and Augmenters: Three principles for dynamic and sustainable delivery of quality learning and teaching

Henry Cook, Steven Booten and I gave the following presentation at the THETA conference in Brisbane in April 2023.

Below you will find

  • Summary – a few paragraphs summarising the presentation.
  • Slides – copies of the slides used.
  • Software – some of the software produced/used as part of the work.
  • References – used in the summary and the slides.
  • Abstract – the original conference abstract.

Summary

The presentation used our experience as part of a team migrating 1500+ course sites from Blackboard to Canvas to explore a broader challenge. A challenge recently expressed in the Productivity Commission’s “Advancing Prosperity” report with its recommendations to grow access to tertiary education while containing cost and improving quality. This challenge – to maximise cost efficiency, quality, and access (diversity and scale) – is seen as a key issue for higher education (Ryan et al., 2021). It has even been labelled the “Iron Triangle” because – unless you change the circumstances and conditions – improving one indicator will almost inevitably lead to deterioration in the other indicators (Mulder, 2013). The pandemic emergency response is the most recent example: necessarily rapid changes to access (moving from face-to-face to online) required significant costs (staff workload) to produce outcomes that are perceived to be of questionable quality.

Leading to the question we wanted to answer:

How do you stretch the iron triangle (i.e. maximise cost efficiency, quality, and accessibility)?

In the presentation, we demonstrated that the fundamental tasks (gather and weave) of an LMS migration are manual and repetitive, making it impossible to stretch the iron triangle. We illustrated why this is the case, demonstrated how we addressed this limitation, and proposed three principles for broader application. We argue that the three principles can be usefully applied beyond LMS migration to business as usual.

Gatherers and weavers – what we do

Our job is to help academic staff design, implement, and maintain quality learning tasks and environments. We suggest that the core tasks required to do this are to gather and weave disparate strands of knowledge, ways of knowing (especially various forms of design and contextual knowledge and knowing), and technologies (broadly defined). For example, a course site is the result of gathering and weaving together such disparate strands as: content knowledge (e.g. learning materials); administrative information (e.g. due dates, timetables etc); design knowledge (e.g. pedagogical, presentation, visual etc); and information & functionality from various technologies (e.g. course profiles, echo360, various components of the LMS etc).

An LMS migration is a variation on this work. It has a larger scope (all courses) and a more focused purpose (migrate from one LMS to another), but it still involves the same core tasks of gathering and weaving. Our argument is that to maximise the cost efficiency, accessibility, and quality of this work you must maximise the same qualities in the core tasks of gathering and weaving. Early in our LMS migration it was obvious that this was not the case. The presentation included a few illustrative examples; there were many more that could’ve been used, both from the migration and from business as usual. All illustrate the overly manual and repetitive nature of gathering and weaving required by contemporary institutional learning environments.

Three principles for automating & augmenting gathering & weaving – what we did

Digital technology has long been seen as a key enabler for improving productivity through its ability to automate processes and augment human capabilities. Digital technology is increasingly pervasive in the learning and teaching environment, especially in the context of an LMS migration. But none of the available technologies were actively helping automate or augment gathering and weaving. The presentation included numerous examples of how we changed this. From this work we identified three principles.

  1. On-going activity focused (re-)entanglement.
    Our work was focused on high level activities (e.g. analysis, migration, quality assurance, course design of 100s of course sites). Activities not supported by any single technology, hence the manual gathering and weaving. By starting small and continually responding to changes and lessons learned, we stretched the iron triangle by digitally gathering and weaving disparate component technologies into assemblages that were fit for the activities.
  2. Contextual digital augmentation.
    Little to none of the specific contextual and design knowledge required for these activities was available digitally. We focused on usefully capturing this knowledge digitally so it could be integrated into the activity-based assemblages.
  3. Meso-level focus.
    Existing component technologies generally provide universal solutions for the institution or for all users of the technology, requiring manual gathering and weaving to fit the contextual needs of each individual variation. By leveraging the previous two principles we were able to provide technologies that were fit for meso-level solutions. For example, all courses for a program or a school, or all courses that use a complex learning activity like interactive orals.

Connections with other work

Much of the above is informed by, or echoes, research and practice in related fields. It’s not just us three. The presentation made explicit connections with the following:

  • Learning and teaching;
    Fawns’ (2022) work on entangled pedagogy as encapsulating the mutual shaping of technology, teaching methods, purposes, values and context (gathering and weaving). Dron’s (2022) re-definition of educational technology drawing on Arthur’s (2009) definition of technology. Work on activity-centred design – which understands teaching as a distributed activity – as key both to good learning and teaching (Markauskaite et al, 2023) and to institutional management (Ellis & Goodyear, 2019). Lastly – at least in the presentation – the nature of and need for epistemic fluency (Markauskaite et al, 2023).
  • Digital technology; and,
    Drawing on numerous contemporary practices within digital technology that break the false dilemma of “buy or build”, such as the project to product movement (Philip & Thirion, 2021); Robotic Process Automation; citizen development; and the idea of lightweight IT development (Bygstad, 2017).
  • Leadership/strategy.
    Briefly linking the underlying assumptions of all of the above as examples of the move away from corporate and reductionist strategies that reduce people to “smooth users” toward possible futures that see us as more “collective agents” (Macgilchrist et al, 2020). A shift seen as necessary to more likely lead – as argued by Markauskaite et al (2023) – to the “even richer convergence of ‘natural’, ‘human’ and ‘digital’” required to respond effectively to global challenges.

There’s much more.

Slides

The presentation does include three videos that are available if you download the slides.

Related Software

Canvas QA is a Python script that will perform Quality Assurance checks on numerous Canvas courses and create a QA Report web page in each course’s Files area. The QA Report lists all the issues discovered and provides some scaffolding to address the issues.
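To give a flavour of how such checks can work, here is a minimal sketch, not the actual Canvas QA script. It assumes a Canvas API token and instance URL (both placeholders), uses the standard Canvas REST endpoints for listing and fetching pages, and implements just one illustrative check (flagging links that still point at Blackboard); the real script performs many more checks and writes its report into the course Files area.

```python
# Minimal sketch of a Canvas QA check, not the actual Canvas QA script.
# CANVAS_URL, TOKEN, and COURSE_ID are placeholders; the endpoints used
# are the standard Canvas REST API for listing and fetching pages.
import requests
from html.parser import HTMLParser

CANVAS_URL = "https://your.instructure.com"   # hypothetical instance
TOKEN = "YOUR_API_TOKEN"                      # hypothetical token
COURSE_ID = 12345                             # hypothetical course
HEADERS = {"Authorization": f"Bearer {TOKEN}"}

class LinkCollector(HTMLParser):
    """Collect href/src attribute values from a page's HTML body."""
    def __init__(self):
        super().__init__()
        self.links = []

    def handle_starttag(self, tag, attrs):
        for name, value in attrs:
            if name in ("href", "src") and value:
                self.links.append(value)

def course_pages(course_id):
    """Yield every page in a course, following Canvas API pagination."""
    url = f"{CANVAS_URL}/api/v1/courses/{course_id}/pages"
    while url:
        resp = requests.get(url, headers=HEADERS)
        resp.raise_for_status()
        yield from resp.json()
        url = resp.links.get("next", {}).get("url")

issues = []
for page in course_pages(COURSE_ID):
    # Fetch the full page object to get its HTML body
    detail = requests.get(
        f"{CANVAS_URL}/api/v1/courses/{COURSE_ID}/pages/{page['url']}",
        headers=HEADERS).json()
    collector = LinkCollector()
    collector.feed(detail.get("body") or "")
    for link in collector.links:
        # One illustrative check: links still pointing at the old LMS
        if "blackboard" in link.lower():
            issues.append((page["title"], link))

for title, link in issues:
    print(f"{title}: possible legacy Blackboard link: {link}")
```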

Canvas Collections helps improve the visual design and usability/findability of the Canvas modules page. It is Javascript that can be installed by institutions into Canvas or by individuals as a userscript. It enables the injection of design and context specific information into the vanilla Canvas modules page.

Word2Canvas converts a Word document into a Canvas module to offer improvements to the authoring process in some contexts. At Griffith University, it was used as part of the migration process where Blackboard course site content was automatically converted into appropriate Word documents. With a slight edit, these Word documents could be loaded directly into Canvas.

References

Arthur, W. B. (2009). The Nature of Technology: What it is and how it evolves. Free Press.

Bessant, S. E. F., Robinson, Z. P., & Ormerod, R. M. (2015). Neoliberalism, new public management and the sustainable development agenda of higher education: History, contradictions and synergies. Environmental Education Research, 21(3), 417–432. https://doi.org/10.1080/13504622.2014.993933

Bygstad, B. (2017). Generative Innovation: A Comparison of Lightweight and Heavyweight IT. Journal of Information Technology, 32(2), 180–193. https://doi.org/10.1057/jit.2016.15

Cassidy, C. (2023, April 10). ‘Appallingly unethical’: Why Australian universities are at breaking point. The Guardian. https://www.theguardian.com/australia-news/2023/apr/10/appallingly-unethical-why-australian-universities-are-at-breaking-point

Dron, J. (2022). Educational technology: What it is and how it works. AI & SOCIETY, 37, 155–166. https://doi.org/10.1007/s00146-021-01195-z

Ellis, R. A., & Goodyear, P. (2019). The Education Ecology of Universities: Integrating Learning, Strategy and the Academy. Routledge.

Fawns, T. (2022). An Entangled Pedagogy: Looking Beyond the Pedagogy—Technology Dichotomy. Postdigital Science and Education, 4(3), 711–728. https://doi.org/10.1007/s42438-022-00302-7

Hagler, B. (2020). Council Post: Build Vs. Buy: Why Most Businesses Should Buy Their Next Software Solution. Forbes. Retrieved April 15, 2023, from https://www.forbes.com/sites/forbestechcouncil/2020/03/04/build-vs-buy-why-most-businesses-should-buy-their-next-software-solution/

Inside Track Staff. (2022, October 19). Citizen developers use Microsoft Power Apps to build an intelligent launch assistant. Inside Track Blog. https://www.microsoft.com/insidetrack/blog/citizen-developers-use-microsoft-power-apps-to-build-intelligent-launch-assistant/

Lodge, J., Matthews, K., Kubler, M., & Johnstone, M. (2022). Modes of Delivery in Higher Education (p. 159). https://www.education.gov.au/higher-education-standards-panel-hesp/resources/modes-delivery-report

Macgilchrist, F., Allert, H., & Bruch, A. (2020). Students and society in the 2020s. Three future ‘histories’ of education and technology. Learning, Media and Technology, 45(0), 76–89. https://doi.org/10.1080/17439884.2019.1656235

Markauskaite, L., Carvalho, L., & Fawns, T. (2023). The role of teachers in a sustainable university: From digital competencies to postdigital capabilities. Educational Technology Research and Development, 71(1), 181–198. https://doi.org/10.1007/s11423-023-10199-z

Mulder, F. (2013). The LOGIC of National Policies and Strategies for Open Educational Resources. International Review of Research in Open and Distributed Learning, 14(2), 96–105. https://doi.org/10.19173/irrodl.v14i2.1536

Philip, M., & Thirion, Y. (2021). From Project to Product. In P. Gregory & P. Kruchten (Eds.), Agile Processes in Software Engineering and Extreme Programming – Workshops (pp. 207–212). Springer International Publishing. https://doi.org/10.1007/978-3-030-88583-0_21

Ryan, T., French, S., & Kennedy, G. (2021). Beyond the Iron Triangle: Improving the quality of teaching and learning at scale. Studies in Higher Education, 46(7), 1383–1394. https://doi.org/10.1080/03075079.2019.1679763

Schmidt, A. (2017). Augmenting Human Intellect and Amplifying Perception and Cognition. IEEE Pervasive Computing, 16(1), 6–10. https://doi.org/10.1109/MPRV.2017.8

Smee, B. (2023, March 6). ‘No actual teaching’: Alarm bells over online courses outsourced by Australian universities. The Guardian. https://www.theguardian.com/australia-news/2023/mar/07/no-actual-teaching-alarm-bells-over-online-courses-outsourced-by-australian-universities

Abstract

The pandemic reinforced higher education’s difficulty responding to the long-observed challenge of how to sustainably and at scale fulfill diverse requirements for quality learning and teaching (Bennett et al., 2018; Ellis & Goodyear, 2019). Difficulty increased due to many issues, including: competition with the private sector for digital talent; battling concerns over the casualisation and perceived importance of teaching; and growing expectations around ethics, diversity, and sustainability. That this challenge is unresolved and becoming increasingly difficult suggests a need for innovative practices in both learning and teaching, and in how learning and teaching is enabled. Starting in 2019, and accelerated by a Learning Management System (LMS) migration starting in 2021, a small group has been refining and using an alternate set of principles and practices to respond to this challenge by developing reusable orchestrations – organised arrangements of actions, tools, methods, and processes (Dron, 2022) – to sustainably, and at scale, fulfill diverse requirements for quality learning and teaching. This leads to a process where requirements are informed through collegial networks of learning and teaching stakeholders that weigh strategic and contextual concerns to inform priority and approach, helping to share knowledge and concerns and to develop institutional capability laterally, in recognition of available educator expertise.

The presentation will be structured around three common tasks: quality assurance of course sites; migrating content between two LMS; and, designing effective course sites. For each task a comparison will be made between the group’s innovative orchestrations and standard institutional/vendor orchestrations. These comparisons will: demonstrate the benefits of the innovative orchestrations; outline the development process; and, explain the three principles informing this work – 1) contextual digital augmentation, 2) meso-level automation, and 3) generativity and adaptive reuse. The comparisons will also be used to establish the practical and theoretical inspirations for the approach, including: RPA and citizen development; and, convivial technologies (Illich, 1973), lightweight IT development (Bygstad, 2017), and socio-material understandings of educational technology (Dron, 2022). The breadth of the work will be illustrated through an overview of the growing catalogue of orchestrations using a gatherers, weavers, and augmenters taxonomy.

References

Bennett, S., Lockyer, L., & Agostinho, S. (2018). Towards sustainable technology-enhanced innovation in higher education: Advancing learning design by understanding and supporting teacher design practice. British Journal of Educational Technology, 49(6), 1014–1026. https://doi.org/10.1111/bjet.12683

Bygstad, B. (2017). Generative Innovation: A Comparison of Lightweight and Heavyweight IT. Journal of Information Technology, 32(2), 180–193. https://doi.org/10.1057/jit.2016.15

Dron, J. (2022). Educational technology: What it is and how it works. AI & SOCIETY, 37, 155–166. https://doi.org/10.1007/s00146-021-01195-z

Ellis, R. A., & Goodyear, P. (2019). The Education Ecology of Universities: Integrating Learning, Strategy and the Academy. Routledge.

Illich, I. (1973). Tools for Conviviality. Harper and Row.

[Image: Branches of lantana entangled with each other and a dead tree branch, sprinkled with bright green lantana leaves]

Orchestrating entangled relations to stretch the iron triangle: Observations from an LMS migration

About

This work arose from the depths of an institutional LMS migration (Blackboard Learn to Canvas). In particular, the observation that the default migration processes required an awful lot of low-level manual labour – methods that appeared to reduce the quality of the migration process and increase the cost. Hence we started developing different methods. As the migration project unfolded we kept developing and refining, building on what we’d done before and further decreasing the cost of migration, increasing the quality of the end result, and increasing the scale and diversity of what we could migrate.

We were stretching the iron triangle. Since stretching the iron triangle is a key strategic issue for higher education (Ryan et al, 2021), questions arose, including:

  1. What was different between the two sets of orchestrations? Why are our orchestrations better than the default at stretching the iron triangle?
  2. Might those differences help stretch the iron triangle post-migration (i.e. business as usual – BAU)?
  3. Can we refine and improve those differences?

The work here is an initial exploration into answering the first question.


Abstract

A key strategic issue for higher education is how to maximise the accessibility, quality, and cost efficiency of learning and teaching (Ryan et al., 2021). Higher education’s iron triangle literature (Daniel et al, 2009; Mulder, 2013; Ryan et al, 2021) argues that effectively addressing this challenge is difficult, if not impossible, due to the “iron” connections between the three qualities. These iron connections mean maximising one quality will inevitably result in reductions in the other qualities. For example, the rapid maximisation of accessibility required by the COVID-19 pandemic resulted in a reduction in cost efficiency (increased staff costs) and a reduction in the perceived quality of learning experiences (Martin, 2020). These experiences illustrate higher education’s on-going difficulties in creating orchestrations that stretch the iron triangle by sustainably and at scale fulfilling diverse requirements for quality learning (Bennett et al., 2018; Ellis & Goodyear, 2019). This exploratory case study aims to help reduce this difficulty by answering the question: What characteristics of orchestrations help to stretch the iron triangle?

An LMS migration is an effective exploratory case for this research question since it is one of the most labour-intensive and complex projects undertaken by universities (Cottam, 2021). It is a project commonly undertaken with the aim of stretching the iron triangle. Using a socio-material perspective (Ellis & Goodyear, 2019; Fawns, 2022) and drawing on Dron’s (2022) definition of educational technology, the poster examines three specific migration tasks: migrating lecture recordings; designing quality course sites; and performing quality assurance checks. For each task, two different orchestrations – organised arrangements of actions, tools, methods, and processes (Dron, 2022) – are described and analysed: the institutional orchestrations developed by the central project organising the migration of an institution’s 4500+ courses, and the group orchestrations developed – due to perceived limitations of the institutional orchestrations – by a sub-group directly migrating 1700+ courses.

Descriptions of the orchestrations are used to identify their effectiveness in sustainably and at scale satisfying diverse quality requirements – stretching the iron triangle. Analysis of these orchestrations identified three characteristics that make stretching the iron triangle more likely: contextual digital augmentation; meso-level automation; and generativity and adaptive reuse. Each characteristic, its presence in each orchestration, the relationships between the characteristics, linkages with existing literature and practice, and the observed impact on the iron triangle qualities are described. These descriptions are used to illustrate the very different assumptions underpinning the two sets of orchestrations – differences that mirror the distinctions between ‘smooth users’ and ‘collective agency’ (Macgilchrist et al., 2020) and between industrial and convivial tools (Illich, 1973). The characteristics identified by this exploratory case study suggest that an approach that is less atomistic and industrial, and more collective and convivial, may help reconnect people with educational technology more meaningfully and sustainably. Consequently, this shift may also help increase higher education’s ability to maximise the accessibility, quality, and cost efficiency of learning and teaching.

Poster

The poster is embedded below and also available directly from Google Slides. The ‘Enter full screen’ option available from the “three dots” button at the bottom of the poster embed is useful for viewing the poster.

Comparing orchestrations

The core of this exploratory case study is the comparison of two sets of orchestrations and how they seek to fulfill the same three tasks.

echo360 migration

Course site QA

Course site usability

About the orchestrations

The orchestrations discussed typically rely on software that we’ve developed by building on the shoulders of other giants of open source software. Software that we’re happy to share with others.

Course Analysis Report (CAR) process

The CAR process started as an attempt to make it easier for migration staff to understand what was in a Blackboard course site. It starts with a gather step that extracts the contents of each Blackboard course site into an offline data structure. A data structure that provided a foundation for much, much more.

The echo360 migration video offers some more detail. The following image is from that video. It shows the CAR folder for a sample Blackboard course. Generated by the CAR script, this folder contains:

  • A folder (contentCollection) containing copies of all the files uploaded to the Blackboard course site.
    The files are organised in two ways to help the migration:

    1. Don’t migrate files that are no longer used in the course site.
      Files are placed into an attached or unattached folder depending on whether they are still used by the Blackboard course site.
    2. Don’t migrate all the files in one single unorganised folder.
  • A folder (coursePages) containing individual Word documents with the content of each course site page.
  • A CAR report.
    A Word document that summarises the content, structure, and features used in a course site.
  • A pickle file.
    Contains a copy of all the course site details and content in a machine-readable format.
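A hypothetical example of the resulting layout (the course code and file names are invented for illustration):

```
ACCT1001/                      <- one CAR folder per Blackboard course site
├── contentCollection/
│   ├── attached/              <- files still used by the course site
│   └── unattached/            <- candidates to leave behind
├── coursePages/               <- one Word document per course site page
├── ACCT1001-CAR.docx          <- the CAR report summarising the site
└── ACCT1001.pickle            <- machine readable snapshot of the site
```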

While the CAR code is not currently publicly available, we are happy to share it.

[Image: Copy of slide showing a CAR folder structure, with pointers to the contentCollection and coursePages folders, a Word doc (the CAR report), and a pickle file]

Word2Canvas

Word2Canvas is Javascript which modifies the modules page of a Canvas course site. It provides a button that allows you to convert a specially formatted Word document into a Canvas module.

The CAR process generates these specially formatted Word documents in the coursePages folder, enabling migration to consist largely of minor edits to a Word document and using word2canvas to create a Canvas module.

The echo360 migration video offers some more detail, including an example of using the CAR. The Word2Canvas site provides more detail again, including how to install and use word2canvas.

Canvas Collections

Canvas Collections is also Javascript which modifies the Canvas modules page. However, Canvas Collections’ modifications seek to improve the usability and visual design of the modules page. In doing so it addresses long known limitations of the Modules page, as the following table summarises.

Limitation of Canvas modules → Canvas Collections functionality

• Lots of modules leads to a long list to search → group modules into collections that are viewed separately.
• An overly linear and underwhelming visual design → ability to select from, change between, and create new representations of collections and their modules.
• No way to add narrative or additional contextual information about modules to the modules page → ability to transform vanilla Canvas modules into contextual objects by adding additional properties (information) that are used in representations and other functionality.

The course site usability video provides more detail on Canvas Collections, as does the Canvas Collections site. Canvas Collections is available for use now, but is continually being developed.

References – Poster

Arthur, W. B. (2009). The Nature of Technology: What it is and how it evolves. Free Press.

Bygstad, B. (2017). Generative Innovation: A Comparison of Lightweight and Heavyweight IT. Journal of Information Technology, 32(2), 180–193. https://doi.org/10.1057/jit.2016.15

Cottam, M. E. (2021). An Agile Approach to LMS Migration. Journal of Online Learning Research and Practice, 8(1). https://doi.org/10.18278/jolrap.8.1.5

Dron, J. (2022). Educational technology: What it is and how it works. AI & SOCIETY, 37, 155–166. https://doi.org/10.1007/s00146-021-01195-z

Ellis, R. A., & Goodyear, P. (2019). The Education Ecology of Universities: Integrating Learning, Strategy and the Academy. Routledge.

Fawns, T. (2022). An Entangled Pedagogy: Looking Beyond the Pedagogy—Technology Dichotomy. Postdigital Science and Education, 4(3), 711–728. https://doi.org/10.1007/s42438-022-00302-7

Goodhue, D., & Thompson, R. (1995). Task-technology fit and individual performance. MIS Quarterly, 19(2), 213–236.

Illich, I. (1973). Tools for Conviviality. Harper and Row.

Jones, D., & Clark, D. (2014). Breaking BAD to bridge the reality/rhetoric chasm. In B. Hegarty, J. McDonald, & S. Loke (Eds.), Rhetoric and Reality: Critical perspectives on educational technology. Proceedings ascilite Dunedin 2014 (pp. 262–272). http://ascilite2014.otago.ac.nz/files/fullpapers/221-Jones.pdf

Macgilchrist, F., Allert, H., & Bruch, A. (2020). Students and society in the 2020s. Three future ‘histories’ of education and technology. Learning, Media and Technology, 45(0), 76–89. https://doi.org/10.1080/17439884.2019.1656235

Mulder, F. (2013). The LOGIC of National Policies and Strategies for Open Educational Resources. International Review of Research in Open and Distributed Learning, 14(2), 96–105. https://doi.org/10.19173/irrodl.v14i2.1536

Ryan, T., French, S., & Kennedy, G. (2021). Beyond the Iron Triangle: Improving the quality of teaching and learning at scale. Studies in Higher Education, 46(7), 1383–1394. https://doi.org/10.1080/03075079.2019.1679763

References – Abstract

Bennett, S., Lockyer, L., & Agostinho, S. (2018). Towards sustainable technology-enhanced innovation in higher education: Advancing learning design by understanding and supporting teacher design practice. British Journal of Educational Technology, 49(6), 1014–1026. https://doi.org/10.1111/bjet.12683

Cottam, M. E. (2021). An Agile Approach to LMS Migration. Journal of Online Learning Research and Practice, 8(1). https://doi.org/10.18278/jolrap.8.1.5

Dron, J. (2022). Educational technology: What it is and how it works. AI & SOCIETY, 37, 155–166. https://doi.org/10.1007/s00146-021-01195-z

Ellis, R. A., & Goodyear, P. (2019). The Education Ecology of Universities: Integrating Learning, Strategy and the Academy. Routledge.

Fawns, T. (2022). An Entangled Pedagogy: Looking Beyond the Pedagogy—Technology Dichotomy. Postdigital Science and Education, 4(3), 711–728. https://doi.org/10.1007/s42438-022-00302-7

Illich, I. (1973). Tools for Conviviality. Harper and Row.

Macgilchrist, F., Allert, H., & Bruch, A. (2020). Students and society in the 2020s. Three future ‘histories’ of education and technology. Learning, Media and Technology, 45(0), 76–89. https://doi.org/10.1080/17439884.2019.1656235

Martin, L. (2020). Foundations for good practice: The student experience of online learning in Australian higher education during the COVID-19 pandemic. Tertiary Education Quality and Standards Agency. https://www.teqsa.gov.au/latest-news/publications/foundations-good-practice-student-experience-online-learning-australian

Ryan, T., French, S., & Kennedy, G. (2021). Beyond the Iron Triangle: Improving the quality of teaching and learning at scale. Studies in Higher Education, 46(7), 1383–1394. https://doi.org/10.1080/03075079.2019.1679763

[Image: Entangled Japanese power lines]

Orchestrating entangled relations to break the iron triangle: examples from an LMS migration

Introduction

All university strategies for learning and teaching seek to maximise: accessibility (as many people as possible can participate – feel the scale – in as many ways as possible); quality (it’s good); and cost effectiveness (it’s cheap to produce and offer). Ryan et al (2021) argue that this is a “key issue for contemporary higher education” (p. 1383) due to inevitable cost constraints, the benefits of increased access to higher education, and requirements to maintain quality standards. However, the literature on the “iron triangle” in higher education (e.g. Daniel et al, 2009; Mulder, 2013; Ryan et al, 2021) suggests that maximising all three is difficult, if not impossible. As illustrated in Figure 1 (adapted from Mulder, 2013, p. 100), the iron triangle suggests that a change in one quality (e.g. changing accessibility from on-campus to online due to COVID) will negatively impact at least one, and probably both, of the other qualities (e.g. the COVID response involving an increase in workload for staff and resulting in less-than-happy participants).

Figure 1: Illustrating the iron triangle (adapted from Mulder, 2013, p. 100)
[Image: Illustration of the iron triangle]

Much of the iron triangle literature identifies different strategies that promise to break the iron triangle. Mulder (2013) suggests open educational resources (OER). Daniel et al (2009) suggest open and distance eLearning. Ryan et al (2021) suggest high-quality large group teaching and learning; alternative curriculum structures; and automation of assessment and feedback.

I’m not convinced that any of these will break the iron triangle. Not because of doubts about the inherent validity of the specific solutions (though there are questions). Instead, my doubts arise from how such suggestions would be implemented in contemporary higher education. Each would be implemented via variations on common methods. My suspicion is that these methods are likely to limit any attempt to break the iron triangle because they are incapable of effectively and efficiently orchestrating the entangled relations that are inherent to learning and teaching.

Largely because existing methods are based on atomistic and deterministic understandings of education, technology, and organisations. The standard methods – based on practices like stepwise refinement and loose coupling – may be necessary but aren’t sufficient for breaking the iron triangle. These methods decompose problems into smaller black boxes (e.g. pedagogy before technology; learning and teaching; requirements and implementation; enrolment, finance, and HR; learning objects etc.), making it easier to solve the smaller problem within the confines of its black box. The assumption is that solving larger problems (e.g. designing a quality learning experience or migrating to a new LMS) is simply a matter of combining different black boxes like Lego blocks to provide a solution. The following examples illustrate how this isn’t reality.

Entangled views of pedagogy (Fawns, 2022), educational technology (Dron, 2022), and associated “distributed” views (Jones and Clark, 2014) argue that atomistic views are naive and simply don’t match the reality of learning and teaching. As Parrish (2004) argued almost two decades ago in the context of learning objects, decontextualised black boxes place an increased burden on others to add the appropriate context back in. To orchestrate the entangled relations between and betwixt the black boxes and the context in which they are used. As illustrated in the examples below, current practice relies on this orchestration being manual and time consuming. I don’t see how this foundation enables the iron triangle to be broken.

Three examples from an LMS migration

We’re in the process of migrating from Blackboard Learn to Canvas. I work with one part of an institution and we’re responsible for migrating some 1400 courses (some with multiple course sites) over 18 months. An LMS migration “is one of the most complex and labor-intensive initiatives that a university might undertake” (Cottam, 2021, p. 66). Hence much of the organisation is expending effort to make sure it succeeds. This includes enterprise information technology players such as the new LMS vendor, our organisational IT division, and various other enterprise systems and practices. i.e. there are lots of enterprise black boxes available. The following seeks to illustrate the mismatch between these “enterprise” practices and what we have to actually do as part of an LMS migration.

In particular, three standard LMS migration tasks are used as examples, these are:

  1. Connect the LMS with an ecosystem of tools using the Learning Tools Interoperability (LTI) standard.

  2. Moving content from one LMS to another using the common cartridge standard.

  3. “to make teaching and learning easier” using a vanilla LMS.

The sections below describe the challenges we faced as each of these standardised black boxes fell short. Each was so disconnected from our context and purpose that it required significant manual re-entanglement to even approach being fit for purpose. Rather than persevere with an inefficient, manual approach to re-entanglement we did what many, many project teams have done before: we leveraged digital technologies to help automate the re-entanglement of these context-free and purposeless black boxes into fit-for-purpose assemblages that were more efficient, more effective, and provided a foundation for on-going improvement and practice. Importantly, a key part of this re-entanglement was injecting some knowledge of learning design. Our improved assemblages are described below.

1. Connect the LMS with an ecosystem of tools using the LTI standard

Right now we’re working on migrating ~500 Blackboard course sites. Echo360 is used in these course sites for lecture capture and for recording and embedding other videos. Echo360 is an external tool; it’s not part of the LMS (Blackboard or Canvas). Instead, the Learning Tools Interoperability (LTI) standard is used to embed and link echo360 videos into the LMS. LTI provides loose coupling between the separate black boxes of the LMS and other tools. It makes it easy for the individual vendors – both of the LMS and of external tools – to develop their own software. They focus on writing software that meets the LTI standard without needing to understand (much of) the internal detail of each other’s software. Once done, their software can interconnect (via a very narrow connection). For institutional information technology folk, the presence of LTI support in a tool promises to make it easy to connect one piece of software to another, i.e. to connect the Blackboard LMS and Echo360, or to connect the Canvas LMS and Echo360.

From the teacher perspective, one practice LTI enables is a way for an Echo360 button to appear in the LMS content editor. Press that button and you access your Echo360 library of videos from which you select the one you wish to embed. From the student perspective, the echo360 video is embedded in your course content within the LMS. All fairly seamless.

Wrong purpose, no relationship, manual assemblage

Of the ~500 course sites we’re currently working on, there are 2162 echo360 embeds, spread across 98 of the course sites – an average of 22 echo360 videos per site. 62 of the course sites have 10 or more echo360 embeds. One course has 142. The ability to provide those statistics is not common. We can do that because of the orchestration we’ve done in the next example.

The problem we face in migrating these videos to Canvas is that our purpose falls outside the purpose of LTI. Our purpose is not focused on connecting an individual LMS to echo360. We’re moving from one LMS to another LMS. LTI is not designed to help with that purpose. LTI’s purpose (one LMS to echo360) and how it’s been implemented in Blackboard creates a problem for us. The code to embed an echo360 video in Blackboard (via LTI) is different to the code to embed the same video in Canvas (via LTI). If I use Blackboard’s Echo360 LTI plugin to embed an echo360 video into Blackboard the id will be f34e8a01-4f72-46e1-XXXX-105XXXXXf75f. If I use the Canvas Echo360 LTI plugin to embed the very same video into Canvas it will use a very different id (49dbc576-XXXX-4eb0-b0d6-6bXXXXX0707). This means that to migrate from Blackboard to Canvas we will need to regenerate/identify a new id for each of the 2162 echo360 videos in our ~500 course sites.

The initial solution to this problem was:

  1. A migration person manually searches a course site and generates a list of names for all the echo360 videos.

  2. A central helpdesk takes that list, manually works through the echo360 search mechanism to find and generate a new id for each video, and updates the list.

    This step is necessary because in echo360 only the owner of the video or the echo360 “root” user can access/see the video. So either the video owner (typically an academic) or the “root” user has to generate the new ids. From a risk perspective, only a very small number of people should have root access; it can’t be given to all the migration people.

  3. The migration person receives the list of new video ids and manually updates the new Canvas course site.

…and repeat that for thousands of echo360 videos.

It’s evident that this process involves a great deal of manual work and a bottleneck in terms of “root” user access to echo360.

Orchestrating the relationships into a semi-automated assemblage

A simple improvement to this approach would be to automate step #2 using something like Robotic Process Automation. With RPA the software (i.e. the “robot”) could step through a list of video names, login to the echo360 web interface, search for the video, find it, generate a new echo360 id for Canvas, and write that id back to the original list. Ready for handing back to the migration person.

A better solution would be to automate the whole process. i.e. have software that will

  1. Search through an entire Blackboard course site and identify all the echo360 embeds.

  2. Use the echo360 search mechanism to find and generate a new id for each video.

  3. Update the Canvas course site with the new video ids.

That’s basically what we did with some Python code. The Python code helps orchestrate the relationship between Blackboard, Canvas, and Echo360. It helps improve the cost effectiveness of the process though doesn’t shift the dial much on access or quality.
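A simplified sketch of the idea follows. This is not our production code: the UUID-based matching and the find_canvas_echo360_id helper are hypothetical stand-ins for the real echo360 search and id-generation step described above, and the mapping of old ids to video titles comes from the course analysis described in the next example.

```python
# Sketch of semi-automated echo360 re-embedding, not our production code.
# find_canvas_echo360_id() is a hypothetical stand-in for the echo360
# search and id-generation step described above.
import re

# echo360 embed ids are UUIDs, e.g. f34e8a01-4f72-46e1-...
UUID_RE = re.compile(
    r"[0-9a-f]{8}-[0-9a-f]{4}-[0-9a-f]{4}-[0-9a-f]{4}-[0-9a-f]{12}", re.I)

def find_canvas_echo360_id(title: str) -> str:
    """Hypothetical: search echo360 for `title`, return the Canvas-side id.
    In the real workflow this wraps the echo360 search mechanism."""
    raise NotImplementedError("institution-specific echo360 lookup")

def migrate_embeds(blackboard_html: str, title_by_old_id: dict) -> str:
    """Replace each Blackboard echo360 embed id with its Canvas equivalent.

    title_by_old_id maps old embed ids to video titles; it is built from
    the course site analysis (see the CAR example in the next section)."""
    def swap(match: re.Match) -> str:
        old_id = match.group(0)
        title = title_by_old_id.get(old_id.lower())
        if title is None:
            return old_id                        # not an echo360 embed
        return find_canvas_echo360_id(title)     # hypothetical lookup
    return UUID_RE.sub(swap, blackboard_html)
```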

But there’s more to this better solution than echo360. Our Python code needs to know what’s in the Blackboard course site and how to design content for Canvas. The software has to be more broadly connected. As explained in the next example.

Moving content from one LMS to another using the common cartridge standard

Common Cartridge provides “a standard way to represent digital course materials”. Within the context of an LMS migration, common cartridge (and some similar approaches) provides the main way to migrate content from one LMS to another. It provides the black box encapsulation of LMS content. Go to Blackboard and use it to produce a common cartridge export. Head over to Canvas and use its import feature to bring the content in. Hey presto, migration complete.

If only it were that simple.
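It helps to look inside a cartridge to see why. A common cartridge is a zip archive whose imsmanifest.xml describes each resource. Here is a minimal sketch (the export file name is a placeholder) for listing what an export actually contains:

```python
# Minimal sketch: list the resources inside a Common Cartridge export.
# A cartridge is a zip archive; imsmanifest.xml describes each resource.
import zipfile
import xml.etree.ElementTree as ET

def list_resources(cartridge_path: str) -> None:
    with zipfile.ZipFile(cartridge_path) as cc:
        manifest = ET.fromstring(cc.read("imsmanifest.xml"))
    # '{*}' wildcards sidestep the version-specific IMS namespaces (Py 3.8+)
    for res in manifest.iter("{*}resource"):
        print(res.get("type"), "->", res.get("href"))

list_resources("export.imscc")   # placeholder export file name
```

Nothing in that manifest says anything about the learning design the resources once embodied, which is the root of the problems listed below.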

2. Migrating content without knowing anything about it or how it should end up

Of course it’s not as simple as that, there are known problems, including:

  1. Not all systems are the same so not all content can be “standardised”.

    Vendors of different LMS seek to differentiate themselves from their competitors. Hence they tend to offer different functionality, or implement/label the same functionality differently. Either way, there’s a limit to how standardised digital content can be, and not all LMS support the same functionality (e.g. quizzes). Hence a lot of manual workarounds to identify and remedy issues (orchestrating entangled relations).

  2. Imports are ignorant of learning design in both source and destination LMS.

    Depending on the specific learning design in a course, the structure and nature of the course site can be very different. Standardised export formats – like common cartridge – use standardised formats. They are ignorant of the specifics of course learning design as embodied in the old LMS. They are also ignorant of how best to adapt the course learning design to the requirements of the new LMS.

  3. Migrating information specific to the old LMS.

    Since common cartridge just packages up what is in the old LMS, detail specific to the old LMS gets ported to the new and has to be manually changed – e.g. echo360 embeds as outlined above, but also language specific to the old LMS (e.g. Blackboard) but inappropriate to the new.

  4. Migrating bad practice.

    e.g. it’s quite common for the “content collection” area of Blackboard courses to accumulate a large number of files. Many of these files are no longer used: some are mistaken leftovers, some have simply been superseded. Most of the time the content collection is one long list of files with names like lecture 1.pptx, lecture 1-2019.pptx, lectures 1a.pptx. The common cartridge approach to migration packages up all that bad practice and ports it to the new LMS.

All these problems contribute to the initial migration outcome not being all that good. For example, consider the following images. Figure 2 is the original Blackboard course site. A common cartridge export of that Blackboard course site was created and imported into Canvas. Figure 3 is the result.

It’s a mess and that’s just the visible structure. What were separate bits of content are now all combined together, because common cartridge is ignorant of that design. Some elements that were not needed in Canvas have been imported. Some information (Staff Information) was lost. And did you notice the default “scroll of death” in Canvas (Figure 3)?

Figure 2: Source LMS
[Image: Student view of a simple Blackboard course]
Figure 3: Destination LMS
[Image: Student view of the Canvas course created by importing a Common Cartridge export of the Blackboard course]

The Canvas Files area is even worse off. Figure 4 shows the files area of this same course after common cartridge import. Only the first four or five files were in the Blackboard course. All the web_content0000X folders are added by the common cartridge import.

Figure 4: Canvas files area – common cartridge import
[Image: Canvas files area after Common Cartridge import]

You can’t leave the course in that state. The next step is to manually modify and reorganise the Canvas site into a design that works in Canvas. This modification relies on the Canvas web interface – not the most effective or efficient interface for that purpose (e.g. the Canvas interface still does not provide a way to delete all the pages in a course). Importantly, remember that this manual tidy-up process has to be performed for each of the 1400+ course sites we’re migrating.

The issue here is that common cartridge is a generic standard. Its purpose (in part) is to take content from any LMS (or other tool) and enable it to be imported into another LMS/tool. It has no contextual knowledge. We have to manually orchestrate that back in.

Driving the CAR: Migration scaffolded by re-entangling knowledge of source and destination structure

On the other hand, our purpose is different and specific. We know we are migrating from a specific version of Blackboard to a specific version of Canvas. We know the common approaches used in Blackboard by our courses. We eventually developed the knowledge of how what was common in Blackboard must be modified to work in Canvas. Rather than engage in the manual, de-contextualised process above, a better approach would leverage this additional knowledge and use it to increase the efficiency and effectiveness of the migration.

To do this we developed the Course Analysis Report (CAR) approach. Broadly this approach automates the majority of the following steps:

  1. Pickle the Blackboard course site.

    Details of the structure, make-up, and HTML content of the Blackboard course site are extracted out of Blackboard and stored in a file – a single data structure (residing in a shared network folder) that contains a snapshot of the Blackboard course site.

  2. Analyse the pickle and generate a CAR.

    Perform various analyses and modifications of the pickle file (e.g. look for Blackboard-specific language, modify echo360 embeds, identify which content collection files are actually attached to course content etc.), stick that analysis into a database, and generate a Word document providing a summary of the course site.

  3. Download the course files and generate specially formatted Word documents representing course site content.

    Using our knowledge of how our Blackboard courses are structured, and of the modifications necessary for an effective Canvas course embodying a similar design intent, create a couple of folders in the shared course folder containing all of the course files and Word documents containing the web content of the Blackboard course. These files, folders, and documents are formatted to scaffold modification (using traditional desktop tools). For example, the course files are separated into those actually used in the current course site and those that aren’t, making it easy to decide not to migrate unnecessary content.

  4. Upload the modified files and Word documents directly into Canvas as mostly completed course content.

    Step #3 is where almost all the necessary design knowledge gets applied to migrate the course. All that’s left is to upload it into Canvas. Uploading the files is easy and supported by Canvas. Uploading the Word documents into Canvas as modules is done via word2canvas, a semi-automated tool.

Steps #1 and #2 are entirely automatic, as is the download of course content and the generation of the Word documents in step #3. These are stored in shared folders available to the entire migration team (the following table provides some stats on those folders). From there the migration is semi-automated: people leveraging their knowledge to make decisions and changes using common desktop tools.

Development window | # course sites | # of files | Disk usage
1 | 219 | 15,213 | 1633Gb
2 | 555 | 2,531 | 336Gb
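As a rough sketch of the shape of step #1 (the real CAR script is more involved, and BlackboardClient here is a hypothetical stand-in for however course structure and content is retrieved):

```python
# Rough sketch of step #1 of the CAR process: snapshot a Blackboard
# course site into a pickle file. The `client` argument is a hypothetical
# stand-in for however structure and HTML content is retrieved.
import pickle
from pathlib import Path

def pickle_course_site(client, course_id: str, shared_folder: Path) -> Path:
    """Extract a course site's structure/content and save a snapshot."""
    snapshot = {
        "course_id": course_id,
        "content_areas": client.get_content_areas(course_id),  # hypothetical
        "pages": client.get_page_html(course_id),              # hypothetical
        "files": client.get_file_list(course_id),              # hypothetical
    }
    out = shared_folder / course_id / f"{course_id}.pickle"
    out.parent.mkdir(parents=True, exist_ok=True)
    with out.open("wb") as fh:
        pickle.dump(snapshot, fh)
    return out
```

Everything downstream – the CAR report, the echo360 rewrites, the Word documents – can then work from this snapshot rather than repeatedly querying Blackboard.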

Figures 5 and 6 show the end result of this improved migration process for the same course as Figures 3 and 4. Figure 5 illustrates how the structure of “modules” in the Blackboard site has been recreated using the matching Canvas functionality. What the figures don’t show is that step #3 of the CAR process has removed or modified Blackboard practices to fit the capabilities of Canvas.

Figure 6 illustrates a much neater Files area compared to Figure 4. All of the unnecessary common cartridge crud is gone. Figure 6 also illustrates step #3’s addition of structure to the Files area. The three files shown are all within a Learning Module folder. This folder was not present in the Blackboard course site’s content collection. It’s been added by the CAR to indicate where in the course site structure the files were used – these images were all used within the Learning Modules content area of the Blackboard course site (Figure 2). In a more complex course site this additional structure makes it easier to find the relevant files.

Figure 5 still has a pretty significant whiff of the “scroll of death”. In part this is because the highly visual card interface used in the Blackboard course site is not available in Canvas. This is a “feature” of Canvas and how it organises learning content into a long, visually boring scroll. More on that next.

Figure 5: Canvas site via CAR
[Image: Canvas course site created by migrating via CAR]
Figure 6: Canvas files via CAR
[Image: Canvas files migrated via CAR]

3. Making teaching and learning easier/better using a vanilla LMS

There’s quite a bit of literature and other work arguing about the value to learning and the learning experience of the aesthetics, findability, and usability of the LMS and LMS courses. Almost as much as there is literature and work expounding on the value of consistency as a method for addressing those concerns (misguided IMHO). Migrating to a new LMS typically includes some promise of making the experience of teaching and learning easier, better, and more engaging. For example, one of the apparent advantages of Canvas is it reportedly looks prettier than the competitors. People using Canvas generally report the user interface as feeling cleaner. Apparently it “provides students with an accessible and user-friendly interface through which they can access course learning materials”.

Using an overly linear, visually unappealing, context-free, generic tool constrained by the vendor

Of course, beauty is in the eye of the beholder and familiarity can breed contempt. Some think Canvas “plain and ugly”. As illustrated above by Figures 3 and 5, the Canvas modules view – the core of how students interact with study material – is widely known (e.g. University of Oxford) to be overly linear, to involve lots of vertical scrolling, and to not be very visually appealing. Years of experience have also shown that the course navigation experience is less than stellar for a variety of reasons.

There are common manual workarounds that are widely recommended to teaching staff. There is also a community of third-party design tools intended to improve the Canvas interface and navigation experience, as well as requests to Canvas to respond to these observations and improve the system. Some examples include: a 2015 request; a suggestion from 2016 to allow modules within modules; and another grouping-modules request in 2019. The last of these includes a comment touching on the shortcomings of most of the existing workarounds. The second includes a comment from the vendor explaining there are no plans to provide this functionality.

As Figure 2 demonstrates, we’ve been able to do aspects of this since 2019 in Blackboard Learn, but we can’t in the wonderful new system we’re migrating to. We’ll be losing functionality (used in hundreds of courses).

Canvas Collections: Injecting context, visual design, and alternatives into the Canvas modules page

Canvas Collections is a work-in-progress designed to address the shortcomings of the current Canvas modules page. We’re working through the prevailing heavyweight umwelt in an attempt to move it into production. For now, it’s working as a userscript. Illustrating the flexibility of the lightweight approach, it’s recently been updated to semi-automate the typical Canvas workaround for creating visual home pages. Canvas Collections is inspired by related approaches within the Canvas community, including CSS-based approaches to creating interactive cards and Javascript methods for inserting cards into Canvas, which appear to have gone into production at the University of Oxford. It also draws upon the experience of developing and supporting the use of the Card Interface in Blackboard.

Canvas Collections is Javascript that modifies the Canvas modules view by adding support for three new abstractions. Each of the abstractions represents a different way to orchestrate entangled relations. The three abstractions are:

  1. Collections;

    Rather than a single, long list of modules, modules can be grouped into collections that align with the design intent of the course. Figures 7 and 8 illustrate a common use of two collections: course content and assessment. A navigation bar is provided to switch between the two collections. When viewing a collection you only see the modules that belong to that collection.

  2. Representations; and,

    Each collection can be represented in different ways, no longer limited to a text-based list of modules and their contents. Figures 7 and 8 demonstrate use of a representation that borrows heavily from the Card Interface. Such representations – implemented in code – can perform additional tasks to further embed context and design intent.

  3. Additional module “metadata”.

    Canvas stores a large collection of generic information about modules. However, as you engage in learning design you assign additional meaning and purpose to modules, which can’t be stored in Canvas. Canvas Collections supports additional design-oriented metadata about modules. Figures 7 and 8 demonstrate the addition to each module of: a description or driving question to help learners understand the module’s intent; a date or date period when learners should pay attention to the module; a different label to further refine its purpose; and a picture to visually represent it ([dual-coding](https://en.wikipedia.org/wiki/Dual-coding_theory) anyone?).

Figures 7 and 8 illustrate each of these abstractions. The modules for this sample course have been divided into two collections: Course Content (Figure 7) and Assessment (Figure 8). Perhaps not very creative, but mirroring common organisational practice. Each Canvas module is represented by a card, which includes the title (from Canvas), a specific image, a description, relevant dates, and a link to the module.
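For the technically curious, something like the following is all a userscript needs to do to implement a crude version of collections. It’s a minimal, illustrative sketch – not Canvas Collections’ actual code – and the configuration format, module titles, and DOM selectors (div.context_module, #context_modules) are assumptions about Canvas’ markup.

```javascript
// ==UserScript==
// @name  Collections sketch (illustrative, not Canvas Collections itself)
// @match https://*.instructure.com/courses/*/modules
// ==/UserScript==

// Design intent Canvas can't store: which collection each module belongs
// to. The format and module titles here are assumptions for illustration.
const COLLECTIONS = {
  "Course Content": ["Week 1", "Week 2", "Week 3"],
  "Assessment": ["Assignment 1", "Assignment 2", "Exam"],
};

// Show only the modules that belong to the chosen collection.
function showCollection(name) {
  document.querySelectorAll("div.context_module").forEach((module) => {
    const title = (module.getAttribute("aria-label") || "").trim();
    const inCollection = COLLECTIONS[name].some((t) => title.startsWith(t));
    module.style.display = inCollection ? "" : "none";
  });
}

// Add a simple navigation bar above the modules list to switch collections.
const nav = document.createElement("div");
for (const name of Object.keys(COLLECTIONS)) {
  const button = document.createElement("button");
  button.textContent = name;
  button.addEventListener("click", () => showCollection(name));
  nav.appendChild(button);
}
const modules = document.getElementById("context_modules");
if (modules) {
  modules.before(nav);
  showCollection("Course Content"); // default collection on page load
}
```

A real representation would replace each module with a card (image, description, dates) rather than just filtering the list, but the mechanism – client-side Javascript re-orchestrating the vendor’s generic page – is the same.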

The dates are a further example of injecting context into a generic tool to save time and manual effort. The provision of specific dates (e.g. July 18, Friday September 2) would require manual updating every time a course site was rolled over to a new offering (at a new time). Alternatively, the Canvas Collections Griffith Cards representation knows both the Griffith University calendar and how Griffith’s approach to Canvas course ids specifies the study period for a course. This means dates can be specified in a generic study period format (e.g. Week 1, or Friday Week 11) and the representation can figure out the actual date.
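A rough sketch of that date logic follows. The parsing rules and the Week 1 starting Monday are illustrative assumptions, not Griffith’s actual calendar code.

```javascript
const DAYS = ["monday", "tuesday", "wednesday", "thursday", "friday", "saturday", "sunday"];

// Turn a study-period date (e.g. "Friday Week 11") into a calendar date,
// given the Monday on which Week 1 starts for this offering.
function resolveDate(spec, week1Monday) {
  const match = spec.toLowerCase().match(/(?:(\w+day)\s+)?week\s+(\d+)/);
  if (!match) return spec; // leave unrecognised specs untouched
  const dayOffset = match[1] ? DAYS.indexOf(match[1]) : 0;
  const weekOffset = (parseInt(match[2], 10) - 1) * 7;
  const date = new Date(week1Monday);
  date.setDate(date.getDate() + weekOffset + dayOffset);
  return date.toDateString();
}

// resolveDate("Friday Week 11", new Date(2022, 6, 18)) -> "Fri Sep 30 2022"
```

The representation only needs to know (or derive from the course id) that one starting date; every generic date in the course design then resolves itself.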

Not only does Canvas Collections improve the aesthetics of a Canvas course site, it improves the findability of information within the course site by making it possible to explicitly represent the information architecture. Research (Simunich et al., 2015) suggests that course sites with higher findability lead to increases in student-reported self-efficacy and motivation, and a better overall experience. Experience with the Card Interface and early experience with Canvas Collections suggest that it is not just the students who benefit. Being able to improve a course site using Canvas Collections appears to encourage teaching staff to think more explicitly about the design of their course sites, asking questions like: What are the core objects/activities in your course? How should they be explained? Visually represented?

Figure 7: Canvas Collections – content collection

Figure 8: Canvas Collections – assessment collection

Conclusions

The argument here is that more effective orchestration of entangled relations will be a necessary (though not sufficient) enabler for breaking the iron triangle in learning and teaching. On-going reliance on manual orchestration of the entangled relations necessary to leverage the black-boxes of heavyweight IT will be a barrier to breaking the iron triangle in terms of efficiency, effectiveness, and novelty. Efficiency, because manual orchestration requires time-consuming human intervention. Effectiveness, because the time requirement will either prevent the orchestration from being done or, if it is done, significantly increase the chance of human error. Novelty, because – as defined by Arthur (2009) – technological evolution comes from combining technologies, where technology is “the orchestration of phenomena for some purpose” (Dron, 2021, p. 155). It’s orchestration all the way down. The ability to creatively orchestrate the entangled relations inherent to learning and teaching will be a key enabler of new learning and teaching practices.

What we’re doing is not new. In the information systems literature it has been labelled light-weight Information Technology (IT) development, defined as “a socio-technical knowledge regime driven by competent users’ need for solutions, enabled by the consumerisation of digital technology, and realized through innovation processes” (Bygstad, 2017, p. 181). Light-weight IT development is increasingly how the people responsible for solving problems with the black boxes of heavyweight IT (a different socio-technical knowledge regime) leverage technology to orchestrate the necessary entangled relations into contextually appropriate assemblages that solve their own needs. It is how they do this in ways that save time and enable new and more effective practice. The three examples above illustrate how we’ve done this in the context of an LMS migration and the benefits that have arisen.

These “light-weight IT” practices aren’t new in universities or learning and teaching. Pre-designed templates for the LMS (Perämäki, 2021) are an increasingly widespread and simple example. The common practice within the Canvas community of developing and sharing userscripts or sharing Python code is another. A more surprising example is the sheer number of universities which have significant enterprise projects in the form of Robotic Process Automation (RPA) (e.g. the University of Melbourne, the Australian National University, Griffith University, and the University of Auckland). RPA is a poster child of lightweight IT development. These significant enterprise RPA projects are designed to develop the capability to more efficiently and effectively re-entangle the black boxes of heavyweight IT. But to date universities appear to be focusing RPA efforts on administrative processes such as HR, finance, and student enrolment. I’m not aware of any evidence of institutional projects explicitly focused on applying these methods to learning and teaching. In fact, enterprise approaches to the use of digital technology appear more interested in increasing the use of outsourced, vanilla enterprise services. Leaving it to us tinkerers.

A big part of the struggle is that lightweight and heavyweight IT are different socio-technical knowledge regimes (Bygstad, 2017). They have different umwelten, and in L&T practice the heavyweight umwelt reigns supreme. Hence, I’m not sure whether I’m more worried about the absence of lightweight approaches to L&T at universities, or the nature of the “lightweight” approach that universities might develop given their current knowledge regimes. On the plus side, some really smart folk are starting to explore the alternatives.

References

Arthur, W. B. (2009). The Nature of Technology: What it is and how it evolves. Free Press.

Bygstad, B. (2017). Generative Innovation: A Comparison of Lightweight and Heavyweight IT. Journal of Information Technology, 32(2), 180–193. https://doi.org/10.1057/jit.2016.15

Cottam, M. E. (2021). An Agile Approach to LMS Migration. Journal of Online Learning Research and Practice, 8(1). https://doi.org/10.18278/jolrap.8.1.5

Daniel, J., Kanwar, A., & Uvalić-Trumbić, S. (2009). Breaking Higher Education’s Iron Triangle: Access, Cost, and Quality. Change: The Magazine of Higher Learning, 41(2), 30–35. https://doi.org/10.3200/CHNG.41.2.30-35

Dron, J. (2022). Educational technology: What it is and how it works. AI & SOCIETY, 37, 155–166. https://doi.org/10.1007/s00146-021-01195-z

Fawns, T. (2022). An Entangled Pedagogy: Looking Beyond the Pedagogy—Technology Dichotomy. Postdigital Science and Education. https://doi.org/10.1007/s42438-022-00302-7

Jones, D., & Clark, D. (2014). Breaking BAD to bridge the reality/rhetoric chasm. In B. Hegarty, J. McDonald, & S. Loke (Eds.), Rhetoric and Reality: Critical perspectives on educational technology. Proceedings ascilite Dunedin 2014 (pp. 262–272). http://ascilite2014.otago.ac.nz/files/fullpapers/221-Jones.pdf

Mulder, F. (2013). The LOGIC of National Policies and Strategies for Open Educational Resources. International Review of Research in Open and Distributed Learning, 14(2), 96–105. https://doi.org/10.19173/irrodl.v14i2.1536

Perämäki, M. (2021). Predesigned course templates: Helping organizations teach online [Masters, Tampere University of Applied Sciences]. http://www.theseus.fi/handle/10024/496169

Ryan, T., French, S., & Kennedy, G. (2021). Beyond the Iron Triangle: Improving the quality of teaching and learning at scale. Studies in Higher Education, 46(7), 1383–1394. https://doi.org/10.1080/03075079.2019.1679763

Exploring Dron’s definition of educational technology

Pre-COVID, the role of technology in learning and teaching in higher education was important. However, in 2020 it became core as part of the COVID response. Given the circumstances, it is no surprise that chunks of that response were not that great. There was some good work. There was a lot of “good enough for the situation” work. There was quite a bit that really sucked. For example,

Drake Hotline Bling Meme

Arguably, I’m not sure there’s much difference from pre-COVID practice. Yes, COVID meant that the importance and spread of digital technology use was much, much higher. But rapid adoption whilst responding to a pandemic was unlikely to be qualitatively better (or even as good) as previous practice. There just wasn’t time for many to engage in the work required to question prior assumptions and redesign prior practices to suit the very different context and needs. Let alone harness technology transformatively.

It is even less likely if – as I believe – most pre-COVID individual and organisational assumptions and practices around learning, teaching and technology were built on fairly limited conceptual foundations. Building a COVID response on that sandy foundation was never going to end well. As individuals, institutions, and vendors (thanks Microsoft?) begin to (re-)imagine what’s next for learning and teaching in higher education, it is probably a good time to improve those limited conceptual foundations.

That’s where this post comes in. It is an attempt to explore in more detail Dron’s (2021) definition of educational technology and how it works. There are other conceptual/theoretical framings that could be used. For example, postdigital (Fawns, 2019). That’s for other posts. The intent here is to consider Dron’s definition of educational technology and if/how it might help improve the conceptual foundations of institutional practices with educational technology.

After writing this post, I’m seeing some interesting possible implications. For example:

  • Another argument for the limitations of the “pedagogy before technology” position (pedagogy is technology, so this is an unhelpful tautology).
  • A possible explanation for why most L&T professional development is attended by the “usual suspects” (it’s about purpose).
  • Thoughts on the problems created by the separation of pedagogy and technology into two organisational units (the quality of the learning experience is due to the combination of these two separate organisational units, with separate purposes, focused on their specific phenomena).
  • One explanation of why the “blank canvas” (soft) nature of the LMS (and why the NGDLE only makes this worse) is a big challenge for quality learning and teaching (soft is hard).
  • Why improving digital fluency or the teaching qualifications of teaching staff is unlikely to address this challenge (soft is hard, and solutions focused on individuals don’t address the limitations in the web of institutional technologies – in the broadest Dron sense).

Analysing a tutorial room

Imagine you’re responsible for running a tutorial at some educational institution. You’ve rocked up to the tutorial room for the first time and you’re looking at one of the following room layouts: computer lab, or classroom. How does Dron’s definition of educational technology help understand the learning and teaching activity and experience you and your students are about to embark upon? How might it help students, teachers, and the people from facilities management and your institution’s learning and teaching centre?

Computer lab (Czeva, CC BY-SA 4.0, via Wikimedia Commons) and classroom (Thedofc, Public domain, via Wikimedia Commons)

Ask yourself these questions

  1. What technology do you see in the rooms above (imagine you can see a tutorial being run in both)?
  2. What is the nature of the work you and your students do during the tutorial?
  3. Which of the rooms above would be “best” for your tutorial? Why?
  4. How could the rooms above be modified to be better for tutorials? Why?

What is the (educational) technology in the room?

Assuming we’re looking at a tutorial being carried out in both images, what would be on your list of technology being used?

A typical list might include chairs, tables, computers, whiteboards (interactive/smart and static), clock, notice boards, doors, windows, walls, floors, cupboards, water bottles, phones, books, notepads etc.

You might add more of the technologies that you and your students brought with you. Laptops, phones, backpacks etc. What else?

How do you delineate between what is and isn’t technology? How would you define technology?

Defining technology

Dron (2021) starts by acknowledging that this is difficult; most definitions of technology are vague, incomplete, and often contradictory. He goes into some detail about why. Dron’s definition draws on Arthur’s (2009) definition of technology as (emphasis added)

the orchestration of phenomena for some purpose (Dron, 2021, p. 1)

Phenomena include stuff that is “real or imagined, mental or physical, designed or existing in the natural world” (Dron, 2021, p. 2). Phenomena can be seen as belonging to physics (materials science for table tops), biology (human body climate requirements), chemistry etc. Phenomena can be: something you touch (the book you hold); another technology (that same book); a cognitive practice (reading); and, partially or entirely human enacted (think/pair/share, organisational processes etc.).

For Arthur, technological evolution comes from combining technologies. The phenomena being orchestrated in a technology can be another technology. Writing (technology) orchestrates language (technology) for another purpose. A purpose Socrates didn’t much care for. Different combinations (assemblies) of technologies can be used for different purposes. New technologies are built using assemblies of existing technologies. There are inter-connected webs of technologies orchestrated by different people for different purposes.

For example, in the classrooms above, manufacturers of furniture orchestrated various physical and material phenomena to produce the chairs, desks and other furniture. Some other people – probably from institutional facilities management – orchestrated different combinations of furniture for the purpose of designing cost efficient and useful tutorial rooms. The folk designing the computer lab had a different purpose (provide a computer lab with desktop computers) than the folk designing the classroom (provide a room that can be flexibly re-arranged). Those different purposes led to different approaches to the orchestration of both similar and different phenomena.

When the tutorial participants enter the room they start the next stage of orchestration for different, more learning and teaching specific purposes. Both students and teachers will have their own individual purposes in mind. Purposes that may change in response to what happens in the tutorial. Those diverse purposes will drive them to orchestrate different phenomena in different ways. To achieve a particular learning outcome, a teacher will orchestrate different phenomena and technology. They will combine the technologies in the room with certain pedagogies (other technologies) to create specific learning tasks. The students then orchestrate how the learning tasks – purposeful orchestrations of phenomena – are adapted to serve their individual purposes.

Some assemblies of technologies are easier to orchestrate than others (e.g. the computers in a computer lab can be used to play computer games, rather than “learning”). Collaborative small group pedagogies would probably be easier in the classroom, than the computer lab. The design of the furniture technology in the classroom has been orchestrated with the purpose of enabling this type of flexibility. Not so the computer lab.

For Dron, pedagogies are a technology and education is a technology. For some,

Them's fighting words

What is educational technology?

Dron (2021) answers

educational technology, or learning technology, may tentatively be defined as one that, deliberately or not, includes pedagogies among the technologies it orchestrates.

Consequently, both the images above are examples of educational technologies. The inclusion of pedagogies in the empty classroom is more implicit than in the computer lab which shows people apparently engaged in a learning activity. The empty classroom implicitly illustrates some teacher-driven pedagogical assumptions in terms of how it is laid out. With the chairs and desks essentially in rows facing front.

The teacher-driven pedagogical assumptions in the computer lab are more explicit and fixed. Not only because you can see the teacher up the front and the students apparently following along. But also because the teacher-driven pedagogical assumptions are enshrined in the computer lab. The rows in the computer lab are not designed to be moved (probably because of the phenomena associated with desktop computers, not the most moveable technologies). The seating positions for students are almost always going to be facing toward the teacher at the front of the room. There are even partitions between each student making collaboration and sharing more difficult.

The classroom, however, is more flexible. It implicitly enables a number of different pedagogical assumptions; a number of different orchestrations of different phenomena. The chairs and tables can be moved. They could be pushed to the sides of the room to open up a space for all sorts of large group and collaborative pedagogies. The shapes of the desks suggest that it would be possible to push four of them together to support small group pedagogies. Pedagogies that seek to assemble or orchestrate a very different set of mental and learning phenomena. The classroom is designed to be assembled in different ways.

But beyond that, both rooms appear embedded in the broader assembly of technology that is formal education. They appear to be classrooms within the buildings of an educational institution. Use of these classrooms is likely scheduled according to a time-table. Scheduled classes are likely led by people employed according to specific position titles and role descriptions. Most of which are likely to make some mention of pedagogies (e.g. lecturer, tutor, teacher).

Technologies mediate all formal education and intentional learning

Dron’s (2021) position is that

All teachers use technologies, and technologies mediate all formal education (p. 2)

Everyone involved in education has to be involved in the orchestration of new assemblies of technology. For example, as you enter one of the rooms above as the teacher, you will orchestrate the available technologies, including your choice of explicit/implicit pedagogical approaches, into a learning experience. If you enter one of the rooms as a learner, you will orchestrate the assembly presented to you by the teacher and institution with your technologies, for your purpose.

Dron does distinguish between learning and intentional learning. Learning is natural. It occurs without explicit orchestration of phenomena for a purpose. He suggests that babies and non-human entities engage in this type of learning. But when we start engaging in intentional learning we start orchestrating assemblies of phenomena/technologies for learning. Technologies such as language, writing, concepts, models, theories, and beyond.

Use and participation: hard and soft

For Dron (2021) students and teachers are “not just users but participants in the orchestration of technologies” (p. 3).

The technology that is the tutorial you are running requires participation from both you and the students. For example, to help organise the room for particular activities, use the whiteboard/projector to show relevant task information, use language to share a particular message, and use digital or physical notebooks. Individuals perform these tasks in different ways, with lesser or greater success, with different definitions of what is required, and with different preferences. They don’t just use the technology, they participate in the orchestration.

Some technologies heavily pre-determine and restrict what form that participation takes. For example, the rigidity of the seating arrangements in the computer lab image above. There is very limited capacity to creatively orchestrate the seating arrangement in the computer lab. The students’ participation is largely (but not entirely) limited to sitting in rows. The constraints this type of technology places on our behaviour lead Dron to label them hard technologies. But even hard technologies can be orchestrated in different ways by coparticipants, which in turn leads to different orchestrations.

Other technologies allow, and may require, more active and creative orchestration. As mentioned above, the classroom image includes seating that can be creatively arranged in different ways. It is a soft technology. The additional orchestration that soft technologies involve requires additional knowledge, skills, and activities (i.e. additional technologies) from us to be useful. Dron (2021) identifies “teaching methods, musical instruments and computers” as further examples of soft technologies. Technologies that require more from us in terms of orchestration. Soft technologies are harder to use.

Hard is easy, soft is hard

Hard technologies typically don’t require additional knowledge, processes and techniques to achieve their intended purpose. What participation hard technologies require is constrained and (hopefully) fairly obvious. Hard technologies are typically easy to use (but perhaps not a great fit). However, the intended purpose baked into the hard technology may not align with your purpose.

Soft technologies require additional knowledge and skills to be useful. The more you know the more creatively you can orchestrate them. Soft technologies are hard to use because they require more of you. However, the upside is that there is often more flexibility in the purpose you can achieve with soft technologies.

For example, let’s assume you want to paint a picture. The following images show two technologies that could help you achieve that purpose. One is hard and one is soft.

Hard is easy: painting by numbers (Aleksander Fedyanin, CC0, via Wikimedia Commons). Soft is hard: a small easel with a blank canvas (CC0).

Softness is not universally available. It can only be used if you have the awareness, permission, knowledge, and self-efficacy necessary to make use of it. Since I “know” I “can’t paint”, I’d almost certainly never even think of using a blank canvas. But if I’m painting by numbers, then I’m stuck with producing whatever painting has been embedded in this hard technology – at least as long as I accept the hardness. Nor is hard versus soft a binary categorisation; it’s a spectrum.

As a brand new tutor entering the classroom shown above, you may not feel confident enough to re-arrange the chairs. You may also not be aware of certain beneficial learning activities that require moving the chairs. If you’ve never taught a particular tutorial or topic with a particular collection of students, you may not be aware that different orchestrations of technologies may work better.

Hard technologies are first and structural

Harder technologies are structural. They funnel practice in certain ways. Softer technologies tend to adapt to those funnels; some won’t be able to adapt. The structure baked into the hard technology of the computer lab above makes it difficult to effectively use a circle of voices activity. The structure created by hard technologies may mean you have to consider a different soft technology.

This can be difficult because hard technologies become part of the furniture. They become implicit, invisible and even apparently natural parts of education. The hardness of the computer lab above is quite obvious, especially the first time you enter the room for a tutorial. But what about the other invisible hard technologies embedded into the web of technologies that is formal education?

You assemble the tutorial within a web of other technologies. As the number of hard technologies and interconnections between hard technologies increases, the web in which you’re working becomes harder to change. Various policies, requirements and decisions are made before you start assembling the tutorial. You might be a casual paid for 1 hour to take a tutorial in the computer lab shown above on Friday at 5pm. You might be required to use a common, pre-determined set of topics/questions. To ensure a common learning experience for students across all tutorials you might be required to use a specific pedagogical approach.

While perhaps not as physically hard as the furniture in the computer lab, these technologies tend to funnel practice toward certain forms.

Education is a coparticipative technological process

For Dron (2021) education is a coparticipative technological process. Education – as a technology – is a complex orchestration of different nested phenomena for diverse purposes.

How it is orchestrated and for what purposes are inherently situated, socially constructed, and ungeneralizable. While the most obvious coparticipants in education are students and teachers, there are many others. Dron (2021) provides a sample, including “timetablers, writers, editors, illustrators of textbooks, creators of regulations, designers of classrooms, whiteboard manufacturers, developers and managers of LMSs, lab technicians”. Some of a never-ending list of roles that orchestrate some of the phenomena that make up the technologies that teachers and students then orchestrate to achieve their diverse purposes.

Dron (2021) argues that how the coparticipants orchestrate the technologies is what is important. That the technologies of education – pedagogies, digital technologies, rooms, policies, etc. – “have no value at all without how we creatively and responsively orchestrate them, fuelled by passion for the subject and process, and compassion for our coparticipants” (p. 10). Our coparticipative orchestration is the source of the human, socially constructed, complex and unique processes and outcomes of learning. More than this, Dron (2021) argues that the purpose of education is to both develop our knowledge and skills and to encourage the never-ending development of our ability to assemble our knowledge and skills “to contribute more and gain more from our communities and environments” (p. 10).

Though, as a coparticipant in this technological process, I assume I could orchestrate that particular technology with other phenomena to achieve a different purpose. For example, if I were a particular type of ed-tech bro, then profit might be my purpose of choice.

Possible questions, applications, and implications

Dron (2021) applies his definition of educational technology to some of the big educational research questions, including: the no significant difference phenomenon; learning styles; and the impossibility of replication studies for educational interventions. This produces some interesting insights. My question is whether Dron’s definition can be usefully applied to my practitioner experience with educational technology within Australian higher education. This is a start.

At this stage, I’m drawn to how Dron’s definition breaks down the unhelpful duality between technology and pedagogy. Instead, it positions pedagogy and technology as “just” phenomena that the coparticipants in education will orchestrate for their purposes. Echoing the sociomaterial and postdigital turns. The notions of hard and soft technologies and what they mean for orchestration also seem to offer an interesting lens to understand and guide institutional attempts to improve learning and teaching.

Pulling apart Dron’s (2021) definition

the orchestration of phenomena for some purpose (Arthur, 2009, p. 51)

seems to suggest the following questions about L&T as being important:

  1. Purpose: whose purpose and what is the purpose?
  2. Orchestration: how can orchestration happen and who is able to orchestrate?
  3. Phenomena: what phenomena/assemblies are being orchestrated?

Questions that echo Fawns’ (2020) use of a postdigital perspective to argue against the “pedagogy before technology” position, landing on the following

(context + purpose) drives (pedagogy [which includes actual uses of technology])

With this in mind, designing a tutorial in one of the rooms would start with the context and purpose. In this case, the context is the web of existing technologies that have led to you and your students being in the room ready for a tutorial. The purpose includes the espoused learning goals of the tutorial, but also the goals of all the other participants, including those that emerge during the orchestration of the tutorial. This context and purpose is then what ought to drive the orchestration of various phenomena (which Fawns labels “pedagogy”) for that diverse and emergent collection of purposes.

This suggests that it might be useful if institutional attempts to improve learning and teaching aimed to improve the quality of that orchestration. The challenge is that the quality of that orchestration should be driven by context and purpose, which are inherently diverse and situated. A challenge which I don’t think existing institutional practices are able to effectively deal with. Which is perhaps why discussion of quality learning and teaching in higher education “privileges outcome measures at the expense of understanding the processes that generate those outcomes” (Ellis and Goodyear, 2019, p. 2).

It’s easier to deal with abstract outcomes (very soft, non-specific technologies) than with the situated and contextual diversity of specifics, and how to help with the orchestration required to achieve those outcomes. In part, because many of the technologies that contribute to institutional L&T are so hard to reassemble. Hence it’s easier to put the blame on teaching staff (e.g. lack of teaching qualifications or digital fluency) than to think about how the assembly of technologies that make up an institution should be rethought (e.g. this thread).

More to come.

References

Arthur, W. B. (2009). The Nature of Technology: What it is and how it evolves. Free Press.

Dron, J. (2021). Educational technology: What it is and how it works. AI & SOCIETY. https://doi.org/10.1007/s00146-021-01195-z

Fawns, T. (2019). Postdigital Education in Design and Practice. Postdigital Science and Education, 1(1), 132–145. https://doi.org/10.1007/s42438-018-0021-8

Understanding (digital) education through workarounds and quality indicators

COVID-19 and the subsequent #pivotonline have higher education paying a lot more attention to the use of digital and online technology for learning and teaching (digital education). COVID-19 has made digital education necessary. COVID-19 has made any form of education – and just about anything else – more difficult. For everyone. COVID-19 and its impact are rewriting what higher education will be afterwards. COVID-19 is raising hopes and fears that what comes after will be (positively?) transformative. Not beholden to previous conceptions and corporate mores.

Most of that’s beyond me. Too big to consider. Too far beyond my control and my personal limitations. Hence I’ll retreat to my limited experience, practices, and conceptions. Exploring those more familiar and possibly understandable landscapes in order to reveal something that might be useful for the brave new post-COVID-19 world of university digital education. A world that I’m not confident has any hope of being positively transformed. Regardless of what the experts, prognosticators, futurists and vendors are selling. But I’m well-known for being a pessimist.

Echoing Phipps and Lanclos (2019) I believe that making changes in digital education needs to be grounded in “an understanding of the practices that staff undertake and the challenges they face” (p. 68). Some colleagues and I have started identifying our practices and challenges by documenting the workarounds we’ve used and developed. Alter (2014) defines workarounds as

a goal-driven adaptation, improvisation, or other change to one or more aspects of an existing work system in order to overcome, bypass, or minimize the impact of obstacles, exceptions, anomalies, mishaps, established practices, management expectations, or structural constraints that are perceived as preventing that work system or its participants from achieving a desired level of efficiency, effectiveness, or other organizational or personal goals (p. 1044)

Workarounds are a useful lens because they highlight areas of disconnect between what is needed and what is provided. Alter (2014) suggests that this Theory of Workarounds could be used to understand these disconnects and leverage that understanding to drive re-design. Resonating with Biggs’ (2001) notion of quality feasibility, a practice that actively seeks to understand the impediments to quality teaching and to remove them.

The challenge I faced was whether I could remember a reasonable percentage of the workarounds I’ve used in 20+ years.

Enter the following list of eight Online Course Quality Indicators, also available as a PDF download and tested in Joosten, Cusatis & Harness (2019) (HT: @plredmond and OLDaily). My interest here isn’t in the validity/value of this type of approach (of which I have my doubts). Instead, my interest is that the eight indicators offer a prompt for the type of considerations to which a conscientious teacher might pay attention. The type of considerations that will point out limitations within institutional support for (digital) education and generate workarounds.

Initial findings

So far I’ve remembered 53 workarounds. Detail is provided below. The following table maps workarounds against the quality indicators. The biggest category is Doesn’t fit, i.e. workarounds that didn’t seem to fit the quality indicators. Perhaps suggesting that the quality indicators were designed to analyse the outcome of teacher work (an online course), rather than provide insight into the practices teachers undertake to produce that outcome.

Peer interaction and content interaction are the indicators with the next highest number of workarounds. Though I have collapsed both content interaction and richness indicators into content interaction.

| Quality Indicator | # of workarounds |
| --- | --- |
| Design | 4 |
| Organisation | 6 |
| Support | 1 |
| Clarity | 3 |
| Instructor interaction | 7 |
| Peer interaction | 10 |
| Content interaction / Richness | 9 |
| Doesn’t fit | 13 |

53 is a fair number. But perhaps not surprising given my original discipline is information technology and part of my working life has been spent designing LMS-like functionality.

What’s disappointing is that a number of these workarounds are duplicates solving the same fundamental problem. The only difference being in the institutional and technological context. For example, a number of the workarounds are focused on helping with:

  1. Production and maintenance of well-designed, rich course content.
  2. Increasing the quantity and quality of what teachers know about students’ backgrounds and activity.

What does that say about higher education, digital education, and me?

Proper reflection and analysis will have to wait for another time. But evidence of difficulties in at least two fundamental practices seems important. Or perhaps it just shows how blinkered and obsessive my interest is.

There are some questions about whether the following are actually workarounds. In particular, some of the fairly specific learning activities aren’t actually designed to change an existing part of the institutional context. There was no part of the institutional context that provided for the learning activities. Largely because the learning activities were so specific to the learning intent that the institution would never have been able to provide any support. However, most institutions now have lists of digital tools that have been approved for use in learning and teaching. Typically, the specificity of the learning need means that no appropriate tool has been added to the list.

What does this say about the reusability paradox and institutional approaches to digital education?

Workarounds and quality indicators

The following steps through each of the quality indicators and uses them as inspiration to answer the above question. For each workaround, links to additional detail are provided and initial thoughts on the workaround given.

Design

Systems Emergencies

One attempt at an authentic real world experience was the Systems Emergency assessment item for Systems Administration (Jones, 1995). Each student had to run a program on their computer. A program that would break their computer, simulating an authentic error. The students had to draw on what they’d learned during the course to diagnose the problem, fix it and complete a report.

Is this a workaround? It’s so specific to a particular course and a particular pedagogical choice that there is no institutional system it is replacing.

Open Learning Computing Platform

A better example from the same course went by the acronym OLCP (open learning computing platform) (Jones, 1994). The computer systems almost all distance education students were using (Windows 3.1/95) were not up to the requirements of the course (Systems Administration). To work around this limitation we distributed a version of Linux (Jones, 1996a), eventually relying on commercial distributions. Without Linux the course couldn’t be taught.

Personal Blogs, not ePortfolios

Arguably, my predilection for requiring students to use their choice of public blogging engines, rather than institutional ePortfolio tools, was also driven by a desire for authenticity. Not to mention my skepticism about the value of institutional ePortfolio systems (which got me in trouble one time). Initially, individual student blogs were an extension of journaling (introduced in Sys Admin) and an encouragement to engage in open reflection and discussion. Intended to mirror good practice for IT professionals and first used in a Web Engineering course in 2002. Later evolving into the BAM and BIM tools to encourage reflection for assessment purposes and the development of a professional learning network.

Alignment and curriculum mapping

In terms of alignment of assessments and learning activities, I’ve used – and more often seen people use – bespoke Word documents and spreadsheets to map courses and programs. Mainly because institutions did not have any practice of encouraging such mapping, let alone systems to do it (e.g. this from 2009). There’s been a lot more attention paid and importance placed on mapping, but it generally remains an area of bespoke documents and spreadsheets. Perceived shortfalls that led to some design work on alternatives.

Organisation

Moodle Course design

Designing a well-organised course site that is easy to navigate, with manageable sections and a logical and consistent form, is no easy task given the nature of most LMS. My first foray into this (before 2012 I was using an LMS I developed) added the following design features using bits of HTML:

  • A “Jump-to: Topic” navigation bar on my Moodle course sites to avoid the scroll of death.
  • Non-topic based navigation at the top of the Moodle site to provide a sensible grouping of resources (Course background & content) that didn’t fit with the default Moodle design.
  • Topic-based photos to generate visual interest, perhaps a bit of dual coding with the topic, and to encourage some further exploration.
  • A “Right now” section at the top, manually updated each week of term (along with the banner image), to orient students to the current focus.
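As a flavour of how little code some of these involve, here’s a hedged sketch of that “Jump-to” idea. The selectors are assumptions about a classic Moodle theme’s markup, not the exact code I used.

```javascript
// Build a "Jump to" bar from the topic sections Moodle renders, so
// students can skip the scroll of death. Selectors are assumptions.
const nav = document.createElement("div");
nav.textContent = "Jump to: ";
document.querySelectorAll("li.section .sectionname").forEach((heading) => {
  const link = document.createElement("a");
  link.textContent = heading.textContent.trim() + " ";
  link.href = "#" + heading.closest("li.section").id;
  nav.appendChild(link);
});
document.querySelector("div.course-content")?.prepend(nav);
```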

Moodle Activity Viewer

The in-built Moodle reports aren’t that good, and I really wanted to understand how students were engaging with my Moodle sites. The Moodle Activity Viewer scratched that itch. It enabled an analysis of student activity.

Evernote to search a course site?

One of the on-going challenges with using Moodle was the absence of a search engine, a fairly widespread and important part of navigating any website. I did consider a number of different options and ended up trying a kludge with Evernote. But only for one offering.

Modifying the Moodle search book block

Hosting course content on a WordPress blog

In 2012 I took over a Masters course titled Network and Global Learning. Given the focus of the course, hosting the learning environment in a closed LMS site didn’t seem appropriate. Instead, I decided to try it as an open course. It ended up as a WordPress site and was later taken over by another academic…at least for one offering. It looks like it probably ended up back in the LMS.

Diigo for course revision

Given NGL was hosted on a course blog, this raised questions about how to take notes about what wasn’t working and ponder options for re-design. In Word, this could be done with the comments feature. For the Web I used Diigo to produce annotations like the following.

Card Interface

Late 2018 saw me stepping backwards to Blackboard 9.1. A very flexible system for structuring a site, but incredibly hard to make look good without a lot of knowledge. How to enable lots of people to organise their course sites effectively? Enter the Card Interface: easily convert a standard Blackboard content page into a contemporary, visual user interface.

Support

jQuery to work around limitations of a standard course design

Over a number of years of iterative design improvements, I’d gotten to the stage of having quite a detailed Assessment section on my course sites. Echoing much of what the online course quality indicators say about support: description of grading and assessment plan; clear instructions and directions; and so on. This required some extra lifting on my part to add to the base LMS functionality.

Eventually, the institution figured out that this difficulty needed to be reduced and introduced a new standard course design. The problem was that the new institutional standard provided less support than I was already providing. Not just for assessment, but other areas as well.

My solution was to use jQuery to modify the default course site so that it would point to my resources, not the limited standard.
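A hedged sketch of that kind of workaround, assuming the page already loads jQuery (as Blackboard did). The selector and the lookup table are illustrative assumptions, not the actual standard design’s structure.

```javascript
// Map pages from the standard course design to my richer, existing
// resources. Paths and URLs are illustrative assumptions.
const MY_RESOURCES = {
  "/standard/assessment.html": "https://example.edu/my-course/assessment.html",
  "/standard/schedule.html": "https://example.edu/my-course/schedule.html",
};

// Once the page is ready, re-point any matching links in the course menu.
$(function () {
  $("#course-menu a").each(function () {
    const replacement = MY_RESOURCES[$(this).attr("href")];
    if (replacement) {
      $(this).attr("href", replacement);
    }
  });
});
```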

Clarity

I’m assuming that there is meant to be some overlap between support and clarity. There are some arrows in the original image suggesting it.

Course macro system

A key part of clarity is knowing when something is meant to happen. When an assignment is due, when semester starts, etc. Some of the events that occur in a course design occur in every offering of the course. The main difference is that the specific dates change. The simplest solution is to use week numbers, a practice commonly used from the days of distance education until now. The problem is that this approach sacrifices a little bit of clarity.

My solution was a “course macro” system. Rather than put Week 5 as the due date, I would put {A1_DUE_DATE}. The braces indicate that this is a variable. Before anyone sees the variable, the macro system replaces it with the correct value. The macro system knows what course you’re looking at and knows the right value to show you.

It wasn’t just for dates. The macro system was useful for any information that would be repeated in multiple places, but which could vary over time. Using the macro system meant I could easily change all uses of the information, without having to remember where it was used.
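The core of the idea fits in a few lines. A minimal sketch; the variable names and values are illustrative, and the original was course-aware and ran server-side.

```javascript
// Per-offering values for this course. In the real system these were
// looked up based on the course and offering being viewed.
const MACROS = {
  A1_DUE_DATE: "Friday Week 5",
  SEMESTER_START: "July 18",
};

// Replace each {NAME} placeholder with its value before anyone sees it.
// Unknown names are left visible so mistakes are easy to spot.
function expandMacros(content) {
  return content.replace(/\{([A-Z0-9_]+)\}/g, (match, name) =>
    name in MACROS ? MACROS[name] : match
  );
}

// expandMacros("Assignment 1 is due {A1_DUE_DATE}.")
//   -> "Assignment 1 is due Friday Week 5."
```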

Auto-marking contributions and activity

Since learning is largely a product of what the student does, I decided – for better or worse – to target extrinsic motivation and award a small percentage of the course assessment based on whether or not course activities were completed, including the quantity and characteristics of the posts students made to their individual blogs. The calculation of this mark was done by Perl scripts that I wrote. Students were notified of their progress via emails, also sent by Perl scripts.
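Those Perl scripts are long gone, but the essence of the marking step is small. A hedged sketch, with made-up criteria and field names:

```javascript
// Score blog activity against simple, illustrative criteria: enough
// posts of sufficient length earn the full activity mark.
function markBlogActivity(posts, { minPosts = 10, minWords = 100 } = {}) {
  const substantive = posts.filter(
    (post) => post.body.split(/\s+/).length >= minWords
  );
  const proportion = Math.min(substantive.length / minPosts, 1);
  return Math.round(proportion * 100); // percentage of the mark earned
}

// markBlogActivity(studentPosts) -> e.g. 70
```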

Student Evaluation of Course Leaderboard

For better or worse, response rates on institutional end-of-term student evaluation surveys were deemed important. Response rates were also low across the board. One solution I developed, which was then adopted by others, was an “evaluation leaderboard”. While the surveys were open, a table was added to the top of the course site. That table listed each student cohort and showed the number and percentage of that cohort who had responded. The table, along with appropriate prompting from teaching staff, increased response rates by 12 to 15%. There’s even a poster about the idea.

Instructor interaction

Course barometers

My teaching journey has always involved distance education. First print-based and then online. One of the biggest problems I ever had with distance education – especially print-based – was the loss of informal student feedback (Jones, 2002). Then I stumbled across the idea of a course barometer (Svensson et al., 1999). I implemented the functionality in an LMS I was writing and used it in my courses. Initially, other staff could add a barometer to their course. Eventually, and for a couple of years, a course barometer became a standard part of all courses. Moving on to Moodle, I wanted the same functionality and kludged something together using the Moodle Feedback module.

Minute Papers via BAM and Google Forms

In on-campus teaching I had used minute papers as another method to gain insight into student progress. For distance education students, minute papers were implemented as part of the use of individual student blogs and the BAM tool (Jones & Luck, 2009). I’ve also used Google Forms to implement the IMPACT procedure, an extended version of the minute paper. Student feedback was then run through Tagxedo.

Use my blog to publicly reflect on my teaching

That word cloud image above came from a course taken by pre-service teachers titled ICT and Pedagogy. A key message of that course was that reflection is important for learning, that good teaching relies on learning, and that reflection can help teachers. Hence students in the course were encouraged to use their individual blogs to reflect, and I modelled this by using my blog to reflect on what was happening in the course. An example of that is the blog post sharing the tag clouds. A blog post that drew a couple of comments from students.

Staff MyCQU

A key enabler of effective instructor/learner interaction is the amount of knowledge the instructor has about the student and how effectively they use that knowledge to guide interaction. I’ve never experienced a university that has solved the challenge of getting that knowledge into the hands of instructors. Staff MyInfocom/MyCQU was a simple web portal that showed the courses someone was teaching and provided various lists of student details. It ended up being used by the entire university.

Know thy student

Staff MyCQU provided access to the data, but not within the context of learning. The Know thy student project (Jones et al., 2017) used a duplicate database to hold information about students specific to the pedagogy of the course I was teaching. It then provided contextualised access to that information from within the course site. Wherever a student’s name was mentioned in the course site, I could access information about that student and what they had done so far in the course.

Pastoral and academic management

Trends in funding for Australian higher education are starting to focus attention on retention. Consequently, there’s interest in implementing processes to effectively enable and encourage pastoral and academic care. A starting point for this type of task is gathering disparate information about students’ demographics and activities into a central form. Enter a collection of Python scripts that scrape various data sources, combine them into a data store, and produce various reports.
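The originals were Python scripts; to keep one language across the sketches in this post, here’s the merge step sketched in Javascript, with made-up sources and field names:

```javascript
// Two illustrative data sources, each keyed by student id.
const demographics = [{ id: "s123", name: "Student A", mode: "distance" }];
const activity = [{ id: "s123", lastLogin: "2020-03-02", forumPosts: 4 }];

// Merge the sources into one record per student, ready for reporting.
const students = new Map(demographics.map((d) => [d.id, { ...d }]));
for (const record of activity) {
  const student = students.get(record.id);
  if (student) Object.assign(student, record);
}

console.log([...students.values()]); // feed into pastoral care reports
```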

Finding a time to reschedule a face-to-face session

Unforeseen circumstances meant that a teacher couldn’t make a timetabled session. I needed to identify the impacted students, contact them to find the best alternate time, and book it. The process I used was documented and ended up relying on copying and pasting from a Moodle site, some manual editing, email, and setting up a Moodle quiz to ask for times.

Peer interaction

Blogs, learning journals and BIM

To encourage reflection and peer interaction, students were encouraged to: maintain a personal blog; regularly post to the blog; and actively follow and comment on blog posts from other students. Supporting this learning design led to numerous workarounds:

  • Writing (and updating) the BIM Moodle plugin and arguing for its inclusion in the institutional Moodle instance.
  • Prior to BIM being installed, using the Moodle database activity to allow registration of student blogs. That information was manually transferred to my laptop, which was then used to drive the mirroring of student blog posts.
  • Manually producing OPML files for different student cohorts and encouraging the use of feed readers (e.g. Feedly) for students to track posts from other students.
  • Writing Perl scripts to perform sentiment analysis on student blog posts and integrating that analysis into the Know thy student tool.
  • Writing Perl scripts to automatically mark student blog activity against defined criteria and email students their progress (as mentioned earlier).
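The OPML generation is a good example of just how small these workarounds can be. A hedged sketch; names and feed URLs are illustrative:

```javascript
// One entry per registered student blog (illustrative data).
const blogs = [
  { name: "Student A", feed: "https://student-a.example.com/feed/" },
  { name: "Student B", feed: "https://student-b.example.com/feed/" },
];

// Build one OPML outline per blog; feed readers like Feedly import OPML.
const outlines = blogs
  .map((b) => `    <outline text="${b.name}" type="rss" xmlUrl="${b.feed}"/>`)
  .join("\n");

const opml = `<?xml version="1.0" encoding="UTF-8"?>
<opml version="2.0">
  <head><title>Cohort blogs</title></head>
  <body>
${outlines}
  </body>
</opml>`;

console.log(opml); // save as cohort.opml and distribute to students
```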

Integrating Webfuse discussion forums into Blackboard groups

The Blackboard LMS was unable to correctly support a specific group-based learning design. To implement it we needed to integrate discussion forums from Webfuse (my LMS) into Blackboard. That course and learning design won a national teaching citation.

Google doc to share course problems and solutions

With 100s of students taking a course focused on exploring the use of different digital technologies for teaching, there were always going to be challenges. With 100s of students, many would discover new and interesting problems and solutions. We wanted them to share their experience.

We did this via a Google doc. Students were encouraged to add any problems they were having with the course and to also add suggested solutions.

Shared bookmarks

Social bookmarking allows a group of people to share and annotate the good stuff they find on the web. In one of my courses we had a course Diigo group through which 100s of resources were shared.

Shared attentions

But it wasn’t just sharing links. Diigo allowed shared annotations, and for some readings students were encouraged to make annotations. With the standard practice of starting each offering with a new course site, the opinions and discussions shared via these means would normally be lost. Using Diigo meant they were still visible. Providing what Riel and Polin (2004) describe as the “residue of experience” (p. 18) that in good online learning communities “remains available to newcomers in the tools, tales, talk, and traditions of the group” (p. 18). The residue of experience provides a richer learning environment.

Use Diigo as method to collaboratively create a podcast

While I was a student I used the Diigo social bookmarking tool to create a method for collaboratively creating a podcast. Find an audio file online, add a description, and bookmark it with a particular tag and it would be added to the podcast.

I used it a couple of times in my teaching, but more as a way to curate my own podcast. I should probably get back into this.

What’s the weather like where you are

One of the advantages (and disadvantages) of courses with large distance education enrolments is that the students can be located anywhere. To take advantage of this, raise awareness of it, and demonstrate another pedagogical application of digital technology, I adopted Alec Couros’ weather activity (slide 3). Check out the 380+ photos generated since 2012, including the following contribution from me.

Content interaction

Camera ready distance education study guides

Back in the early to mid 90s I taught dual mode courses taken by both on-campus and distance education students. In this pre-online world, distance education students received print-based study guides. Study guides produced by an industrial-strength pipeline of copy editors and desktop publishers. At this stage Wordstar on MS-DOS was the high point of available text editors, so the industrial-strength pipeline was needed. However, the standardisation of these study guides couldn’t handle the unique requirements of Prolog and required long lead times (Jones, 1996). My first response harnessed the rise of Word to start producing camera-ready PDF documents, bypassing the industrial pipeline.

Online textbook

That evolved into a method for producing a hypermedia textbook (Jones, 1996b) that you can still peruse today, including the online index (Thanks to the Internet Archive’s WayBack Machine).

Authoring Moodle Book from HTML

Almost 20 years later, I was using the Moodle Book plugin to produce online content. Authoring for the Moodle Book wasn’t that great; an alternative that fit my practices was needed. The resulting process allowed me to create content in a HTML file, which was run through a script to produce something that could be imported into the Moodle Book.
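The Moodle Book can import a zip of HTML files, one file per chapter, so the script’s main job was splitting. A hedged sketch of that step, assuming chapters start at <h1> elements (an assumption about how my source file was structured, not the actual script):

```javascript
// Split one authored HTML file into per-chapter files that can be
// zipped and imported into the Moodle Book plugin.
const fs = require("fs");

const source = fs.readFileSync("book.html", "utf8");

// Each chapter starts at an <h1>; everything before the next <h1>
// belongs to the current chapter.
const chapters = source.split(/(?=<h1)/).filter((c) => c.includes("<h1"));

chapters.forEach((chapter, index) => {
  const number = String(index + 1).padStart(2, "0");
  fs.writeFileSync(`chapter_${number}.html`, chapter);
});
// zip chapter_*.html and import via the Book's import tool
```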

Integrating the Moodle book with github

20 years later, I started work on the Moodle “open” book project: an attempt to modify the Moodle Book plugin to integrate it with github. Initial development was completed, but never used in anger. Beyond the open-washing, the main aim of this project was to make it easier to create and maintain online resources.

Content Interface

Fast forward a few years to another institution and another LMS, and the Content Interface aims to address the same problem. This version of a solution is both code you can download and something being used. Though not without its challenges.

Animations for dynamic, abstract concepts

Back in the 90s I used to teach information technology students about the internal data structures and algorithms used in operating systems. This involved pretty complex, abstract, dynamic concepts. Visualisation of these concepts was hard.

Working with (really, relying upon) a couple of project students, we developed a sequence of Flash-based (it was the 90s) animations. Animations that were used in the course long after I moved on, and were also used across the world.

Animated, simulated operating systems for learning dynamic, abstract concepts

The first attempted solution to this problem was computer-aided learning. Initially we used a CAL tool developed in the US called PRMS (Hays et al., 1989). But it had some limitations, and one of the students who experienced PRMS had the interest and ability to develop a better simulated, animated operating system called RCOS (Chernich et al., 1995).

Physical activity for learning dynamic, abstract concepts

Before the availability of animations, how did we teach these complex, abstract, dynamic concepts in face-to-face lectures? To teach scheduling algorithms, I used student volunteers and chairs out the front of the lecture theatre. Some students simulated processes and, perhaps, other students simulated the scheduling algorithm.

Online lectures for distance education students

In the same project we developed online lectures. We started with the slides provided with the course textbook. To produce the online lectures, we: converted the original slides to HTML; updated the programming language in the slides to one familiar to our students; added our animations; and added audio we’d recorded.

Echoing an approach I’d used earlier.

Slide casts for faculty professional development on Slideshare

Almost 10 years later we wanted to produce slidecasts for professional development sessions. By 2007/2008 PowerPoint provided the ability to record narration, but didn’t provide a decent method to disseminate the slides/audio on the web. To work around this, we used some shell scripts to transform and concatenate the PowerPoint audio files into a format that could be linked with the PowerPoint slides on Slideshare.

I’m pretty sure that Slideshare doesn’t support this functionality anymore.

Stuff that doesn’t fit

User authentication and access control

In the mid-1990s I was developing a web information system used by multiple students and staff. We needed to implement a form of user authentication and access control. Initially we didn’t have access to the institutional authentication systems. This was well before the days of “single sign-on” (has any organisation ever really had single sign-on?). Subsequently, the institutional authentication system didn’t support the types of groups and operations we needed. This page provides some description of the evolution of our user auth/access control workaround.

Visualising student locations

Most of my teaching has occurred at institutions with large numbers of distance students. Students spread throughout the world. Visualising just where those students are is typically limited to lists of student locations. In response to a request from a Head of School I used cartodb to convert that list of student locations into map-based visualisations.

Numerous administrative applications

Webfuse provided numerous different administrative applications, including a personalised timetable generator for teachers and students. More teacher-administration-centric examples were an academic misconduct database, an assignment extensions system, academic staff allocations, and an informal review of grade system.

Graduation checker

I’m currently working on a graduation checker application for a degree with a complex structure. A complexity that can make it difficult for some students to visualise what they need to do (and when) in order to graduate.

Analysing results of student evaluations of learning and teaching

Almost every university has a standard teaching/course evluation process that happens each time a course is offered. The purpose of which would be for teaching staff to reflect on the feedback provided and take action. But typically the evaluation systems don’t enable this. Particularly doing analysis of free text feedback. I’ve manually extracted that information and used NVIVO for the analysis. Then I wrote a GreaseMonkey script that would automate the extraction of all the data into a spreadsheet for further anaysis. A tool that was eventually used to support others, particularly those applying for teaching awards.
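
The original isn’t to hand, so the following is only a sketch of the approach: a GreaseMonkey user script that scrapes the results table from the evaluation system’s page and offers it as a CSV download. The @match URL and the selectors are assumptions.

```javascript
// ==UserScript==
// @name     Evaluation results extractor (sketch)
// @match    https://evaluations.example.edu/*
// @grant    none
// ==/UserScript==
(function () {
  // Scrape every row of the (assumed) results table into CSV
  const rows = Array.from(document.querySelectorAll('table.results tr'));
  const csv = rows
    .map((tr) =>
      Array.from(tr.cells)
        .map((td) => '"' + td.textContent.trim().replace(/"/g, '""') + '"')
        .join(',')
    )
    .join('\n');

  // Add a link to the page that downloads the extracted data
  const link = document.createElement('a');
  link.textContent = 'Download evaluation data as CSV';
  link.href = 'data:text/csv;charset=utf-8,' + encodeURIComponent(csv);
  link.download = 'evaluations.csv';
  document.body.prepend(link);
})();
```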

Filtering LMS email “spam”

To encourage collaboration around digital education, the academic department I worked with decided that all teaching staff should be able to see the course sites for all of the department’s courses. For a single semester, we were all added to 40+ courses. The problem was that the LMS was configured with email notifications for discussion forums, and there were some discussion forums you could not unsubscribe from. LMS email spam.

The solution was to employ email filter rules to throw away all but the relevant LMS email.

Kludges to make final results processing better

At the end of a course, most university academics have to distill student outcomes into a mark and a grade and upload that distillation into an enterprise system. There is typically a huge gulf between what the teacher’s distillation naturally produces and what the enterprise system expects to see. That gulf typically means a lot of manual work (and checking) by teaching staff. To make this process easier, I’ve written Perl scripts to help with the distillation (a draft post, you won’t be able to see it). I’ve also written GreaseMonkey scripts to modify the interface of the enterprise system to make the process quicker and more correct. More detail here. One of my colleagues used the kludge and commented

That really should be built into any sensible system but it’s nice that you can make it happen regardless of the clunky system
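
For illustration only (the originals were Perl and aren’t public), the core of the distillation step is a mapping from marks to the grades the enterprise system expects. The cut-offs below are assumptions; every institution has its own.

```javascript
// Map a final mark to a grade (cut-offs are illustrative only)
const gradeFor = (mark) => {
  if (mark >= 85) return 'HD';
  if (mark >= 75) return 'D';
  if (mark >= 65) return 'C';
  if (mark >= 50) return 'P';
  return 'F';
};

// Produce the rows an (assumed) CSV upload format expects
const results = [
  { student: 's1234567', mark: 78 },
  { student: 's7654321', mark: 49 },
];

results.forEach(({ student, mark }) =>
  console.log(`${student},${mark},${gradeFor(mark)}`)
);
```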

Check status of Blackboard course site

We noticed that there were some fairly common errors being made when preparing Blackboard course sites, but there was no automated method to proactively check for them. We wrote a script to automate this checking and reported the results to the L&T leaders responsible.

Check presence of Tweak

10+ years later and I was back on Blackboard (at another institution). The first task was to identify which courses were using a particular piece of functionality (course tweaks) that was being deprecated. The only available method was to manually check each course site. No report available.

Bugger doing that manually. I wrote a script (similar to the status check) to do the check for me.
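
Neither script is public, but both share the same shape, sketched below: fetch each course site while logged in and flag the presence (or absence) of a marker. The URL pattern, course ids and marker string are all assumptions.

```javascript
// Run from the browser console while logged in to the LMS
const courseIds = ['COURSE_1234', 'COURSE_5678'];

async function checkCourse(id) {
  const res = await fetch(
    `https://lms.example.edu/webapps/blackboard/content/listContent.jsp?course_id=${id}`,
    { credentials: 'include' } // reuse the browser's session cookies
  );
  const html = await res.text();
  // 'tweak_bb.js' stands in for whatever marks a course as using tweaks
  return { id, usesTweak: html.includes('tweak_bb.js') };
}

Promise.all(courseIds.map(checkCourse)).then((results) =>
  results.forEach(({ id, usesTweak }) =>
    console.log(`${id}: ${usesTweak ? 'uses tweaks' : 'clean'}`)
  )
);
```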

References

Alter, S. (2014). Theory of Workarounds. Communications of the Association for Information Systems, 34(1). https://doi.org/10.17705/1CAIS.03455

Biggs, J. (2001). The Reflective Institution: Assuring and Enhancing the Quality of Teaching and Learning. Higher Education, 41(3), 221–238.

Chernich, R., Jamieson, B., & Jones, D. (1995). RCOS: Yet another teaching operating system.

Ellis, R. A., & Goodyear, P. (2019). The Education Ecology of Universities: Integrating Learning, Strategy and the Academy. Routledge.

Hays, J., Miler, L., Othmer, B., & Saeed, M. (1989). Simulation of Process and Resource Management in a Multiprogramming Operating System. 125–128.

Jones, D. (1994). A workstation in every home!

Jones, D. (1995). Teaching systems administration II.

Jones, D. (1996a). Computing by distance education: Problems and solutions. ACM SIGCSE Bulletin, 28(SI), 139–146.

Jones, D. (1996b). Solving Some Problems of University Education: A Case Study. In R. Debreceny & A. Ellis (Eds.), Proceedings of AusWeb’96 (pp. 243–252). Southern Cross University Press.

Jones, D. (2002). Student Feedback, Anonymity, Observable Change and Course Barometers. In P. Barker & S. Rebelsky (Eds.), World Conference on Educational Multimedia, Hypermedia and Telecommunications 2002 (pp. 884–889). AACE.

Jones, D., Jones, H., Beer, C., & Lawson, C. (2017). Implications and questions for institutional learning analytics implementation arising from teacher DIY learning analytics. ALASI 2017: Australian Learning Analytics Summer Institute, Brisbane, Australia. http://tiny.cc/ktsdiy

Jones, D., & Luck, J. (2009). Blog Aggregation Management: Reducing the Aggravation of Managing Student Blogging. In G. Siemens & C. Fulford (Eds.), World Conference on Educational Multimedia, Hypermedia and Telecommunications 2009 (pp. 398–406). AACE. http://www.editlib.org/p/31530

Joosten, T., Cusatis, R., & Harness, L. (2019). A Cross-institutional Study of Instructional Characteristics and Student Outcomes: Are Quality Indicators of Online Courses Able to Predict Student Success? Online Learning, 23(4), Article 4. https://doi.org/10.24059/olj.v23i4.1432

Lanclos, D., & Phipps, L. (2019). Trust, Innovation and Risk: A contextual inquiry into teaching practices and the implications for the use of technology. Irish Journal of Technology Enhanced Learning, 4(1), 68–85.

Riel, M., & Polin, L. (2004). Online learning communities: Common ground and critical differences in designing technical environments. In S. A. Barab, R. Kling, & J. Gray (Eds.), Designing for Virtual Communities in the Service of Learning (pp. 16–50). Cambridge University Press.

Svensson, L., Andersson, R., Gadd, M., & Johnsson, A. (1999). Course-Barometer: Compensating for the loss of informal feedback in distance education. 1612–1613.

Frog in a boat in a bath

What are the impediments to quality teaching and what can be done to remove them?

This is something I wrote in a protected post years ago. I want to get this bit out in the open.

It captures an important distinction about Quality Assurance from John Biggs and asks a question (the title of this post) that I’ve yet to see an institution address effectively. Lots of big projects and events paying lip service, but not really anything substantially addressing the practical impediments to quality teaching.

Raising perhaps another question, who gets to judge what is an impediment and whether it is solved?

A focus on retrospective QA, rather than prospective QA

Biggs (2001) identifies two types of Quality Assurance (QA), these are

  • retrospective QA; and
    Defined as seeing “QA in terms of accountability, and conforming to externally imposed standards” (p. 221).
  • prospective QA.
    Defined as seeing “QA as maintaining and enhancing the quality of teaching and learning in the institution” (p. 221).

In my experience, the institution (like most) is almost entirely focused on retrospective QA and pays little or no attention to prospective QA. As Biggs (2001) describes

Retrospective QA looks back to what has already been done and makes a summative judgment against external standards. The agenda is managerial rather than academic, with accountability as a high priority; procedures are top-down, and bureaucratic. This approach, widely used in the emerging universities in Australia, New Zealand, and the United Kingdom (Liston 1999), is despite the rhetoric not functionally concerned with the quality of teaching and learning, but with quantifying some of the presumed indicators of good teaching and good management, and coming to some kind of cost-benefits decision. (p. 222)

Prospective QA, on the other-hand

is not concerned with quantifying aspects of the system, but with reviewing how well the whole institution works in achieving its mission, and how it may be improved. (p. 223)

Biggs (2001) then draws on the ideas of the reflective practitioner and the scholarship of teaching to develop three questions that define QA

  1. Quality model (QM) – What is the institution’s espoused theory of teaching?
    I have some problems with a whole institution having an espoused theory of teaching. However, to some extent the institution has already done this with the “personalised learning” pillar of its strategic plan.
  2. Quality enhancement (QE) – Does practice align with the espoused theory? How can the theory guide the design of teaching?
  3. Quality feasibility (QF) – What are the impediments to quality teaching and what can be done to remove them?

The concepts of digital renovation and concrete lounges are related to the apparent absence of quality feasibility.

References

Biggs, J. (2001). The Reflective Institution: Assuring and Enhancing the Quality of Teaching and Learning. Higher Education, 41(3), 221–238.

 

Residential street in Singapore

Learning to think in React

As outlined previously, I’m taking some steps toward learning and using the React JavaScript library to develop some web interfaces/applications. The following documents progress toward writing that first application, largely confined to developing an initial mock-up and then learning more about the “React way”.

Developing a mock-up

The React site provides an introduction to the main concepts of React. Number 12 of that list is titled “Thinking in React”. It starts with a mock-up. A mock-up also fits with an agile tendency to want to maximise feedback and iteration. I need one to share with my “clients”.

This is where React Proto could enter the picture. It’s an Electron-based app that supports prototyping React applications. Sadly, however, its reality doesn’t match my original assumption. You need to create a mock-up image first. Then React Proto can be used to outline the React components within that image, and then it produces code.

Which had me looking for a web prototyping tool and stumbling across Justinmind Prototyper. Which turned out to be a very slow download. Eventually it downloaded and, after some initial struggles grokking the tool, a rough prototype was developed. Showed it to someone. Got some initial feedback.

Next step use – npx create-react-app gradcheck – to create a skeleton React app. But a skeleton that doesn’t include any knowledge of the app I’m designing and the React components I’ll need to write for it.

This is where React Proto comes in. Using it I

  1. Import a screen shot of the HTML template created using the Justinmind prototype.
  2. Mark up that image using React Proto to identify each of the required components, including configuring a few options.
  3. When finished, export that data into my existing gradcheck application.

Ending up with a folder of jsx code ready for me to fill in details.

Sadly, it doesn’t quite update everything. Running – npm start – to view the app still gives the default create-react-app view. Do I know enough to make the change?

That’s a no. Time to take baby steps.

Main concepts of React

Next step is to work through the Main Concepts “guide” for React, including updating some Javascript knowledge.

And then we’re into JSX. The apparently strange, almost PHP-like approach to mixing HTML and code. However, unlike PHP, React does provide some separation of concerns. Rather than functional separation – rendering versus UI logic – React separates concerns into separate components. For me, this echoes the argument against functional decomposition proposed by “The Method” from Righting Software. Need to read and think more about this.

Nice intro to JSX. Definition of React elements – the output of React.createElement calls and JSX – which are objects that describe what to see on the page. Used by React to construct the DOM.
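
For example (my own minimal illustration, following the React docs):

```jsx
// The JSX on the first line...
const element = <h1 className="greeting">Hello, world</h1>;

// ...is compiled (e.g. by Babel) into a React.createElement call,
// producing an object describing what should appear on the page
const sameElement = React.createElement(
  'h1',
  { className: 'greeting' },
  'Hello, world'
);
```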

Rendering elements

Onto Rendering Elements and starting with the idea of elements being used to make up components.

React applications are embedded into HTML pages via a particular (web) DOM node. There can be multiple. React handles the “rendering” of React elements into those DOM nodes.

React elements – their children and attributes – can’t be changed. To update the UI you have to create a new element and render it. But React DOM is smart enough to only update what’s changed.

The focus is on what the UI looks like at any given time, rather than how to change it. Apparently doing this eliminates a class of bugs. Will need to think on that.

Typically the render function is called only once (more on this to come)
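
The docs make this concrete with a ticking clock. A minimal version of the idea: a new (immutable) element is created and rendered every second, and React DOM only updates the text that actually changed.

```jsx
// Assumes the page contains <div id="root"></div> and that React,
// ReactDOM and a JSX build step are in place
function tick() {
  const element = <div>It is {new Date().toLocaleTimeString()}.</div>;
  ReactDOM.render(element, document.getElementById('root'));
}

setInterval(tick, 1000);
```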

Components and props

And now onto components, which are apparently like JavaScript functions. React components provide the reusable bits and can be designed in isolation from each other.

Thinking of them as functions, they take input. Called props. And return React elements. Hence where JSX enters the picture.

They can be defined in two ways: as Function or Class components. Function components are functions. Class components are defined using JavaScript classes. Apparently each has strengths and weaknesses. But I’ve seen some recommend functions. And create-react-app appears to be going there by default.

The code produced by React Proto is using a different structure. Hence my problems above. More to learn.

React components must start with a capital letter. Otherwise they are seen as DOM tags.

Once the components are defined, you get the benefit of being able to reuse them using HTML-like tags.

e.g. the following from my playground during this process. The function Welcome defines a new (very simple) React component. It takes props and uses JSX to return a React element.

This component is then used twice in the create-react-app App component. Which is described as typical React practice. See the two tags <Welcome name=”…
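
Reconstructing the playground code from memory (names illustrative), it looked something like:

```jsx
// Welcome defines a new (very simple) function component. It accepts a
// single props object and uses JSX to return a React element.
function Welcome(props) {
  return <h1>Hello, {props.name}</h1>;
}

// ...which is then used twice inside the create-react-app App component
function App() {
  return (
    <div>
      <Welcome name="Sara" />
      <Welcome name="David" />
    </div>
  );
}
```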

The power comes when the components are much more capable than Welcome.

The trick then is how to decompose a web UI into components. The example on the components page shows that the decomposition is much more fine-grained than I expected.

Ok, props can’t be changed. Functions must be pure in this way. A React requirement.

It is via a new concept State that React components change their output/appearance in response to external actions.

State and lifecycle

How then do you get a React clock component? A component that updates its display as the time changes? Enter State and Lifecycle.

Only a single instance of a class is used per DOM.

The first time a component is rendered into a DOM, it’s called mounting and to tidy up resources there is unmounting.

Which correspond to lifecycle methods such as componentDidMount.

The state data member of a component holds state. It can/should only be changed using the setState function. This is how React knows that changes have occurred and thus whether it should look at running render again.

setState may be asynchronous. That places limits on how props and state can be combined, requiring a second form of setState that is passed a function.
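
In code, the distinction looks something like this (my paraphrase of the docs’ counter example):

```jsx
// May fail: setState can be batched/asynchronous, so this.state and
// this.props might be stale when this line executes
this.setState({ counter: this.state.counter + this.props.increment });

// The second form accepts a function, which receives the previous
// state and the props at the time the update is applied
this.setState((state, props) => ({
  counter: state.counter + props.increment,
}));
```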

State is encapsulated. No other component can know whether or what state a component has.

Handling events

What about handling user interface events?

Some similarities with JavaScript. But differences, including:

  • Using camelCase for names
  • Passing functions
  • Having to explicitly call preventDefault (see the sketch below)
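
A small sketch of all three differences at once (my own example):

```jsx
function ActionLink() {
  function handleClick(e) {
    // returning false won't stop the default behaviour in React;
    // preventDefault must be called explicitly
    e.preventDefault();
    console.log('The link was clicked.');
  }

  // camelCase event name (onClick), and a function is passed
  // rather than a string of JavaScript
  return <a href="#" onClick={handleClick}>Click me</a>;
}
```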

Will need to revisit this in anger when working on a project.

Conditional rendering

It appears that conditional rendering is the idea of encapsulating different states in different React components. Then using conditionals to choose which element to render.

If a component returns null, then nothing will be rendered.
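
Roughly (my own sketch):

```jsx
// A plain conditional chooses what to render;
// returning null renders nothing at all
function Warning(props) {
  if (!props.show) {
    return null;
  }
  return <div className="warning">Warning!</div>;
}
```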

Lists and keys

And onto data structures – lists and keys. Particularly useful for generating lists of repeated elements. E.g. lists, table rows etc. JSX notation allows some powerful combinations.

But useful if these have keys (for React).
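
e.g. a minimal version of the docs’ example:

```jsx
// map() generates the repeated <li> elements; the key attribute helps
// React work out which items have changed, been added or removed
function NumberList(props) {
  const listItems = props.numbers.map((number) =>
    <li key={number.toString()}>{number}</li>
  );
  return <ul>{listItems}</ul>;
}

// used as e.g. <NumberList numbers={[1, 2, 3, 4, 5]} />
```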

Forms

Apparently form elements behave differently in React. Unlike other HTML elements, form elements maintain some internal state of their own.

Preventing the normal submission of a page by the form requires the use of controlled components.

Which means that the React state of a form element becomes the authoritative source. The component has to control the state of the form element.

Which means that an onSubmit event handler is written as part of the component.
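
Pulling those last few paragraphs together, an abbreviated version of the docs’ canonical controlled form:

```jsx
// The React state is the single source of truth for the input's value,
// and the onSubmit handler lives on the component
class NameForm extends React.Component {
  constructor(props) {
    super(props);
    this.state = { value: '' };
    this.handleChange = this.handleChange.bind(this);
    this.handleSubmit = this.handleSubmit.bind(this);
  }

  handleChange(event) {
    this.setState({ value: event.target.value });
  }

  handleSubmit(event) {
    event.preventDefault(); // stop the normal page-reloading submission
    console.log('A name was submitted: ' + this.state.value);
  }

  render() {
    return (
      <form onSubmit={this.handleSubmit}>
        <input type="text" value={this.state.value} onChange={this.handleChange} />
        <input type="submit" value="Submit" />
      </form>
    );
  }
}
```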

There are also differences for textarea and select to bring them into line with other form elements.

There is a concept of uncontrolled components.

Lifting state up

Sometimes state needs to be shared by several components. The recommended solution is to lift state up to a common ancestor. This entails

  1. Moving the bit of state to be shared into the common ancestor.
  2. That state is passed as props to the descendants.
  3. Pass from the ancestor to the descendants an “onChange” method.
  4. When the descendants need to change the value, they call that method.

The idea being that there is a single source of truth for data. One of the components.
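
A compressed sketch of the pattern (a much simplified take on the docs’ temperature example):

```jsx
// The child renders what it is told and reports changes upward via a prop
function TemperatureInput(props) {
  return (
    <input
      value={props.temperature}
      onChange={(e) => props.onTemperatureChange(e.target.value)}
    />
  );
}

// The common ancestor owns the state: the single source of truth
class Calculator extends React.Component {
  constructor(props) {
    super(props);
    this.state = { temperature: '' };
    this.handleChange = this.handleChange.bind(this);
  }

  handleChange(temperature) {
    this.setState({ temperature });
  }

  render() {
    return (
      <div>
        <TemperatureInput
          temperature={this.state.temperature}
          onTemperatureChange={this.handleChange}
        />
        <p>Boiling? {parseFloat(this.state.temperature) >= 100 ? 'Yes' : 'No'}</p>
      </div>
    );
  }
}
```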

Composition versus inheritance

Ahh, the debate of composition versus inheritance. My vague insight into the OO design community was that they’d come down on the side of composition being the generally better approach. Wonder if that’s correct?

Yep, at least for React, which “has a powerful composition model, and we recommend using composition instead of inheritance”.

There are components (e.g. dialog) that contain others. The advice is to pass children as props, or to use your own convention when multiple “holes” are needed.

Specialisation (Dialog -> WelcomeDialog) also done with composition and props.
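
Both containment and specialisation in one small sketch (following the docs’ Dialog example):

```jsx
// Containment: Dialog doesn't know its children ahead of time,
// so they arrive via the special children prop
function Dialog(props) {
  return (
    <div className="dialog">
      <h1>{props.title}</h1>
      {props.children}
    </div>
  );
}

// Specialisation: WelcomeDialog is a "special case" of Dialog,
// done with composition and props rather than inheritance
function WelcomeDialog() {
  return (
    <Dialog title="Welcome">
      <p>Thank you for visiting our spacecraft!</p>
    </Dialog>
  );
}
```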

Experience from Facebook is that they haven’t seen a need for inheritance.

Non-UI functionality shared between components – not surprisingly – is suggested to go into its own module.

What’s next?

The next page in this sequence is Thinking in React, where I started. I’ll save that for another post where I actually start work on the first version of the Grad Checker prototype.

Japanese store front - dog and boy

Playing with React.js as a technology for CASA

As the title suggests the aim here is to discover if the React.js library for building web (and other) user interfaces might help address some limitations and add some needed features to the Contextually-Appropriate Scaffolding Assemblage (CASA) idea (Jones, 2019). I’m going to be spending a fair bit of time on this, hence the importance of at least trying to be explicit about thinking it through. Not to mention the value of explicitly trying to make sense of all this and recording the process for later referral.

Any and all comments welcome. Especially technical corrections and disagreements. There’s much more to all of this than I know.

With that caveat, the answer I’ve drawn is that React.js can help improve CASA, and that’s work I’m starting on now.

The following starts with a re-statement of the perceived requirement and an outline of the limitations of current practice. React is then briefly introduced and compared against other possibilities. Finally, there is a dive into the React ecosystem and what it might provide the CASA idea.

What is the requirement?

Last week this tweet from @plredmond had me running down a web-page rabbit hole and stumbling across this list of “Top Research Questions” for the distance education community. A list that was apparently “derived and prioritized by the experts in the field of the 2015 DETA Summit at the ELI Annual Meeting”.

Question #7 – the last question in bold and thus significant? – caught my eye

What are the key components that promote a sustainable and an effective teaching and learning ecosystem?

Not a bad description of my broader interest in digital/online learning within higher education institutions. In last year’s ASCILITE paper I argued that current approaches to enabling effective design for learning (I like Goodyear’s distinction between “learning design” and “design for learning”) in higher education are neither sustainable nor effective. The largest reason is the mismatch between the nature of encouraging the on-going development of effective learning and teaching and what can be provided by the current focus on using standard strategic/corporate/leadership techniques (Jones & Clark, 2014).

For example, successfully implementing Microsoft Teams (or Zoom etc) and then offering training on its use will enable the successful completion of an operational plan. It will also enable some examples of good learning and teaching. But it won’t be sufficient for creating (nor a sufficient contribution toward) “a sustainable and an effective teaching and learning ecosystem”.

CASA and Forward-oriented design

My most recent attempt (Jones, 2019) to explore a different answer is the idea of Contextually-Appropriate Scaffolding Assemblages (CASA). CASA are intended to embed necessary design knowledge into a digital technology that is (more easily) embedded into a contextually appropriate activity system. An activity system that makes it significantly easier for teachers to engage in forward-oriented design for learning (Dimitriadis & Goodyear, 2013; Goodyear & Dimitriadis, 2013; Sun, 2017) as represented in the following diagram. As argued by others (Dimitriadis & Goodyear, 2013; Goodyear & Dimitriadis, 2013; Sun, 2017), most design for learning stops at design for configuration. My argument is that this flaw infects most attempts to build a teaching and learning ecosystem (e.g. introduce Microsoft Teams).

It’s just not sufficient to implement the learning activity. Instead, active thought has to be given to the type of work that the teacher will need to do: while learning is occurring (including thinking about what the learners will do); while reflecting on how well (or not) that learning activity went; and when re-thinking how that learning activity will work next time. Suggesting that an effective teaching and learning ecosystem will provide active support for forward-oriented design. My argument is that such support also needs to encourage and enable individuals to integrate it into their context-specific assemblage of practices and skills. To be successful, the support also needs to be able to respond to the diversity and volatility of those contexts. Otherwise it won’t be used for long, or at all.

Oh, this also must be achieved sustainably.

Representation of a forward-oriented view of design for learning (Goodyear & Dimitriadis, 2013): Design for: configuration; orchestration; reflection; & re-design.

Problems with early CASA

As described in the ASCILITE paper (Jones, 2019), we’ve developed and used two example CASA. They have been used in 100s of courses. They fill a need, but they could be better. Some of the current issues:

  1. The cost (difficulty)/benefit ratio for one of the CASA is still too high.
    One of the CASA remains too difficult/different for some (but not all) people to understand and see its benefits. Hence, they can’t see how it fits with their current assemblages. Instead, some are distracted by the “pretty interface” offered by tools such as Microsoft Sway. A tool that is easy to use, but which at best offers very limited design for configuration and nothing else.
  2. Insufficient support for orchestration and reflection.
    In particular, current CASA don’t provide any insight into how and what learners are doing. Meaning next to no support for orchestration and reflection.
  3. Insufficient support for customisation and generativity.
    Both CASA currently offer support for a small set of fairly generic activity systems. The reusability paradox suggests that there is more value in enabling more context-specific adaptation.
  4. The technology foundation for these CASA is primitive.
    The current CASA were my training projects for Javascript and Blackboard. It shows in the quality of the tech. The limitations are becoming an issue and are one of the major constraints on addressing the above problems.

Lastly, I’ve a project in which I need to develop a graduation checker. A tool that allows students to check when and what courses they need to complete in order to graduate. A tool that requires a more complex user interface.

How to address these problems? What are the options? Well, one is…(and yes I do recognise the apparent techno-solutionism here and have failed to explain why I continue down this route).

React.js

React is a “JavaScript library for building user interfaces”. Its origins are with Facebook, but I’m trying not to hold that against it. On the plus side, it is open source and widely used, meaning that there is a lot of tooling available to support React development. There are also jobs available in React development. A plus that has a growing attraction, given the likely COVID-inspired collapse in Australian higher education and the challenges that poses for people on contracts.

The following seeks to answer these sub-questions about React

  1. What about other alternatives? Bootstrap, Angular…
  2. Can it be integrated into the existing ecosystem?
  3. What support for current CASA might it provide?

Sustainability and reusing design knowledge

Question #3 is important from the sustainability perspective. The CASA idea positions the ability to reuse design knowledge as a key enabler for sustainability. It argues that current approaches to maintaining a teaching and learning ecosystem don’t enable reuse of design knowledge. Too many people are wasting time trying to solve the same problems.

One of the key limitations of the current CASA implementation is that my weak brain had to solve too many problems. One of the strengths of current CASA is that they enable the reuse of existing design knowledge. The ability to draw on and reuse a vast array of design knowledge is important for CASA.

What about the alternatives?

What about Bootstrap?

A lot of people like and use Bootstrap. But this comparison of Bootstrap and Material-UI (a React-based framework discussed below) has me thinking that React offers a better option than Bootstrap. Mainly because it argues that one of Bootstrap’s strengths is consistency, with “average opportunities for customisation”. For CASA, customisation is important.

React versus Bootstrap is really an apples and oranges comparison. Bootstrap is a more complete framework for CSS/HTML, and a very successful one, as illustrated in other perspectives and top 15 and top 10 lists of CSS frameworks. React is a library, a smaller scale object. Being a library means React is much more able to play well with others, whereas all-encompassing frameworks don’t. Also, comments suggest that React is easier to try out small and grow.

Perhaps the biggest plus for React is that it focuses on building user interfaces, providing the View in Model-View-Controller development. The focus with the CASA concept is not to develop full web applications. It is to act as the glue between current systems and practices. The place where additional contextual knowledge is added.

What about Angular and Vue?

What about comparing apples with apples? Apparently it makes more sense to compare Angular, Vue and React.

Angular is from Google. Vue is smaller in origin: an ex-Google employee and friends.

For a code-based comparison, here’s a description of writing the same app in both Vue and React. Which initially suggests not a lot of difference, but has me thinking Vue seems to fit my conceptions a little better. But eventually not much to choose between them.

There’s also an article that creates the same simple app using Angular. Angular uses TypeScript, which makes it more object-oriented and ends up with code in a format that’s a bit more familiar to me. However, it’s described as a more heavyweight framework. Bumping up against the issue with Bootstrap above.

Just discovered a site that implements a todo app in multiple different MV* frameworks. If I actually had the time to do an in-depth comparison, this would be very useful. The design of the React version of the app is likely to be a useful inspiration.

Considering just Angular, Vue and React, it’s close. However, the sunk cost of the pre-conceived idea of using React will win. Especially given this quote from a comparison of Angular and React

React.js is comprised of tools that help you build components to be dropped into a page

Can it be integrated into the pages that CASA will need to work with?

What about Web components?

Thanks largely to the work of Bryan Ollendyke (@btopro) I’ve become aware of Web components and long ago put them on my list of tech to grok. The little I knew suggested they’d be a good tool for the CASA idea. Without the slightly grubby feel that a technology that originated at Facebook might bring.

According to the React folk, React and Web components are apples and oranges: they are, and can be, complementary. Others seem to agree that it’s not React versus Web components, arguing that something like React might be needed for complex web applications but not for something simpler. React adds state to the UI, plus some other advantages. But in theory the two remain interoperable.

Suggesting it’s not currently a question of React or Web components.

Can it be integrated into the existing ecosystem?

One of React’s hyped benefits is that it plays well with others. It doesn’t expect to be the only framework/tool you are using. It can be embedded in web pages produced by other systems, as illustrated by the following two examples working in this WordPress post. They also work in Blackboard Learn, which is currently where the CASA I work on have to live. This also suggests that they will work in whatever other systems the organisation adopts in future. At least as long as those systems “play well with others”, i.e. not Blackboard Ultra.
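
The gist of such an embed, following the React docs’ “add React to a website” approach. A sketch only: it assumes the host page already loads react and react-dom (e.g. from a CDN) and that its HTML contains a placeholder such as <div id="casa-demo"></div> (the id is illustrative).

```javascript
'use strict';

// No JSX, so no build step is needed inside the host system
const e = React.createElement;

function LikeButton() {
  const [liked, setLiked] = React.useState(false);
  if (liked) {
    return 'You liked this.';
  }
  return e('button', { onClick: () => setLiked(true) }, 'Like');
}

// Render into just the placeholder node, leaving the rest of the
// host system's page alone
ReactDOM.render(e(LikeButton), document.getElementById('casa-demo'));
```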



Or something slightly more exciting

That’s a yes, but with a bit more to do

What might the React ecosystem provide CASA?

Something like React is a pre-requisite for the next CASA – a graduation checker – which will require quite a complex user interface. A React strong point. Combine that with the large React ecosystem and its integration into Web (and mobile) UI development and it will definitely help the grad checker. The question is what might the React ecosystem offer the two existing CASA.

Card Interface

At its core the Card Interface CASA translates boring, ugly lists of Blackboard Learn content items into a more contemporary interface. It also embeds contextual knowledge, such as on what date does Week 1, Trimester 1, 2019 start? It converts the interface on the left to the interface on the right.


The card-based approach to user interface design is a widely used contemporary practice. You’ve probably seen it on numerous websites. Hence it is a common need for web interfaces. A strength of React is that it encourages the design of reusable components. In fact, there’s a collection of tooling to help with the design and management of reusable components. This suggests that the React community should provide significant existing explicit support (even components) for a card interface. A Google search for “react card interface” supports that suggestion.

React Frameworks

Since React is a fairly low level Javascript UI library, there’s a lot of space for frameworks that provide pre-built components. Here’s a list of 20+ such React libraries/frameworks.

There’s Material-UI, based on a design language from another Internet behemoth (Google). Material-UI is described as a React UI framework. It’s open source. It supports a long list of components, including Cards, and the Card component appears quite flexible. Beyond that, there are templates and example applications, and a collection of premium themes and templates. Some people seem to like it. There are those that have problems with Google’s work on the design of the Material guidelines that Material-UI is based on.

React Toolbox is another implementation of Google’s Material design into React. It has support for CSS Modules, which may be useful for CASA. Need to explore if Material-UI also supports this, and if this will work in the Blackboard context.

If you don’t want Google’s Material, then there is Ant Design and the React UI library based on it. It has a Card component and many of the same things as Material-UI. Or perhaps IBM’s Carbon. Not as complete, but it supports React.

Or perhaps React Bootstrap. i.e. Bootstrap rebuilt using React, but still maintaining compatibility with the Bootstrap UI ecosystem (e.g. themes).

Content Interface

In a course context, the Card Interface is used to give an overview of learning modules. The Content Interface embodies design knowledge in Javascript which is then used to transform an HTML description of a learning module into a more interactive web experience. As the (bad) title suggests, this CASA initially focused on presenting content. It currently uses the jQuery accordion to produce something like the following. The long term aim is to provide more support for interactive learning and teaching activities.


There appear to be three broad ways that React can help

  1. Provide alternatives to the jQuery accordion interface for presenting a learning module.
  2. Provide a range of React components that can be used to enhance the interactive activities that can be used within a learning module.
  3. Good development tooling and practices to enable more rapid response to divergent requirements.

Way #1 could look something like Spectacle – a presentation library – which appears to use a Carousel component common to most React frameworks. There are some possibilities with infinite scroll and parallax in the awesome react components list. In the end, this really requires more CSS/HTML design before considering any React components.

Individual components could be used with way #2. Collections of React components could be used to implement specific learning activities.

In terms of tooling and development practice there are tools like Storybook and Bit.dev which are designed to manage collections of components and reuse them. Though they’re not quite the same thing: Storybook focuses on visual development of standalone components, whilst Bit supports the full life cycle of developed components. And here’s a taste of more React dev tools.

Not to mention a 2020 list of dev React tools. From there some tools I will use: Create React App; Jest; React Developer Tools.

Some to explore further:

  • Storybook;
    As above.
  • React Styleguidist;
    Immediate reaction is that it echoes Storybook/Bit.dev. I’m not working in a team, hence the need isn’t as strong. But good practice would suggest adopting something like this.
  • React Proto;
    Now this looks interesting, a prototyping tool. Mocking up the app is a recommended first step in React development and something I’ve been pondering. The Proto site gives an overview of what it offers that resonates with my needs.
  • Evergreen;
    Another component collection. Gives the impression of being more complete than Belle, but not as complete as some of the options above.
  • Belle;
    Another collection of components. Nothing leaping out to put it over other options above. Doesn’t appear quite as complete as the other options.
  • Gatsby.
    A static site generator that leverages React. Initially this doesn’t strike me as a good fit. But reading some more suggests that it might be a useful replacement for Create React App. The Grad Checker is going to be implemented as a standalone web page that needs to consume and manipulate data. Gatsby’s support for GraphQL might be useful. More exploration required.

    I do wonder if Gatsby could be useful for the Content Interface? A bit of a stretch, but the Showcase has some nice examples, including Digital Psychology.

Conclusions and what’s next?

Seems I’ve convinced myself to use React. Next step is to start developing the Graduation Checker. First a prototype (React Proto here I come?). Then some decisions about implementation. Gatsby? Create React App? Component libraries? Tooling etc.

Sometime after that I’ll need to experiment more with the integration of React into Blackboard. From there whether or not to replace the current Card and Content CASA.

Though first I probably should see if I can add the Content Interface to this blog post to enable the embed stuff above.

References

Dimitriadis, Y., & Goodyear, P. (2013). Forward-oriented design for learning: Illustrating the approach. Research in Learning Technology, 21, 1–13.

Goodyear, P., & Dimitriadis, Y. (2013). In medias res: Reframing design for learning. Research in Learning Technology, 21, 1–13. https://doi.org/10.3402/rlt.v21i0.19909

Jones, D. (2019). Exploring knowledge reuse in design for digital learning: Tweaks, H5P, CASA and constructive templates. In Y. W. Chew, K. M. Chan, & A. Alphonso (Eds.), Personalised Learning. Diverse Goals. One Heart. ASCILITE 2019 (pp. 139–148).

Jones, D., & Clark, D. (2014). Breaking BAD to bridge the reality/rhetoric chasm. In B. Hegarty, J. McDonald, & S. Loke (Eds.), Rhetoric and Reality: Critical perspectives on educational technology. Proceedings ascilite Dunedin 2014 (pp. 262–272). http://ascilite2014.otago.ac.nz/files/fullpapers/221-Jones.pdf

Sun, S. Y. H. (2017). Design for CALL – possible synergies between CALL and design for learning. Computer Assisted Language Learning, 30(6), 575–599. https://doi.org/10.1080/09588221.2017.1329216

Exploring auto-coding with NVivo

The challenge here is to learn more about using NVivo in order to design processes for a research project exploring the prevalence and nature of workarounds in higher education learning and teaching.

Can Word documents be imported and pre marked up?

The current plan is to have people complete a Word template. The template consists of numerous questions related to Alter’s Theory of Workarounds (Alter, 2014). The question is whether there’s a good way to structure this document to make it easier to code responses according to Alter’s (2014) theory? A simple first step to further analysis.

The answer is yes. The following documents the process.

What’s the NVivo model?

I’m a great believer in the idea that most difficulties with using software arise from a model mismatch between the person and the software. Hence my starting point with a new bit of software is to try to build a representation of the model underpinning the software.

The fact that NVivo’s makers have an Understand the key concepts page has me feeling good about NVivo. As does the observation that the Using NVivo page starts with a focus on different types of qualitative research – the domain with which most potential NVivo users will have some familiarity. Start with where the user is. Good.

Though a diagram on the “using” page does suggest that importing data into NVivo is separate from coding.

Quick summary of some of the concepts

Concept | NVivo Purpose | Local project
Files | Materials to analyse (can organise in folders) | Individual workaround descriptions
Memos | Place to store ideas and thoughts arising from analysis | Descriptions of analysis process/progress
Coding | Process of analysing content and allocating to a node | Analysis of workarounds
Nodes | Container for content coded as belonging to a common theme. Can be organised in a hierarchy. |

There is some support for auto coding structured content that can rely on consistent use of paragraph styles in documents. Even supports the idea of nested nodes through use of nested headings (e.g. H1, H2 etc).

First test

As it happens, many years of frustration with Microsoft Word have convinced me of the value of using paragraph styles correctly. Hence the template is already set up. I should be able to import and auto-code my first document.

Wonder if I can do it without following the recipe instructions.

There was an option to

Create a new case for each imported file?

What’s a case in NVivo speak? Ahh, nodes can be theme or case nodes. Each workaround being a separate case node seems like a useful idea. But there’s also the idea of a workaround belonging to a particular individual. It appears multiple case nodes are possible.

Importing is straightforward. I then need to use the autocode wizard and make some decisions about where to put new nodes. For now, they go under the case code for the workaround.

And it all appears to work.

Is training the barrier to quality online learning in higher ed?

TL;DR

Recently there have been various suggestions that the biggest barrier to quality online learning in higher education is lack of knowledge held by teaching staff (Johnson, 2019; Mathes, 2019; Roberts, 2018). More or better training, faculty development and requirements for formal teaching qualifications are proposed as the solution.

The following argues that this is just a symptom of the real barrier: that universities actually don’t know how to implement quality online learning. Specific evidence drawn from one of the clarion calls for more training/formal qualifications is offered, along with a pointer to possible solutions to the actual barrier.

Note: I started writing this in late November. A week before ASCILITE’2019. Just finishing it now.

Why?

The following is sparked by personal experience (bias?) and a post on OLDaily from Stephen Downes titled More needs to be done to support teaching online in Canada. Downes’ post reports on results from a research report (Johnson, 2019) surveying Canadian, publicly funded, post-secondary institutions, and reactions from Tony Bates and Clint Lalonde. Downes wonders how, after 25+ years of online learning, 79% of institutions surveyed can report that a major barrier to online learning is inadequate training for faculty. Lalonde finds “this number staggering, and a sobering wake up call”. And it’s not just Canada. This topic echoes the findings from a 2019 ICDE report (Mathes, 2019) that was also featured on OLDaily. That report – Global quality in online, open, flexible and technology enhanced education: An analysis of strengths, weaknesses, opportunities and threats – draws on interviews with senior leaders from ICDE member institutions across the world, from which three themes emerged. Theme #2 was professional development

Appropriate training is not always available to build the expertise and skills of faculty and staff responsible for developing and/or teaching courses in these modalities. This can result in a poor teaching experience for faculty and a poor learning environment for students. (Mathes, 2019, p. 10)

Lalonde isn’t certain why there is this apparent “massive skills gap among instructors to teach online”. Downes suggests that higher education’s problem isn’t a training problem, but a culture problem. He wonders “about the apparent inability or unwillingness of today’s professors to teach themselves how to use a computer to teach”. Bates identifies the lack of willingness amongst institutions to “make training in teaching mandatory” as a major contributor and suggests (in another post) Should all lecturers have to have a teaching certificate? Why the answer is a resounding ‘yes’. Lalonde wonders if institutional focus on training for online has become less of a priority as a feeling of “been there done that” is combined with more attention being paid to broader issues such as accessibility, inclusiveness etc. He then picks up on Bates’ formal certification solution, but expands it to include not just online learning, but any learning modality.

Is training really the barrier?

The image below is from an ASCILITE’2019 presentation I’m working on for next week (slides, paper and source code available from here). The image is from a Blackboard course site. It is not a site for a real course. However, each design decision present in this “course site” is inspired by a practice from an actual course site that was developed by someone with a formal teaching qualification. In some cases, they had more than one formal teaching qualification. Since it’s for a presentation, this example focused on limitations that were visual. A similar example could be generated focusing on design for learning.

The point is that these practices were taken from courses developed by people with formal teaching qualifications. Exactly the solution being suggested. Given we are now 20+ years into the digital/online learning revolution, this seems to suggest that more training and formal qualifications in (online) learning are not likely to help improve the quality of online learning. Suggesting that the barrier is not (just) a lack of training.

What might be the barrier?

To answer this question, the following digs a bit deeper into the report from the Canadian Digital Learning Research Association. It reveals a few more possible barriers, but each really appears to be a symptom – just like the need for more faculty training – of the real barrier to quality online learning and teaching.

An absence of strategic planning?

The following graph won’t be found in the report, but it is drawn from data presented in the report.

It shows that 94% of the institutions responding to the survey identified online learning as being of strategic importance to the institution. A variety of reasons are given. Growing continuing/professional education, increasing student access, and attracting students from outside traditional catchment areas are reported as the most important.

However, the graph also shows that only 12% of the institutions responding to the survey had a fully implemented plan for online learning! 59% reported being in the process of developing a plan, with 26% reporting that they don’t have one, but really should develop one.

Might this not create some issues? How is this the case after 25+ years of online learning?

The free text from one institution suggests the potential source of problems that this absence might create

By creating a strategy, we are hoping to provide a frame for blended learning at our institute that will help guide processes, policies, and systems that align. At a course level, we are creating more supports for instructors to create their own digital learning objects for curriculum.

In an organisation espousing a strategic management approach, the absence of strategic plans creates problems all the way down.

Workload

It was interesting to note that training was NOT “the most significant barrier to the adoption of online learning” (p. 40) for the institutions responding to the survey. The following graph shows the top two responses from responding institutions on the barriers to online education (from both the 2018 and 2019 surveys).

Much of the discussion mentions the need for more support for teaching staff, but most of it appears to have focused on training, rather than pondering whether the absence of appropriate supports might be exacerbating the training problem. i.e. more training is being called for because the systems and supports currently available to teaching staff are insufficient and inappropriate for the task being asked of them.

The focus on the new shiny thing

Based on the responses so far, it appears that many institutions recognise the strategic importance of online learning, know they don’t have a plan, and are aware that there are significant barriers in terms of workload and preparation for our teaching staff. So, obviously institutions are focused on taking action to address these problems, right?

From the report

institutions are experimenting with different delivery methods to better meet the needs of students. A variety of strategies are being employed: new technologies, OER, blended/hybrid learning, and alternative credentials. (p. 54)

Oh dear.

The need for careful implementation

To break the iron triangle of access, cost and quality, Ryan et al (2019) propose the following “practical and pedagogical techniques”

  • High-quality large group teaching and learning;
  • Alternative curriculum structures;
  • Automation of assessment and feedback;
  • Personalising feedback at scale;
  • Peer-based learning; and,
  • Offloading administrative and technical support.

Arguably, each of these is a shiny new thing. The last suggestion includes one of the recent “poster boy” shiny new things – teacher bots. But the authors (arguably) recognise the problem of shiny new things, acknowledging that they “are surely part of the solution, they are by no means the entire solution”. They understand that it is important that shiny new things are

…implemented carefully and with a clear purpose…(and)…used to support good teachers, teaching practice and learning and assessment designs

As established above, it appears that many of the responders to the survey haven’t gotten there just yet. The question is whether they ever will. After all, we are 25+ years into this online learning fad.

The biggest barrier to quality online learning is actually…

Western universities don’t know how to do online learning

The real barrier to quality online learning and teaching in higher education is that they don’t know how to do this. Universities are good at the shiny new thing, but not so much at figuring out how the shiny new thing can be “implemented carefully and with a clear purpose…(and)…used to support good teachers, teaching practice and learning and assessment designs”. In the last pages of their book Ellis and Goodyear (2019) describe it this way

Over recent decades, Western universities have been very good at picking up and reproducing modish language about their purposes and methods – engaged enquiry, T-shaped graduates, being and becoming, and so on. They have been less good at ‘tooling up’ to deal with the complexity of analysing how their educational ecosystems actually function and of systematically redesigning for sustainable improvement. (p. 242)

For me, the lack of training for teaching staff is just a symptom of this broader problem. Universities are full of good people with a lot of knowledge about aspects (e.g. technical, pedagogical, content etc) of the challenge of online learning. But they are (and have been for some time) operating within organisations underpinned by a mindset that actively prevents those people from working effectively together to achieve “careful implementation”.

The fact that we are 25+ years into this online learning thing and it’s possible to make observations like the above seems to provide some support for this perspective.

What’s the solution?

That’s the (significantly more than) $64K question. More training, better (or even some) strategic plans, more project managers, and more shiny new things won’t provide a solution.

Ellis and Goodyear (2019) offer a research-based book providing both diagnosis and remedy. The (second) best top-level answer to the question I’ve seen so far.

The work I describe in this year’s ASCILITE paper/presentation describes one meso-level practitioner’s attempt at a possible solution, derived by combining some ideas from Ellis and Goodyear (2019) with some other ideas.

Though whether these theoretical answers are good answers awaits further work.

References

Ellis, R. A., & Goodyear, P. (2019). The Education Ecology of Universities: Integrating Learning, Strategy and the Academy. Routledge.

Johnson, N. (2019). National Survey of Online and Digital Learning 2019 National Report (p. 67). Retrieved from Canadian Digital Learning Research Association website: https://onlinelearningsurveycanada.ca/publications-2019/

Mathes, J. (2019). Global quality in online, open, flexible and technology enhanced education: An analysis of strengths, weaknesses, opportunities and threats. Retrieved from International Council for Open and Distance Education website: https://www.icde.org/knowledge-hub/report-global-quality-in-online-education

Roberts, J. (2018). Future and changing roles of staff in distance education: A study to identify training and professional development needs. Distance Education, 39(1), 37–53. https://doi.org/10.1080/01587919.2017.1419818

Ryan, T., French, S., & Kennedy, G. (2019). Beyond the Iron Triangle: Improving the quality of teaching and learning at scale. Studies in Higher Education, 0(0), 1–12. https://doi.org/10.1080/03075079.2019.1679763

 

Theory of workarounds

Introduction

The following is a summary of the paper Theory of Workarounds.

Alter, S. (2014). Theory of Workarounds. Communications of the Association for Information Systems, 34(1). https://doi.org/10.17705/1CAIS.03455

The paper provides “an integrated theory of workarounds that describes how and why” they are created. It is a process theory “driven by the interaction of key factors that determine whether possible workarounds are considered and how they are executed” and is seen as useful for

  • classifying workarounds and analysing how they occur;
  • understanding compliance and noncompliance to management mandates;
  • figuring out how to consider possible workarounds as part of systems development;
  • studying how workarounds may lead to larger planned changes.

My interest – digital learning and teaching

I’m interested in workarounds as a way to better understand what’s happening around higher education’s use of digital technology to support learning and teaching, and to identify ways to improve it.

Definition and theory of workarounds

Alter (2014) offers the following definition of workarounds

A workaround is a goal-driven adaptation, improvisation, or other change to one or more aspects of an existing work system in order to overcome, bypass, or minimize the impact of obstacles, exceptions, anomalies, mishaps, established practices, management expectations, or structural constraints that are perceived as preventing that work system or its participants from achieving a desired level of efficiency, effectiveness, or other organizational or personal goals. (p. 1044)

Comparisons between this and related definitions suggest this is a broader definition, including additional factors such as:

  • workarounds don’t need to use digital technology;
  • workarounds may include work not formally recognised by the organisation;
  • workarounds don’t always compensate for or bypass system deficiencies;
  • workarounds may not be temporary;
  • workarounds are not necessarily examples of noncompliance;

Alter’s (2014) definition of workarounds does rely on the workaround occurring within a work system, another theoretical concept developed by Alter (2002). See this section from an old paper of mine for a summary of the Work System Framework.

It is argued that this reliance on the work system framework provides a “broader and more comprehensive view of the changes that can be included in workarounds” (Alter, 2014, p. 1046).

Figure 1 is a representation of Alter’s (2014) theory of workarounds. It is positioned as a process theory that describes how and why workarounds are created. A brief description follows the figure.

Alter’s theory of workarounds draws on a number of theories and concepts, including:

  • Theory of planned behaviour;
  • Improvisation and bricolage;
  • Agency theory;
  • Work system theory.

Figure 1 – Alter’s (2014) Theory of Workarounds (p. 1056)

Workarounds arise from a context that includes each work system participant’s personal goals, interests and values. Communication and sharing of these goals/values between participants may be flawed or incomplete, leading to misalignment in the work system. The context also includes the structure of the work system: its architecture, characteristics, performance goals, and emergent change.

From this context arises the perceived need for a workaround.

This triggers a process of trying to identify possible workarounds, often starting with the obstacles in the current situation and the perceived need, combined with consideration of the costs, benefits, risks of being identified, and possible ramifications. An essential component is the knowledge available to those involved.

Eventually this leads to a decision to select a workaround to pursue, if any.

If going ahead, then development and execution of the workaround is driven by factors such as attention to current conditions, intuition guiding action, testing of intuitive understanding, and situational decision making.

Subsequently, there are local and broader consequences. Locally, the workaround may eliminate the obstacles that initiated the process, but it may also fail or produce various unintended consequences. More broadly, these types of consequences might be felt in, or pushed into, other locations.

Temporality of workarounds

Alter also makes a point of outlining the temporality of workarounds, as shown in Figure 2.


Figure 2: Temporality of Workarounds (adapted from Alter, 2014, p. 1058)

Five voices in the workarounds literature

Alter performed a review of the workarounds literature, gathering 289 papers and using them to derive his theory. He summarises that work using five “voices”, each of which includes a number of topics:

Phenomena associated with workarounds;

  • Obstacles, exceptions, anomalies, mishaps and structural constraints
  • Agency
  • Improvisation and bricolage
  • Routines, processes and methods
  • Articulation work and loose coupling
  • Technology misfits
  • Design and emergence
  • Technology usage and adaptation
  • Motives and control systems
  • Knowledge
  • Temporality

Types of workarounds;

  • Overcome inadequate IT functionality
  • Bypass an obstacle built into processes or practices
  • Respond to a mishap or anomaly with a quick fix
  • Substitute for unavailable resources
  • Design and implement new resources
  • Prevent future mishaps
  • Pretend to comply
  • Lie, cheat, steal for personal benefit
  • Collude for mutual benefit

Direct effects of workarounds;

  • Continuation of work despite obstacles, mishaps or anomalies
  • Creation of hazards, inefficiencies or errors
  • Impact on subsequent activities
  • Compliance or non-compliance with management intentions

Perspectives on workarounds; and,

  • Workarounds as necessary activities in everyday life
  • Workarounds as sources for future improvements
  • Workarounds as creative acts
  • Workarounds as add-ons or shadow systems
  • Workarounds as quick fixes that won’t go away
  • Workarounds as facades of compliance
  • Workarounds as inefficiencies or hazards
  • Workarounds as resistance
  • Workarounds as distortions or subterfuge

Organisational challenges and dilemmas related to workarounds.

  • Ability to operate despite obstacles
  • Enactment of interpretive flexibility
  • Balance of personal, group and organisational interests
  • Permitting and learning from emergent change

He then uses these five voices to group and establish some sense of causality within the “breadth of ideas and examples that were found in the literature” (p. 1047).

Usefulness and further research

Since the theory is developed from a literature search, it is limited by anything that hasn’t made it into the literature, e.g. accounts of workarounds that were considered but never attempted.

Each step in the process theory could inform survey and/or case study research to explore how well the theory maps onto reality and lead to discoveries of factors/relationships not currently in the theory.

The workarounds literature identifies fundamental limitations in assumptions underpinning traditional approaches to organisational and system analysis and design (e.g. that prescribed business processes will be followed consistently). The theory of workarounds can be used to analyse systems in organisations, reveal conditions that lead to workarounds, and provide opportunities to incorporate learning from workarounds into emergent/planned change. This helps reveal insights into whether intended methods are followed, how systems in organisations evolve over time, and how implementations evolve over time.

Starting from alternate theoretical foundations (e.g. Actor-Network Theory, activity theory, socio-materiality, etc.) might lead to different outcomes and insights.

Exploring knowledge reuse in design for digital learning: tweaks, H5P, constructive templates and CASA

The following has been accepted for presentation at ASCILITE’2019. It’s based on work described in earlier blog posts.


Abstract

Higher education is being challenged to improve the quality of learning and teaching while at the same time dealing with challenges such as reduced funding and increasing complexity. Design for learning has been proposed as one way to address this challenge, but a question remains around how to sustainably harness all the diverse knowledge required for effective design for digital learning. This paper proposes some initial design principles embodied in the idea of Context-Appropriate Scaffolding Assemblages (CASA) as one potential answer. These principles arose out of prior theory and work, contemporary digital learning practices and the early cycles of an Action Design Research process that has developed two digital ensemble artefacts employed in over 30 courses (units, subjects). Early experience with this approach suggests it can successfully increase the level of design knowledge embedded in digital learning experiences, identify and address shortcomings with current practice, and have a positive impact on the quality of the learning environment.

Keywords: Design for Learning, Digital learning, NGDLE.

Introduction

Learning and teaching within higher education continues to be faced with significant, diverse and on-going challenges. Challenges that increase the difficulty of providing the high-quality learning experiences necessary to produce graduates of the standard society expects (Bennett, Lockyer, & Agostinho, 2018). Goodyear (2015) groups these challenges into four categories: massification and the subsequent diversification of needs and expectations; growing expectations of producing work-ready graduates; rapidly changing technologies, creating risk and uncertainty; and, dwindling public funding and competing demands on time. Reconceptualising teaching as design for learning has been identified as a key strategy to sustainably, and at scale, respond to these challenges in a way that offers improvements in learning and teaching (Bennett et al., 2018; Goodyear, 2015). Design for learning aims to improve learning processes and outcomes through the creation of tasks, environments, and social structures that are conducive to effective learning (Goodyear, 2015; Goodyear & Dimitriadis, 2013). The ability of universities to develop the capacity of teaching staff to enhance student learning through design for learning is of increasing financial and strategic importance (Alhadad, Thompson, Knight, Lewis, & Lodge, 2018).

Designing learning experiences that successfully integrate digital tools is a wicked problem. A problem that requires the utilisation of expert knowledge across numerous fields to design solutions that respond appropriately to the unique, incomplete, contextual, and complex nature of learning (Mishra & Koehler, 2008). The shift to teaching as design for learning requires different skills and knowledge, but also brings shifts in the conception of teaching and the identity of the teacher (Gregory & Lodge, 2015). Effective implementation of design for learning requires detailed understanding of pedagogy and design and places cognitive, emotional and social demands on teachers (Alhadad et al., 2018). The ability of teachers to deal with this load has significant impact on learners, learning, and outcomes (Bezuidenhout, 2018). Academic staff report perceptions that expertise in digital technology and instructional design will be increasingly important to their future work, but that these are also the areas where they have the least competency and the highest need for training (Roberts, 2018). Helping teachers integrate digital technology effectively into learning and teaching has been at or near the top of issues facing higher education over several years (Dahlstrom, 2015). However, the nature of this required knowledge is often underestimated by common conceptions of the knowledge required by university teachers (Goodyear, 2015). Responding effectively will not be achieved through a single institutional technology, structure, or design, but instead will require an “amalgamation of strategies and supportive resources” (Alhadad et al., 2018, pp. 427-429). Approaches that do not pay enough attention to the impact on teacher workload run the risk of less than optimal learner outcomes (Gregory & Lodge, 2015).

Universities have adopted several different strategies to ameliorate the difficulty of successfully engaging in design for digital learning. For decades a common solution has been that course design, especially involving the adoption of new methods and technologies, should involve systematic planning by a team of people with appropriate expertise in content, education, technology and other required areas (Dekkers & Andrews, 2000). The use of collaborative design teams with an appropriate, complementary mix of skills, knowledge and experience mirrors the practice in other design fields (Alhadad et al., 2018). However, the prevalence of this practice in higher education has been low, both then (Dekkers & Andrews, 2000) and now. The combination of the high demand and limited availability of people with the necessary knowledge mean that many teaching staff miss out (Bennett, Agostinho, & Lockyer, 2017). A complementary approach is professional development that provides teaching staff with the necessary knowledge of digital technology and instructional design (Roberts, 2018). However, access to professional development is not always possible and funding for professional development and training has rarely kept up with the funding for hardware and infrastructure (Mathes, 2019). There has been work focused on developing methods, tools and repositories to help analyse, capture and encourage reuse of learning designs across disciplines and sectors (Bennett et al., 2017). However, it appears that design for learning continues to struggle to enter mainstream practice (Mor, Craft, & Maina, 2015) with design work undertaken by teachers apparently not including the use of formal methods or systematic representations (Bennett et al., 2017). There does, however, remain on-going demand from academic staff for customisable and reusable ideas for design (Goodyear, 2005). Approaches that respond to academic concerns about workload and time (Gregory & Lodge, 2015) and do not require radical changes to existing work practices nor the development of complex knowledge and skills (Goodyear, 2005).

If there are limitations with current common approaches, what other approaches might exist? This leads to the research question of this study:

How might the diverse knowledge required for effective design for digital learning be shared and used sustainably and at scale?

An Action Design Research (ADR) process is being applied to develop one answer to this question. ADR is used to describe the design, development and evaluation of two digital artefacts – the Card Interface and the Content Interface – and the subsequent formulation of initial design principles that offer a potential answer to the research question. The paper starts by describing the research context and research method. The evolution of each of the two digital artefacts is then described. This experience is then abstracted into six design principles encapsulated in the concept of Context-Appropriate Scaffolding Assemblages (CASA). Finally, the conclusions and implications of this work are discussed.

Research context and method

This research project started in late 2018 within the Learning and Teaching (L&T) section of the Arts, Education and Law (AEL) Group at Griffith University. Staff within the AEL L&T section work with the AEL’s teachers to improve the quality of learning and teaching across about 1300 courses (units, subjects) and 68 programs (degrees). This work seeks to bridge the gaps between the macro-level institutional and technological vision and the practical, coal-face realities of teaching and learning (micro-level). In late 2018 the macro-level vision at Griffith University consisted of current and long-term usage of the Blackboard Learn Learning Management System (LMS) along with a recent decision to move to the Blackboard Ultra LMS. In this context, a challenge was balancing the need to help teaching staff continue to improve learning and teaching within the existing learning environment while at the same time helping the institution develop, refine, and achieve its new macro-level vision. It is within this context that the first offering of Griffith University’s Bachelor of Creative Industries (BCI) program would occur in 2019. The BCI is a future-focused program designed to attract creatives who aspire to a career in the creative industries by instilling an entrepreneurial mindset to engage and challenge the practice and business of the creative industries. Implementation of the program was supported through a year-long strategic project including a project manager and educational developer from the AEL L&T section working with a Program Director and other academic staff. This study starts in late 2018 with a focus on developing the course sites for the seven first year BCI courses. A focus of this work was to develop a striking and innovative design that mirrored the program’s aims and approach. A design that could be maintained by the relevant teaching staff beyond the project’s protected niche. This raised the question of how to ensure that the design knowledge required to maintain a digital learning environment into the future would be available within the teaching team.

To answer this question an Action Design Research (Sein, Henfridsson, Purao, & Rossi, 2011) process was adopted. ADR is a merging of Action Research with Design Research developed within the Information Systems discipline. ADR aims to use the analysis of the continuing emergence of theory-ingrained, digital artefacts within a context as the basis for developing generalised outcomes, including design principles (Sein et al., 2011). A key assumption of ADR is that digital artefacts are not established or fixed. Instead, digital artefacts are ensembles that arise within a context and continue to emerge through development, use and refinement (Sein et al., 2011). A critical element of ADR is that the specific problem being addressed – design of online learning environments for courses within the BCI program – is established as an example of a broader class of problems – how to sustainably and at scale share and reuse the diverse knowledge required for effective design for digital learning (Sein et al., 2011). This shift moves ADR work beyond design – as practised by any learning designer – to research intending to provide guidance on how others might address similar challenges in other contexts that belong to the broader class of design problems.

Figure 1 provides a representation of the ADR four-stage process and the seven principles on which ADR is based. Stages 1 through 3 represent the process through which ensemble digital artefacts are developed, used and evolved within a specific context. The next two sections of this paper describe the emergence of two artefacts developed for the BCI program as they cycled through the first three ADR stages numerous times. The fourth stage of ADR – Formalisation of Learning – aims to abstract the situated knowledge gained during the emergence of digital artefacts into design principles that provide guidance for addressing a class of field problems (Sein et al., 2011). The third section of this paper formalises the learning gained in the form of six initial design principles structured around the concept of Context-Appropriate Scaffolding Assemblages (CASA).


Figure 1 – ADR Method: Stages and Principles (adapted from Sein et al., 2011, p. 41)

Card Interface (artefact 1, ADR stages 1-3)

In response to the move to a trimester academic calendar, Griffith University encourages the adoption of a modular approach to course design. It is recommended that course profiles use modules to group and describe the teaching and learning activities. Subsequently, it has become common practice for this modular structure to be used within the course site using the Blackboard Learn content area functionality. Doing this well is not straightforward. Blackboard Learn has several functional limitations in legibility, design consistency, content arrangement and content adjustment that make it difficult to achieve quality visual design (Bartuskova, Krejcar, & Soukal, 2015). Usability analysis has also found that the Blackboard content area is inflexible, inefficient to use, and creates confusion for teaching staff regardless of their level of user experience (Kunene & Petrides, 2017). Overcoming these limitations requires levels of technical and design knowledge not typically held by teaching staff. Without this knowledge the resulting designs typically range from purely textual (e.g. the left-hand side of Figure 2) through to exemplars of poor design choices including the likes of blinking text, poor layout, questionable colour choices, and inconsistent design. While specialist design staff can and have been used to provide the necessary design knowledge to implement contextually-appropriate, effective designs, such an approach does not scale. For example, any subsequent modification typically requires the re-engagement of the design staff.

To overcome this challenge the Blackboard Learn user community has developed a collection of related solutions (Abhrahamson & Hillman, 2016; Plaisted & Tkachov, 2011) that use Javascript to package the necessary design knowledge into a form that can be used by teachers. Griffith University has for some time used one of these solutions, the Blackboard Tweaks building block (Plaisted & Tkachov, 2011) developed at the Queensland University of Technology. One of the tweaks offered by this building block – the Themed Course Table – has been widely used by teaching staff to generate a tabular representation of course modules (e.g. the right-hand side of Figure 2). However, experience has shown that the level of knowledge required to maintain and update the Themed Course Table can challenge some teaching staff. For example, re-ordering modules can be difficult for some, and the dates commonly used within the table must be manually added and then modified when copied from one offering to another. Finally, the inherently text-based and tabular design of the Themed Course Table is also increasingly dated. This was an important limitation for the Bachelor of Creative Industries. An alternative was required.

Figure 2 – Example Blackboard Learn Content Areas: Textual versus Themed Course Table

That alternative would use the same approach as the Themed Course Table to achieve a more appropriate outcome. The Themed Course Table, other related examples from the Blackboard community, and the H5P authoring tool (Singh & Scholz, 2017) are contemporary examples of constructive templates (Nanard, Nanard, & Kahn, 1998). Constructive templates arose from the hypermedia discipline to encourage the reuse of design knowledge and have been found to reduce cost and improve consistency, reliability and quality while enabling content experts to author and maintain hypermedia systems (Nanard et al., 1998). Constructive templates encapsulate a specific collection of design knowledge required to scaffold the structured provision of necessary data and generate design instances. For example, the Themed Course Table supports the provision of data through the Blackboard content area interface. It then uses design knowledge embedded within the tweak to transform that data into a table. Given these examples and the author’s prior positive experience with the use of constructive templates within digital learning (Jones, 2011), the initial plan for the BCI Course Content area was to replace the Themed Course Table “template” with one that adopts both a more contemporary visual design and a forward-oriented view of design for learning. Dimitriadis and Goodyear (2013) argue that design for learning needs to be more forward-oriented and consider what features will be required in each of the lifecycle stages of a learning activity. That is, as the Themed Course Table replacement is being designed, consider what specific features will be required during configuration, orchestration, and reflection and re-design.

The first step in developing a replacement was to explore contemporary web interface practices that could replace the table. Due to its responsiveness to different devices, highly visual presentation, and widespread use amongst Internet and social media services, a card-based interface was chosen. Based on the metaphor of a paper card, this interface brings together all data for a particular object with an option to add contextual information. Common practice with card-based interfaces is to embed into a card memorable images related to the card content (see Figure 3). Within the context of a course module overview such a practice has the potential to positively impact student cognition, emotions, interest, and motivation (Leutner, 2014; Mayer, 2017). A practical advantage of card-based interfaces is that their widespread use means there are numerous widely available resources to aid implementation. This was especially important to the BCI project team, as it did not have significant graphical and client-side design knowledge to draw upon.

Next, a prototype was developed to test how effectively a card-based interface would represent a course’s learning modules. An iterative process was used to translate features and existing practice from the Themed Course Table to a card-based interface. Feedback from other design staff influenced the evolution of the prototype. It also highlighted differences of opinion about some of the visual elements, such as the size of the cards, the number of cards per row, and the inclusion of the date in the top left-hand corner. Eventually the prototype card interface was shown to the BCI teaching team for input and approval. With approval given, a collection of Javascript and HTML was created to transform a specifically formatted Blackboard content area into a card interface.
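
To give a concrete sense of that transformation, the following is a minimal sketch of the general approach, assuming jQuery is available on the page. The selectors, class names and card markup are illustrative assumptions only, not the actual Card Interface source.

jQuery(function ($) {
  // Illustrative only: assumes each module is a list item containing a
  // heading, an optional image, and descriptive text.
  var $cards = $('<div class="cardDeck"></div>');

  $('ul.contentList > li').each(function () {
    var $item = $(this);
    var title = $item.find('h3').first().text();
    var imgSrc = $item.find('img').first().attr('src') || 'placeholder.png';
    var body = $item.find('.details').first().text();

    $cards.append(
      $('<div class="card"></div>')
        .append($('<img>').attr({ src: imgSrc, alt: '' }))
        .append($('<h4></h4>').text(title))
        .append($('<p></p>').text(body))
    );
  });

  // Replace the plain Blackboard listing with the card representation
  $('ul.contentList').replaceWith($cards);
});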

Figure 3 shows just two of the six different styles of card-based interface currently supported by the Card Interface. This illustrates a key feature of the original conception of constructive templates – separation of content from presentation (Nanard et al., 1998) – allowing for different representations of the same content. The left-hand image in Figure 3 and the inclusion of dates on some cards illustrate one way the Card Interface supports a forward-oriented approach to design. Initially, the module dates are specified during the configuration of a course site. However, the dates typically only apply to the initial offering of the course and will need to be manually changed for subsequent offerings. To address this, the Card Interface knows the trimester weekly dates from the university academic calendar. Dates to be included on the Card Interface can then be provided using the week number (e.g. Week 1, Week 5, etc.). The Card Interface identifies the trimester a course offering belongs to and translates all week numbers into the appropriate calendar dates.
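
The week-number-to-date translation might be sketched as follows. The trimester codes, start dates and function name are invented for illustration; the actual Card Interface derives the trimester from the course site rather than from a hard-coded table.

// Hypothetical trimester start dates (Mondays); the real calendar data
// comes from the university, not a hard-coded object like this.
var TRIMESTER_STARTS = {
  T1_2019: new Date('2019-02-25'),
  T2_2019: new Date('2019-07-08')
};

// Translate a label like "Week 5" into the Monday of that week
function weekToDate(trimester, weekLabel) {
  var week = parseInt(weekLabel.replace(/week\s*/i, ''), 10);
  var start = TRIMESTER_STARTS[trimester];
  if (!start || isNaN(week)) {
    return null; // unknown trimester or malformed label
  }
  var date = new Date(start);
  date.setDate(start.getDate() + (week - 1) * 7);
  return date;
}

weekToDate('T1_2019', 'Week 5'); // Monday 25 March 2019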

Figure 3 – Two early visualisations of the Card Interface

Despite the Card Interface being designed for the BCI program, its first use was not in the BCI program. Instead, in late 2018 a librarian working on a Study Skills site learned of the Card Interface from a colleague. Working without any additional support, the librarian was able to use the Card Interface to represent 28 modules spread over 12 content areas. Implementation of the Card Interface in the BCI courses started by drawing on existing learning module content from course profiles. Google Image Search was used to identify visually striking images that could be associated with each module (e.g. the left-hand side of Figure 3). The Card Interface was also used on the BCI program’s Blackboard site. However, the program site had a broader purpose, leading to different design decisions and the adoption of a different style of card-based interface (see the right-hand image in Figure 3).

Anecdotal feedback from BCI staff and students suggests that the initial implementation and use of the Card Interface was positive. In addition, the visual improvements offered by the Card Interface over both the standard Blackboard Content Area and the Themed Course Table tweak led to interest from other courses and programs. As of late July 2019, the Card Interface has been used in over 55 content areas in over 30 Blackboard sites. Adoption has occurred at both the program and individual course level, led by exposure within the AEL L&T team or by academics seeing it and wanting it. Widespread use has generated different requirements, leading to creative uses of the Card Interface (e.g. the use of animated GIFs as card images) and the addition of new functionality (e.g. the ability to embed a video, instead of an image). Requirements from another strategic project led to a customisation of the Card Interface to provide an overview of assessment items, rather than modules.

With its adoption in multiple courses and use for different purposes the Card Interface appears to have successfully encapsulated a collection of design knowledge into a form that can be readily adopted and adapted. Use of that knowledge has improved the resulting design. Contributing factors to this success include: building on existing practice; providing advantages above and beyond existing practice; and, the capability for both teaching and support staff to rapidly customise the Card Interface. Further work is required to gain greater and more objective insight into the impact of the Card Interface on the student experience and outcomes of learning and teaching.

Content Interface (artefact 2, ADR stages 1-3)

The Card Interface provides a visual overview of course modules. The next challenge for the BCI project was the design, implementation and support of the learning activities and resources that form the content of those course modules. This task is inherently more creative and important, typically involves significantly more content, and must be completed using the same, problematic Blackboard interface. This requirement is known to encourage teaching staff to avoid the interface by using offline documents and slides (Bartuskova et al., 2015). This is despite evidence that failing to leverage affordances of the online environment can create a disengaging student experience (Stone & O’Shea, 2019) and that course content is a significant influence on students’ perceptions of course quality (Peltier, Schibrowsky, & Drago, 2007). Adding to the difficulty, the BCI teaching staff had either limited, no, or no recent experience with Blackboard. In the case of contracted staff, they did not have access to Blackboard at all. This raised the question of how to support the design, implementation and re-design of effective modular, online learning resources and activities for the BCI.

Observation of, and experience with, the Blackboard interface identified three main issues. First, staff did not know how to use, or did not have access to, the Blackboard content interface. Second, the Blackboard authoring interface provides limited authoring functionality. For example, beyond issues identified in the literature (Bartuskova et al., 2015; Kunene & Petrides, 2017) there is no support for standard authoring functionality such as grammar checking, reference management, commenting, and version control. Lastly, once the content is placed within Blackboard the user interface is limited and quite dated. On the plus side, the Blackboard interface does provide the ability to integrate a variety of different activities such as discussion forums, quizzes, etc. The intent was to address these issues while retaining the ability to use the Blackboard activities.

For better or worse, the most common content creation tool for most university staff is Microsoft Word. Anecdotal observation suggests that many staff have adopted the practice of drafting content in Word before copying and pasting it into Blackboard. The Content Interface is designed to transform Word documents into good quality online learning activities and resources (see Figure 4). This is done by using an open source converter to semantically transform Word to HTML that is then copied and pasted into Blackboard. A collection of design knowledge embedded into Javascript then transforms the HTML in several ways. Semantic elements such as activities and readings are visually transformed. All external web links are modified to open in a new tab to avoid a common Blackboard error. The document is transformed into an accordion interface with a vertical list of headings that can be clicked on to display associated content. This progressive reveal allows readers to get an overall picture of the module before focusing on the details, provides greater control over how they engage with the content, and is particularly useful on mobile platforms (Budiu, 2015; Loranger, 2014).
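
As a rough illustration of the accordion transformation, the sketch below assumes jQuery and jQuery UI are loaded and that the pasted HTML sits inside a known container; the container id and option values are assumptions, not the Content Interface’s actual code.

jQuery(function ($) {
  var $module = $('#moduleContent'); // hypothetical container for the pasted HTML

  // jQuery UI's accordion expects header/panel pairs, so wrap everything
  // between successive h1 elements in a panel div.
  $module.children('h1').each(function () {
    $(this).nextUntil('h1').wrapAll('<div class="panel"></div>');
  });

  // Each h1 becomes a clickable heading revealing its associated content
  $module.accordion({ header: 'h1', collapsible: true, heightStyle: 'content' });
});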

Figure 4 – Example Module as a Word document and in the Content Interface in Blackboard

To date, the Content Interface has been used to develop over 75 modules in 13 different Blackboard sites, most of these within the seven BCI courses. Experience using the still incomplete Content Interface suggests that there are significant advantages. For example, Library staff have adopted it to create research skills modules that are used in multiple course sites. Experience in the BCI shows that sharing documents through OneDrive and using comments and track changes enables the Word documents to become boundary objects, helping the course development team co-create the module learning activities and resources. Where staff are comfortable with Word as an authoring environment, the authoring process is more efficient. The resulting accordion interface offers an improvement over the standard Blackboard interface. However, creating documents with Word is not without its challenges, especially the use of Word styles and templates. Also, the extra steps required can be perceived as problematic when minor edits need to be made, and when direct editing within Blackboard is perceived to be easier and quicker, especially for time-poor teaching staff. Better integration between Blackboard and OneDrive will help. More advantage is possible when the Content Interface is further contextually customised to offer forward-oriented functionality specific to the module learning design.

Initial Design Principles (ADR stage 4)

This section engages with the final stage of the ADR process – formalisation of learning – to produce design principles that provide actionable insight for practitioners. The following six design principles guide the development of Context-Appropriate Scaffolding Assemblages (CASA) that help to sustainably and at scale share and reuse the design knowledge necessary for effective design for digital learning. The design principles are grouped using the three components of the CASA acronym.

Contextually-Appropriate

1. A CASA should address a specific contextual need within a specific activity system.
The highest quality learning and teaching involves the development of appropriate context-specific approaches (Mishra & Koehler, 2006). A CASA should not be implemented at an institutional level. Such top-down projects are unable to pay enough attention to contextually specific needs as they aim for a solution that works in all contexts. Instead, a CASA should be designed in response to a specific need arising in a course or a small group of related courses. Following Ellis & Goodyear (2019) the focus in designing a CASA should not be the needs of individual students, but instead on the whole activity system. That is, consideration should be given to the complex assemblage of learners, teachers, content, pedagogy, technology, organisational structures and the physical environment with an emphasis on encouraging students to successfully engage in intended learning activities. For example, both the Card and Content Interfaces arose from working with a group of seven courses in the BCI program as the result of two separate, but related, needs. While the issues addressed by these CASA apply to many courses, the ability to develop and test solutions at a small scale was beneficial. Rather than a focus primarily on individual learners, the solutions were heavily influenced by an analysis of the available tools (e.g. Blackboard Tweaks, Office365), practices (e.g. modularisation and learning activities described in course profiles), and other components of the activity systems.

2. CASA should be built using and result in generative technologies. To maximise and maintain contextual appropriateness, a CASA must be able to be designed and redesigned as easily as possible. Zittrain (2008) labels technologies as generative or sterile. Generative technologies have a “capacity to produce unanticipated change through unfiltered contributions from broad and varied audiences” (Zittrain, 2008, p. 70). Sterile technologies prevent this. Generative technologies enable convivial systems where people can be “actively engaged in generating creative extensions to the artefacts given to them” (Fischer & Girgensohn, 1990, p. 183). It is the end-user modifiability of generative technology that is crucial to knowledge-based design environments and enables response to unanticipated, contextual requirements (Fischer & Girgensohn, 1990). Implementing CASA using generative technologies allows easy design for specific contexts. Ensuring that CASA are implemented as generative technologies enables easy redesign for other contexts. Generativity, like other technological affordances, arises from the relationship between the technology and the people using the technology. Not only is it necessary to use technology that is easier to modify, it is necessary to be able to draw upon appropriate technological skills. This could mean having people with those technological skills available to educational design teams. It could also mean having a network of intra- and inter-institutional CASA users and developers collaboratively sharing CASA and the knowledge required for use and development; like that available in the H5P community (Singh & Scholz, 2017).

For example, development of the Card and Content Interfaces was only possible due to Blackboard Learn supporting the embedding of Javascript. The value of this generative capability is evident through the numerous projects (Abhrahamson & Hillman, 2016; Plaisted & Tkachov, 2011) from the Blackboard community that leverage this capability; a capability that has been removed in Blackboard’s next LMS, Ultra. The use of Office365 by the Content Interface illustrates the rise of digital platforms that are generative and raise questions that challenge how innovation through digital technologies is enabled and managed (Yoo, Boland, Lyytinen, & Majchrzak, 2012). Using the generative jQuery library to implement the Content Interface’s accordion enables modification of the accordion look and feel through use of jQuery’s theme roller and library of existing themes. The separation of content from presentation in the Card Interface has enabled at least six redesigns for different purposes. This work was possible because the BCI development team had ready access to the necessary technological skills and was able to draw upon a wide collection of open source software and online support.

3. CASA development should be strategically aligned and supported. Services to support design for learning within Australian universities are limited and insufficient for the demand (Bennett et al., 2017). Services capable of supporting the development of CASA are likely to be more limited. Hence appropriate decisions need to be made about how and which CASA are designed, re-designed and supported. Resources used to develop CASA are best allocated in line with institutional strategic projects. CASA development should proceed with consideration of the “manageably small set of particularly valued activity systems” (Ellis & Goodyear, 2019, p. 188) within the institution and be undertaken with institutionally approved and supported generative technologies. For example, the Card and Content Interfaces arose from an AEL strategic project. Both interfaces were focused on providing contextually-appropriate customisation and support for the institutionally important activity system of creating modular learning activities and resources. Where possible these example CASA have used institutionally approved digital technologies (e.g. OneDrive and Blackboard). The sterile nature of existing institutional infrastructure has made it necessary to use more generative technologies (e.g. Amazon Web Services) that are neither officially approved nor supported. However, the approach used does build upon an approach from an existing institutionally approved technology – Blackboard Tweaks (Plaisted & Tkachov, 2011).

Scaffolding

4. CASA should package appropriate design knowledge to enable (re-)use by teachers and students. Drawing on ideas from constructive templates (Nanard et al., 1998), CASA should package the diverse design knowledge required to respond to a contextually-appropriate need in a way that allows this design knowledge to be easily reused in different instances. CASA enable the sustainable reuse of contextually applied design knowledge in learning activity systems and subsequently reduce cost and improve quality and consistency. For example, the Card Interface combines the knowledge from web design and multimedia learning research (Leutner, 2014; Mayer, 2017) in a way that has allowed teaching staff to generate a visual overview of the modules in numerous course sites. The Content Interface combines existing knowledge of the Microsoft Word ecosystem with web design knowledge to improve the design, use and revision of modular content.

5. CASA should actively support a forward-oriented approach to design for learning.

To “thrive outside of the protective niches of project-based innovation” (Dimitriadis & Goodyear, 2013, p. 1) the design of a CASA must not focus only on initial implementation. Instead, CASA design must explicitly consider and include functionality to support the configuration, orchestration, and reflection and re-design of the CASA. For example, the Card Interface leverages contextual knowledge to enable dates to be specified independent of the calendar to automate re-design for subsequent course offerings. As CASA tend to embody a learning design, it should be possible to improve each CASA’s support for orchestration by implementing checkpoint and process analytics (Lockyer, Heathcote, & Dawson, 2013) specific to the CASA’s embedded learning design.
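
As a purely speculative sketch, a checkpoint analytic for the Card Interface might record when a module is opened. The endpoint, payload and selectors below are invented for illustration; no such instrumentation is described in the paper.

jQuery(function ($) {
  $('.card a').on('click', function () {
    // navigator.sendBeacon delivers the event without delaying navigation
    navigator.sendBeacon('/analytics/checkpoint', JSON.stringify({
      event: 'module_opened',
      module: $(this).closest('.card').find('h4').text(),
      timestamp: new Date().toISOString()
    }));
  });
});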

Assemblages

6. CASA are conceptualised and treated as contextual assemblages. Like all technologies, CASA are assemblies of other technologies (Arthur, 2009), where technologies are understood to include techniques such as organisational processes and pedagogies, as well as hardware and software. But a contextual assemblage is more than just technology. It includes consideration of and connections with the policies, practices, funding, literacies and discourse across levels, from the societal down through the sector, organisational, personal, individual, formal and informal. These are the elements that make up the mess and nuance of the context, where the practice of educational technology gets complex (Cottom, 2019). A CASA must be generative in order to be designed and re-designed to respond to this contextual complexity. A CASA needs to be inherently heterogeneous, ephemeral, local, and emergent. This need is opposed to, and ill-suited to, the dominant rational system view underpinning common digital learning practice, which sees technologies as planned, structured, consistent, deterministic, and systematic. Instead, connecting back to design principle one, CASA should be designed in recognition of the importance and complex intertwining of the human, social and organisational elements in any attempt to use digital technologies. It should play down the usefulness of distinctions between developer and user, or pedagogy and technology. For example, the Card Interface does not use the Lego approach to assembly that informs the Next Generation Digital Learning Environment (NGDLE) (Brown, Dehoney, & Millichap, 2015) and underpins technologies such as the Learning Tools Interoperability (LTI) standard. Instead of combining clearly distinct blocks with clearly defined connectors, the Card and Content Interfaces are intertwined with, and modify, the Blackboard user interface to connect with the specifics of context. This suggests that the Lego approach is useful, perhaps even necessary, but not sufficient.

Conclusions, Implications, and Further Work

Universities are faced with the strategically important question of how to sustainably and at scale leverage the knowledge required for effective design for digital learning. The early stages of an Action Design Research (ADR) process have been used to formulate one potential answer in the form of six design principles encapsulated in the idea of Context-Appropriate Scaffolding Assemblages (CASA). To date, the ADR process has resulted in the development and use of two prototype CASA within a suite of seven courses and, within six months, their subsequent adoption in another 24 courses. CASA draw on the idea of constructive templates to capture diverse design knowledge in a form that enables use of that knowledge by teachers and students to effectively address contextually specific needs. By adopting a forward-oriented view of design for learning, CASA offer functionality to support configuration, orchestration, and reflection and re-design in order to encourage on-going use beyond the protected project niche of initial implementation. The use of generative technologies and an assemblage perspective enables CASA development to be driven by and re-designed to fit the specific needs of different activity systems and contexts. Such work will be most effective when it is strategically aligned and supported with the aim of supporting and refining institutionally valued activity systems.

Use of the Card and Content Interfaces within and beyond the original project suggests that these CASA have successfully encapsulated the necessary design knowledge to address shortcomings with current practice and had a positive impact on the quality of the digital learning environment. But it’s early days. These CASA can be improved by more completely following the CASA design principles. For example, the Content Interface currently offers only generic support for module design. Significantly greater benefits would arise from customising the Content Interface to support specific learning designs and provide contextually appropriate forward-oriented functionality. More experience is needed to provide insight into how this can be done effectively. Further work is required to establish whether, how, and to what extent the use of CASA impacts the quality of the learning environment and the experience and outcomes of both learning and teaching. Further work could also explore the questions raised by the CASA design principles about existing digital learning practice. The generative principle raises questions about whether moves away from leveraging the generativity of web technology – such as the design of Blackboard Ultra and the increasing focus on mobile apps – will make it more difficult to integrate contextually specific design knowledge. Do reported difficulties accessing student engagement data with H5P activities (Singh & Scholz, 2017) suggest that the H5P community could fruitfully pay more attention to supporting a forward-oriented design approach? Does the assemblage principle point to potential limitations with some conceptualisations and implementations of the next generation of digital learning environments?

References

Abhrahamson, A., & Hillman, D. (2016). Customize Learn with CSS and Javascript injection. Presented at BBWorld 16, Las Vegas, NV. Retrieved from https://community.blackboard.com/docs/DOC-2103

Alhadad, S. S. J., Thompson, K., Knight, S., Lewis, M., & Lodge, J. M. (2018). Analytics-enabled Teaching As Design: Reconceptualisation and Call for Research. Proceedings of the 8th International Conference on Learning Analytics and Knowledge, 427–435.

Arthur, W. B. (2009). The Nature of Technology: what it is and how it evolves. New York, USA: Free Press.

Bartuskova, A., Krejcar, O., & Soukal, I. (2015). Framework of Design Requirements for E-learning Applied on Blackboard Learning System. In M. Núñez, N. T. Nguyen, D. Camacho, & B. Trawiński (Eds.), Computational Collective Intelligence (pp. 471–480). Springer International Publishing.

Bennett, S., Agostinho, S., & Lockyer, L. (2017). The process of designing for learning: understanding university teachers’ design work. Educational Technology Research & Development, 65(1), 125–145.

Bennett, S., Lockyer, L., & Agostinho, S. (2018). Towards sustainable technology-enhanced innovation in higher education: Advancing learning design by understanding and supporting teacher design practice. British Journal of Educational Technology, 49(6), 1014–1026.

Bezuidenhout, A. (2018). Analysing the Importance-Competence Gap of Distance Educators with the Increased Utilisation of Online Learning Strategies in a Developing World Context. International Review of Research in Open and Distributed Learning, 19(3), 263–281.

Brown, M., Dehoney, J., & Millichap, N. (2015). The Next Generation Digital Learning Environment: A Report on Research (p. 11). Louisville, CO: EDUCAUSE.

Budiu, R. (2015). Accordions on Mobile. Retrieved July 18, 2019, from Nielsen Norman Group website: https://www.nngroup.com/articles/mobile-accordions/

Cottom, T. M. (2019). Rethinking the Context of Edtech. EDUCAUSE Review, 54(3). Retrieved from https://er.educause.edu/articles/2019/8/rethinking-the-context-of-edtech

Dahlstrom, E. (2015). Educational Technology and Faculty Development in Higher Education. Retrieved from ECAR website: https://library.educause.edu/resources/2015/6/educational-technology-and-faculty-development-in-higher-education

Dekkers, J., & Andrews, T. (2000). A meta-analysis of flexible delivery in selected Australian tertiary institutions: How flexible is flexible delivery? In L. Richardson & J. Lidstone (Eds.), Proceedings of ASET-HERDSA 2000 Conference (pp. 172–182).

Dimitriadis, Y., & Goodyear, P. (2013). Forward-oriented design for learning: illustrating the approach. Research in Learning Technology, 21, 1–13.

Ellis, R. A., & Goodyear, P. (2019). The Education Ecology of Universities: Integrating Learning, Strategy and the Academy. Routledge.

Fischer, G., & Girgensohn, A. (1990). End-user Modifiability in Design Environments. Proceedings of the SIGCHI Conference on Human Factors in Computing Systems, 183–192.

Goodyear, P. (2005). Educational design and networked learning: Patterns, pattern languages and design practice. Australasian Journal of Educational Technology, 21(1). https://doi.org/10.14742/ajet.1344

Goodyear, P. (2015). Teaching As Design. HERDSA Review of Higher Education, 2, 27–59.

Goodyear, P., & Dimitriadis, Y. (2013). In medias res: reframing design for learning. Research in Learning Technology, 21, 1–13.

Gregory, M. S. J., & Lodge, J. M. (2015). Academic workload: the silent barrier to the implementation of technology-enhanced learning strategies in higher education. Distance Education, 36(2), 210–230.

Jones, D. (2011). An Information Systems Design Theory for E-learning (PhD, Australian National University). Retrieved from https://openresearch-repository.anu.edu.au/handle/1885/8370

Kunene, K. N., & Petrides, L. (2017). Mind the LMS Content Producer: Blackboard usability for improved productivity and user satisfaction. Information Systems, 14.

Leutner, D. (2014). Motivation and emotion as mediators in multimedia learning. Learning and Instruction, 29, 174–175.

Lockyer, L., Heathcote, E., & Dawson, S. (2013). Informing Pedagogical Action: Aligning Learning Analytics With Learning Design. American Behavioral Scientist, 57(10), 1439–1459.

Loranger, H. (2014). Accordions for Complex Website Content on Desktops. Retrieved July 18, 2019, from Nielsen Norman Group website: https://www.nngroup.com/articles/accordions-complex-content/

Mathes, J. (2019). Global quality in online, open, flexible and technology enhanced education: An analysis of strengths, weaknesses, opportunities and threats. Retrieved from International Council for Open and Distance Education website: https://www.icde.org/knowledge-hub/report-global-quality-in-online-education

Mayer, R. E. (2017). Using multimedia for e-learning. Journal of Computer Assisted Learning, 33(5), 403–423.

Mishra, P., & Koehler, M. (2006). Technological pedagogical content knowledge: A framework for teacher knowledge. Teachers College Record, 108(6), 1017–1054.

Mor, Y., Craft, B., & Maina, M. (2015). Introduction – Learning Design: Definitions, Current Issues and Grand Challenges. In M. Maina, B. Craft, & Y. Mor (Eds.), The Art & Science of Learning Design (pp. ix–xxvi). Rotterdam: Sense Publishers.

Nanard, M., Nanard, J., & Kahn, P. (1998). Pushing Reuse in Hypermedia Design: Golden Rules, Design Patterns and Constructive Templates. 11–20. ACM.

Peltier, J. W., Schibrowsky, J. A., & Drago, W. (2007). The Interdependence of the Factors Influencing the Perceived Quality of the Online Learning Experience: A Causal Model. Journal of Marketing Education; Boulder, 29(2), 140–153.

Plaisted, T., & Tkachov, N. (2011). Blackboard Tweaks: Tools for Academics, Designers and Programmers. Retrieved July 2, 2019, from http://tweaks.github.io/Tweaks/index.html

Roberts, J. (2018). Future and changing roles of staff in distance education: A study to identify training and professional development needs. Distance Education, 39(1), 37–53.

Sein, M. K., Henfridsson, O., Purao, S., & Rossi, M. (2011). Action Design Research. MIS Quarterly, 35(1), 37–56.

Singh, S., & Scholz, K. (2017). Using an e-authoring tool (H5P) to support blended learning: Librarians’ experience. In H. Partridge, K. Davis, & J. Thomas (Eds.), Me, Us, IT! Proceedings ASCILITE2017: 34th International Conference on Innovation, Practice and Research in the Use of Educational Technologies in Tertiary Education (pp. 158–162).

Stone, C., & O’Shea, S. (2019). Older, online and first: Recommendations for retention and success. Australasian Journal of Educational Technology, 35(1). https://doi.org/10.14742/ajet.3913

Yoo, Y., Boland, R. J., Lyytinen, K., & Majchrzak, A. (2012). Organizing for Innovation in the Digitized World. Organization Science, 23(5), 1398–1408.

Zittrain, J. (2008). The Future of the Internet–And How to Stop It. Yale University Press.

Exploring knowledge reuse in design for digital learning

This post continues an on-going exploration of knowledge reuse in design for digital learning. Previous posts (one and two) started the exploration in the context of developing an assemblage to help designers of web-based learning environments create a card interface (see Figure 1). Implementing such a design from scratch requires a diverse collection of knowledge that is beyond most individuals. It is hoped that packaging that knowledge into an assemblage of technologies will allow for that knowledge to be used and reused (within Blackboard 9.1) by more people and subsequently have a positive impact on the learning environment and experience.

The card interface is a simple example of this work. The requirements of the card interface are fairly contained and pre-defined. The next challenge is to explore if and how this can be expanded to something more difficult and open-ended.


Figure 1: Card interface example

Problem: developing and maintaining online learning content

Back in 2015 @abelardopardo wrote a blog post titled Re-visiting authoring: Reauthoring which starts

Creating learning resources is getting incredibly difficult. Gone are the days in which a bunch of PDFs or PPTs were the only resources available to students. In a matter of years, learning resources have to be engaging, interactive, render in all sorts of device..

This thread from the Blackboard community site provides evidence of the problem elsewhere and directly echoes my own experiences with the Blackboard LMS.

I’m finding that relying primarily on the Blackboard Content Editor to post materials in the course shell as HTML is a time relatively consuming process. I am concerned that, despite training, some faculty may find maintaining these courses too technically challenging. Many faculty have been posting their entire courses as MS word docs. (Behnke, 2018)

Anecdotal observations of my local context suggest that learning modules (online content) within Blackboard generally fall into the following categories:

  • Nothing.
  • Word documents or Powerpoint files.
  • Collections of Blackboard content items (e.g. the image on the Blackboard link) – with variable design quality.
  • High-end versions designed and implemented by teams of specialists.

The distribution seems to lean heavily toward the first three categories. Contributing factors appear to include: the institutional assumption that individual teachers are largely responsible for producing learning resources; the limited availability of specialists, who are reserved for strategic projects; and the difficulty of using the Blackboard 9.1 tools to generate learning modules.

As a specialist assigned to a strategic project, my task has been to help a brand-new program set up their course sites, including learning modules. Echoing Behnke’s (2018) quote, I’ve found using the Blackboard tools too time consuming for outcomes of limited quality.

Hence I needed a solution to the authoring problem that would enable the quick creation and on-going maintenance of good quality online learning modules. A solution with a low floor (i.e. easy enough that an “average” teacher could use it) and a high ceiling (i.e. capable of creating advanced features and high quality). A solution that worked with the tools to hand in my current context.

Solution: the Content Interface

The last couple of weeks have seen the development of an assemblage of technologies currently labelled (unimaginatively) the Content Interface. Figure 2 is a screen shot of an example learning module produced using the Content Interface. This blog post was also produced using the first two steps of the Content Interface process. The following sections outline the three-step Content Interface process used to produce learning modules.

1. Create and edit content as a Word document

Microsoft Word (or, if you’d prefer, LibreOffice) is used to create and edit content that is saved as a Word document (.docx). The Word document must be structured using styles, including some styles specific to the Content Interface (e.g. Note, Reading, Activity, Embed). For example, this Word document was used to produce the learning module shown in Figure 2; the document and the resulting learning module are actually an introduction to the Content Interface and illustrate the use of styles. Feel free to download that Word document and compare its contents with Figure 2. You can also download the Word document used to produce this blog post.


Figure 2: Content interface example – Blackboard

2. Convert to HTML using Mammoth

Once editing is complete, the Word document is uploaded to a Web form that converts it into clean HTML. This is done using a locally configured version of Mammoth.js (a Javascript version of Mammoth). Using a Click to copy button on the form, the HTML produced by Mammoth is copied to the clipboard and then pasted into a Blackboard content area – or, as with this blog post, into any other web publishing service such as WordPress. It’s just HTML.
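For the curious, the following is a minimal sketch of what the conversion step might look like. It assumes mammoth.browser.js is loaded on the page and that the form has a file input and a textarea with the ids upload and output (the ids, and the HTML class names in the style map, are illustrative assumptions; the Word style names are those from step 1).

// Map the Content Interface Word styles to HTML elements/classes using
// Mammoth's documented style map syntax. The target class names
// (note, reading, activity, embed) are assumptions.
var styleMap = [
  "p[style-name='Note'] => div.note:fresh",
  "p[style-name='Reading'] => div.reading:fresh",
  "p[style-name='Activity'] => div.activity:fresh",
  "p[style-name='Embed'] => div.embed:fresh"
];

document.getElementById("upload").addEventListener("change", function (event) {
  var reader = new FileReader();
  reader.onload = function (loadEvent) {
    // Convert the .docx (as an ArrayBuffer) into clean, semantic HTML
    mammoth.convertToHtml(
      { arrayBuffer: loadEvent.target.result },
      { styleMap: styleMap }
    ).then(function (result) {
      // result.value holds the HTML; a "Click to copy" button would
      // then copy this to the clipboard
      document.getElementById("output").value = result.value;
    });
  };
  reader.readAsArrayBuffer(event.target.files[0]);
});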

3. Transform the HTML

Since Mammoth produces very nice semantic HTML, it’s fairly easy to transform using Javascript. Figure 2 is an example of the current transformation, done by a combination of Javascript and CSS. Each learning module page in Blackboard includes a Javascript file that performs transformations (a simplified sketch follows the list), including:

  • Dividing the document into sections based on Heading 1 and displaying the sections via an Accordion interface.
  • Allowing any embedded HTML code (e.g. YouTube video) to be displayed.
  • Transforming a growing number of higher level semantic elements.
    For example, the Reading shown in Figure 2. If you examine the Word document from which the learning module in Figure 2 was produced, you will see that the text for the reading is displayed as normal text – no icon of someone reading a book. If you examine the style, you’ll find that the text has the Word style Reading applied to it. When it detects text with this style, the Javascript/CSS performs the transformation shown in Figure 2.
  • Ensuring any non-Blackboard links open in a new window.
    By default, Blackboard generates an error if any attempt is made to open a non-Blackboard link in the current browser window.
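To give a flavour of these transformations, here is a simplified sketch (not the production code) of the first and last items in the list above. It assumes the pasted HTML lives in a container with the id content; that id, like the class names, is an illustrative assumption.

// Divide the document into sections at each Heading 1 so the sections
// can be rendered as an accordion via CSS/Javascript.
var container = document.getElementById("content");
var sections = [];
var current = null;
Array.prototype.slice.call(container.childNodes).forEach(function (node) {
  if (node.tagName === "H1") {
    current = document.createElement("section");
    current.className = "accordion-section";
    sections.push(current);
  }
  if (current) {
    current.appendChild(node); // moves the node into the current section
  }
});
sections.forEach(function (section) {
  container.appendChild(section);
});

// Make any non-Blackboard link open in a new window, avoiding
// Blackboard's error when external links open in the current window.
document.querySelectorAll("a[href]").forEach(function (link) {
  if (link.hostname && link.hostname !== window.location.hostname) {
    link.setAttribute("target", "_blank");
  }
});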

Is it any good?

Eating my own dog food

For a start, it works for me: I’m eating my own dog food. I find I’m able to prepare learning modules (i.e. HTML) quickly and easily, and it’s much easier to work on learning modules provided by someone else. It’s also proving very useful at the moment, as the time I have available to write blog posts often coincides with an absence of network connectivity – not a problem when using a word processor. In addition, using Word/LibreOffice means I can use Zotero for citation management, as well as all the other features (and foibles) of contemporary word processors.

At the very least I can see myself using this process, what about others?

Learning modules for four LMS sites

I’ve been working with three different academics, helping them create learning modules for four different course sites. Most of these learning modules have been produced in the couple of weeks leading up to the start of semester, when the academics are busiest. So far, discussions with those staff have generated positive comments about the improvement in the quality of the end product and about the value of working in Word rather than the Blackboard interface.

This week we discovered that sharing Word documents via OneDrive (part of the institution’s technology infrastructure) provides some promising benefits. The documents are shared with the course teaching and development team via a link from the Blackboard learning module page, providing a single point of access for each learning module. OneDrive also provides the ability to edit online, as well as version control. More exploration needed here.

Other specialists I work with are also talking about the promise the Content Interface offers for use in other courses.

Week 1 of trimester starts this week. Still too early to have feedback from students.

Abelardo’s conditions

In his blog post, Abelardo identifies seven conditions he was using when looking for a solution. Table 1 is a summary of his conditions and a note on how well (or not) the Content Interface approach meets each condition.

Table 1: Abelardo’s conditions and how the Content Interface meets them

  • Content focus – no need to worry about visual appearance, table of contents, responsiveness etc.
    Yes, but more work possible/required. Mammoth only translates semantic information; formatting and further transformation are done via Javascript/CSS. More work is required on the Javascript/CSS to provide a ToC.
  • Support complex textual structures – e.g. cross-referencing, sections, subsections, figures, links etc.
    Yes. Word/LibreOffice provides much of this. Javascript/CSS provides additional structures (e.g. Notes, Readings, Activities and more to come).
  • Support for interactive elements – embedding videos, MCQs etc.
    Yes. Insert any HTML embed code in a document and apply the Embed style.
  • Use HTML as the underlying format – HTML5 in particular.
    Yes, at least in terms of publishing as HTML anywhere HTML is taken.
  • Support collaborative production – version control etc.
    Early indications, yes. Experiments with sharing the Word documents via OneDrive appear to provide this.
  • Run on your own machine – no complex interface in an online tool; push a button to publish remotely.
    Yes, with more work to do. You author using Word; work remains on one button publishing.

Problems and challenges

Perhaps the biggest limitation and source of challenges with this process is the use of Microsoft Word as the main authoring format. Even though most of the academics I work with use Word as their primary word processor, there are issues: Word’s foibles as an authoring platform (e.g. see Figure 3 and the associated explanation); the stretching of Word’s styles functionality through this process; and a tendency for many people not to really understand how to use Word as intended (e.g. Ben-Ari & Yeshno, 2006). Hence there’s a question about the mechanics of the process. However, early experience suggests there may be some hope.


Figure 3: xkcd’s explanation of one of the challenges of using Word processors

There’s also the question of whether the “write in Word and publish in the LMS” process will be an abstraction too far. In particular, the increasing use of semantic elements in Word is a practice that challenges the typical formatting-driven use of Word. Intermingled with this: while the Content Interface may help reduce the cognitive load associated with the technical aspects of authoring, will this translate into an increased focus on design for learning?

On-going development

The Content Interface has been a working concern for less than two weeks, and the hope is that a lot more development will refine the process and its output. Some current plans include:

  • One button publishing;
    Rather than manually upload the Word document and then copy and paste the HTML code, the hope is we can implement a Publish button in Blackboard that automates this process, perhaps connected with OneDrive.
  • Program/project specific designs;
    Currently all learning modules get the same, fairly limited design (i.e. Figure 2). It would be fairly easy to modify the Content Interface to use different visual designs for different programs or other purposes.
  • Alternate interface designs;
    The accordion interface shown in Figure 2 could be changed, for example, to a simple page interface and hopefully to more contemporary and effective designs.
  • Integration of Blackboard content items and tools; and,
    Blackboard provides a range of items/tools that can be included in a learning module (e.g. quizzes, assignments, discussion forums). The aim is to modify the Content Interface to allow such Blackboard tools to be integrated into content at appropriate places.
  • Higher level semantic elements.
    Current semantic elements (e.g. Reading, Activity and Note) are fairly low level: all that happens is that some additional HTML/CSS is added. A good long term goal would be to allow the use of higher level semantic elements that equate to learning designs/activities. For example, a Debate style in a Word document could set up an online environment that helps implement and orchestrate a debate (see the sketch after this list).
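How might that work? One possibility, purely a sketch rather than the current implementation, is to collect transformations in a registry keyed by the CSS class that the Mammoth style map generates for each Word style. The Debate handler below is entirely hypothetical.

// A registry mapping style-derived CSS classes to transformations.
var transformations = {
  // Low level: decorate a Reading with an icon placeholder.
  reading: function (el) {
    var icon = document.createElement("span");
    icon.className = "reading-icon"; // given a book image via CSS
    el.insertBefore(icon, el.firstChild);
  },
  // Higher level (hypothetical): a Debate style that adds the
  // scaffolding needed to orchestrate an online debate.
  debate: function (el) {
    ["Affirmative", "Negative", "Adjudication"].forEach(function (role) {
      var space = document.createElement("div");
      space.className = "debate-role";
      space.textContent = role;
      el.appendChild(space);
    });
  }
};

// Apply each registered transformation to the matching elements.
Object.keys(transformations).forEach(function (styleName) {
  document.querySelectorAll("div." + styleName)
    .forEach(transformations[styleName]);
});

Adding a new semantic element would then be a matter of defining a Word style and registering a matching transformation.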

References

Behnke, J. (2018). Content editor HTML vs. PDF? Retrieved February 24, 2019, from https://community.blackboard.com/thread/6523-content-editor-html-vs-pdf

Ben-Ari, M., & Yeshno, T. (2006). Conceptual Models of Software Artifacts. Interacting with Computers, 18(6), 1336–1350. https://doi.org/10.1016/j.intcom.2006.03.005

Random meandering notes on “digital” and the fourth industrial revolution

In the absence of an established workflow for curating thoughts and resources, I am using this blog post to save links to some resources. It’s also an initial attempt to write down some thoughts on these resources and beyond. All very rough.

Fourth industrial revolution

This post from the World Economic Forum (authored by Klaus Schwab, ahh, the author of two books on shaping the fourth industrial revolution) aims to explain “The Fourth Industrial Revolution: what it means, how to respond”. It offers the following description of the “generations” of revolution:

The First Industrial Revolution used water and steam power to mechanize production. The Second used electric power to create mass production. The Third used electronics and information technology to automate production. Now a Fourth Industrial Revolution is building on the Third, the digital revolution that has been occurring since the middle of the last century. It is characterized by a fusion of technologies that is blurring the lines between the physical, digital, and biological spheres.

My immediate reaction is that the third revolution – with its focus on electronics and information technology – missed a trick with digital technology: it didn’t understand and leverage the nature of digital technologies sufficiently. In part this was due to the limited nature of the available digital technology, but perhaps also due to the failure of a connection between the folk who really knew this and the folk trying to do stuff with digital technology.

The WEF post argues that “velocity, scope and systems impact” are why this fourth revolution is distinct from the third. They could be right, but again I wonder if ignorance of the nature of digital technology might be a factor?

The WEF also argues that the pace of change is rapid and that everything is being disrupted, which brings to mind arguments from Audrey Watters (and I assume others) that, actually, it’s not all that rapid.

It identifies the possibility/likelihood of inequality, proposing that the largest benefits of this new revolution (as with others?) accrue to the “providers of intellectual and physical capital – the innovators, shareholders and investors”.

It points to the disquiet caused by social media, noting that more than 30% of the population accesses social media. However, current social media is largely flawed and ill-designed; it can be done better.

Question: does an understanding of the nature of digital technology help with (or is it even required for) that notion of “better”? It can’t explain all of it, but some? Perhaps the idea is not that you need only to truly know the nature of digital technology, or only the details of the better learning, business, etc. you want to create. You need to know both (with a healthy critical perspective) and be able to fruitfully combine them.

Overall, much of this appears to be standard Harvard MBA/business school fare.

The platform economy – technology-enabled platforms – gets a mention, as it does in the nascent “nature of digital technology” (NoDT) stuff I worked on a couple of years ago. Platforms are something the critical perspective has examined, so I wonder if this belongs in the NoDT stuff?

Links to learning etc.

I came to this idea via this post from a principal turned consultant/researcher on leading in schools. That post references this article on building the perfect 21st century worker, as apparently captured in the following infographic.

The infographic includes the obligatory digital skills, which are listed in the article as (emphasis added):

  • Basic digital literacy – ability to use computers and Internet for common tasks, like emailing
  • Workplace technology – using technologies required by the job
  • Digital learning – using software or online tools to learn new skills or information
  • Confidence and facility learning and using new technologies
  • Determining trustworthiness of online information

Talk about setting the bar low and providing a horrendous example of digital literacy. Then again, it does tend to capture the standard nature of most attempts at digital literacy I’ve seen, including:

  • A focus on using technology as is, rather than being able to renovate and manipulate it.
  • Revealing an ignorance of basic understanding, e.g. “software or online tools” – aren’t the “online tools” also software?
  • Continuing the medium divide, i.e. online information or online tools are somehow different from all the other information and tools I use?

(Not to mention that the article uses an image, rather than text, to display the bulleted list above.)

Teacher DIY learning analytics – implications & questions for institutional learning analytics

The following provides a collection of information and resources associated with a paper and presentation given at ALASI 2017 (the Australian Learning Analytics Summer Institute) in Brisbane on 30 November 2017. Below you’ll find an abstract, a recording of a version of the presentation, the presentation slides and the references.

The paper examines the DIY development and use of a particular application of learning analytics (known as Know thy student) within a single course during 2015 and 2016. It argues that, given the limits of what is known about the institutional implementation of learning analytics, examining teacher DIY learning analytics can reveal some interesting insights. The paper identifies three implications and three questions.

Three implications

  1. Institutional learning analytics currently falls short of an important goal.

    If the goal of learning analytics is that “of getting key information to a human being who can use it” (Baker, 2016, p. 607) then institutional learning analytics is falling short, and not just at a specific institution.

  2. Embedded, ubiquitous, contextual learning analytics encourages greater use and enables emergent practice.

    This case suggests that learning analytics interventions designed to provide useful contextual data, embedded appropriately and ubiquitously throughout the learning environment, can enable significant levels of usage, including usage that was unplanned, emerged from experience, and changed practice.

    In this case, Know thy student was used by the teacher on 666 different days (~91% of the days that the tool was available) to find out more about ~90% of the enrolled students. Graphical representations below.

  3. Teacher DIY learning analytics is possible.

    Know thy student was implemented by a single academic using a laptop, widely available software (including some coding), and existing institutional data sources.

Three questions

  1. Does institutional learning analytics have an incomplete focus?

    Research and practice around the institutional implementation of learning analytics appears to focus on “at scale”: learning analytics that can be used across multiple courses or an entire institution. That focus appears to come at the expense of course- or learning-design-specific applications, which appear to be more useful.

  2. Does the institutional implementation of learning analytics have an indefinite postponement problem?

    Aspects of Know thy student are specific to the particular learning design within a single course. Implementing such a specific requirement would appear unlikely to have ever been undertaken by an existing institutional learning analytics implementation; it would have been indefinitely postponed.

  3. If and how do we enable teacher DIY learning analytics?

    This case suggests that teacher DIY learning analytics is possible and potentially overcomes limitations in current institutional implementation of learning analytics. However, it’s also not without its challenges and limitations. Should institutions support teacher DIY learning analytics? How might that be done?

Usage

The following heat map shows the number of times Know thy student was used on each day during 2015 and 2016.

Know thy student usage clicks per day

The following bar graph contains 761 “bars”. Each bar represents a unique student enrolled in the course; the size of the bar shows the number of times Know thy student was used for that particular student. (One student record was obviously used for testing purposes during the development of the tool.)

Know thy student usage clicks per student
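The graphs themselves were produced elsewhere, but the underlying counts are simple aggregations. As a minimal sketch, assuming a hypothetical click log with one comma-separated timestamp and student id per line (not the actual data format):

// Node.js: aggregate Know thy student clicks per day and per student
// from lines like "2015-03-02T10:15:00,s1234567" (an assumed format).
var fs = require("fs");

var perDay = {};
var perStudent = {};

fs.readFileSync("knowthystudent.log", "utf8").split("\n").forEach(function (line) {
  if (line.trim() === "") { return; }
  var parts = line.split(",");
  var day = parts[0].slice(0, 10); // YYYY-MM-DD
  perDay[day] = (perDay[day] || 0) + 1;
  perStudent[parts[1]] = (perStudent[parts[1]] || 0) + 1;
});

console.log("Days on which the tool was used:", Object.keys(perDay).length);
console.log("Students viewed at least once:", Object.keys(perStudent).length);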

Abstract

The paper on which the presentation is based has the following abstract.

Learning analytics promises to provide insights that can help improve the quality of learning experiences. Since the late 2000s it has inspired significant investments in time and resources by researchers and institutions to identify and implement successful applications of learning analytics. However, there is limited evidence of successful at scale implementation, somewhat limited empirical research investigating the deployment of learning analytics, and subsequently concerns about the insight that guides the institutional implementation of learning analytics. This paper describes and examines the rationale, implementation and use of a single example of teacher do-it-yourself (DIY) learning analytics to add a different perspective. It identifies three implications and three questions about the institutional implementation of learning analytics that appear to generate interesting research questions for further investigation.

Presentation recording

The following is a recording of a talk given at CQUni a couple of weeks after ALASI. It uses the same slides as the original ALASI presentation; however, without a time limit, the description is a little expanded.

Slides

Also view and download here.

References

Baker, R. (2016). Stupid Tutoring Systems, Intelligent Humans. International Journal of Artificial Intelligence in Education, 26(2), 600–614. https://doi.org/10.1007/s40593-016-0105-0

Behrens, S. (2009). Shadow systems: the good, the bad and the ugly. Communications of the ACM, 52(2), 124–129.

Colvin, C., Dawson, S., Wade, A., & Gašević, D. (2017). Addressing the Challenges of Institutional Adoption. In C. Lang, G. Siemens, A. F. Wise, & D. Gaševic (Eds.), The Handbook of Learning Analytics (1st ed., pp. 281–289). Alberta, Canada: Society for Learning Analytics Research (SoLAR).

Corrin, L., Kennedy, G., & Mulder, R. (2013). Enhancing learning analytics by understanding the needs of teachers. In Electric Dreams. Proceedings ascilite 2013 (pp. 201–205).

Díaz, O., & Arellano, C. (2015). The Augmented Web: Rationales, Opportunities, and Challenges on Browser-Side Transcoding. ACM Trans. Web, 9(2), 8:1–8:30. https://doi.org/10.1145/2735633

Dron, J. (2014). Ten Principles for Effective Tinkering (pp. 505–513). Presented at the E-Learn: World Conference on E-Learning in Corporate, Government, Healthcare, and Higher Education, Association for the Advancement of Computing in Education (AACE).

Ferguson, R. (2014). Learning analytics FAQs [Slides]. Retrieved from https://www.slideshare.net/R3beccaF/learning-analytics-fa-qs

Gašević, D., Dawson, S., Rogers, T., & Gasevic, D. (2015). Learning analytics should not promote one size fits all: The effects of instructional conditions in predicating learning success. The Internet and Higher Education, 28, 68–84. https://doi.org/10.1016/j.iheduc.2015.10.002

Germonprez, M., Hovorka, D., & Collopy, F. (2007). A theory of tailorable technology design. Journal of the Association of Information Systems, 8(6), 351–367.

Grover, S., & Pea, R. (2013). Computational Thinking in K-12: A Review of the State of the Field. Educational Researcher, 42(1), 38–43. https://doi.org/10.3102/0013189X12463051

Hatton, E. (1989). Levi-Strauss’s Bricolage and Theorizing Teachers’ Work. Anthropology and Education Quarterly, 20(2), 74–96.

Johnson, L., Adams Becker, S., Estrada, V., & Freeman, A. (2014). NMC Horizon Report: 2014 Higher Education Edition (No. 9780989733557). Austin, Texas. Retrieved from http://www.nmc.org/publications/2014-horizon-report-higher-ed

Jones, D., & Clark, D. (2014). Breaking BAD to bridge the reality/rhetoric chasm. In B. Hegarty, J. McDonald, & S. Loke (Eds.), Rhetoric and Reality: Critical perspectives on educational technology. Proceedings ascilite Dunedin 2014 (pp. 262–272).

Kay, A., & Goldberg, A. (1977). Personal Dynamic Media. Computer, 10(3), 31–41.

Ko, A. J., Abraham, R., Beckwith, L., Blackwell, A., Burnett, M., Erwig, M., … Wiedenbeck, S. (2011). The State of the Art in End-user Software Engineering. ACM Comput. Surv., 43(3), 21:1–21:44. https://doi.org/10.1145/1922649.1922658

Kruse, A., & Pongsajapan, R. (2012). Student-Centered Learning Analytics (CNDLS Thought Papers). Georgetown University. Retrieved from https://cndls.georgetown.edu/m/documents/thoughtpaper-krusepongsajapan.pdf

Levi-Strauss, C. (1966). The Savage Mind. Weidenfeld and Nicolson.

Liu, D. Y.-T. (2017). What do Academics really want out of Learning Analytics? – ASCILITE TELall Blog. Retrieved August 27, 2017, from http://blog.ascilite.org/what-academics-really-want-out-of-learning-analytics/

Liu, D. Y.-T., Bartimote-Aufflick, K., Pardo, A., & Bridgeman, A. J. (2017). Data-Driven Personalization of Student Learning Support in Higher Education. In A. Peña-Ayala (Ed.), Learning Analytics: Fundaments, Applications, and Trends (pp. 143–169). Springer International Publishing. https://doi.org/10.1007/978-3-319-52977-6_5

Lonn, S., Aguilar, S., & Teasley, S. D. (2013). Issues, Challenges, and Lessons Learned when Scaling Up a Learning Analytics Intervention. In Proceedings of the Third International Conference on Learning Analytics and Knowledge (pp. 235–239). New York, NY, USA: ACM. https://doi.org/10.1145/2460296.2460343

MacLean, A., Carter, K., Lövstrand, L., & Moran, T. (1990). User-tailorable Systems: Pressing the Issues with Buttons. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems (pp. 175–182). New York, NY, USA: ACM. https://doi.org/10.1145/97243.97271

Mishra, P., & Koehler, M. (2006). Technological pedagogical content knowledge: A framework for teacher knowledge. Teachers College Record, 108(6), 1017–1054.

Norman, D. A. (1993). Things that make us smart: defending human attributes in the age of the machine. Cambridge, Mass: Perseus.

Repenning, A., Webb, D., & Ioannidou, A. (2010). Scalable Game Design and the Development of a Checklist for Getting Computational Thinking into Public Schools. In Proceedings of the 41st ACM Technical Symposium on Computer Science Education (pp. 265–269). New York, NY, USA: ACM. https://doi.org/10.1145/1734263.1734357

Scanlon, E., Sharples, M., Fenton-O’Creevy, M., Fleck, J., Cooban, C., Ferguson, R., … Waterhouse, P. (2013). Beyond prototypes: Enabling innovation in technology-enhanced learning. London. Retrieved from http://tel.ioe.ac.uk/wp-content/uploads/2013/11/BeyondPrototypes.pdf

Sinha, R., & Sudhish, P. S. (2016). A principled approach to reproducible research: a comparative review towards scientific integrity in computational research. In 2016 IEEE International Symposium on Ethics in Engineering, Science and Technology (ETHICS) (pp. 1–9). https://doi.org/10.1109/ETHICS.2016.7560050

Wiley, D. (n.d.). The Reusability Paradox. Retrieved from http://cnx.org/content/m11898/latest/

Wiliam, D. (2006). Assessment: Learning communities can use it to engineer a bridge connecting teaching and learning. JSD, 27(1).

Yoo, Y., Boland, R. J., Lyytinen, K., & Majchrzak, A. (2012). Organizing for Innovation in the Digitized World. Organization Science, 23(5), 1398–1408.

Zittrain, J. L. (2006). The Generative Internet. Harvard Law Review, 119(7), 1974–2040.
