Assembling the heterogeneous elements for (digital) learning

Year: 2021

Representing problems to make the solution transparent

The following illustrates how the game Number Scrabble and Herb Simon’s thoughts on the importance of design representation appear likely to help with the migration of 1000s of course sites from Blackboard Learn (aka Blackboard Original) to another LMS. Not to mention becoming useful post-migration.

Number Scrabble

Number Scrabble is a game I first saw described in Simon’s (1996) book Sciences of the Artificial. I used it in a presentation from 2004 (the source of the following images).

Number Scrabble is a game played between two players. The players are presented with nine cards. The players take turns selecting one card at a time. The aim being to get three cards which add up to 15 (aka a “book”). The first player to obtain a book wins. If no player gets a book, the game is a draw.
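The winning condition is easy to state but, as Simon’s point will show, not so easy to track in your head. A minimal sketch in Python (a hypothetical helper for illustration, not from any real implementation):

```python
from itertools import combinations

def has_book(hand):
    """Return True if any three cards in the hand add up to 15 (a "book")."""
    return any(sum(triple) == 15 for triple in combinations(hand, 3))

# A player holding 1, 6 and 8 has a book (1 + 6 + 8 = 15)
print(has_book([1, 6, 8]))     # True
# A player holding 1, 2, 3 and 4 does not (no triple sums to 15)
print(has_book([1, 2, 3, 4]))  # False
```

Playing well means running this check not just on your own hand, but on your opponent’s, every turn.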

Basic number scrabble

Making the solution transparent

Simon (1996) argues that problem representation is an important part of problem solving and design. He identifies the extreme (perhaps not always possible) version of this view as

Solving a problem simply means representing it so as to make the solution transparent.

He uses the example of Number Scrabble to illustrate the point.

How much easier would you find it to play Number Scrabble if the cards were organised in the following magic square?

Would it help any if I mentioned another game, tic-tac-toe?

Number scrabble's magic square

With this new representation Number Scrabble becomes a game of tic-tac-toe. No arithmetic is required, and tactics and strategies most people are familiar with become applicable.
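The equivalence can be checked directly: the eight tic-tac-toe lines of the magic square are exactly the eight ways to make 15 from three distinct cards. A quick sketch (the square layout follows the standard 3×3 magic square; variable names are mine):

```python
from itertools import combinations

# A 3x3 magic square: every row, column and diagonal sums to 15.
square = [[2, 7, 6],
          [9, 5, 1],
          [4, 3, 8]]

rows = [set(row) for row in square]
cols = [set(col) for col in zip(*square)]
diagonals = [{square[i][i] for i in range(3)},
             {square[i][2 - i] for i in range(3)}]
lines = rows + cols + diagonals

# Every triple of distinct cards 1..9 summing to 15 ...
books = [set(t) for t in combinations(range(1, 10), 3) if sum(t) == 15]

# ... is a tic-tac-toe line of the magic square, and vice versa.
assert all(line in books for line in lines)
assert len(books) == len(lines) == 8
```

Eight books, eight lines, a perfect correspondence: getting a book *is* getting three-in-a-row.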

My Problem: Course Migration – Understand what needs migrating

Over the next two years my colleagues and I will be engaged in the process of migrating University courses from the Blackboard Learn (aka Blackboard Original) LMS to another LMS. Our immediate problem is to understand what needs migrating and to identify if and how it should/can be migrated to the new LMS.

I’ve actually grown to quite like Blackboard Learn. But it’s old and difficult to use (well). It’s very hard to fully understand the purpose and design of a course site by looking at and navigating around it. A course site is likely to have a handful of areas curated by the teaching staff. Each with a collection of different tools and content organised according to various schemes. There are another handful of areas for configuring the course site.

To make things more difficult, a Blackboard course site has a modal interface. Meaning the course site will look different for different people at different times.

In addition, using Dron’s (2021) definition, Blackboard Learn is a very soft technology, which makes it hard to use. As a soft technology, Blackboard Learn provides great flexibility in how it is used. Flexibility when applied across 1000s of course sites will reveal many interesting approaches.

Attempting to understand the design, purpose and requirements of a Blackboard course site by looking at it is a bit like playing Number Scrabble with a single line of cards. A game we have to play 1000s of times.

Can we make the migration problem (more) transparent? How we’re trying

I wondered if the design problem of if/what/how to migrate a course site would be simpler if we were able to change the representation of the course site. Could we develop a representation that would make the solution (more) transparent?

Could we develop a representation we designers could use to gain an initial understanding of the intent and method of a course site. A representation we could use during collaboration with the teaching staff and other colleagues to refine that understanding and plan the migration. A representation that could be scaled for use across 1000s of course sites and perhaps lay the foundation for business as usual post-migration.

What I currently have is a collection of Python code that given a URL for a Blackboard course site will:

  1. Scrape the course site and store a data structure representing the site, its content and configuration.
  2. Perform various forms of analysis and modeling with this data to reveal important features.
  3. Generate a Word document summarising the course and hopefully providing the representation we need.

The idea is that, given a list of 1000s of Blackboard courses, the code can quickly perform these steps and provide a more transparent representation of the problem.
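In outline, the pipeline looks something like the following. This is a hypothetical sketch only: the actual script’s data structures, names, and output format (Word via something like python-docx) will differ, and the scraping step is elided here.

```python
from dataclasses import dataclass, field
from collections import Counter

# Hypothetical data structure for a scraped course site (step 1 would
# populate this from the live Blackboard site).

@dataclass
class Item:
    title: str
    tool: str            # e.g. "file", "discussion", "assessment"

@dataclass
class ContentArea:
    name: str
    items: list = field(default_factory=list)

@dataclass
class CourseSite:
    course_id: str
    areas: list = field(default_factory=list)

def analyse(site: CourseSite) -> dict:
    """Step 2: count how each Blackboard tool is used across the site."""
    tools = Counter(item.tool for area in site.areas for item in area.items)
    return {"course": site.course_id,
            "areas": [a.name for a in site.areas],
            "tool_usage": dict(tools)}

def summarise(analysis: dict) -> str:
    """Step 3: render a summary (the real script generates a Word document)."""
    lines = [f"Course: {analysis['course']}",
             f"Areas: {', '.join(analysis['areas'])}"]
    for tool, count in sorted(analysis["tool_usage"].items()):
        lines.append(f"  {tool}: {count}")
    return "\n".join(lines)

# Example with a minimal, made-up course site
site = CourseSite("EDU1001", [
    ContentArea("Study Guide", [Item("Week 1", "file"),
                                Item("Week 1 forum", "discussion")]),
    ContentArea("Assessment", [Item("Essay", "assessment")]),
])
print(summarise(analyse(site)))
```

The value is in step 2 and 3: surfacing the design decisions buried in the site’s structure, rather than forcing someone to click through every content area.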

But is it useful? Is it making solutions transparent? Yes

The script is not 100% complete. But it’s already proving useful.

Yesterday I was helping a teacher with one task on their course site (a story for another blog post). The teacher mentioned in passing another problem from earlier in the course. A problem that has been worked around, but for which the cause remains mysterious. It was quite a strange problem. Not one I’d encountered before. I had some ideas but confirmation would require further digging into the complexity of a Blackboard course site. Who has the time?!

As I’m also currently working on the “representation” script I thought I’d experiment with this course. Mainly to test the script, but maybe to reveal some insights.

I ran the script. Skimmed the resulting Word document and bingo there’s the cause. A cause I would never have considered. But it is understandable how it came about.

The different representation made the solution transparent!!


Dron, J. (2021). Educational technology: What it is and how it works. AI & SOCIETY.

Simon, H. (1996). The sciences of the artificial (3rd ed.). MIT Press.

Exploring Dron’s definition of educational technology

Pre-COVID the role of technology in learning and teaching in higher education was important. However, in 2020 it became core as part of the COVID response. Given the circumstances it is no surprise that chunks of that response were not that great. There was some good work. There was a lot of “good enough for the situation” work. There was quite a bit that really sucked. For example,

Drake Hotline Bling Meme

Arguably, I’m not sure there’s much difference from pre-COVID practice. Yes, COVID meant that the importance and spread of digital technology use was much, much higher. But, rapid adoption whilst responding to a pandemic was unlikely to be better (or as good?) qualitatively than previous practice. There just wasn’t time for many to engage in the work required to question prior assumptions and redesign prior practices to suit the very different context and needs. Let alone harness technology transformatively.

It is even less likely if – as I believe – most pre-COVID individual and organisational assumptions and practices around learning, teaching and technology were built on fairly limited conceptual foundations. Building a COVID response on that sandy foundation was never going to end well. As individuals, institutions, and vendors (thanks Microsoft?) begin to (re-)imagine what’s next for learning and teaching in higher education, it is probably a good time to improve those limited conceptual foundations.

That’s where this post comes in. It is an attempt to explore in more detail Dron’s (2021) definition of educational technology and how it works. There are other conceptual/theoretical framings that could be used. For example, postdigital (Fawns, 2019). That’s for other posts. The intent here is to consider Dron’s definition of educational technology and if/how it might help improve the conceptual foundations of institutional practices with educational technology.

After writing this post, I’m seeing some interesting possible implications. For example:

  • Another argument for limitations in the “pedagogy before technology” argument (pedagogy is technology, so this is an unhelpful tautology).
  • A possible explanation for why most L&T professional development is attended by the “usual suspects” (it’s about purpose).
  • Thoughts on the problems created by the separation of pedagogy and technology into two organisational units (quality of learning experience is due to the combination of these two, separate organisational units, separate purposes, focused on their specific phenomena).
  • One explanation why the “blank canvas” (soft) nature of the LMS (& why the NGDLE only makes this worse) is a big challenge for quality learning and teaching (soft is hard).
  • Why improving digital fluency or the teaching qualifications of teaching staff are unlikely to address this challenge (soft is hard and solutions focused on individuals don’t address the limitations in the web of institutional technologies – in the broadest Dron sense).

Analysing a tutorial room

Imagine you’re responsible for running a tutorial at some educational institution. You’ve rocked up to the tutorial room for the first time and you’re looking at one of the following room layouts: computer lab, or classroom. How does Dron’s definition of educational technology help understand the learning and teaching activity and experience you and your students are about to embark upon? How might it help students, teachers, and the people from facilities management and your institution’s learning and teaching centre?

Computer lab Classroom
Czeva, CC BY-SA 4.0 via Wikimedia Commons Thedofc, Public domain, via Wikimedia Commons

Ask yourself these questions

  1. What technology do you see in the rooms above (imagine you can see a tutorial being run in both)?
  2. What is the nature of the work you and your students do during the tutorial?
  3. Which of the rooms above would be “best” for your tutorial? Why?
  4. How could the rooms above be modified to be better for tutorials? Why?

What is the (educational) technology in the room?

Assuming we’re looking at a tutorial being carried out in both images, what would be on your list of technology being used?

A typical list might include chairs, tables, computers, whiteboards (interactive/smart and static), clock, notice boards, doors, windows, walls, floors, cupboards, water bottles, phones, books, notepads etc.

You might add more of the technologies that you and your students brought with you. Laptops, phones, backpacks etc. What else?

How do you delineate between what is and isn’t technology? How would you define technology?

Defining technology

Dron (2021) starts by acknowledging that this is difficult. That most definitions of technology are vague, incomplete, and often contradictory. He goes into some detail why. Dron’s definition draws on Arthur’s (2009) definition of technology as (emphasis added)

the orchestration of phenomena for some purpose (Dron, 2021, p. 1)

Phenomena includes stuff that is “real or imagined, mental or physical, designed or existing in the natural world” (Dron, 2021, p. 2). Phenomena can be seen as belonging to physics (materials science for table tops), biology (human body climate requirements), chemistry etc. Phenomena can be: something you touch (the book you hold); another technology (the book you hold); a cognitive practice (reading); and, partially or entirely human enacted (think/pair/share, organisational processes etc).

For Arthur, technological evolution comes from combining technologies. The phenomena being orchestrated in a technology can be another technology. Writing (technology) orchestrates language (technology) for another purpose. A purpose Socrates didn’t much care for. Different combinations (assemblies) of technologies can be used for different purposes. New technologies are built using assemblies of existing technologies. There are inter-connected webs of technologies orchestrated by different people for different purposes.

For example, in the classrooms above manufacturers of furniture orchestrated various physical and material phenomena to produce the chairs, desks and other furniture. Some other people – probably from institutional facilities management – orchestrated different combinations of furniture for the purpose of designing cost efficient and useful tutorial rooms. The folk designing the computer lab had a different purpose (provide computer lab with desktop computers) than the folk designing the classroom (provide a room that can be flexibly re-arranged). Those different purposes led to decisions about different approaches to orchestration of both similar and different phenomena.

When the tutorial participants enter the room they start the next stage of orchestration for different, more learning and teaching specific purposes. Both students and teachers will have their own individual purposes in mind. Purposes that may change in response to what happens in the tutorial. Those diverse purposes will drive them to orchestrate different phenomena in different ways. To achieve a particular learning outcome, a teacher will orchestrate different phenomena and technology. They will combine the technologies in the room with certain pedagogies (other technologies) to create specific learning tasks. The students then orchestrate how the learning tasks – purposeful orchestrations of phenomena – are adapted to serve their individual purposes.

Some assemblies of technologies are easier to orchestrate than others (e.g. the computers in a computer lab can be used to play computer games, rather than “learning”). Collaborative small group pedagogies would probably be easier in the classroom, than the computer lab. The design of the furniture technology in the classroom has been orchestrated with the purpose of enabling this type of flexibility. Not so the computer lab.

For Dron, pedagogies are a technology and education is a technology. For some,

Them's fighting words

What is educational technology?

Dron (2021) answers

educational technology, or learning technology, may tentatively be defined as one that, deliberately or not, includes pedagogies among the technologies it orchestrates.

Consequently, both the images above are examples of educational technologies. The inclusion of pedagogies in the empty classroom is more implicit than in the computer lab which shows people apparently engaged in a learning activity. The empty classroom implicitly illustrates some teacher-driven pedagogical assumptions in terms of how it is laid out. With the chairs and desks essentially in rows facing front.

The teacher-driven pedagogical assumptions in the computer lab are more explicit and fixed. Not only because you can see the teacher up the front and the students apparently following along. But also because the teacher-driven pedagogical assumptions are enshrined in the computer lab. The rows in the computer lab are not designed to be moved (probably because of the phenomena associated with desktop computers, not the most moveable technologies). The seating positions for students are almost always going to be facing toward the teacher at the front of the room. There are even partitions between each student making collaboration and sharing more difficult.

The classroom, however, is more flexible. It implicitly enables a number of different pedagogical assumptions. A number of different orchestrations of different phenomena. The chairs and tables can be moved. They could be pushed to the sides of the room to open up a space for all sorts of large group and collaborative pedagogies. The shapes of the desks suggest that it would be possible to push four of them together to support small group pedagogies. Pedagogies that seek to assemble or orchestrate a very different set of mental and learning phenomena. The classroom is designed to be assembled in different ways.

But beyond that both rooms appear embedded in the broader assembly of technology of formal education. They appear to be classrooms within the buildings of an educational institution. Use of these classrooms is likely scheduled according to a time-table. Scheduled classes are likely led by people employed according to specific position titles and role descriptions. Most of which are likely to make some mention of pedagogies (e.g. lecturer, tutor, teacher).

Technologies mediate all formal education and intentional learning

Dron’s (2021) position is that

All teachers use technologies, and technologies mediate all formal education (p. 2)

Everyone involved in education has to be involved in the orchestration of new assemblies of technology. e.g. as you enter one of the rooms above as the teacher, you will orchestrate the available technologies including your choice of explicit/implicit pedagogical approaches into a learning experience. If you enter one of the rooms as the learner, you will orchestrate the assembly presented to you by the teacher and institution with your technologies, for your purpose.

Dron does distinguish between learning and intentional learning. Learning is natural. It occurs without explicit orchestration of phenomena for a purpose. He suggests that babies and non-human entities engage in this type of learning. But when we start engaging in intentional learning we start orchestrating assemblies of phenomena/technologies for learning. Technologies such as language, writing, concepts, models, theories, and beyond.

Use and participation: hard and soft

For Dron (2021) students and teachers are “not just users but participants in the orchestration of technologies” (p. 3).

The technology that is the tutorial you are running requires participation from both you and the students. For example, to help organise the room for particular activities, use the whiteboard/projector to show relevant task information, use language to share a particular message, and use digital or physical notebooks etc. Individuals perform these tasks in different ways, with lesser or greater success, with different definitions of what is required, and with different preferences. They don’t just use the technology, they participate in the orchestration.

Some technologies heavily pre-determine and restrict what form that participation takes. For example, the rigidity of the seating arrangements in the computer lab image above. There is very limited capacity to creatively orchestrate the seating arrangement in the computer lab. The students’ participation is largely (but not entirely) limited to sitting in rows. The constraints this type of technology places on our behaviour leads Dron to label them as hard technologies. But even hard technologies can be orchestrated in different ways by coparticipants. Which in turn leads to different orchestrations.

Other technologies allow and may require more active and creative orchestration. As mentioned above, the classroom image includes seating that can be creatively arranged in different ways. It is a soft technology. The additional orchestration that soft technologies allow requires from us additional knowledge, skills, and activities (i.e. additional technology) to be useful. Dron (2021) identifies “teaching methods, musical instruments and computers” as further examples of soft technologies. Technologies that require more from us in terms of orchestration. Soft technologies are harder to use.

Hard is easy, soft is hard

Hard technologies typically don’t require additional knowledge, processes and techniques to achieve their intended purpose. What participation hard technologies require is constrained and (hopefully) fairly obvious. Hard technologies are typically easy to use (but perhaps not a great fit). However, the intended purpose baked into the hard technology may not align with your purpose.

Soft technologies require additional knowledge and skills to be useful. The more you know the more creatively you can orchestrate them. Soft technologies are hard to use because they require more of you. However, the upside is that there is often more flexibility in the purpose you can achieve with soft technologies.

For example, let’s assume you want to paint a picture. The following images show two technologies that could help you achieve that purpose. One is hard and one is soft.

Hard is easy Soft is hard
Aleksander Fedyanin CC0, via Wikimedia Commons Small easel with a blank canvas CC0

Softness is not universally available. It can only be used if you have the awareness, permission, knowledge, and self-efficacy necessary to make use of it. Since I “know” I “can’t paint”, I’d almost certainly never even think of using a blank canvas. But then if I’m painting by numbers, I’m stuck with producing whatever painting has been embedded in this hard technology. At least as long as I accept the hardness. Nor is hard versus soft a categorisation, it’s a spectrum.

As a brand new tutor entering the classroom shown above, you may not feel confident enough to re-arrange the chairs. You may also not be aware of certain beneficial learning activities that require moving the chairs. If you’ve never taught a particular tutorial or topic with a particular collection of students, you may not be aware that different orchestrations of technologies may work better.

Hard technologies are first and structural

Harder technologies are structural. They funnel practice in certain ways. Softer technologies tend to adapt to those funnels; some won’t be able to adapt. The structure baked into the hard technology of the computer lab above makes it difficult to effectively use a circle of voices activity. The structure created by hard technologies may mean you have to consider a different soft technology.

This can be difficult because hard technologies become part of the furniture. They become implicit, invisible and even apparently natural parts of education. The hardness of the computer lab above is quite obvious, especially the first time you enter the room for a tutorial. But what about the other invisible hard technologies embedded into the web of technologies that is formal education?

You assemble the tutorial within a web of other technologies. As the number of hard technologies and interconnections between hard technologies increases, the web in which you’re working becomes harder to change. Various policies, requirements and decisions are made before you start assembling the tutorial. You might be a casual paid for 1 hour to take a tutorial in the computer lab shown above on Friday at 5pm. You might be required to use a common, pre-determined set of topics/questions. To ensure a common learning experience for students across all tutorials you might be required to use a specific pedagogical approach.

While perhaps not as physically hard as the furniture in the computer lab, these technologies tend to funnel practice toward certain forms.

Education is a coparticipative technological process

For Dron (2021) education is a coparticipative technological process. Education – as a technology – is a complex orchestration of different nested phenomena for diverse purposes.

How it is orchestrated and for what purposes are inherently situated, socially constructed, and ungeneralizable. While the most obvious coparticipants in education are students and teachers there are many others. Dron (2021) provides a sample, including “timetablers, writers, editors, illustrators of textbooks, creators of regulations, designers of classrooms, whiteboard manufacturers, developers and managers of LMSs, lab technicians”. Some of a never ending list of roles that orchestrate some of the phenomena that make up the technologies that teachers and students then orchestrate to achieve their diverse purposes.

Dron (2021) argues that how the coparticipants orchestrate the technologies is what is important. That the technologies of education – pedagogies, digital technologies, rooms, policies, etc. – “have no value at all without how we creatively and responsively orchestrate them, fuelled by passion for the subject and process, and compassion for our coparticipants” (p. 10). Our coparticipative orchestration is the source of the human, socially constructed, complex and unique processes and outcomes of learning. More than this Dron (2021) argues that the purpose of education is to both develop our knowledge and skills and to encourage the never-ending development of our ability to assemble our knowledge and skills “to contribute more and gain more from our communities and environments” (p. 10).

Though, as a coparticipant in this technological process, I assume I could orchestrate that particular technology with other phenomena to achieve a different purpose. e.g. if I were a particular type of ed-tech bro, then profit might be my purpose of choice.

Possible questions, applications, and implications

Dron (2021) applies his definition of educational technology to some of the big educational research questions including: the no significant difference phenomenon; learning styles; and the impossibility of replication studies for educational interventions. This produces some interesting insights. My question is whether or not Dron’s definition can be usefully applied to my practitioner experience with educational technology within Australian Higher Education. This is a start.

At this stage, I’m drawn to how Dron’s definition breaks down the unhelpful duality between technology and pedagogy. Instead, it positions pedagogy and technology as “just” phenomena that the coparticipants in education will orchestrate for their purposes. Echoing the sociomaterial and postdigital turns. The notions of hard and soft technologies and what they mean for orchestration also seem to offer an interesting lens to understand and guide institutional attempts to improve learning and teaching.

Pulling apart Dron’s (2021) definition

the orchestration of phenomena for some purpose (Arthur, 2009, p. 51)

seems to suggest the following questions about L&T as being important:

  1. Purpose: whose purpose and what is the purpose?
  2. Orchestration: how can orchestration happen and who is able to orchestrate?
  3. Phenomena: what phenomena/assemblies are being orchestrated?

Questions that echo Fawns’ (2020) use of a postdigital perspective to argue against the pedagogy before technology position, landing on the following

(context + purpose) drives (pedagogy [ which includes actual uses of technology])

With this in mind, designing a tutorial in one of the rooms would start with the context and purpose. In this case the context is the web of existing technologies that have led to you and your students being in the room ready for a tutorial. The purpose includes the espoused learning goals of the tutorial, but also the goals of all the other participants, including those that emerge during the orchestration of the tutorial. This context and purpose is then what ought to drive the orchestration of various phenomena (which Fawns labels “pedagogy”) for that diverse and emergent collection of purposes.

This suggests that institutional attempts to improve learning and teaching might usefully focus on improving the quality of that orchestration. The challenge is that the quality of that orchestration should be driven by context and purpose, which are inherently diverse and situated. A challenge which I don’t think existing institutional practices are able to effectively deal with. Which is perhaps why discussions of quality learning and teaching in higher education “privileges outcome measures at the expense of understanding the processes that generate those outcomes” (Ellis and Goodyear, 2019, p. 2).

It’s easier to deal with abstract outcomes (very soft, non-specific technologies) than with the situated and contextual diversity of specifics, and how to help orchestrate the achievement of those outcomes. In part, because many of the technologies that contribute to institutional L&T are so hard to reassemble. Hence it’s easier to put the blame on teaching staff (e.g. lack of teaching qualifications or digital fluency), than think about how the assembly of technologies that make up an institution should be rethought (e.g. this thread).

More to come.


Arthur, W. B. (2009). The Nature of Technology: What it is and how it evolves. Free Press.

Dron, J. (2021). Educational technology: What it is and how it works. AI & SOCIETY.

Fawns, T. (2019). Postdigital Education in Design and Practice. Postdigital Science and Education, 1(1), 132–145.

On formal qualifications and improving learning and teaching

The following is sparked by Twitter conversations arising from a tweet from @neilmosley5 quoting from this article by Tony Bates. In particular, pondering a tweet from @gamerlearner where the idea is that a “consistent requirement for educators in HE to have some kind of formal teaching qual” will not only help motivate academics “to take time out to learn how to teach better” but also to generally value teaching more.

It is somewhat troubling and inconsistent that there is no requirement for university academics to have formal teaching qualifications. But I don’t see how such a requirement by itself will fix issues with the quality of learning and teaching in universities. Especially in the context of Australian higher education, given the growing complexity of learning and teaching arising from ongoing change (e.g. micro-credentials, WIL, multi-modal, flexible, COVID…).

Instead, requiring formal qualifications appears to be a simple solution to a complex problem. It is a solution that seems to fall into the second of three levels of improving teaching – “What management does”. It is a solution that allows someone to “lead” the implementation of a project (e.g. the institutional implementation of HEA fellowships), pass some policies, deliver against some KPIs, and provide demonstrable evidence that the institution takes learning and teaching seriously.

While this is going on the reality of teaching reveals a different story about how seriously learning and teaching are taken. Some examples follow, but there are many more (e.g. I don’t even mention the great value placed on research). Workload formulas specify a maximum of 30 minutes for all outside class student interactions per student. There is significant rhetoric around moving away from lectures, but workload formulas are built around time-tabling lecture theatres. A significant proportion of teaching is done by fantastic but underpaid sessional staff often appointed at the last minute. The systems and technologies provided to support learning and teaching are disjointed and require significant extra work to be somewhat useful. Mainly because they can’t even provide the simplest of functionality or help do the little things.

Given this mismatch, is it any surprise that there are concerns and signs that any requirement for formal teaching qualifications is likely to lead to task corruption? At the individual level, as @AmandasAudit suggests “how many will short cut and just phone it in?”. At the organisational level, e.g. @scotxc argues that it becomes “very easy to become a box-ticking exercises for uni’s to say ‘They’re qualified!!'”.

My argument is that actually improving learning and teaching requires moving to level 3 of improving teaching – “What the teacher does”. What Biggs (2001) suggests as focusing on teaching, not individual teachers. Ensuring that the institutional systems, processes, policies etc. all encourage and enable effective teaching practice and move toward a distributive view of learning and knowledge.

This is not a simple task. It is complex. It is a wicked problem. There is no simple solution. There is no silver bullet. Formal teaching qualifications might be part of the broader solution, but I can’t see it being the solution. I’m not convinced it is even likely to be the most beneficial contributor.


Biggs, J. (2001). The Reflective Institution: Assuring and Enhancing the Quality of Teaching and Learning. Higher Education, 41(3), 221–238.

What are the symbols in digital education/design for learning?

Benbya et al (2020, p. 3) argue that digital technologies do make a difference, including this point (among others)

Digital technologies not only give rise to complex sociotechnical systems; they also distinguish sociotechnical systems from other complex physical or social systems. While complexity in physical or social system is predominantly driven by either material operations or human agency, complexity in sociotechnical systems arises from the continuing and evolving entanglement of the social (human agency), the symbolic (symbol-based computation in digital technologies), and the material (physical artifacts that house or interact with computing machines).

An argument that resonates with my (overly) digital background and predilections, but I wonder how valid/valuable this point is, whether the socio-material/post-digital folk have written about this, and what, if any, value it might generate for pondering (post-)digital education?

This resonates because my experience in L&T in higher education suggests two shortcomings of most individual and organisational practices of “digital” education (aka online learning etc.):

  1. Few have actually grokked digital technologies, and;
  2. Even fewer recognise, let alone respond to, the importance of “the continuing and evolving entanglement” of the social, symbolic, and material of sociotechnical systems that Benbya et al (2020) identify.

Returning to symbol-based computation, Benbya et al (2020) quote Ada Lovelace

Symbol-based computation provides a generalizable and applicable mechanism to unite the operations of matter and the abstract mental processes (Lovelace 1842).

They explain that symbol-based computation – i.e. the capacity to “provide a standard form of symbols to encode, input, process, and output a wide variety of tasks” – is at the heart of digital technologies.

Which seems to raise questions like

  1. What are the variety of L&T tasks that digital technologies support?
  2. What are the symbols that those digital technologies encode, input, process and output?
  3. How do those symbols and tasks evolve over time and contribute to the “continuing and evolving entanglement” of the L&T sociotechnical system?

Symbol systems in L&T – focus on management

It’s not hard to find literature talking about the traditional, one-ring-to-rule-them-all Learning Management System as being focused largely on “management” i.e. administration. Indeed, the one universal set of tasks supported by digital technology in higher education appears to be focused on student enrolment, grade management, and timetabling. Perhaps because courses, programs, grades, and timetables are the only symbols that are consistent across the institution.

When you enter the nitty-gritty of learning and teaching in specific disciplines you leave consistency behind and enter a diverse world of competing traditions, pedagogies, and ways of seeing the world. A world where perhaps the most commonly accepted symbols are lectures, tutorials, assignments, exams, and grades. Again somewhat removed from the actual practice of learning and teaching.


To deal with this diversity institutions are moving to tech ecosystems, aka Next-Generation Digital Learning Environments (NGDLE). The NGDLE rationale is that no one digital technology (e.g. the LMS) can provide it all. You’ll need an ecosystem that will “allow individuals and institutions the opportunity to construct learning environments tailored to their requirements and goals” (Brown et al., 2015, p. 1).

Recent personal experience suggests, however, that what currently passes for such an ecosystem is a collection of disparate tools, where each tool has its own set of symbols to represent what it does. Symbols that typically aren’t those assumed by other tools in the ecosystem, or commonly in use by the individuals and organisations using the tools. The main current solution to this symbolic Tower of Babel is the LTI standard, which defines a standard way for these disparate tools to share information. Information that is pretty much the same standard symbols identified above, i.e. student identity, perhaps membership, and marks/grades.
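To make that limited symbol set concrete, here is a sketch of the information a basic LTI 1.1 launch typically carries from an LMS to an external tool. The parameter names follow the LTI 1.1 specification; the values are hypothetical.

```javascript
// A sketch of the "standard symbols" a basic LTI 1.1 launch shares between
// an LMS and an external tool. Parameter names are from the LTI 1.1 spec;
// all values here are hypothetical examples.
const ltiLaunch = {
  lti_message_type: "basic-lti-launch-request",
  lti_version: "LTI-1p0",
  resource_link_id: "rl-1234",          // which placement in the course
  context_id: "course-5678",            // course/site identity
  user_id: "u-9012",                    // opaque student identity
  roles: "Learner",                     // membership/role
  lis_result_sourcedid: "grade-slot-1", // where a mark can be written back
  lis_outcome_service_url: "https://lms.example.edu/outcomes"
};
```

Note what is absent: nothing in this symbol set describes the actual learning design, the tasks, dates, links, or content structure of the course.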

Consequently, the act of constructing a learning environment tailored to the requirements of an individual or a course is achieved by somehow understanding and cobbling together these disparate symbol systems and the technologies that embody them. Not surprisingly, a pretty difficult task.

Constructing learning environments

At the other end, there are projects like ABC Learning Design that provide symbols, and increasingly digital technologies for manipulating those symbols, for design for learning that could be integrated into sociotechnical systems. For example, work at University of Sydney or ways of using digital technology to harness these symbols to marry curriculum design with project management. Which appears to finally provide digital technology that supports symbol computation directly related to learning and teaching and that can be used across a variety of tasks and contexts.

But I do wonder how to bridge the final gap. While this approach promises a way to bridge curriculum design and the project management of implementing that design, it doesn’t yet actively help with the implementation itself. If and how might you bridge the standard symbols used by ABC Learning Design and the disparate collection of different symbol systems embedded in the tech ecosystem provided to implement it?

Learning Design tools like LAMS used something like the “one-ring-to-rule-them-all”/LMS approach and then engaged with something like the LTI approach. So either there was a single system that could define its own symbol system and ignore the rest of the world, or it could communicate with the rest of the world via the common universal symbols – student identity, membership, marks/grades etc. – and add one more disparate system to understand and try to integrate when constructing a learning environment.

Is there a different way?

What about a sociotechnical system that focused on actively helping with the task of cobbling together disparate symbol systems embedded in a tech ecosystem into learning environments? A method that actively engaged with developing a “continuing and evolving entanglement” of the social, symbolic, and material? A sociotechnical system that actively enabled relevant symbol-based computation?

What would that look like?


Benbya, H., Nan, N., Tanriverdi, H., & Yoo, Y. (2020). Complexity and Information Systems Research in the Emerging Digital World. MIS Quarterly, 44(1), 1–17.

Brown, M., Dehoney, J., & Millichap, N. (2015). The Next Generation Digital Learning Environment: A Report on Research (A Report on Research, p. 11). EDUCAUSE.

Mountain lake

Reflecting on the spread of the Card Interface for Blackboard Learn

In late 2018 I started work at an institution using Blackboard Learn. My first project, helping “put online” a group of 7 courses, highlighted just how ugly Blackboard sites could be and how hard it was to do anything about it. By January 2019 I shared the solution I’d developed – the Card Interface. Below is a before/after image illustrating how the Card Interface ‘tweaks’ a standard Blackboard Learn content area into something more visual and contemporary. To do this you add some provided JavaScript to the page and then add some card metadata to the other items.
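As an illustration of the “add some card metadata” step, the sketch below parses simple “key: value” metadata out of a content item’s description, the kind of work the Card Interface’s JavaScript does before rendering cards. The metadata syntax shown is hypothetical, not the Card Interface’s actual format.

```javascript
// Hypothetical sketch of reading card metadata from a Blackboard content
// item's description text. The "key: value" syntax here is illustrative,
// not the Card Interface's actual metadata format.
function parseCardMeta(description) {
  const card = {};
  for (const line of description.split("\n")) {
    // recognise a small set of illustrative keys
    const match = line.match(/^\s*(date|image|label)\s*:\s*(.+?)\s*$/i);
    if (match) {
      card[match[1].toLowerCase()] = match[2];
    }
  }
  return card;
}

// e.g. metadata an author might add to an item's description
const card = parseCardMeta("date: Week 3\nimage: https://example.edu/topic3.png");
```

The renderer would then use `card.date` and `card.image` when building the visual card for that item.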

Since 2019, the work has grown in three ways:

  1. The addition of the Content Interface as a way to design and maintain online content and refinement of both the Card and Content Interfaces.
  2. Conceptually through the development of some design principles for this type of artefact (dubbed Contextually Appropriate Scaffolding Assemblages – CASA).
  3. Uptake of the Card Interface (and to a lesser extent the Content Interface) within my institution and beyond.

The spread – Card Interface Usage – Jan-March 2021

The following graph illustrates the number of unique Blackboard sites that have requested the Card Interface javascript file in the first few months of 2021. In the same time frame, the Content Interface has been used by a bit over 70 Griffith University sites.

The heaviest use is within the institution where this all started. Usage this year is up from the original 7 courses at the same time in 2019. What’s surprising about this spread is that this work is not an officially approved technology. It’s just a kludge developed by some guy who works for one of the L&T areas in the institution. Uptake appears to have largely happened through word of mouth.

Adoption beyond the original institution – especially in Ireland – was sparked by this chance encounter on Twitter (for the life of me I can’t figure out how to embed a good visual of this tweet; it used to be easy). Right person, right time. More on that below.


So why has it played out this way?

What follows are my current reflections bundled up with the CASA design principles.

Would be interesting (to me at least) to actually ask and find out.

1. A CASA should address a specific contextual need within a specific activity

The Card Interface addresses an unfulfilled need. The default Blackboard Learn interface is ugly and people want it to look better. And there isn’t much help coming from elsewhere. The Irish adoption of the Card Interface suggests that this isn’t a problem restricted to my institution.

The Content Interface isn’t as widely used. I wonder if part of that is because the activity it helps with (design and maintain online content) is diversely interpreted. e.g. People differ on what they think is acceptable/good online content, if/how it should be developed, and thinking about it beyond just getting some “stuff” online for Monday. Meaning a lot more effort is required to see the Content Interface as a solution to a need they have.

2. CASA should be built using and result in generative technologies

First, to give Blackboard Learn its due. It is a generative platform. It allows just about anyone to include Javascript. This generative capacity is the enabler for the Card and Content Interfaces and numerous other examples. Sadly, Blackboard have decided generativity is not important for Blackboard Ultra.

Early versions of the Card Interface didn’t do much. But over the years it’s evolved and added features, responding to evolving local needs. Perhaps making it more useful?

I think a key point is that the Card Interface is generative for the designer. It provides some scope for the designer to change how it works. The most obvious example being the easy inclusion of images.

It would be interesting to explore more if and how people have used the Card Interface in different and unexpected ways. Or have they stuck to the minimum?

The Content Interface can be generative, but requires expert knowledge and isn’t quite as easy. What choice is available is not that attractive. I suspect that if it were more meaningfully and effectively generative, that would positively impact adoption.

3. CASA development should be strategically aligned and supported

Neither of these tools are institutionally aligned. They have become fairly widely adopted within the team of educational designers I work with and more of a part of our strategic processes. But not core or general. There’s been some spread into other groups but not at the institutional level. There is talk that the Card Interface has had some level of approval by one of the more central groups. It would be interesting to analyse further.

But these tools remain accepted but not formally recognised.

4. CASA should package appropriate design knowledge to enable (re-)use by teachers and students.

To paraphrase Stephen Downes, this is where a CASA does things right, thereby “allowing designers to focus on the big things”. Just the ability to implement a card interface is a good first start, but I also wonder how much some of the more contextual design knowledge built into the Card and Content Interfaces influences use? e.g. the university date feature of both.

It would be good to test this hypothesis. Also to find out what impact this has on the designer/teacher and the students.

5. CASA should actively support a forward-oriented approach to design for learning

It appears that the university date feature of the cards is used a fair bit. It’s the main “forward-oriented” design feature. But there’s perhaps not much more of this focus in the Card Interface.

The Content Interface is conceived of as a broader assemblage of technologies to design and maintain online content. It can make use of O365 to enable more collaborative discussion amongst the teaching team and enable version control. But I’m not sure many teachers currently think about a lot more than what they are putting up this study period, or this week.

6. CASA are conceptualised and treated as contextual assemblages

i.e. it’s not just the LMS or any other technology. It’s more about how easily and effectively each teacher is able to integrate these tools into their practices, tools and context.

The Card Interface is a simpler and more generic tool. It’s easier to integrate and achieve a positive outcome. Hence the greater adoption within the institution and beyond.

The Content Interface is itself a more complex collection of technology and also attempting to integrate into a more complex set of practices, tools and context.

It would be very interesting to see if, how, and what assemblages people have constructed around each of these tools.

Green shoot growing out of a power pole

Do the little things matter in design for learning?

In learning what matters most is what the learner does. As people looking to help people learn we can’t make them learn. The best we can do is to create learning situations – see Goodyear, Carvalho & Yeoman (2021) for why I’m using situation and not environment. We design the task, the learning space and social organisation (alone, pairs, groups etc.) that make up the situation in which they will engage in some activity. Hopefully activity that will lead to the learning outcomes we had in mind when designing the situation.

But maybe not. We can’t be certain. All we can do is create a learning situation that is more likely to encourage them to engage in “good” activity.

How much do the little things matter in the design of these learning situations?

We spend an awful lot of time on the big picture things. A lot of time is spent on: creating, mapping and aligning learning outcomes; ensuring we’ve chosen the right task informed by the right learning theory and research to achieve those outcomes; and, a lot of time selecting, building, and supporting the physical and digital learning spaces in which the activity will take place. But what about the little things?

Are the little things important? I’m not sure, but in my experience the little things are typically ignored. Is that a common experience? Why are they ignored? What impact does this have on learners and learning?

Some early thinking about these questions follows. Not to mention the bigger question: if we can’t get the little things right, what does that say about our ability to get the big things right?

What are some “little things”?

To paraphrase Potter Stewart I won’t attempt to define “little things” but rather show what I think I mean with a couple of examples from recent experience.

Specific dates

For better or worse, dates are important in formal education. Submit the assignment by date X. We’ll study topic Y in week X. Helping students plan how and when to complete the requested task is a good thing. Making explicit the timeframes would seem a good thing. i.e. something more like situation A in the following image than situation B.

However, as pointed out in this 2015 comment the more common practice has been situation B. Since the print-based distance education days of the 80s and 90s the tendency has been to make learning materials (physical or digital) “stand-alone”. i.e. independent of a particular study period so that they can be reused again and again. Generally because it’s hard to ensure that the dates continue to be correct in offering after offering.

Note: These images are intended as examples of “little things”, not exemplars of anything else
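One way to get the benefit of specific dates without the maintenance cost is to store teaching weeks relative to the study period and compute the concrete dates from a single per-offering start date. The Card Interface’s university date feature works in this spirit; the code below is an illustrative sketch under that assumption, not its actual implementation.

```javascript
// Sketch: compute a concrete date from a study-period start date, a
// teaching week number, and an optional day offset within that week.
// Rolling a course over then means changing one value (the start date),
// not hunting for hard-coded dates through every page.
function weekDate(termStart, week, dayOffset = 0) {
  // Work entirely in UTC to avoid daylight-saving surprises
  const d = new Date(termStart + "T00:00:00Z");
  d.setUTCDate(d.getUTCDate() + (week - 1) * 7 + dayOffset);
  return d.toISOString().slice(0, 10);
}

// Offering-specific configuration lives in one place
const termStart = "2021-03-01"; // a Monday; hypothetical study period

// "We'll study topic Y in week 3" becomes a concrete date
const topicYDate = weekDate(termStart, 3);       // "2021-03-15"
const assignmentDue = weekDate(termStart, 5, 4); // Friday of week 5
```

Materials stay “stand-alone” in the sense that matters (reusable next offering) while students still see real dates, something more like situation A than situation B.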

Using web links

In a web-based learning situation, it’s not uncommon to require students to use some additional web-based resources to complete a task. For example, read some document, contribute to a discussion board etc. If it is an online resource then it appears sensible to – use a core feature of the web and – provide a link to that resource. Making it easier – requiring less cognitive load – for the student to complete the task.

But, as someone who gets to see a little of different web-based learning situations, I continue to be shocked that the majority are more like situation B than situation A in the following image.

Is it common to ignore the “little things”?

As mentioned in the above, my observations over the last 10 years suggest that these two examples of “little things” are largely ignored. I wonder if this is a common experience?

There can be differences. For example, it can be difficult to use links to resources within an LMS. It’s not unusual for different offerings of the same course to use different sites within the LMS. This means that a link to a discussion forum in one course offering is not the same as the link to the same discussion forum in the next course offering. I was shocked that my current institution’s LMS’s site rollover process did not automatically update such links, as was standard practice at a previous institution. The previous institution also had a course-based link checker that would look for broken links. My current institution/LMS doesn’t.
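The rollover link-updating just described is conceptually simple: internal LMS links embed a course site id, so copied content keeps pointing at the old site unless the id is rewritten. A minimal sketch, assuming an illustrative URL shape rather than Blackboard’s actual one:

```javascript
// Sketch of the rollover step: rewrite the course site id embedded in
// internal links when content is copied to a new offering's site.
// The "/courses/<id>/" URL shape is illustrative, not Blackboard's actual
// format, and course ids are assumed to contain no regex-special characters.
function rewriteCourseLinks(html, oldCourseId, newCourseId) {
  // Only touch the id when it appears inside a course-scoped URL
  const pattern = new RegExp(`(/courses/)${oldCourseId}(/)`, "g");
  return html.replace(pattern, `$1${newCourseId}$2`);
}

const page = '<a href="/courses/EDU101_2020/forum/3">Week 3 discussion</a>';
const rolled = rewriteCourseLinks(page, "EDU101_2020", "EDU101_2021");
// rolled now links to /courses/EDU101_2021/forum/3
```

That a rollover process could skip even this is part of what makes the “little things” hard in practice.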

“Little things” appear to matter

A course I helped re-design has just completed. The results from the student evaluation of the course are in with a response rate of ~30% (n=21). All very positive.

There was a question about the course being well organised and easy to use. 15 strongly agreed and 6 agreed. What struck me was that the comments about the organisation of the course included mentions of the little things.

Two of the responses mentioned dates, both positively. Explaining that the dates were “very helpful”. That this was the first course to have included them and that it was “a big stress having to look it up often”.

Three of the responses mentioned links, all positively. Explaining that the numerous links to discussion board topics were “helpful”, “great” and “easy”.

These “little things” aren’t likely to radically transform the learning outcomes, but they appear to have improved the learner experience. Removing a “stress” has to help.

Why are the “little things” ignored?

My primary hypothesis is that while these are “little things”, they aren’t “easy things”. Our tools and processes don’t make it easy to do the “little things”. The following describes three possible reasons for this inability. Are there more?

Reusability paradox

First, is the Reusability Paradox, as mentioned in the dates example above. To make study materials reusable you have to remove context, for example dates specific to a particular study period. The emphasis on reuse is a plus, but comes at the cost of reducing the pedagogic value. With the rise of micro-credentials and the flexible reuse of modular learning materials this is only going to be more of a factor moving forward.

The reusability paradox extends to the tools we use to produce and host our learning situations (e.g. various forms of LMS and the latest shiny things like Microsoft Teams). Any tool that’s designed to be sold/used by the broadest possible market tends to be designed to be reusable. It doesn’t know a lot about the specifics of one individual context. For example, it doesn’t know about the dates for the institution’s study periods, let alone the dates important for an individual learning situation.

Hierarchical versus Distributed

Second, is the difference between a hierarchical (tree-like) and a distributed conception of the world. Most contemporary professional practice (e.g. software development, design for learning, and managing organisations) is hierarchical: a tree of black-box components responsible for specific purposes, with the complexity and detail of each activity hidden from view. The functionality of an LMS is generally organised this way. There’s a lot of value in this approach, but it makes it very difficult to do something across each of the black boxes. To be distributed. For example, to make sure that all the dates and links mentioned in the discussion forums, the quizzes, the content areas, the lecture slides etc. are correct.

This is also visible at an organisational level. It appears that offering-specific dates for assignments and the like are typically entered into some sort of administrative system that produces a formal profile/synopsis for a course/unit. Learning typically takes place elsewhere (e.g. the LMS). Extra work has to be performed to transfer information between the two systems, and such work is typically only done for “important” tasks, e.g. transferring grades out of the LMS into the student administration system.

Limited focus on forward-oriented design

Third, is the limited attention paid to forward-oriented design (Goodyear & Dimitriadis, 2013). Common practice is that design focuses on configuration, i.e. making sure that the learning situation is ready for students to engage with. Goodyear & Dimitriadis (2013) argue that design for learning should be an on-going and forward-looking process that actively considers design for configuration, orchestration, reflection and re-design. For example, rather than just providing ways for links to be added during configuration of a learning situation, think about what link-related functionality will be required during (orchestration) and after (reflection and re-design) learntime. For example, provide indications of if and how links are being used, or a link checker.
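As a small sketch of that forward-oriented support: the first job of a link checker is extracting the links from a page of course content. A full checker would then fetch each URL and report failures; that part is omitted here, and the extraction is a simplified illustration rather than a robust HTML parser.

```javascript
// Sketch of the first step of a course link checker: pull the href targets
// out of a page of course content. A real checker would then request each
// URL (and each internal LMS link) and report the broken ones back to the
// teacher during and after learntime.
function extractLinks(html) {
  const links = [];
  const pattern = /href\s*=\s*"([^"]+)"/gi; // simplified; not a full HTML parser
  let match;
  while ((match = pattern.exec(html)) !== null) {
    links.push(match[1]);
  }
  return links;
}

const content =
  '<a href="https://example.edu/reading.pdf">Reading</a> ' +
  '<a href="/forum/12">Discuss</a>';
const found = extractLinks(content); // ["https://example.edu/reading.pdf", "/forum/12"]
```

Run periodically over a course site, even something this simple shifts link maintenance from configuration-time guesswork to an ongoing, orchestration-time practice.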


Goodyear, P., Carvalho, L., & Yeoman, P. (2021). Activity-Centred Analysis and Design (ACAD): Core purposes, distinctive qualities and current developments. Educational Technology Research and Development.

Goodyear, P., & Dimitriadis, Y. (2013). In medias res: Reframing design for learning. Research in Learning Technology, 21, 1–13.
