Assembling the heterogeneous elements for (digital) learning

Month: March 2021

On formal qualifications and improving learning and teaching

The following is sparked by Twitter conversations arising from a tweet from @neilmosley5 quoting from this article by Tony Bates. In particular, I've been pondering a tweet from @gamerlearner suggesting that a “consistent requirement for educators in HE to have some kind of formal teaching qual” would not only motivate academics “to take time out to learn how to teach better” but also lead to teaching being valued more generally.

It is somewhat troubling and inconsistent that there is no requirement for university academics to have formal teaching qualifications. But I don’t see how such a requirement by itself will fix issues with the quality of learning and teaching in universities, especially in the context of Australian higher education, given the growing complexity of learning and teaching arising from on-going change (e.g. micro-credentials, WIL, multi-modal and flexible delivery, COVID…).

Instead, requiring formal qualifications appears to be a simple solution to a complex problem. It is a solution that seems to fall into the second of three levels of improving teaching – “What management does”. It is a solution that allows someone to “lead” the implementation of a project (e.g. the institutional implementation of HEA fellowships), pass some policies, deliver against some KPIs, and provide demonstrable evidence that the institution takes learning and teaching seriously.

While this is going on, the reality of teaching reveals a different story about how seriously learning and teaching are taken. Some examples follow, but there are many more (e.g. I don’t even mention the great value placed on research). Workload formulas specify a maximum of 30 minutes per student for all outside-class interactions. There is significant rhetoric around moving away from lectures, but workload formulas are built around time-tabling lecture theatres. A significant proportion of teaching is done by fantastic but underpaid sessional staff, often appointed at the last minute. The systems and technologies provided to support learning and teaching are disjointed and require significant extra work to be even somewhat useful, mainly because they can’t provide the simplest of functionality or help do the little things.

Given this mismatch, is it any surprise that there are concerns and signs that any requirement for formal teaching qualifications is likely to lead to task corruption? At the individual level, as @AmandasAudit suggests, “how many will short cut and just phone it in?”. At the organisational level, @scotxc argues that it becomes “very easy to become a box-ticking exercises for uni’s to say ‘They’re qualified!!'”.

My argument is that actually improving learning and teaching requires moving to level 3 of improving teaching – “What the teacher does”. What Biggs (2001) suggests is focusing on teaching, not on individual teachers: ensuring that institutional systems, processes, policies etc. all encourage and enable effective teaching practice and move toward a distributed view of learning and knowledge.

This is not a simple task. It is complex. It is a wicked problem. There is no simple solution. There is no silver bullet. Formal teaching qualifications might be part of a broader solution, but I can’t see them being the solution. I’m not convinced they are even likely to be the most beneficial contributor.

References

Biggs, J. (2001). The Reflective Institution: Assuring and Enhancing the Quality of Teaching and Learning. Higher Education, 41(3), 221–238.

Japanese store front - dog and boy

What are the symbols in digital education/design for learning?

Benbya et al (2020, p. 3) argue that digital technologies do make a difference, including this point (among others):

Digital technologies not only give rise to complex sociotechnical systems; they also distinguish sociotechnical systems from other complex physical or social systems. While complexity in physical or social system is predominantly driven by either material operations or human agency, complexity in sociotechnical systems arises from the continuing and evolving entanglement of the social (human agency), the symbolic (symbol-based computation in digital technologies), and the material (physical artifacts that house or interact with computing machines).

An argument that resonates with my (overly) digital background and predilections. But I wonder how valid/valuable this point is, whether the socio-material/post-digital folk have written about it, and what value, if any, it might generate for pondering (post-)digital education?

This resonates because my experience in L&T in higher education suggests two shortcomings of most individual and organisational practices of “digital” education (aka online learning etc.):

  1. Few have actually grokked digital technologies; and,
  2. Even fewer recognise, let alone respond to, the importance of “the continuing and evolving entanglement” of the social, symbolic, and material of sociotechnical systems that Benbya et al (2020) identify.

Returning to symbol-based computation, Benbya et al (2020) quote Ada Lovelace

Symbol-based computation provides a generalizable and applicable mechanism to unite the operations of matter and the abstract mental processes (Lovelace, 1842).

They explain that symbol-based computation – i.e. the ability to “provide a standard form of symbols to encode, input, process, and output a wide variety of tasks” – is at the heart of digital technologies.

Which seems to raise questions like:

  1. What are the variety of L&T tasks that digital technologies support?
  2. What are the symbols that those digital technologies encode, input, process and output?
  3. How do those symbols and tasks evolve over time and contribute to the “continuing and evolving entanglement” of the L&T sociotechnical system?

Symbol systems in L&T – focus on management

It’s not hard to find literature talking about the traditional, one-ring-to-rule-them-all Learning Management System as being focused largely on “management” i.e. administration. Indeed, the one universal set of tasks supported by digital technology in higher education appears to be focused on student enrolment, grade management, and timetabling. Perhaps because courses, programs, grades, and timetables are the only symbols that are consistent across the institution.

When you enter the nitty-gritty of learning and teaching in specific disciplines you leave consistency behind and enter a diverse world of competing traditions, pedagogies, and ways of seeing the world. A world where perhaps the most commonly accepted symbols are lectures, tutorials, assignments, exams, and grades. Again somewhat removed from the actual practice of learning and teaching.

The NGDLE

To deal with this diversity institutions are moving to tech ecosystems, aka Next-Generation Digital Learning Environments (NGDLE). The NGDLE rationale is that no one digital technology (e.g. the LMS) can provide it all. You’ll need an ecosystem that will “allow individuals and institutions the opportunity to construct learning environments tailored to their requirements and goals” (Brown et al., 2015, p. 1).

Recent personal experience suggests, however, that what currently passes for such an ecosystem is a collection of disparate tools, where each tool has its own set of symbols to represent what it does. Symbols that typically aren’t those assumed by other tools in the ecosystem, or commonly in use by the individuals and organisations using the tools. The main current solution to this symbolic Tower of Babel is the LTI standard, which defines a standard way for these disparate tools to share information. Information that is pretty much the same standard symbols identified above, i.e. student identity, perhaps membership, and marks/grades.
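To make those “standard symbols” concrete, here is a rough JavaScript sketch of the kind of information an LTI integration shuttles between an LMS and an external tool. The field names are my own simplifications for illustration, not actual LTI claim names or a complete payload.

```javascript
// Rough sketch of the "standard symbols" an LTI integration shuttles
// between an LMS and an external tool. Field names are illustrative
// simplifications, not actual LTI claim names or a complete payload.
const launch = {
  userId: "u-12345",          // student identity
  roles: ["Learner"],         // membership in the course
  courseId: "EDU101-2021-1",  // the course/site context
};

// Grades are about the only computed symbol that flows back the other way.
function buildGradePassback(userId, score, maxScore) {
  return { userId, scoreGiven: score, scoreMaximum: maxScore };
}
```

Nothing here knows anything about lectures, tutorials, weekly topics or any other symbol of actual learning and teaching – which is rather the point.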

Consequently, the act of constructing a learning environment tailored to the requirements of an individual or a course is achieved by somehow understanding and cobbling together these disparate symbol systems and the technologies that embody them. Not surprisingly, a pretty difficult task.

Constructing learning environments

At the other end, there are projects like ABC Learning Design that provide symbols – and increasingly digital technologies for manipulating those symbols – for design for learning that could be integrated into sociotechnical systems. For example, work at the University of Sydney, or ways of using digital technology to harness these symbols to marry curriculum design with project management. This finally appears to provide digital technology supporting symbol computation that is directly related to learning and teaching and can be used across a variety of tasks and contexts.

But I do wonder how to bridge the final gap. While this approach promises a way to bridge curriculum design and project managing the implementation of that design, it doesn’t yet actively help with the implementation itself. How might you bridge the standard symbols used by ABC Learning Design and the disparate collection of symbol systems embedded in the tech ecosystem provided to implement it?

Learning Design tools like LAMS first used something like the “one-ring-to-rule-them-all”/LMS approach and then engaged with something like the LTI approach. So either there was a single system that could define its own symbol system and ignore the rest of the world, or it could communicate with the rest of the world via the common universal symbols – student identity, membership, marks/grades etc. – and add one more disparate system to understand and try to integrate when constructing a learning environment.

Is there a different way?

What about a sociotechnical system that focused on actively helping with the task of cobbling together disparate symbol systems embedded in a tech ecosystem into learning environments? A method that actively engaged with developing a “continuing and evolving entanglement” of the social, symbolic, and material? A sociotechnical system that actively enabled relevant symbol-based computation?

What would that look like?

References

Benbya, H., Ning Nan, Tanriverdi, H., & Youngjin Yoo. (2020). Complexity and Information Systems Research in the Emerging Digital World. MIS Quarterly, 44(1), 1–17. https://doi.org/10.25300/MISQ/2020/13304

Brown, M., Dehoney, J., & Millichap, N. (2015). The Next Generation Digital Learning Environment: A Report on Research (A Report on Research, p. 11). EDUCAUSE.

Mountain lake

Reflecting on the spread of the Card Interface for Blackboard Learn

In late 2018 I started work at an institution using Blackboard Learn. My first project helping “put online” a group of 7 courses highlighted just how ugly Blackboard sites could be and how hard it was to do anything about it. By January 2019 I shared the solution I’d developed – the Card Interface. Below is a before/after image illustrating how the Card Interface ‘tweaks’ a standard Blackboard Learn content area into something more visual and contemporary. To do this you add some provided JavaScript to the page and then add some card metadata to the other items.
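To give a sense of the mechanism (an illustrative sketch only, not the actual Card Interface source), the added JavaScript can scan each content item’s description for simple key: value metadata lines and use what it finds to render the card. The keys and function name below are assumptions for illustration.

```javascript
// Illustrative sketch only -- not the actual Card Interface code.
// Parse simple "key: value" metadata lines (e.g. "image: https://...",
// "date: 2021-03-15") out of a content item's description, returning
// the card metadata plus the remaining description text.
function parseCardMetadata(description) {
  const meta = {};
  const rest = [];
  for (const line of description.split("\n")) {
    const match = line.match(/^\s*(image|date|duration)\s*:\s*(.+)$/i);
    if (match) {
      meta[match[1].toLowerCase()] = match[2].trim();
    } else {
      rest.push(line);
    }
  }
  return { meta, description: rest.join("\n").trim() };
}
```

A renderer would then use `meta.image` etc. to build the card markup in place of the default Blackboard list item.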

Since 2019, the work has grown in three ways:

  1. The addition of the Content Interface as a way to design and maintain online content, and the refinement of both the Card and Content Interfaces.
  2. Conceptually through the development of some design principles for this type of artefact (dubbed Contextually Appropriate Scaffolding Assemblages – CASA).
  3. Uptake of the Card Interface (and to a lesser extent the Content Interface) within my institution and beyond.

The spread – Card Interface Usage – Jan-March 2021

The following graph illustrates the number of unique Blackboard sites that have requested the Card Interface JavaScript file in the first few months of 2021. In the same time frame, the Content Interface has been used by a bit over 70 Griffith University sites.

The heaviest use is within the institution where this all started. Usage this year is up from the original 7 courses at the same time in 2019. What’s surprising about this spread is that this work is not an officially approved technology. It’s just a kludge developed by some guy who works for one of the L&T areas in the institution. Uptake appears to have largely happened through word of mouth.

Adoption beyond the original institution – especially in Ireland – was sparked by this chance encounter on Twitter (for the life of me I can’t figure out how to embed a good visual of this tweet; it used to be easy). Right person, right time. More on that below.

Reflections

So why has it played out this way?

What follows are my current reflections bundled up with the CASA design principles.

Would be interesting (to me at least) to actually ask and find out.

1. A CASA should address a specific contextual need within a specific activity

The Card Interface addresses an unfulfilled need. The default Blackboard Learn interface is ugly and people want it to look better. And there isn’t much help coming from elsewhere. The Irish adoption of the Card Interface suggests that this isn’t a problem restricted to my institution.

The Content Interface isn’t as widely used. I wonder if part of that is because the activity it helps with (designing and maintaining online content) is diversely interpreted. People differ on what they think is acceptable/good online content, on if/how it should be developed, and on thinking about it beyond just getting some “stuff” online for Monday. Meaning a lot more effort is required to see the Content Interface as a solution to a need they have.

2. CASA should be built using and result in generative technologies

First, to give Blackboard Learn its due: it is a generative platform. It allows just about anyone to include JavaScript. This generative capacity is the enabler for the Card and Content Interfaces and numerous other examples. Sadly, Blackboard have decided generativity is not important for Blackboard Ultra.

Early versions of the Card Interface didn’t do much. But over the years it’s evolved and added features, responding to evolving local needs. Perhaps making it more useful?

I think a key point is that the Card Interface is generative for the designer. It provides some scope for the designer to change how it works. The most obvious example being the easy inclusion of images.

It would be interesting to explore more if and how people have used the Card Interface in different and unexpected ways. Or, have they stuck to the minimum.

The Content Interface can be generative, but it requires expert knowledge and isn’t quite as easy. What choice is available is not that attractive. I suspect that if it were more meaningfully and effectively generative, that would positively impact adoption.

3. CASA development should be strategically aligned and supported

Neither of these tools is institutionally aligned. They have become fairly widely adopted within the team of educational designers I work with and more of a part of our strategic processes, but not core or general. There’s been some spread into other groups, but not at the institutional level. There is talk that the Card Interface has had some level of approval from one of the more central groups. It would be interesting to analyse further.

For now, these tools remain accepted but not formally recognised.

4. CASA should package appropriate design knowledge to enable (re-)use by teachers and students

To paraphrase Stephen Downes, this is where a CASA does things right, thereby “allowing designers to focus on the big things”. Just the ability to implement a card interface is a good start, but I also wonder how much some of the more contextual design knowledge built into the Card and Content Interfaces – e.g. the university date feature of both – influences use.

It would be good to test this hypothesis. Also to find out what impact this has on the designer/teacher and the students.

5. CASA should actively support a forward-oriented approach to design for learning

It appears that the university date feature of the cards is used a fair bit. It’s the main “forward-oriented” design feature. But there’s perhaps not much more of this focus in the Card Interface.

The Content Interface is conceived of as a broader assemblage of technologies to design and maintain online content. It can make use of O365 to enable more collaborative discussion amongst the teaching team and to enable version control. But I’m not sure many teachers currently think much beyond what they are putting up this study period, or this week.

6. CASA are conceptualised and treated as contextual assemblages

i.e. it’s not just the LMS or any other technology. It’s more about how easily and effectively each teacher is able to integrate these tools into their practices, tools and context.

The Card Interface is a simpler and more generic tool. It’s easier to integrate and achieve a positive outcome with, hence the greater adoption within the institution and beyond.

The Content Interface is itself a more complex collection of technology and also attempting to integrate into a more complex set of practices, tools and context.

It would be very interesting to see if, how, and what assemblages people have constructed around each of these tools.

Green shoot growing out of a power pole

Do the little things matter in design for learning?

In learning what matters most is what the learner does. As people looking to help people learn we can’t make them learn. The best we can do is to create learning situations – see Goodyear, Carvalho & Yeoman (2021) for why I’m using situation and not environment. We design the task, the learning space and social organisation (alone, pairs, groups etc.) that make up the situation in which they will engage in some activity. Hopefully activity that will lead to the learning outcomes we had in mind when designing the situation.

But maybe not. We can’t be certain. All we can do is create a learning situation that is more likely to encourage them to engage in “good” activity.

How much do the little things matter in the design of these learning situations?

We spend an awful lot of time on the big picture things. A lot of time is spent on: creating, mapping and aligning learning outcomes; ensuring we’ve chosen the right task informed by the right learning theory and research to achieve those outcomes; and, a lot of time selecting, building, and supporting the physical and digital learning spaces in which the activity will take place. But what about the little things?

Are the little things important? I’m not sure, but in my experience the little things are typically ignored. Is that experience common? Why are they ignored? What impact does this have on learners and learning?

Some early thinking about these questions follows. Not to mention the bigger question: if we can’t get the little things right, what does that say about our ability to get the big things right?

What are some “little things”?

To paraphrase Potter Stewart I won’t attempt to define “little things” but rather show what I think I mean with a couple of examples from recent experience.

Specific dates

For better or worse, dates are important in formal education. Submit the assignment by date X. We’ll study topic Y in week X. Helping students plan how and when to complete the requested task is a good thing. Making the timeframes explicit would seem a good thing, i.e. something more like situation A in the following image than situation B.

However, as pointed out in this 2015 comment, the more common practice has been situation B. Since the print-based distance education days of the 80s and 90s the tendency has been to make learning materials (physical or digital) “stand-alone”, i.e. independent of a particular study period so that they can be reused again and again. Generally because it’s hard to ensure that the dates remain correct offering after offering.


Note: These images are intended as examples of “little things”, not exemplars of anything else.
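The maintenance problem behind “stand-alone” materials is largely mechanical: if materials store relative dates (“Week 3”) and only the study-period start changes each offering, the concrete dates can be computed rather than hand-edited. A minimal sketch of the idea (the function name and details are my own assumptions, not any institution’s actual date tool):

```javascript
// Sketch of the idea behind a "university date" feature: materials store
// the relative week; the concrete date is computed per offering.
// (Illustrative only -- names and details are assumptions.)
function weekStartDate(periodStartIso, week) {
  // periodStartIso: ISO date of the Monday of week 1, e.g. "2021-03-01"
  const d = new Date(periodStartIso + "T00:00:00Z");
  d.setUTCDate(d.getUTCDate() + (week - 1) * 7);
  return d.toISOString().slice(0, 10);
}

// Re-running the course only requires changing periodStartIso;
// every "Week N" reference in the materials updates itself.
```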

Using web links

In a web-based learning situation, it’s not uncommon to require students to use some additional web-based resources to complete a task. For example, read some document, contribute to a discussion board etc. If it is an online resource then it seems sensible to use a core feature of the web and provide a link to that resource, making it easier – requiring less cognitive load – for the student to complete the task.

But, as someone who gets to see a little of different web-based learning situations, I continue to be shocked that the majority are more like situation B than situation A in the following image.

Is it common to ignore the “little things”?

As mentioned above, my observations over the last 10 years suggest that these two examples of “little things” are largely ignored. I wonder if this is a common experience?

There can be differences. For example, it can be difficult to use links to resources within an LMS. It’s not unusual for different offerings of the same course to use different sites within the LMS. This means that a link to a discussion forum in one course offering is not the same as the link to the same discussion forum in the next course offering. I was shocked that my current institution’s LMS site rollover process did not automatically update such links, as was standard practice at a previous institution. The previous institution also had a course-based link checker that would look for broken links. My current institution/LMS doesn’t.
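Neither capability is deep magic. Rewriting internal links at rollover is mostly string surgery over the site’s HTML, and a link checker just needs the list of hrefs to test. A naive sketch (regex-based, with assumptions about URL shape; real LMS content and rollover are messier):

```javascript
// Naive sketch of link rollover: rewrite hrefs that reference the old
// course site id so they point at the new offering's site.
// (Illustrative only -- real LMS URLs and rollover are messier.)
function rolloverLinks(html, oldSiteId, newSiteId) {
  return html.replace(/href="([^"]*)"/g, (_full, url) =>
    `href="${url.split(oldSiteId).join(newSiteId)}"`);
}

// Companion to a link checker: list the hrefs so each can be tested.
function extractLinks(html) {
  return [...html.matchAll(/href="([^"]*)"/g)].map((m) => m[1]);
}
```

That institutions ship an LMS without this sort of plumbing is part of why the “little things” stay undone.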

“Little things” appear to matter

A course I helped re-design has just completed. The results from the student evaluation of the course are in, with a response rate of ~30% (n=21). All very positive.

There was a question about the course being well organised and easy to use. 15 strongly agreed and 6 agreed. What struck me was that comments about the organisation of the course included mentions of the little things.

Two of the responses mentioned dates, both positively. Explaining that the dates were “very helpful”, that this was the first course to have included them, and that previously it had been “a big stress having to look it up often”.

Three of the responses mentioned links, all positively. Explaining that the numerous links to discussion board topics were “helpful”, “great” and “easy”.

These “little things” aren’t likely to radically transform the learning outcomes, but they appear to have improved the learner experience. Removing a “stress” has to help.

Why are the “little things” ignored?

My primary hypothesis is that while these are “little things”, they aren’t “easy things”. Our tools and processes don’t make it easy to do the “little things”. The following describes three possible reasons for this. Are there more?

Reusability paradox

First is the Reusability Paradox, as mentioned in the dates example above. To make study materials reusable you have to remove context – for example, dates specific to a particular study period. The emphasis on reuse is a plus, but it comes at the cost of reduced pedagogic value. With the rise of micro-credentials and the flexible reuse of modular learning materials this is only going to become more of a factor.

The reusability paradox extends to the tools we use to produce and host our learning situations (e.g. various forms of LMS and the latest shiny things like Microsoft Teams). Any tool designed to be sold/used by the broadest possible market tends to be designed to be reusable. It doesn’t know a lot about the specifics of one individual context. For example, it doesn’t know about the dates for the institution’s study periods, let alone the dates important for an individual learning situation.

Hierarchical versus Distributed

Second is the difference between a hierarchical (tree-like) and a distributed conception of the world. Most contemporary professional practice (e.g. software development, design for learning, and managing organisations) is hierarchical: a tree of black-box components responsible for specific purposes, with the complexity and detail of each activity hidden from view. The functionality of an LMS is generally organised this way. There’s a lot of value in this approach, but it makes it very difficult to do something across each of the black boxes – to be distributed. For example, to make sure that all the dates and links mentioned in the discussion forums, the quizzes, the content areas, the lecture slides etc. are correct.

This is also visible at an organisational level. It appears that offering-specific dates for assignments and the like are typically entered into some sort of administrative system that produces a formal profile/synopsis for a course/unit. Learning typically takes place elsewhere (e.g. the LMS). Extra work has to be performed to transfer information between the two systems, and such work is typically only done for “important” tasks, e.g. transferring grades out of the LMS into the student administration system.

Limited focus on forward-oriented design

Third is the limited attention paid to forward-oriented design (Goodyear & Dimitriadis, 2013). Common practice is that design focuses on configuration, i.e. making sure that the learning situation is ready for students to engage with. Goodyear & Dimitriadis (2013) argue that design for learning should be an on-going and forward-looking process that actively considers design for configuration, orchestration, reflection and re-design. For example, rather than just providing ways for links to be added during configuration of a learning situation, think about what link-related functionality will be required during (orchestration) and after (reflection and re-design) learntime – for example, indications of if and how links are being used, or a link checker.

References

Goodyear, P., Carvalho, L., & Yeoman, P. (2021). Activity-Centred Analysis and Design (ACAD): Core purposes, distinctive qualities and current developments. Educational Technology Research and Development. https://doi.org/10.1007/s11423-020-09926-7

Goodyear, P., & Dimitriadis, Y. (2013). In medias res: Reframing design for learning. Research in Learning Technology, 21, 1–13. https://doi.org/10.3402/rlt.v21i0.19909

