What follows is a collection of ad hoc ramblings around learning analytics prompted by a combination of Col’s recent post, associated literature arising from ALASI’2018, and the “sustainable retention” project that I’m involved with as part of my new job. It’s mainly sensemaking and questions, intended to help me clarify some thinking and to encourage Col to think a bit more about his design theory for meso-level practitioners implementing learning analytics.

Architecting for learning analytics

In the context of learning analytics, Buckingham Shum and McKay (2018) ask the question, “How can an institution innovate for sustainable impact?”.

Loops

Buckingham Shum and McKay (2018) make this comment: “to create much more timely feedback loops for tracking the effectiveness of a complex system”, which reminds me of this talk about complex systems, a talk that emphasises the importance of loops to complex systems.

Col develops the idea of interaction cycles between people, technology and education objects as part of his explanation for the EASI system’s implementation. Are these a type of loop? A loop that seeks to create more timely feedback loops for tracking/responding to the (non/mis)-use of learning analytics?

Persistent problems of practice

Is this particular loop a way of helping to address what Buckingham Shum and McKay (2018) point to: “It also requires a steady focus on the needs of the user community—what the Design-Based Implementation Research (DBIR) community calls the ‘persistent problems of practice’”?

Organisational architectures

Buckingham Shum and McKay (2018) identify three organisational models that they see being used to deliver learning analytics and analyse each. The three are:

  1. The IT service centre model
  2. The Faculty Academics model
  3. The hybrid Innovation Centre model

Expanded examples of this interesting architecture are provided.

What’s hidden in the shadows?

I haven’t yet read all of the paper, but it appears that what’s missing is what happens in the shadows. Arguably my current institution fits within one of the organisational architectures identified, as I imagine would many contemporary universities (to varying extents). However, the project I’m getting involved with (and which was running before I came on the scene) is beyond/different to the institutional organisational architecture.

Perhaps it’s an example of the need for the third institutional architecture. But what do you do as a meso-level practitioner when you don’t have that architecture to draw upon and still have to produce? Are there ways to help address the absence of an appropriate architecture?

Regardless of institutional hierarchies, perceived mismatches/disagreements arise. The very nature of the hierarchical organisational map (the Buckingham Shum and McKay article includes a number of examples) means that there will be competing perspectives and priorities, suggesting that it may be almost inevitable that shadow systems – systems that avoid the accepted organisational way – will arise.

I would assume that an effective institutional organisational architecture would reduce this likelihood, but will it remove it?

What is happening in the shadows?

The project I’m involved with started 6 months ago. It’s a local response to an institutional strategic issue. It was originally implemented using Excel, CSV files and various kludges around existing data and technology services.
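To make the kludginess concrete, here’s a minimal, purely illustrative sketch of the sort of thing I mean: gluing together CSV exports from existing systems and producing a spreadsheet-friendly output. The file names, column names and threshold below are all hypothetical, not the project’s actual data.

```python
# Hypothetical sketch: merge CSV exports from existing systems to flag
# students for follow-up. All file and column names are made up.
import pandas as pd

# One export from the student records system, one from the LMS
enrolments = pd.read_csv("enrolment_export.csv")    # columns: student_id, course, campus
activity = pd.read_csv("lms_activity_export.csv")   # columns: student_id, last_login, logins

# Join the two exports on the student identifier
merged = enrolments.merge(activity, on="student_id", how="left")

# Crude "needs a nudge" flag: no (or very few) LMS logins recorded
merged["flag"] = merged["logins"].fillna(0) < 3

# Write a spreadsheet-friendly file for whoever is doing the follow-up
merged.to_csv("students_to_contact.csv", index=False)
```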

I’m wondering how much the broader organisation knows about this project and how many other similar projects are happening? Here and elsewhere? Small scale projects implemented with consumer software?

Is scale the problem?

Buckingham Shum and McKay (2018) identify a “chasm between innovation and infrastructure” and talk about three transitions:

  1. From prototype to small-scale pilots
  2. From small-scale pilots to pilots with several hundred students
  3. From pilots with several hundred students to mainstream rollout (thousands of students)

This is something I’ve struggled with; it’s the first question – Does institutional learning analytics have an incomplete focus? – in this paper. Does all learning analytics have to scale to mainstream rollout?

Are there ways – including organisational architectures and information technology infrastructures – that enable the development of learning analytics specific to disciplinary and course level contexts? Should there be?

Might a learning analytics platform/framework – something like what is mentioned by Buckingham Shum and McKay (2018), “a shared, extensible framework that can be used by all tools, preventing the need to re-create this framework for each tool” – be something that enables this?
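As a sketch of the idea (mine, not anything from the paper), such a shared, extensible framework might look like a common base that individual course-level tools plug into, so each tool implements only its own analysis logic. All of the class and method names below are my own invention.

```python
# Illustrative only: a shared framework that individual analytics tools extend.
from abc import ABC, abstractmethod


class AnalyticsTool(ABC):
    """Base class every tool extends; the framework supplies the shared services."""

    def __init__(self, data_service, delivery_service):
        self.data = data_service          # shared, governed access to institutional data
        self.delivery = delivery_service  # shared way of getting results to users

    @abstractmethod
    def run(self, course_id: str) -> dict:
        """Each tool implements only its own analysis logic."""


class LoginPatternTool(AnalyticsTool):
    """Hypothetical course-level tool built on top of the shared services."""

    def run(self, course_id: str) -> dict:
        logins = self.data.get("lms_logins", course_id=course_id)  # e.g. {student_id: count}
        quiet = [student for student, count in logins.items() if count == 0]
        self.delivery.send(course_id, {"students_with_no_logins": quiet})
        return {"quiet_students": len(quiet)}
```

The point of the design is that the data access and delivery plumbing is built once, approved once, and reused by every tool, rather than being re-created for each new course-level idea.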

The notion of scale and its apparent necessity is something I’ve yet to grok to my satisfaction. Some very smart people, like these authors, see scale as necessary. The diagram from some ALASI presentation slides in Col’s post includes an arrow for scale, suggesting that “Innovation and problem solving” must scale to become mainstream.

Breaking apart the scale concept

Perhaps I’m troubled by the interpretation of scale as meaning used by lots of people? There are some things that won’t be used by lots of people, but are still useful.

Is the assumption of scale necessary to make it efficient for the organisational systems to provide support? i.e. for any use of digital technology to be supported within an institution (e.g. helpdesk, training etc.) it has to be used by a fair number of people; otherwise it is not cost effective.

Mainstream technology includes other concerns such as security, privacy, responsiveness, risk management etc. However, if, as mentioned by Buckingham Shum and McKay (2018), the innovation and problem solving activities are undertaken according to appropriate “specifications approved by the IT division (e.g., security, architecture), integrating with the institution’s enterprise infrastructure”, then you would hope that those other concerns would be taken care of.

Digital technology is no longer scarce, meaning that the need to be cost efficient – and thus used by thousands of people – should increasingly not be driven by that expense?

Is it then the design, implementation and maintenance of appropriate solutions using the appropriate institutional infrastructure that is the other major cost? Is this largely driven by the assumption that such activities are costly because they require expertise and can’t be done by just anyone?

But what if learning analytics adopted principles from research into providing computationally rich environments for learners? For example, Grover and Pea’s (2013) principles: low floor, high ceiling, support for the “use-modify-create” progression, scaffolding, enable transfer, support equity, and be systemic and sustainable.

Not that everyone would use it, but enable it so that anyone could.
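As a toy illustration of what those principles might look like applied to course-level analytics (my example, not Grover and Pea’s): something with a low floor that a teacher could use as-is, modify by changing one obvious number, and eventually build on. The file and column names are hypothetical.

```python
# Toy "use-modify-create" illustration; file and column names are made up.
import csv

THRESHOLD = 2   # "modify": a teacher changes this number to suit their course


def quiet_students(activity_file="forum_posts.csv", threshold=THRESHOLD):
    """"Use": return students with fewer than `threshold` forum posts."""
    with open(activity_file, newline="") as f:
        rows = list(csv.DictReader(f))   # expects columns: student, posts
    return [r["student"] for r in rows if int(r["posts"]) < threshold]

# "Create": someone builds their own variation on the same pattern,
# e.g. combining forum posts with assignment submissions.
```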

Why did EASICONNECT die?

Col’s post talks about a system he was involved with. A system that succeeded in climbing the three scale transitions mentioned by Buckingham Shum and McKay (2018). It is my understanding that it is soon going to die. Why? Could those reasons be addressed to prevent subsequent projects facing a similar outcome? How does that apply to my current context?

Was it because the system wasn’t built using the approved institutional infrastructure? If so, was the institutional infrastructure appropriate? Institutional infrastructures aren’t known for being flexible and supporting end-user development. Or future-proof.

Was it for rational concerns about sustainability, security etc?

Was it because the organisational hierarchy wasn’t right? 

Lessons for here?

The project I’m involved with is attempting the first “scale” transition identified by Buckingham Shum and McKay (2018). The aim is to make this small pilot scalable and low cost. Given the holes in the institutional infrastructure/platform, we will be doing some web scraping etc. to enable this. The absence of an institutional infrastructure will mean what we do won’t translate to the mainstream.
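For the curious, here’s a rough, hypothetical sketch of the sort of web scraping involved: pulling a class list out of an institutional web page because no suitable API or export exists. The URL, the authentication and the HTML structure below are placeholders, not the real systems.

```python
# Hypothetical sketch of scraping a class list; URL and HTML structure are placeholders.
import requests
from bs4 import BeautifulSoup

session = requests.Session()
# Authentication is hand-waved here; the real system will need whatever
# login mechanism the IT division mandates.
response = session.get("https://example.institution.edu/class-list?course=ABC123")
response.raise_for_status()

soup = BeautifulSoup(response.text, "html.parser")
students = []
for row in soup.select("table.class-list tr")[1:]:   # skip the header row
    cells = [td.get_text(strip=True) for td in row.select("td")]
    if cells:
        students.append({"student_id": cells[0], "name": cells[1]})

print(f"Scraped {len(students)} students")
```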

Should we recognise and embrace this? We’re not trying to scale this, or make it mainstream. We’re at the right-hand “innovation and problem solving” end of the image below and aren’t worried about everything to the left. So don’t worry about it.

Borrowed from Col Beer

References

Grover, S., & Pea, R. (2013). Computational Thinking in K-12: A Review of the State of the Field. Educational Researcher, 42(1), 38–43.