The following is an early attempt to write and share some thoughts on what, why and with what impacts Australian universities are going to engage with learning analytics over the next couple of years. Currently it’s fairly generic and the same structure could be used with any fad or change process.
You could read the next section, but it’s basically an argument as to why it’s important to consider how learning analytics will impact academics. The three likely paths section describes the paths.
Context and rationale
By all indications learning analytics is one of the next big things in university learning and teaching. Ferguson (2012) identifies learning analytics as one of the fastest-growing areas of research within technology-enhanced learning with interest being driven by a combination of technological, pedagogical, political and economic drivers. The 2012 Horizon Report (Johnson & Cummins, 2012) argues that while learning analytics is still in its early stages of development it is likely to see widespread adoption within the higher education sector in the next 2-3 years. The recent Horizon technology outlook for Australian Tertiary Education (Johnson, Adams, & Cummins, 2012) places learning analytics – for the first time anywhere in the world – into the “one year or less” time-frame for adoption. Given what I hear around the traps, it would appear that every single Australian university is doing something (or thinking about it) around learning analytics.
My interest is in how these plans are going to impact upon academics and their pedagogical practice. It’s a fairly narrow view, but an interesting, self-serving and possibly important one. Johnson and Cummins (2012) suggest that the larger promise of learning analytics is when it is used “to more precisely understand students’ learning needs and to tailor instruction appropriately far more accurately and far sooner than is possible today” (p. 23). I don’t think automated-tutoring information systems are going to be up to that task anytime soon, at least not across a broad cross-section of what is taught at universities. So academics/teachers/instructors will be involved in some way.
But we don’t know much about this and it appears to be difficult. Dawson et al. (2011) observe “a dearth of studies that have investigated the relationship between learning analytics and data requirements that would better assist teachers in the design and evaluation of learning and teaching practice” (p. 4). Beyond that, it has been found that being able to interpret the patterns provided by learning analytics and apply them to practice is difficult, time-consuming and requires additional support (Dawson et al., 2011; Dawson & McWilliam, 2008). So I wonder how, and with what impacts, the almost inevitable strategies adopted by Australian universities will help with this.
Not surprisingly, I am not optimistic. So I’m trying to develop some sort of framework to help think about the different likely paths they might adopt, the perspectives which underpin these paths, and what the likely positives, problems and outcomes might be of those paths.
The three likely paths
For the moment, I’ve identified three likely paths which I’ve labelled as
- Do it to the academics.
- Do it for the academics.
- Do it with the academics.
There are probably other paths (e.g. do nothing, ignore the academics) that might be adopted, but I feel these are probably the most likely.
They are listed in the order in which I think they are most likely to happen. There may be examples of path #3 spread throughout institutions, but I fear they will be few and far between.
Currently, it’s my theory that organisations probably need to travel all three paths. The trouble is that the 3rd path will probably be ignored and this will reduce the impact and adoption of learning analytics.
The eventual plan is to compare and contrast these different paths by the different assumptions or perspectives on which they are based. The following gives a quick explanation of each of the paths and an initial start on this analysis.
Those of you who know me can probably see some correspondence between these three paths and the three levels of improving teaching. While there is a definite preference in the following for the 3rd path, this is not to suggest that it should (or can) be the only path explored, or that the other paths have no value. All three have their part to play, but I think it would be wrong if the 3rd path was ignored.
Perhaps that’s the point here, to highlight the need for the 3rd path to complement the limitations of the other two. Not to mention helping surface some of the limitations of the other two so they can be appropriately addressed.
Some questions for you
- Is there any value in this analysis?
- What perspectives/insights/theories do I need to look at to better inform this?
- What might be some useful analysis lenses for the three paths?
- Are there other paths?
- What am I missing?
Do it to the academics
It seems a fair bit of interest in learning analytics is being driven by non-teaching folk: student administration and IT folk amongst the foremost, with senior management in there somewhere as well. Long and Siemens (2011) define this level as academic analytics rather than learning analytics. But I believe it belongs here because, if senior managers use academic analytics to make decisions, some of those decisions will have an impact on academics (i.e. do it to them).
I can see this path leading to outcomes like
- Implementation of a data warehouse, various dashboards and reports.
- Some of these may be used to make data-driven decisions.
- The implementation of various strategies such as “at-risk” processes that are done independently of academics.
- At its worst, the creation of various policies or processes that require courses to meet certain targets or adopt certain practices (e.g. the worst type of “common course site” policy), i.e. performativity.
In terms of analysing/characterising this type of approach, you might suggest
- Will tend to actually be academic analytics, rather than learning analytics (as defined by Long and Siemens, 2011) but may get down to learning analytics at the departmental level.
- It’s based on an “If I tell them to do it, they will…” assumption, i.e. what is written in the policy is what the academics will actually do.
- A tendency to result in task corruption and apparent compliance.
- It assumes academics will change their teaching practice based on what you told them to do.
- Is based on the assumptions of teleological processes, i.e. the system is stable and predictable, the designers can manipulate the system’s behaviour, and the designers can determine the goals/criteria for success.
- It is located a long way from the actual context of learning and teaching and assumes that big data sets and data-mining algorithms will enable the identification of useful information that can guide decision making.
- It does not recognise the diversity inherent in teaching/teachers and learning/learners; it assumes learning is like sleeping.
- It is based on the assumption of senior managers (or people in general) as rational decision makers, if only they had the right data.
- What is actually done will rely heavily on which vendor gets chosen to implement.
- Will be largely limited to the data that is already in the system(s) (a rough sketch of what this might look like follows below).
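To make those last couple of points a little more concrete, here is a deliberately simplistic sketch of the kind of “at-risk” rule such an institution-level process might run over whatever data already sits in the LMS. It is purely hypothetical: the column names, the threshold and the pandas-based implementation are all my invention rather than what any particular vendor or university does, but it shows how far this sort of rule sits from the pedagogical context of any individual course.

```python
# Hypothetical sketch only: a one-size-fits-all "at-risk" flag computed
# from LMS login records, i.e. from data the institution already has,
# with no input from the academics actually teaching the courses.
# All column names and the threshold below are invented for illustration.
import pandas as pd

def flag_at_risk(logins: pd.DataFrame, min_active_days: int = 5) -> pd.DataFrame:
    """Flag students whose number of active days in the LMS falls below a
    single institution-wide threshold, regardless of how each course is taught."""
    summary = (
        logins.groupby(["course_id", "student_id"])
              .agg(active_days=("login_date", "nunique"))
              .reset_index()
    )
    summary["at_risk"] = summary["active_days"] < min_active_days
    return summary

# Example with made-up data: one student logs in twice, another once.
logins = pd.DataFrame({
    "course_id": ["ACCT11059", "ACCT11059", "ACCT11059"],
    "student_id": ["s1", "s1", "s2"],
    "login_date": pd.to_datetime(["2012-07-02", "2012-07-09", "2012-07-02"]),
})
print(flag_at_risk(logins))  # both students flagged, whatever the course design
```

The point of the sketch isn’t the code, it’s the assumptions baked into it: a single threshold, a single data source, and no way for the academic to say whether low LMS activity actually matters in their course.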
Do it for the academics
There are possibly two sub-paths within this path
- The researcher path: interested researchers develop theoretically-informed, research-based approaches to how learning analytics can be used by academics to improve what they do. They are developing methods for the use of academics.
- The support division path: the Information Technology or Learning and Teaching support division of the university notes the current buzz-word (learning analytics) and implements a tool, some staff development etc. to enable academics to harness the buzz-word.
In terms of analysing/characterising this approach, I might identify
- It’s based on the “If I build it, they will come” assumption.
- It assumes you can improve/change teaching by providing new tools and awareness.
- Which generally hasn’t worked, for a range of reasons including perhaps the chasm, i.e. the small number of early-adopter academics engage, the vast majority don’t.
- It assumes some level of commonality in teaching/teachers and learning/learners, since it assumes implementing a particular tool or approach may be applicable across the organisation. Assumes learning is perhaps more like eating?
- It assumes that the researchers or the support division have sufficient insight to develop something appropriate.
- It assumes we know enough about learning analytics and helping academics use learning analytics to inform pedagogical practice to enshrine practice around a particular set of tools.
- Is based on the assumptions of teleological processes, i.e. the system is stable and predictable, the designers can manipulate the system’s behaviour, and the designers can determine the goals/criteria for success.
- It will be constrained by the institution’s existing systems and the support division’s people and their connections.
- The support division path can be heavily influenced by the perspective of the academic (or others) as a client/customer, which assumes that the client/customer knows what they want; because they generally don’t, it often sinks to a process of “managing the customer” rather than helping.
Do it with the academics
In this path the application of learning analytics is treated as something that needs to be learned about. Folk work with the academics to explore how learning analytics can be best used to inform individual pedagogical practice. Perhaps drawing on insights from the other paths, but also modifying the other paths based on what is learned.
In terms of analysing/characterising this approach, I might identify
- It assumes that if you want to change/improve teaching, then the academics need to learn and be helped to learn (that probably sounds more condescending than I would like).
- Based on an “If they learn it, they will do it” premise, which doesn’t have to be true.
- It assumes learning/learners and teaching/teachers are incredibly diverse.
- It assumes we don’t know enough about what might be found with learning analytics and how we might learn how to use it.
- Assumption that the system(s) in place will change in response to this learning, which in turn means more learning… and the cycle continues.
References
Dawson, S., Bakharia, A., Lockyer, L., & Heathcote, E. (2011). “Seeing” networks: Visualising and evaluating student learning networks. Final Report 2011. Canberra.
Dawson, S., & McWilliam, E. (2008). Investigating the application of IT generated data as an indicator of learning and teaching performance. Canberra: Australian Learning and Teaching Council.
Ferguson, R. (2012). The state of learning analytics in 2012: A review and future challenges. Milton Keynes, UK.
Johnson, L., Adams, S., & Cummins, M. (2012). Technology Outlook for Australian Tertiary Education 2012-2017: An NMC Horizon Report Regional Analysis. Austin, Texas: The New Media Consortium.
Johnson, L., & Cummins, M. (2012). The NMC Horizon Report: 2012 Higher Education Edition. Austin, Texas: The New Media Consortium.
Long, P., & Siemens, G. (2011). Penetrating the fog: Analytics in learning and education. EDUCAUSE Review, 46(5).