The following is an early description of work arising out of The Indicators Project, an ongoing attempt to think about learning analytics. With IRAC (Information, Representation, Affordances and Change), Colin Beer, Damien Clark and I are trying to develop a set of questions that can guide the use of learning analytics to improve learning and teaching. The following briefly describes:

  • Why we’re doing this.
  • Some of our assumptions.
  • The origins of IRAC.
  • The four questions.
  • A very early and rough attempt to use the four questions to think about existing approaches to learning analytics.

Why?

The spark for this work comes from observations made in a presentation from last year. In summary, the argument is that learning analytics has become a management fashion/fad in higher education, that this generally means most implementations of learning analytics are unlikely to be very mindful, and that, in turn, they are likely to be limited in their impact on learning and teaching. This has much in common with the raft of expenditure on data warehouses some years ago, let alone examples such as graduate attributes, eportfolios, the LMS, open learning, learning objects etc. It would be nice to avoid this yet again.

There are characteristics of learning analytics that add to the difficulty of developing appropriate innovations that move beyond the faddish adoption of analytics. One of the major contributors is that the use of learning analytics encompasses many different bodies of literature, both within and outside learning and teaching. Many of these bodies of literature have developed important insights that can directly inform the use of learning analytics to improve learning and teaching. What’s worse is that early indications are that – unsurprisingly – most institutional learning analytics projects are apparently ignorant of the insights and lessons gained from this prior work.

In formulating IRAC – our four questions for learning analytics interventions – we’re attempting to help institutions consider the insights from this earlier work and thus enhance the quality of their learning analytics interventions. We’re also hoping that these four questions will inform our own attempts to explore the effective use of learning analytics to improve learning and teaching. Personally, I’m hoping this work will provide the tools and insights necessary to make my own teaching manageable, enjoyable and effective.

Assumptions

Perhaps the largest assumption underpinning the four questions is that the aim of learning analytics interventions is to encourage and enable action by a range of stakeholders. If no action (use) results from a learning analytics project, then there can’t be any improvement to learning and teaching. This is similar to the argument by Clow (2012) that the key to learning analytics is action in the form of appropriate interventions. Similarly, Elias (2011) describes two steps that are necessary for the advancement of learning analytics:

(1) the development of new processes and tools aimed at improving learning and teaching for individual students and instructors, and (2) the integration of these tools and processes into the practice of teaching and learning (p. 5)

Earlier work has found this integration into practice difficult. For example, Dawson & McWilliam (2008) identify a significant challenge for learning analytics as being able “to readily and accurately interpret the data and translate such findings into practice” (p. 12). Adding further complexity is the observation from Harmelen & Workman (2012) that learning analytics are part of a socio-technical system where success relies as much on “human decision-making and consequent action…as the technical components” (p. 4). The four questions proposed here aim to aid the design of learning analytics interventions that are integrated into the practice of learning and teaching.

Audrey Watters’ Friday night rant offers a somewhat similar perspective more succinctly and effectively.

Foundations

In thinking about the importance of action, and of learning analytics tools being designed to aid action, we were led to the notion of Electronic Performance Support Systems (EPSS). EPSS embody a “perspective on designing systems that support learning and/or performing” (Hannafin et al., 2001, p. 658). EPSS are computer-based systems that “provide workers with the help they need to perform certain job tasks, at the time they need that help, and in a form that will be most helpful” (Reiser, 2001, p. 63).

All well and good. In reading about EPSS we came across the notion of the performance zone. In framing the original definition of an EPSS, Gery (1991) identifies the need for people to enter the performance zone: the metaphorical area where all of the necessary information, skills, dispositions, etc. come together to ensure the successful completion of a task (Gery, 1991). For Villachica, Stone & Endicott (2006) the performance zone “emerges with the intersection of representations appropriate to the task, appropriate to the person, and containing critical features of the real world” (p. 550).

This definition of the performance zone is a restatement of Dickelman’s (1995) three design principles for cognitive artifacts drawn from Norman’s (1993) book “Things that make us smart”. In this book, Norman (1993) argues “that technology can make us smart” (p. 3) through our ability to create artifacts that expand our capabilities. At the same time, however, Norman (1993) argues that the “machine-centered view of the design of machines and, for that matter, the understanding of people” (p. 9) results in technology that “more often interferes and confuses than aids and clarifies” (p. 9).

Given our recent experience with institutional e-learning systems, this view resonates quite strongly as a decent way of approaching the problem.

While the notions of EPSS, the performance zone and Norman’s (1993) insights into the design of cognitive artifacts form the scaffolding for the four questions, additional insight and support for each question arises from a range of other bodies of literature. The description of the four questions given below includes very brief descriptions of some of this literature. There are significantly more useful insights to be gained, and extending this will form a part of our ongoing work.

Our proposition is that effective consideration of these four questions with respect to a particular context, task and intervention will help focus attention on factors that will improve the implementation of a learning analytics intervention. In particular, it will increase the chances that the intervention will be integrated into practice and subsequently have a positive impact on the quality of the learning experience.

IRAC – the four questions

The following list summarises the four questions, with a bit more expansion below.

  • Information – Is all the relevant information, and only the relevant information, available and being used appropriately?
  • Representation – Does the representation of this information aid the task being undertaken?
  • Affordances – Are there appropriate affordances for action?
  • Change – How will the information, representation and affordances be changed?

Information

While there is an “information explosion”, the information we collect is usually about “those things that are easiest to identify and count or measure” but which may have “little or no connection with those factors of greatest importance” (Norman, 1993, p. 13). This leads to Verhulst’s observation (cited in Bollier & Firestone, 2010) that “big data is driven more by storage capabilities than by superior ways to ascertain useful knowledge” (p. 14). Potential considerations here include: Is the required information technically and ethically available for use? How is the information cleaned, analysed and manipulated during use? Is the information sufficient to fulfil the needs of the task? And many, many more.

Representation

A bad representation will turn a problem into a reflective challenge, while an appropriate representation can transform the same problem into a simple, straightforward task (Norman, 1993). Representation has a profound impact on design work (Hevner et al., 2004), particularly on the way in which tasks and problems are conceived (Boland, 2002). How information is represented can make a dramatic difference to the ease of a task (Norman, 1993). In order to maintain performance, it is necessary for people to be “able to learn, use, and reference access necessary information within a single context and without breaks in the natural flow of performing their jobs” (Villachica, Stone, & Endicott, 2006, p. 540). Considerations here include: How easy is it for people to understand and analyse the implications of the findings from learning analytics? And many, many more.

Affordances

A poorly designed or constructed artifact can greatly hinder its use (Norman, 1993). For an application of information technology to have a positive impact on individual performance, it must be utilised and be a good fit for the task it supports (Goodhue & Thompson, 1995). Human beings tend to use objects in “ways suggested by the most salient perceived affordances, not in ways that are difficult to discover” (Norman, 1993, p. 106). The nature of such affordances is not inherent to the artifact; rather, affordances are co-determined by the properties of the artifact in relation to the properties of the individual, including the goals of that individual (Young et al., 2000). Glassey (1998) observes that providing “the wrong end-user tools and failing to engage and enable end users” means that even the best implemented data warehouses “sit abandoned” (p. 62). The consideration here is whether or not the tool provides support for action that is appropriate to the context, the individuals and the task.

Change

The idea of evolutionary development has been central to the theory of decision support systems (DSS) since its inception in the early 1970s (Arnott & Pervan, 2005). Rather than being implemented in a linear or parallel fashion, development occurs through continuous action cycles involving significant user participation (Arnott & Pervan, 2005). Beyond the need for the systems or tools to undergo change, there is a need for the information being captured to change. Buckingham Shum (2012) identifies the risk that research and development based on data already being gathered will tend to perpetuate the existing dominant approaches through which the data was generated. Another factor is Bollier and Firestone’s (2010) observation that once “people know there is an automated system in place, they may deliberately try to game it” (p. 6). Finally, there is the observation that universities are complex systems (Beer et al., 2012). Complex systems require reflective and adaptive approaches that seek to identify and respond to emergent behaviour in order to stimulate increased interaction and communication (Boustaini, 2010). Potential considerations here include: Who is able to implement change? Which of the three prior questions can be changed? How radical can those changes be? Is a diversity of change possible?

Using the four questions

It is not uncommon for Australian universities to rely on a data warehouse to support learning analytics interventions. This is in part due to the observation that data warehouses enable significant consideration of the information (question 1). This is not surprising given that the origin and purpose of data warehouses was to provide an integrated set of databases that supply information to decision makers (Arnott & Pervan, 2005). In this sense, data warehouses provide a foundation for learning analytics. However, the development of data warehouses can be dominated by IT departments with little experience in decision support (Arnott & Pervan, 2005) and a tendency to focus on technical implementation issues at the expense of the user experience (Glassey, 1998).

In terms of consideration of the representation (question 2), the data warehouse generally provides reports and dashboards for ad hoc analysis and standard business measurements (van Dyk, 2008). In a learning analytics context, dashboards from a data warehouse will typically sit outside of the context in which learning and teaching occurs (e.g. the LMS). For a learner or teacher to consult the data warehouse requires the individual to break away from the LMS, open up another application and expend cognitive effort in connecting the dashboard representation with activity in the LMS. Data warehouses also provide a range of query tools that offer a swathe of options and filters for the information they hold. While such power potentially offers good support for change (question 4), that power comes with an increase in difficulty. At least one institution mandates the completion of training sessions to ensure competence with the technology and to ensure the information is not misinterpreted. This necessity could be interpreted as evidence of limited consideration of representation (question 2) and affordances (question 3). At least some of these limitations arise from the origins of data warehouse tools in the management of businesses, rather than in learning and teaching.

Harmelen and Workman (2012) use Purdue University’s Course Signals and Desire2Learn’s Student Success System (S3) as two examples of the more advanced learning analytics applications. The advances offered by these systems arise from greater consideration being given to the four questions. In particular, both tools provide a range of affordances (question 3) for action on the part of teaching staff. S3 goes so far as to provide a “basic case management tool for managing interventions” (Harmelen & Workman, 2012, p. 12), with the future intention of using this feature to measure intervention effectiveness. Course Signals offers advancements in terms of information (question 1) and representation (question 2) by moving beyond simple tabular reporting of statistics toward a traffic-light system based on an algorithm drawing on 44 different indicators from a range of sources to predict student risk status. While this algorithm has a history of development, Essa and Ayad (2012) argue that reliance on a single algorithm contains “potential sources of bias” (n.p.) as it is based on the assumptions of a particular course model from a particular institution. Essa and Ayad (2012) go on to describe S3’s advances, such as an ensemble modelling strategy that supports model tuning (information and change); inclusion of social network analysis (information); and a range of different visualisations, including interactive visualisations allowing comparisons (representation, affordance and change).
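To make that contrast a little more concrete, the following is a minimal, purely illustrative Python sketch of the kind of single-algorithm, traffic-light risk indicator described above. The indicator names, weights and thresholds are invented for illustration only; they are not the actual Course Signals or S3 algorithms.

```python
# Purely illustrative sketch of a single-algorithm "traffic light" risk indicator.
# The indicators, weights and thresholds are invented for illustration and are
# NOT the actual Course Signals or S3 algorithms.

# A handful of hypothetical indicators for one student, each normalised to 0..1
# (0 = no apparent risk, 1 = high apparent risk).
student = {
    "low_lms_logins": 0.8,        # relative infrequency of LMS access
    "missed_assessment": 1.0,     # proportion of assessment items not submitted
    "low_grade_to_date": 0.6,     # inverse of grade performance so far
    "little_forum_activity": 0.4, # relative lack of discussion forum activity
}

# Hypothetical weights reflecting one institution's assumptions about which
# indicators matter most -- exactly the kind of embedded assumption that
# Essa and Ayad (2012) identify as a potential source of bias.
weights = {
    "low_lms_logins": 0.3,
    "missed_assessment": 0.4,
    "low_grade_to_date": 0.2,
    "little_forum_activity": 0.1,
}

def risk_signal(indicators, weights):
    """Combine weighted indicators into a single score and map it to a traffic light."""
    score = sum(weights[name] * value for name, value in indicators.items())
    if score >= 0.7:
        return score, "red"    # at risk: some intervention is warranted
    elif score >= 0.4:
        return score, "amber"  # potential risk: worth monitoring
    return score, "green"      # no apparent risk

score, light = risk_signal(student, weights)
print(f"risk score = {score:.2f}, signal = {light}")
```

In effect, the ensemble strategy that Essa and Ayad (2012) describe replaces a single, fixed set of weights like this with multiple models that can be tuned to the particular course and institution.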

References

Arnott, D., & Pervan, G. (2005). A critical analysis of decision support systems research. Journal of Information Technology, 20(2), 67–87. doi:10.1057/palgrave.jit.2000035

Beer, C., Jones, D., & Clark, D. (2012). Analytics and complexity : Learning and leading for the future. In M. Brown, M. Hartnett, & T. Stewart (Eds.), Future Challenges, Sustainable Futures. Proceedings of ascilite Wellington 2012 (pp. 78–87). Wellington, NZ.

Boland, R. J. (2002). Design in the punctuation of management action. In R. Boland (Ed.), . Weatherhead School of Management.

Bollier, D., & Firestone, C. (2010). The promise and peril of big data. Washington DC: The Aspen Institute.

Buckingham Shum, S. (2012). Learning Analytics. Moscow: UNESCO. http://iite.unesco.org/pics/publications/en/files/3214711.pdf

Clow, D. (2012). The learning analytics cycle. Proceedings of the 2nd International Conference on Learning Analytics and Knowledge – LAK’12, 134–138. doi:10.1145/2330601.2330636

Dawson, S., & McWilliam, E. (2008). Investigating the application of IT generated data as an indicator of learning and teaching performance. Canberra: Australian Learning and Teaching Council.

Elias, T. (2011). Learning Analytics: Definitions, Processes and Potential. http://learninganalytics.net/LearningAnalyticsDefinitionsProcessesPotential.pdf.

Essa, A., & Ayad, H. (2012). Student success system: risk analytics and data visualization using ensembles of predictive models. In Proceedings of the 2nd International Conference on Learning Analytics and Knowledge – LAK’12 (pp. 2–5). Vancouver: ACM Press.

Glassey, K. (1998). Seducing the End User. Communications of the ACM, 41(9), 62–69.

Goodhue, D., & Thompson, R. (1995). Task-technology fit and individual performance. MIS Quarterly, 19(2), 213. doi:10.2307/249689

Harmelen, M. van, & Workman, D. (2012). Analytics for Learning and Teaching. http://publications.cetis.ac.uk/2012/516

Hevner, A., March, S., Park, J., & Ram, S. (2004). Design science in information systems research. MIS Quarterly, 28(1), 75–105.

Norman, D. A. (1993). Things that make us smart: Defending human attributes in the age of the machine. Reading, MA: Addison Wesley.

Van Dyk, L. (2008). A data warehouse model for micro-level decision making in higher education. The Electronic Journal of e-Learning, 6(3), 235–244.

Villachica, S., Stone, D., & Endicott, J. (2006). Performance Support Systems. In J. Pershing (Ed.), Handbook of Human Performance Technology (3rd ed., pp. 539–566). San Francisco, CA: John Wiley & Sons.