Assembling the heterogeneous elements for (digital) learning


Theory of workarounds

Introduction

The following is a summary of the paper Theory of Workarounds.

Alter, S. (2014). Theory of Workarounds. Communications of the Association for Information Systems, 34(1). https://doi.org/10.17705/1CAIS.03455

The paper provides “an integrated theory of workarounds that describes how and why” they are created. It is a process theory “driven by the interaction of key factors that determine whether possible workarounds are considered and how they are executed” and is seen as useful for:

  • classifying workarounds and analysing how they occur;
  • understanding compliance and noncompliance with management mandates;
  • figuring out how to consider possible workarounds as part of systems development;
  • studying how workarounds may lead to larger planned changes.

My interest – digital learning and teaching

I’m interested in workarounds as a way to better understand what’s happening around higher education’s use of digital technology to support learning and teaching, and to identify ways to improve it.

Definition and theory of workarounds

Alter (2014) offers the following definition of workarounds:

A workaround is a goal-driven adaptation, improvisation, or other change to one or more aspects of an existing work system in order to overcome, bypass, or minimize the impact of obstacles, exceptions, anomalies, mishaps, established practices, management expectations, or structural constraints that are perceived as preventing that work system or its participants from achieving a desired level of efficiency, effectiveness, or other organizational or personal goals. (p. 1044)

Comparisons between this and related definitions suggest this is a broader definition, including additional factors such as:

  • workarounds don’t need to use digital technology;
  • workarounds may include work not formally recognised by the organisation;
  • workarounds don’t always compensate for or bypass system deficiencies;
  • workarounds may not be temporary;
  • workarounds are not necessarily examples of noncompliance.

Alter’s (2014) definition of workarounds relies on them occurring within a work system – another theoretical concept developed by Alter (2002). See this section from an old paper of mine for a summary of the Work System Framework.

It is argued that this reliance on the work system framework provides a “broader and more comprehensive view of the changes that can be included in workarounds” (Alter, 2014, p. 1046).

Figure 1 is a representation of Alter’s (2014) theory of workarounds. It is positioned as a process theory that describes how and why workarounds are created. A brief description follows the figure.

Alter’s theory of workarounds draws on a number of theories and concepts, including:

  • Theory of planned behaviour;
  • Improvisation and bricolage;
  • Agency theory;
  • Work system theory

Figure 1 – Alter’s (2014) Theory of Workarounds (p. 1056)

Workarounds arise from a context that includes each work system participant’s personal goals, interests and values. Communication and sharing of these goals/values between participants may be flawed or incomplete, leading to misalignment in the work system. The context also includes the structure of the work system: its architecture, characteristics, performance goals, and emergent change.

From this context arises the perceived need for a workaround.

This perceived need triggers a process of trying to identify possible workarounds, often starting with the obstacles in the current situation and the perceived need, combined with consideration of the costs, benefits, risks of being identified, and possible ramifications. An essential component is the knowledge available to those involved.

Eventually this leads to a decision about which workaround, if any, to pursue.

If going ahead, then development and execution of the workaround is driven by factors such as attention to current conditions, intuition guiding action, testing of intuitive understanding, and situational decision making.

Subsequently, there are local consequences and broader consequences. Locally, the workaround may eliminate the obstacles that initiated the process, but it may also fail or produce various unintended consequences. More broadly, these types of consequences might be felt in, or pushed into, other locations.

Temporality of workarounds

Alter also makes a point of outlining the temporality of workarounds, as shown in Figure 2.


Figure 2: Temporality of Workarounds (adapted from Alter, 2014, p. 1058)

Five voices in the workarounds literature

Alter performed a review of the workarounds literature, gathering 289 papers and using them to derive his theory. He summarises that work using five “voices”, each of which covers a number of topics:

Phenomena associated with workarounds;

  • Obstacles, exceptions, anomalies, mishaps and structural constraints
  • Agency
  • Improvisation and bricolage
  • Routines, processes and methods
  • Articulation work and loose coupling
  • Technology misfits
  • Design and emergence
  • Technology usage and adaptation
  • Motives and control systems
  • Knowledge
  • Temporality

Types of workarounds;

  • Overcome inadequate IT functionality
  • Bypass an obstacle built into processes or practices
  • Respond to a mishap or anomaly with a quick fix
  • Substitute for unavailable resources
  • Design and implement new resources
  • Prevent future mishaps
  • Pretend to comply
  • Lie, cheat, steal for personal benefit
  • Collude for mutual benefit

Direct effects of workarounds;

  • Continuation of work despite obstacles, mishaps or anomalies
  • Creation of hazards, inefficiencies or errors
  • Impact on subsequent activities
  • Compliance or non-compliance with management intentions

Perspectives on workarounds; and,

  • Workarounds as necessary activities in everyday life
  • Workarounds as sources for future improvements
  • Workarounds as creative acts
  • Workarounds as add-ons or shadow systems
  • Workarounds as quick fixes that won’t go away
  • Workarounds as facades of compliance
  • Workarounds as inefficiencies or hazards
  • Workarounds as resistance
  • Workarounds as distortions or subterfuge

Organisational challenges and dilemmas related to workarounds.

  • Ability to operate despite obstacles
  • Enactment of interpretive flexibility
  • Balance of personal, group and organisational interests
  • Permitting and learning from emergent change

He then uses these five voices to group and establish some sense of causality within the “breadth of ideas and examples that were found in the literature” (p. 1047).

Usefulness and further research

Since the theory is developed from a literature search, it is limited by anything that hasn’t made it into the literature, e.g. accounts of workarounds that were considered but never attempted.

Each step in the process theory could inform survey and/or case study research to explore how well the theory maps onto reality and lead to discoveries of factors/relationships not currently in the theory.

The workarounds literature identifies fundamental limitations in assumptions underpinning traditional approaches to organisational and system analysis and design (e.g. that prescribed business processes will be followed consistently). The theory of workarounds can be used to analyse systems in organisations, reveal conditions that lead to workarounds, and provide opportunities to incorporate learning from workarounds into emergent/planned change. It can help reveal insights into whether or not intended methods are followed, how systems in organisations evolve over time, and how implementation evolves over time.

Starting from alternate theoretical foundations (e.g. Actor Network Theory, activity theory, socio-materiality etc) might lead to different outcomes and insights.

Software engineering for computational science: past, present, future

The following is a summary of Johanson and Hasselbring (2018) and an exploration of what, if anything, it might suggest for learning design and learning analytics. Johanson and Hasselbring (2018) explore why scientists who have been developing software to do science (computational science) haven’t been using principles and practices from software engineering to develop this software. The idea is that such an understanding will help frame advice for how computational science can be improved through the application of appropriate software engineering practice (an assumption).

This is interesting because of potential similarities between learning analytics (and perhaps even learning design in our now digitally rich learning environments) and computational science. Subsequently, lessons about if and how computational science has/hasn’t been using software engineering principles might provide useful insights for the implementation of learning analytics and the support of learning design. I’m especially interested due to my observation that both practice and research around learning analytics implementation isn’t necessarily exploring all of the possibilities.

In particular, Johanson and Hasselbring (2018) argue that it is necessary to examine the nature of computational science and subsequently select and adapt software engineering techniques that are better suited to the needs of computational scientists. For me, this generates questions such as:

  1. What is the nature of learning analytics?
  2. What is the nature of learning design?
  3. What happens to the combination of both?

    Increasingly it is seen as necessary that learning analytics be tightly aligned with learning design. Is the nature/outcome/practice of this combination different? Does it require different types of support?

  4. For all of the above is there a difference between the nature espoused in the research literature and the nature experienced by the majority of practitioners?
  5. What types and combination of software engineering/development principles and practices are best suited to the nature of learning analytics and learning design?

Summary of the paper

  • Question or problem

    The development of software with which to do science is increasing, but this practice isn’t using software engineering practices. Why? What are the underlying causes? How can it be changed?

  • Method

    Survey of relevant literature examining software development in computational science. About 50 publications were examined, the majority case studies, with some surveys.

  • Findings

    Identifies 13 key characteristics (divided into 3 groups) of computational science that should be considered (see the table below) when thinking about which software engineering knowledge might apply and be adapted.

    Examines some examples of how software engineering principles might be/are being adapted.

Implications for learning analytics

Johanson and Hasselbring (2018) argue that the chasm between computational scientists and software engineering researchers arose from the rush on the part of computer scientists, and then software engineers, to avoid the “stigma of all things applied” and the search for general principles that applied in all places, leading to this problem:

Because of this ideal of generality, the question of how specifically computational scientists should develop their software in a well-engineered way, would probably have perplexed a software engineer and the answer might have been: “Well, just like any other application software.”

In learning analytics there are people offering more LA-specific advice. For example, Wise and Vytasek (2017) and, just this morning via Twitter, a pre-print of a forthcoming BJET article. Both focus on providing advice that links learning analytics and learning design.

But I wonder if this is the only way to look at learning analytics? What about learning analytics for reflection and exploration? Does the learning design perspective cover it?

But perhaps a more interesting question might be whether or not it is assumed that the learning analytics/learning design principles identified by these authors should then be implemented using traditional software engineering practices?

Nature of scientific challenges

  1. Requirements are not known up front

    • Software is used to make novel discoveries and further understanding; it is “deeply embedded” in an exploratory process
    • The aim is not “to produce software but to obtain scientific results”. Segal (2005) reports that scientists say they are “programming experimentally”
    • Design and requirements are rarely seen as distinct steps

  2. Verification and validation is difficult and strictly scientific

    • Verification: demonstrate that the implementation of the models is correct
    • Validation: demonstrate that the software captures the real world
    • Validation is hard because models are being used “precisely because the subject at hand is ‘too complex, too large, too small, too dangerous, or too expensive to explore in the real world’” (Segal and Morris, 2008)
    • Problems arise from four different dimensions/combinations (Carver et al, 2007):
      • The model of reality is insufficient
      • The algorithm used to discretise the mathematical problem can be inadequate
      • The implementation of the algorithm is wrong
      • The combination of models can propagate errors
    • Testing methods could help, but are rarely used

  3. Overly formal software processes restrict research

    • Easterbrook and Johns (2009) find big up-front design a “poor fit” for computational science, where software development is deeply embedded in the scientific model
    • There is a need for the flexibility to quickly experiment with different solution approaches (Carver et al, 2007)
    • A very iterative process is used, iterating over both the software and the underlying scientific theory (a representation is shown in the figure below)
    • Explicit connections with agile software development are established in the literature, but even those lightweight processes are largely rejected

Limitations of computers

  4. Development is driven and limited by hardware

    • Scientific software is not limited by the scientific theory, but by the available computing resources
    • Computational power is an issue

  5. Use of “old” programming languages and technologies

    • Some communities are moving toward Python, but typically non-technical disciplines (biology/psychology) and only for small scale projects

  6. Intermingling of domain logic and implementation details

  7. Conflicting software quality requirements (performance, portability and maintainability)

    • Interviews of scientific developers rank the requirements as:
      • Functional correctness
      • Performance
      • Portability
      • Maintainability

Cultural environment

  8. Few scientists are trained in software engineering

    • Segal (2007) describes them as “professional end user developers” … who develop software to advance their own professional goals
    • “In contrast to most conventional end user developers, however, computational scientists rarely experience any difficulties learning general-purpose languages”
    • But keeping up with software engineering is just too much for people who are already busy writing grants etc.
    • They didn’t want to delegate development, as it often required a PhD in the discipline to be able to understand and implement the software

  9. Different terminology

    • e.g. computational scientists speak of “code” not “software”

  10. Scientific software in itself has no value but still it is long-lived

    • Code is valued because of the domain knowledge captured within it

  11. Creating a shared understanding of a “code” is difficult

    • Preference for informal, collegial ways of knowledge transfer, not documentation
    • “scientists find it harder to read and understand documentation artifacts than to contact the author and discuss”

  12. Little code re-use

  13. Disregard of most modern software engineering methods

A model of scientific software development

Johanson and Hasselbring (2018) include the following figure as a representation of how scientific software is developed. They note its connections with agile software development, but also describe how computational scientists find even the lightweight discipline of agile software development a poor fit.

Model of Scientific Software Development

Anecdotally, I’d suggest that the above representation would offer a good description of much of the “learning design” undertaken in universities. Though with some replacements (e.g. “develop piece of software” replaced with “develop learning resource/experience/event”).

If this is the case, then how well does the software engineering approach to the development and implementation of learning analytics (whether it follows the old SDLC or agile practices) fit with this nature of learning design?

References

Johanson, A., & Hasselbring, W. (2018). Software Engineering for Computational Science: Past, Present, Future. Computing in Science & Engineering. https://doi.org/10.1109/MCSE.2018.108162940

Wise, A., & Vytasek, J. (2017). Learning Analytics Implementation Design. In C. Lang, G. Siemens, A. F. Wise, & D. Gaševic (Eds.), The Handbook of Learning Analytics (1st ed., pp. 151–160). Alberta, Canada: Society for Learning Analytics Research (SoLAR). Retrieved from http://solaresearch.org/hla-17/hla17-chapter1

Context-Appropriate Scaffolding Assemblages: A generative learning analytics platform for end-user development and participatory design

David Jones, Celeste Lawson, Colin Beer, Hazel Jones

Paper accepted to the LAK2018 workshop – Participatory design of learning analytics

Jones, D., Lawson, C., Beer, C., & Jones, H. (2018). Context-Appropriate Scaffolding Assemblages: A generative learning analytics platform for end-user development and participatory design. In A. Pardo, K. Bartimote, G. Lynch, S. Buckingham Shum, R. Ferguson, A. Merceron, & X. Ochoa (Eds.), Companion Proceedings of the 8th International Conference on Learning Analytics and Knowledge. Sydney, Australia: Society for Learning Analytics Research. Retrieved from http://bit.ly/lak18-companion-proceedings

Abstract

There remains a significant tension in the development and use of learning analytics between course/unit or learning design specific models and generic, one-size fits all models. As learning analytics increases its focus on scalability there is a danger of erring toward the generic and limiting the ability to align learning analytics with the specific needs and expectations of users. This paper describes the origins, rationale, and use cases of a work in progress design-based research project attempting to develop a generative learning analytics platform. Such a platform encourages a broad audience to develop unfiltered and unanticipated changes to learning analytics. It is hoped that such a generative platform will enable the development and greater adoption of embedded and contextually specific learning analytics and subsequently improve learning and teaching. The paper questions which tools, social structures, and techniques from participatory design might inform the design and use of the platform, and asks whether or not participatory design might be more effective when partnered with generative technology?

Keywords: Contextually Appropriate Scaffolding Assemblages (CASA); generative platform; participatory design; DIY learning analytics

Introduction

One size does not fit all in learning analytics. There is no technological solution that will work for every teacher, every time (Mishra & Koehler, 2006). Context specific models improve teaching and learning, yield better results and improve the effectiveness of human action (Baker, 2016; Gašević, Dawson, Rogers, & Gasevic, 2016). Despite this, higher education institutions tend to adopt generalised approaches to learning analytics. Whilst this may be cost effective and efficient for the organisation (Gašević et al., 2016), the result is a generic approach that provides an inability to cater for the full diversity of learning and learners and shows “less variety than a low-end fast-food restaurant” (Dede, 2008).

Institutional implementation of learning analytics in terms of both practice and research remains limited to conceptual understandings and is empirically narrow or limited (Colvin, Dawson, Wade, & Gašević, 2017). In practice, learning analytics has suffered from a lack of human-centeredness (Liu, Bartimote-Aufflick, Pardo, & Bridgeman, 2017). Even when learning analytics tools are designed with the user in mind (e.g. Corrin et al., 2015), the resulting tools tend to be what Zittrain (2008) defines as non-generative or sterile. In particular, the adoption of such tools tends to require institutional support and subsequently leans toward the generic, rather than the specific. This perhaps provides at least part of the answer to why learning analytics dashboards are seldom used to intervene during the teaching of a course (Schmitz, van Limbeek, Greller, Sloep, & Drachsler, 2017), leading us to the research question: How can the development of learning analytics better support the needs of specific contexts, drive adoption, and ongoing design and development? More broadly, we are interested in if and how learning analytics can encourage the adoption of practices that position teaching as design and subsequently improve learning experiences and outcomes (Goodyear, 2015) by supporting a greater focus on the do-it-with (DIW – participatory design) and do-it-yourself (DIY) design (where teachers are seen as designers), implementation, and application of learning analytics. This focus challenges the currently more common Do-It-To (DIT) and Do-It-For (DIF) approaches (Beer, Tickner & Jones, 2014).

This project seeks to explore learning analytics using a design-based research approach informed by a broader information systems design theory for e-learning (Jones, 2011), experience with Do-It-With (DIW) (Beer et al., 2014) and teacher Do-It-Yourself (DIY) learning analytics (Jones, Jones, Beer, & Lawson, 2017), and technologies associated with reproducible research to design and test a generative learning analytics platform. Zittrain (2008) defines a generative system as having the “capacity to produce unanticipated change through unfiltered contributions from broad and varied audiences” (p. 70). How generative a system is depends on five principal factors: (1) leverage; (2) adaptation; (3) ease of mastery; (4) accessibility; and (5) transferability (Zittrain, 2008). A focus for this project is in exploring how and if a generative learning analytics platform can act as a boundary object for the diverse stakeholders involved with the design, implementation and use of institutional learning analytics (Suthers & Verbert, 2013). Such an object broadens the range of people who can engage in creative acts of making learning analytics as a way to make sense of current and future learning and teaching practices and the contexts within which it occurs. The platform – named CASA, an acronym standing for Contextually Appropriate Scaffolding Assemblages – will be designed to enable all stakeholders alone or together to participate in decisions around the design, development, adoption and sharing of learning analytics tools. These tools will be created by combining, customising, and packaging existing analytics – either through participatory design (DIW) or end-user development (DIY) – to provide context-sensitive scaffolds that can be embedded within specific online learning environments.

Know thy student – Teacher DIY learning analytics

Jones et al. (2017) use a case of teacher DIY learning analytics to draw a set of questions and implications for the institutional implementation of learning analytics and the need for CASA. The spark for the teacher DIY learning analytics was the observation that it took more than 10 minutes, using two separate information systems including a number of poorly designed reports, to gather the information necessary to respond to an individual learner’s query in a discussion forum. The teacher was able to design an embedded, ubiquitous and contextually specific learning analytics tool (Know Thy Student) that reduced the time taken to gather the necessary information to a single mouse click. The tool was used in four offerings of a third-year teacher education unit across 2015 and 2016. Analysis of usage logs indicates that it was used 3,100 separate times to access information on 761 different students, representing 89.5% of the total enrolled students. This usage was spread across 666 days over the two years, representing 91% of the available days during this period. This is a significant usage level, especially given that most learning analytics dashboards are seldom used to intervene during the teaching of a course (Schmitz et al., 2017). Usage also went beyond responding to discussion forum questions. Since the tool was unintentionally available throughout the entire learning environment (embedded and ubiquitous), unplanned use of the tool developed, contributing to improvements in the learner experience. This led to the implication that embedded, ubiquitous, contextual learning analytics encourages greater use and enables emergent practice (Jones et al., 2017). It provides leverage to make the difficult job of teaching a large enrolment, online course easier. However, the implementation of this tool required significant technical knowledge and hence is not easy to master, not accessible, nor easily transferable – Zittrain’s (2008) remaining principles required for a generative platform. The questions now become: How to reduce this difficulty? How to develop a generative learning analytics platform?

CASA: Technologies and Techniques

To answer this question, CASA will draw on a combination of common technologies associated with reproducible research including virtualisation, literate computing (e.g. Jupyter Notebooks), and version control systems (Sinha & Sudhish, 2016) combined with web augmentation (Díaz & Arellano, 2015) and scraping (Glez-Peña, Lourenço, López-Fernández, Reboiro-Jato, & Fdez-Riverola, 2014). Reproducible research technologies enable CASA to draw upon a large and growing collection of tools developed and used by the learning analytics and other research communities. Growth in the importance of reproducible research also means that there is a growing number of university teaching staff familiar with the technology. It also means that there is emerging research literature sharing insights and advice in supporting academics to develop the required skills (e.g. Wilson, 2016). Virtualisation allows CASA to be packaged into a single image that individuals can easily download, install and execute within their own computing platforms. Web augmentation provides the ability to adapt existing web-based learning environments to embed learning analytics directly into the current common learning context. The combination of these technologies will be used to implement the CASA platform, enabling the broadest possible range of stakeholders to individually and collaboratively design and implement different CASA instances. Such instances can be mixed and matched to suit context-specific requirements and shared amongst a broader community. The following section provides a collection of CASA use case scenarios including explicit links to Zittrain’s (2008) five principal factors of a generative platform.
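To make the architecture a little more concrete, below is a minimal sketch (my illustration, not CASA’s or MAV’s actual code) of the server side of this web augmentation approach: a small Python service, running on the teacher’s own machine or inside the virtual image, that exposes per-student summaries from a hypothetical clickstream export as JSON. A Tampermonkey user script in the browser could then fetch these summaries and embed them into LMS pages. The file name, column names, endpoint and port are all illustrative assumptions.

```python
# Minimal sketch of a CASA-style local analytics service (illustrative assumptions only).
# A browser user script would fetch these JSON summaries and embed them into LMS pages.
import pandas as pd
from flask import Flask, jsonify

app = Flask(__name__)

# Hypothetical clickstream export: one row per click.
# Assumed columns: student_id, timestamp, activity
clicks = pd.read_csv("clickstream.csv", parse_dates=["timestamp"])

@app.route("/api/student/<int:student_id>")
def student_summary(student_id):
    """Return a simple per-student activity summary as JSON."""
    rows = clicks[clicks["student_id"] == student_id]
    return jsonify({
        "student_id": student_id,
        "total_clicks": int(len(rows)),
        "active_days": int(rows["timestamp"].dt.date.nunique()),
        "last_access": rows["timestamp"].max().isoformat() if len(rows) else None,
        "clicks_per_activity": {k: int(v) for k, v in rows["activity"].value_counts().items()},
    })

if __name__ == "__main__":
    # Served only on the teacher's machine, mirroring the DIY/MAV approach.
    app.run(host="127.0.0.1", port=8080)
```

Packaging a service like this, together with its notebooks and data, inside a single virtual image is what would let a teacher download and run it without institutional support.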

CASA: Use case scenarios

A particular focus with the CASA platform is to enable individual teachers to adopt CASA instances while minimising the need to engage with institutional support services (accessibility). Consequently a common scenario would be where a teacher (Cara) observes another teacher (Daniel) using a CASA instance. It is obvious to Cara that this specific CASA instance makes a difficult job easier (leverage) and motivates her to trial it. Cara visits the CASA website and downloads and executes a virtual image (the CASA instance) on her computer, assuming she has local administrator rights. Cara configures CASA by visiting a URL to this new CASA instance and stepping through a configuration process that asks for some context specific information (e.g. the URL for Cara’s course sites). Cara’s CASA uses this to download basic clickstream and learner data from the LMS. Finally, Cara downloads the Tampermonkey browser extension and installs the CASA user script to her browser. Now when visiting any of her course websites Cara can access visualisations of basic clickstream data for each student.

To further customise her CASA instance, Cara uploads additional data to provide more contextual and pedagogical detail (adaptation). The ability to do this is sign-posted and scaffolded from within the CASA tool (mastery). To expand the learner data Cara sources a CSV file from her institution’s student records system. Once uploaded to CASA, all the additional information about each student appears in her CASA and Cara can choose to further hide, reveal, or re-order this information (adaptation). To associate important course events (Corrin et al., 2015) with the clickstream data Cara uses a calendar application to create an iCalendar file with important dates (e.g. assignment due dates, weekly lecture times). This is uploaded or connected to CASA and the events are subsequently integrated into the clickstream analytics. At this stage, Cara has used CASA to add embedded, ubiquitous and contextually specific learning analytics about individual students into her course site. At no stage has Cara gained access to new information. CASA has simply made it easier for Cara to access this information, increasing her efficiency (leverage). This positive experience encourages Cara to consider what more is possible.
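Behind the scenes, the data wrangling in this scenario might look something like the following sketch: merging an exported student records CSV into the clickstream and tagging daily click counts with course events pulled from an iCalendar file. The file names, column names and the deliberately naive .ics parsing are assumptions for illustration only, not part of the paper or the platform.

```python
# Illustrative sketch: enriching clickstream analytics with uploaded context (assumed formats).
import pandas as pd

clicks = pd.read_csv("clickstream.csv", parse_dates=["timestamp"])  # student_id, timestamp, activity
students = pd.read_csv("student_records.csv")                       # student_id, name, program, ...

# Attach the extra student information to every click record.
enriched = clicks.merge(students, on="student_id", how="left")

def read_ical_events(path):
    """Very naive iCalendar reader: pulls DTSTART/SUMMARY pairs out of VEVENT blocks."""
    events, start, summary = [], None, None
    for line in open(path, encoding="utf-8"):
        line = line.strip()
        if line.startswith("DTSTART"):
            start = pd.to_datetime(line.split(":", 1)[1])
        elif line.startswith("SUMMARY"):
            summary = line.split(":", 1)[1]
        elif line == "END:VEVENT" and start is not None:
            events.append({"event_date": start.normalize(), "event": summary})
            start, summary = None, None
    return pd.DataFrame(events, columns=["event_date", "event"])

events = read_ical_events("course_events.ics")  # e.g. assignment due dates, lecture times

# Daily click counts, with any course event falling on the same day attached.
per_day = (enriched.assign(day=enriched["timestamp"].dt.normalize())
                   .groupby("day").size().rename("clicks").reset_index()
                   .merge(events, left_on="day", right_on="event_date", how="left"))
print(per_day.head())
```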

Cara engages in a discussion with Helen, a local educational designer. The discussion explores the purpose for using learning analytics and how it relates to intended learning outcomes. This leads to questions about exactly how and when Cara is engaging in the learning environment. These questions lead them to engage in various forms of participatory design with Chuck (a software developer). Chuck demonstrates how the student clickstream notebook from Cara’s existing instance can be copied and modified to visualise staff activity (mastery). Chuck also demonstrates how this new instance can be shared back to the CASA repository and how this process will eventually allow Daniel to choose to adopt this new instance (transferability). These discussions may also reveal insights into other factors such as limitations in Cara’s conceptions and practices of learning and teaching, or institutional factors and limitations (e.g. limited quality or variety of available data).

Conclusions and questions

This paper has described the rationale, origins, theoretical principles, planned technical implementation and possible use cases for CASA. CASA is a generative learning analytics platform which acts as a boundary object. An object that engages diverse stakeholders more effectively in creative acts of making to help make sense of and respond to the diversity and complexity inherent in learning and teaching in contemporary higher education. By allowing both DIW (participatory design) and DIY (end-user development) approaches to the implementation of learning analytics we think CASA can enable the development of embedded, ubiquitous and contextually specific applications of learning analytics, better position teaching as design, and subsequently improve learning experiences and outcomes. As novices to the practice of participatory design we are looking for assistance in examining how insights from participatory design can inform the design and use of CASA. For us, there appear to be three areas of design activity where participatory design can help and a possibility where the addition of generative technology might help strengthen participatory design.

First, the design of the CASA platform itself could benefit from participatory design. A particular challenge to implementation within higher education institutions is that, as a generative platform, CASA embodies a different mindset. A generative mindset invites open participation and assumes open participation provides significant advantage, especially in terms of achieving contextually appropriate applications. It sees users as partners and co-designers. An institutional mindset tends to see users as the subject of design and, due to concerns about privacy, security, and deficit models, seeks to significantly limit participation in design. Second, the DIW interaction between Cara, Helen and Chuck in the use case section is a potential example of using participatory design and the CASA platform to co-design and co-create contextually specific CASA instances. What methods, tools and techniques from participatory design could help these interactions? Is there benefit in embedding support for some of these within the CASA platform? Lastly, the CASA approach also seeks to enable individual teachers to engage in DIY development. According to Zittrain (2008), the easier we can make it for teachers to develop their own CASA instances (mastery), the more generative the platform will be. What insights from participatory design might help increase CASA’s generative nature? Can CASA be seen as an example of a generative toolkit (Sanders & Stappers, 2014)? Or, does the DIY focus move into the post-design stage (Sanders & Stappers, 2014)? Does it move beyond participatory design? Is the combination of participatory design and generative technology something different and more effective than participatory design alone? If it is separate, then how can the insights generated by DIY making with CASA be fed back into the on-going participatory design of the CASA platform, other CASA instances, and sense-making about the broader institutional context?

References

Baker, R. (2016). Stupid Tutoring Systems, Intelligent Humans. International Journal of Artificial Intelligence in Education, 26(2), 600-614. https://doi.org/10.1007/s40593-016-0105-0

Beer, C., Tickner, R., & Jones, D. (2014). Three paths for learning analytics and beyond : moving from rhetoric to reality. In B. Hegarty, J. McDonald, & S. Loke (Eds.), Rhetoric and Reality: Critical perspectives on educational technology. Proceedings ascilite Dunedin 2014 (pp. 242-250).

Colvin, C., Dawson, S., Wade, A., & Gašević, D. (2017). Addressing the Challenges of Institutional Adoption. In C. Lang, G. Siemens, A. F. Wise, & D. Gaševic (Eds.), The Handbook of Learning Analytics (pp. 281-289). Alberta, Canada: Society for Learning Analytics Research.

Corrin, L., Kennedy, G., Barba, P. D., Williams, D., Lockyer, L., Dawson, S., & Copeland, S. (2015). Loop: A learning analytics tool to provide teachers with useful data visualisations. In T. Reiners, B. von Konsky, D. Gibson, V. Chang, L. Irving, & K. Clarke (Eds.), Globally connected, digitally enabled. Proceedings ascilite 2015 (pp. 57-61).

Dede, C. (2008). Theoretical perspectives influencing the use of information technology in teaching and learning. In J. Voogt & G. Knezek (Eds.), International Handbook of Information Technology in Primary and Secondary Education (pp. 43-62). New York: Springer.

Díaz, O., & Arellano, C. (2015). The Augmented Web: Rationales, Opportunities, and Challenges on Browser-Side Transcoding. ACM Trans. Web, 9(2), 8:1-8:30.

Gašević, D., Dawson, S., Rogers, T., & Gasevic, D. (2016). Learning analytics should not promote one size fits all: The effects of instructional conditions in predicating learning success. The Internet and Higher Education, 28, 68-84. https://doi.org/10.1016/j.iheduc.2015.10.002

Glez-Peña, D., Lourenço, A., López-Fernández, H., Reboiro-Jato, M., & Fdez-Riverola, F. (2014). Web scraping technologies in an API world. Briefings in Bioinformatics, 15(5), 788-797. https://doi.org/10.1093/bib/bbt026

Goodyear, P. (2015). Teaching As Design. HERDSA Review of Higher Education, 2, 27-50.

Jones, D. (2011). An Information Systems Design Theory for E-learning (Doctoral thesis, Australian National University, Canberra, Australia). Retrieved from https://openresearch-repository.anu.edu.au/handle/1885/8370

Jones, D., Jones, H., Beer, C., & Lawson, C. (2017, December). Implications and questions for institutional learning analytics implementation arising from teacher DIY learning analytics. Paper presented at the ALASI 2017: Australian Learning Analytics Summer Institute, Brisbane, Australia. Retrieved from http://tiny.cc/ktsdiy

Liu, D. Y.-T., Bartimote-Aufflick, K., Pardo, A., & Bridgeman, A. J. (2017). Data-Driven Personalization of Student Learning Support in Higher Education. In A. Peña-Ayala (Ed.), Learning Analytics: Fundaments, Applications, and Trends (pp. 143-169). Springer International Publishing.

Mishra, P., & Koehler, M. (2006). Technological pedagogical content knowledge: A framework for teacher knowledge. Teachers College Record, 108(6), 1017-1054.

Sanders, E. B.-N., & Stappers, P. J. (2014). Probes, toolkits and prototypes: three approaches to making in codesigning. CoDesign, 10(1), 5-14. https://doi.org/10.1080/15710882.2014.888183

Schmitz, M., Limbeek, E. van, Greller, W., Sloep, P., & Drachsler, H. (2017). Opportunities and Challenges in Using Learning Analytics in Learning Design. In Data Driven Approaches in Digital Education (pp. 209-223). Springer, Cham.

Sinha, R., & Sudhish, P. S. (2016). A principled approach to reproducible research: a comparative review towards scientific integrity in computational research. In 2016 IEEE International Symposium on Ethics in Engineering, Science and Technology (ETHICS) (pp. 1-9).

Suthers, D., & Verbert, K. (2013). Learning analytics as a middle space. In Proceedings of the Third International Conference on Learning Analytics and Knowledge – LAK ’13 (pp. 2-5).

Wilson, G. (2014). Software Carpentry: lessons learned. F1000Research, 3.

Zittrain, J. (2008). The Future of the Internet–And How to Stop It. Yale University Press.

Teacher DIY learning analytics – implications & questions for institutional learning analytics

The following provides a collection of information and resources associated with a paper and presentation given at ALASI 2017 – the Australian Learning Analytics Summer Institute in Brisbane on 30 November, 2017. Below you’ll find an abstract, a recording of a version of the presentation, the presentation slides and the references.

The paper examines the DIY development and use of a particular application of learning analytics (known as Know thy student) within a single course during 2015 and 2016. The paper argues that given limitations about what is known about the institutional implementation of learning analytics that examining teacher DIY learning analytics can reveal some interesting insights. The paper identifies three implications and three questions.

Three implications

  1. Institutional learning analytics currently falls short of an important goal.

    If the goal of learning analytics is that “of getting key information to a human being who can use it” (Baker, 2016, p. 607) then institutional learning analytics is falling short, and not just at a specific institution.

  2. Embedded, ubiquitous, contextual learning analytics encourages greater use and enables emergent practice.

    This case suggests that learning analytics interventions designed to provide useful contextual data appropriately embedded ubiquitously throughout the learning environment can enable significant levels of usage, including usage that was unplanned, emerged from experience, and changed practice.

    In this case, Know thy student was used by the teacher on 666 different days (~91% of the days that the tool was available) to find out more about ~90% of the enrolled students. Graphical representations below.

  3. Teacher DIY learning analytics is possible.

    Know thy student was implemented by a single academic using a laptop, widely available software (including some coding), and existing institutional data sources.

Three questions

  1. Does institutional learning analytics have an incomplete focus?

    Research and practice around the institutional implementation of learning analytics appear to focus on “at scale”: learning analytics that can be used across multiple courses or an entire institution. That focus appears to come at the expense of course- or learning-design-specific learning analytics, which appear to be more useful.

  2. Does the institutional implementation of learning analytics have an indefinite postponement problem?

    Aspects of Know thy student are specific to the particular learning design within a single course. The implementation of such a specific requirement would appear unlikely to have ever been undertaken by existing institutional learning analytics implementation. It would have been indefinitely postponed.

  3. If and how do we enable teacher DIY learning analytics?

    This case suggests that teacher DIY learning analytics is possible and potentially overcomes limitations in current institutional implementation of learning analytics. However, it’s also not without its challenges and limitations. Should institutions support teacher DIY learning analytics? How might that be done?

Usage

The following heat map shows the number of times Know thy student was used on each day during 2015 and 2016.

Know thy student usage clicks per day

The following bar graph contains 761 “bars”. Each bar represents a unique student enrolled in this course. The size of the bar shows the number of times Know thy student was used for that particular student. (One student was obviously used for testing purposes during the development of the tool)

Know thy student usage clicks per student
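For the curious, figures like these can be produced with a few lines of analysis. The sketch below is a reconstruction under assumptions (hypothetical log file and column names), not the actual code behind the graphs: it counts clicks per day and per student, then pivots the daily counts into a calendar-style grid suitable for a heat map.

```python
# Illustrative sketch: summarising Know thy student usage from a hypothetical log.
import pandas as pd

# Assumed columns: timestamp, student_id
log = pd.read_csv("kts_usage_log.csv", parse_dates=["timestamp"])

clicks_per_day = log.groupby(log["timestamp"].dt.date).size()
clicks_per_student = log.groupby("student_id").size().sort_values(ascending=False)

print(f"Total uses: {len(log)}")
print(f"Days with at least one use: {clicks_per_day.size}")
print(f"Unique students viewed: {clicks_per_student.size}")

# Calendar-style grid for a heat map: day of week (rows) by year and ISO week (columns).
days = clicks_per_day.rename_axis("date").reset_index(name="clicks")
days["date"] = pd.to_datetime(days["date"])
heatmap = (days.assign(dow=days["date"].dt.dayofweek,
                       year=days["date"].dt.year,
                       week=days["date"].dt.isocalendar().week)
               .pivot_table(index="dow", columns=["year", "week"],
                            values="clicks", aggfunc="sum"))
```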

Abstract

The paper on which it is based has the following abstract.

Learning analytics promises to provide insights that can help improve the quality of learning experiences. Since the late 2000s it has inspired significant investments in time and resources by researchers and institutions to identify and implement successful applications of learning analytics. However, there is limited evidence of successful at scale implementation, somewhat limited empirical research investigating the deployment of learning analytics, and subsequently concerns about the insight that guides the institutional implementation of learning analytics. This paper describes and examines the rationale, implementation and use of a single example of teacher do-it-yourself (DIY) learning analytics to add a different perspective. It identifies three implications and three questions about the institutional implementation of learning analytics that appear to generate interesting research questions for further investigation.

Presentation recording

The following is a recording of a talk given at CQUni a couple of weeks after ALASI. It uses the same slides as the original ALASI presentation, however, without a time limit the description is a little expanded.

Slides

Also view and download here.

References

Baker, R. (2016). Stupid Tutoring Systems, Intelligent Humans. International Journal of Artificial Intelligence in Education, 26(2), 600–614. https://doi.org/10.1007/s40593-016-0105-0

Behrens, S. (2009). Shadow systems: the good, the bad and the ugly. Communications of the ACM, 52(2), 124–129.

Colvin, C., Dawson, S., Wade, A., & Gašević, D. (2017). Addressing the Challenges of Institutional Adoption. In C. Lang, G. Siemens, A. F. Wise, & D. Gaševic (Eds.), The Handbook of Learning Analytics (1st ed., pp. 281–289). Alberta, Canada: Society for Learning Analytics Research (SoLAR).

Corrin, L., Kennedy, G., & Mulder, R. (2013). Enhancing learning analytics by understanding the needs of teachers. In Electric Dreams. Proceedings ascilite 2013 (pp. 201–205).

Díaz, O., & Arellano, C. (2015). The Augmented Web: Rationales, Opportunities, and Challenges on Browser-Side Transcoding. ACM Trans. Web, 9(2), 8:1–8:30. https://doi.org/10.1145/2735633

Dron, J. (2014). Ten Principles for Effective Tinkering (pp. 505–513). Presented at the E-Learn: World Conference on E-Learning in Corporate, Government, Healthcare, and Higher Education, Association for the Advancement of Computing in Education (AACE).

Ferguson, R. (2014). Learning analytics FAQs. Education. Retrieved from https://www.slideshare.net/R3beccaF/learning-analytics-fa-qs

Gašević, D., Dawson, S., Rogers, T., & Gasevic, D. (2015). Learning analytics should not promote one size fits all: The effects of instructional conditions in predicating learning success. The Internet and Higher Education, 28, 68–84. https://doi.org/10.1016/j.iheduc.2015.10.002

Germonprez, M., Hovorka, D., & Collopy, F. (2007). A theory of tailorable technology design. Journal of the Association of Information Systems, 8(6), 351–367.

Grover, S., & Pea, R. (2013). Computational Thinking in K-12: A Review of the State of the Field. Educational Researcher, 42(1), 38–43. https://doi.org/10.3102/0013189X12463051

Hatton, E. (1989). Levi-Strauss’s Bricolage and Theorizing Teachers’ Work. Anthropology and Education Quarterly, 20(2), 74–96.

Johnson, L., Adams Becker, S., Estrada, V., & Freeman, A. (2014). NMC Horizon Report: 2014 Higher Education Edition (No. 9780989733557). Austin, Texas. Retrieved from http://www.nmc.org/publications/2014-horizon-report-higher-ed

Jones, D., & Clark, D. (2014). Breaking BAD to bridge the reality/rhetoric chasm. In B. Hegarty, J. McDonald, & S. Loke (Eds.), Rhetoric and Reality: Critical perspectives on educational technology. Proceedings ascilite Dunedin 2014 (pp. 262–272).

Kay, A., & Goldberg, A. (1977). Personal Dynamic Media. Computer, 10(3), 31–41.

Ko, A. J., Abraham, R., Beckwith, L., Blackwell, A., Burnett, M., Erwig, M., … Wiedenbeck, S. (2011). The State of the Art in End-user Software Engineering. ACM Comput. Surv., 43(3), 21:1–21:44. https://doi.org/10.1145/1922649.1922658

Kruse, A., & Pongsajapan, R. (2012). Student-Centered Learning Analytics (CNDLS Thought Papers). Georgetown University. Retrieved from https://cndls.georgetown.edu/m/documents/thoughtpaper-krusepongsajapan.pdf

Levi-Strauss, C. (1966). The Savage Mind. Weidenfeld and Nicolson.

Liu, D. Y.-T. (2017). What do Academics really want out of Learning Analytics? – ASCILITE TELall Blog. Retrieved August 27, 2017, from http://blog.ascilite.org/what-academics-really-want-out-of-learning-analytics/

Liu, D. Y.-T., Bartimote-Aufflick, K., Pardo, A., & Bridgeman, A. J. (2017). Data-Driven Personalization of Student Learning Support in Higher Education. In A. Peña-Ayala (Ed.), Learning Analytics: Fundaments, Applications, and Trends (pp. 143–169). Springer International Publishing. https://doi.org/10.1007/978-3-319-52977-6_5

Lonn, S., Aguilar, S., & Teasley, S. D. (2013). Issues, Challenges, and Lessons Learned when Scaling Up a Learning Analytics Intervention. In Proceedings of the Third International Conference on Learning Analytics and Knowledge (pp. 235–239). New York, NY, USA: ACM. https://doi.org/10.1145/2460296.2460343

MacLean, A., Carter, K., Lövstrand, L., & Moran, T. (1990). User-tailorable Systems: Pressing the Issues with Buttons. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems (pp. 175–182). New York, NY, USA: ACM. https://doi.org/10.1145/97243.97271

Mishra, P., & Koehler, M. (2006). Technological pedagogical content knowledge: A framework for teacher knowledge. Teachers College Record, 108(6), 1017–1054.

Norman, D. A. (1993). Things that make us smart: defending human attributes in the age of the machine. Cambridge, Mass: Perseus.

Repenning, A., Webb, D., & Ioannidou, A. (2010). Scalable Game Design and the Development of a Checklist for Getting Computational Thinking into Public Schools. In Proceedings of the 41st ACM Technical Symposium on Computer Science Education (pp. 265–269). New York, NY, USA: ACM. https://doi.org/10.1145/1734263.1734357

Scanlon, E., Sharples, M., Fenton-O’Creevy, M., Fleck, J., Cooban, C., Ferguson, R., … Waterhouse, P. (2013). Beyond prototypes: Enabling innovation in technology-enhanced learning. London. Retrieved from http://tel.ioe.ac.uk/wp-content/uploads/2013/11/BeyondPrototypes.pdf

Sinha, R., & Sudhish, P. S. (2016). A principled approach to reproducible research: a comparative review towards scientific integrity in computational research. In 2016 IEEE International Symposium on Ethics in Engineering, Science and Technology (ETHICS) (pp. 1–9). https://doi.org/10.1109/ETHICS.2016.7560050

Wiley, D. (n.d.). The Reusability Paradox. Retrieved from http://cnx.org/content/m11898/latest/

Wiliam, D. (2006). Assessment: Learning communities can use it to engineer a bridge connecting teaching and learning. JSD, 27(1).

Yoo, Y., Boland, R. J., Lyytinen, K., & Majchrzak, A. (2012). Organizing for Innovation in the Digitized World. Organization Science, 23(5), 1398–1408.

Zittrain, J. L. (2006). The Generative Internet. Harvard Law Review, 119(7), 1974–2040.

Improving teacher awareness, action and reflection on learner activity

The following post contains the content from a poster designed for the 2017 USQ Toowoomba L&T celebration event. It provides some rationale for a technology demonstrator at USQ based on the Moodle Activity Viewer.

What is the problem?

Learner engagement is a key to learner success. Most definitions of learner engagement include “actively participating, interacting, and collaborating with students, faculty, course content and members of the community” (Angelino & Natvig, 2009, p. 3).

70% of USQ students study online. By mid-November 2017, 26,754 students had been active in USQ’s Moodle LMS.

In online learning, the absence of visual cues makes teacher awareness of student activity difficult (Govaerts, Verbert, & Duval, 2011). Richardson (2011) identifies “the role which teaching staff play in inspiring, challenging and engaging students” as “perhaps the most woefully neglected aspect of quality in higher education” (p. 2).

Learning analytics (LA) is the “use of (big) data to provide actionable intelligence for learners and teachers” (Ferguson, 2014). However, current tools provide poor data aggregation, poor visualisation capabilities and have other limitations that inhibit teachers’ ability to: understand student activity; respond appropriately; and, reflect on course design (Dawson & McWilliam, 2008; Corrin et al., 2013; Jones & Clark, 2014).

How will it be addressed?

Teachers can be supported through tools that help them “analyse, appraise and improve practices in their everyday activity systems” (Knight et al., 2006, p. 337).

This Technology Demonstrator has implemented and will customise and scaffold the use of the Moodle Activity Viewer (MAV) within the USQ activity system.

The MAV is a useful and easy-to-use tool that provides representations of student activity from within all Moodle learning spaces. It provides affordances to support teacher intervention and further analysis.

MAV - How many students

MAV’s overlay answering the question how many and what percentage of students have accessed each Moodle activity & resource?

What are the expected outcomes?

The project aims to explore two questions:

  1. If and how does the provision of contextual, useful, and easy to use representations of online learner activity help teachers analyse, appraise and improve their practices?
  2. If and how does this change in teacher activity influence learner activity and learning outcomes?

MAV - How many clicks

MAV’s overlay answering the question how many times have those students clicked on each Moodle activity & resource?
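The aggregation behind overlays like these can be sketched roughly as follows. This is an illustration of the idea only, with an assumed log format and enrolment figure, not MAV’s actual implementation.

```python
# Illustrative sketch: per-activity usage figures like those MAV overlays on a course site.
import pandas as pd

log = pd.read_csv("moodle_activity_log.csv")  # assumed columns: student_id, activity_id
total_enrolled = 250                          # assumed course enrolment

per_activity = (log.groupby("activity_id")
                   .agg(clicks=("student_id", "size"),
                        students=("student_id", "nunique")))
per_activity["percent_of_students"] = (per_activity["students"] / total_enrolled * 100).round(1)

print(per_activity.sort_values("clicks", ascending=False).head())
```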

Want to learn more?

Ask for a demonstration of MAV during the poster session.

USQ staff can learn more* about and start using MAV from http://tiny.cc/aboutmav and http://tiny.cc/installmav

* (Only from a USQ campus or via the USQ VPN)

MAV - How many students in a forum

MAV’s overlay answering the question how many and what percentage of students have read posts in this introductory activity?

MAV - Who accessed and how to contact them

MAV’s student access dialog providing details of, and enabling teacher contact with, the students who have accessed the “Fix my class IWB” forum.

References

Angelino, L. M., & Natvig, D. (2009). A Conceptual Model for Engagement of the Online Learner. Journal of Educators Online, 6(1), 1–19.

Corrin, L., Kennedy, G., & Mulder, R. (2013). Enhancing learning analytics by understanding the needs of teachers. In Electric Dreams. Proceedings ascilite 2013 (pp. 201–205).

Dawson, S., & McWilliam, E. (2008). Investigating the application of IT generated data as an indicators of learning and teaching performance. Queensland University of Technology and the University of British Columbia.

Ferguson, R. (2014). Learning analytics FAQs. Education. Retrieved from https://www.slideshare.net/R3beccaF/learning-analytics-fa-qs

Govaerts, S., Verbert, K., & Duval, E. (2011). Evaluating the Student Activity Meter: Two Case Studies. In Advances in Web-Based Learning – ICWL 2011 (pp. 188–197). Springer, Berlin, Heidelberg. https://doi.org/10.1007/978-3-642-25813-8_20

Jones, D., & Clark, D. (2014). Breaking BAD to bridge the reality/rhetoric chasm. In B. Hegarty, J. McDonald, & S.-K. Loke (Eds.), Proceedings of the 31st Annual Conference of the Australasian Society for Computers in Learning in Tertiary Education (ASCILITE 2014) (pp. 262–272). Sydney, Australia: Macquarie University.

Knight, P., Tait, J., & Yorke, M. (2006). The professional learning of teachers in higher education. Studies in Higher Education, 31(3), 319–339. https://doi.org/10.1080/03075070600680786

Introducing the Moodle Activity Viewer (MAV) & digital reno

What follows are the resources associated with a workshop being run at the University of Southern Queensland. As the title suggests, the aim is to get USQ folk started using the Moodle Activity Viewer to explore usage of Moodle activities and resources, and to briefly introduce the idea of digital renovation.

Apart from the presentation slides and references below, other related resources include:

  • Instructions for installing the MAV for USQ staff.

    Note: can only be accessed when on a USQ campus network (or the USQ VPN).

  • Additional details on other USQ digital reno tools

    Note: can only be accessed when on a USQ campus network (or the USQ VPN).

Slides

References

Jones, D., & Clark, D. (2014). Breaking BAD to bridge the reality/rhetoric chasm. In B. Hegarty, J. McDonald, & S. Loke (Eds.), Rhetoric and Reality: Critical perspectives on educational technology. Proceedings ascilite Dunedin 2014 (pp. 262–272).

Goodyear, P., & Dimitriadis, Y. (2013). In medias res: reframing design for learning. Research in Learning Technology, 21, 1–13. https://doi.org/10.3402/rlt.v21i0.19909

Norman, D. A. (1993). Things that make us smart: defending human attributes in the age of the machine. Cambridge, Mass: Perseus.

Implications and questions for institutional learning analytics implementation arising from teacher DIY learning analytics

David Jones, Hazel Jones, Colin Beer, Celeste Lawson, Implications and questions for institutional learning analytics implementation arising from teacher DIY learning analytics, To appear in the proceedings of the 2017 Australian Learning Analytics Summer Institute (ALASI 2017)

Abstract

Learning analytics promises to provide insights that can help improve the quality of learning experiences. Since the late 2000s it has inspired significant investments in time and resources by researchers and institutions to identify and implement successful applications of learning analytics. However, there is limited evidence of successful at scale implementation, somewhat limited empirical research investigating the deployment of learning analytics, and subsequently concerns about the insight that guides the institutional implementation of learning analytics. This paper describes and examines the rationale, implementation and use of a single example of teacher do-it-yourself (DIY) learning analytics to add a different perspective. It identifies three implications and three questions about the institutional implementation of learning analytics that appear to generate interesting research questions for further investigation.

Introduction

Learning analytics has been receiving attention since the late noughties. The promise of data driven decision making and the nature of the higher education environment – decreasing funding, increasing focus on quality, increasing use of technology enhanced learning (TEL) – is seen as making the institutional adoption of learning analytics an imperative for institutions of higher education (Macfadyen, Dawson, Pardo, & Gasevic, 2014, p. 17). By 2017, there appears to have been sufficient time and resources invested to realise the affordances learning analytics offers to education at the whole-of-institution scale (Colvin, Dawson, Wade, & Gašević, 2017), especially given predictions in 2012 that it was one year away from mainstream adoption within the Australian Higher Education sector (Johnson, Adams, & Cummins, 2012). However, there are only a small number of institutions that have demonstrated impact on learning and teaching outcomes through large-scale learning analytics programs (Ferguson, Clow, et al., 2014) and there are concerns that there remains limited evidence of the effectiveness of learning analytics at scale, or sufficient understanding to guide successful implementation (Colvin et al., 2017; Ferguson, Macfadyen, et al., 2014).

To address this concern there is a growing conceptual literature offering various models and frameworks to guide learning analytics adoption. Colvin et al. (2017) categorise and analyse this literature and argue that “while the models afford insight, they do not capture the breadth of factors that shape LA implementations” (p. 284). As a result, these models are unable to provide those responsible for institutional implementation of learning analytics “the nuanced, situated, fine-grained insight they require to guide them through learning analytics implementation” (Colvin et al., 2017, p. 284). Such a restriction could be addressed through empirical research that examines the “burgeoning, albeit nascent implementations found across higher education institutions” (Colvin et al., 2017, p. 285). Research by Colvin et al (2015) offers one valuable contribution; however, there are limitations. One such limitation is the focus on the perspectives from one set of participants involved in learning analytics projects: senior leaders charged with responsibility for implementation. While an important source of insight, this focus perhaps echoes the lack of human-centeredness that pervades learning analytics implementation (Liu, Bartimote-Aufflick, Pardo, & Bridgeman, 2017) and tends “to privilege the administrator rather than the student – or even the instructor” (Kruse & Pongsajapan, 2012, p. 4). This limitation raises questions such as:

What is the experience of students and teachers using institutional learning analytics? How might an understanding of their experience inform the institutional implementation of learning analytics?

It is these questions that this paper seeks to explore, with a particular focus on the experience of teaching staff. To do this, it describes a single teacher’s experience developing and using a do-it-yourself (DIY) approach to learning analytics. The paper starts by describing this approach and then draws from it three implications and three questions for institutional implementation of learning analytics.

Know thy student

During 2015 and 2016 one of the authors developed and used a DIY learning analytics tool (Know thy student) within a third-year Bachelor of Education course. Offered twice a year, the course had an annual enrolment of 400+ students. Two-thirds of these students studied online only, and less than 15% were ever likely to meet the course examiner in person. The design of the course focused explicitly on making significant use of a Moodle course site and sought to encourage: significant active student online engagement; formative assessment; student reflection via individual blogs; and, use of social bookmarking. Know thy student was developed to address limitations in existing institutional systems and enable more meaningful responses to student queries. The tool was inspired by and built on top of the Moodle Activity Viewer (MAV) developed at CQUniversity (Jones & Clark, 2014). While the tool interacted with, and extracted information from, a number of institutional systems, it could only be used via the implementer’s laptop to interact with the specific course site.

When in use, Know thy student modified every page of the course site viewed by the teacher. It added a [details] link wherever a link to a user profile appeared, as illustrated in Figure 1.


Figure 1 – Modified course page

Clicking on one of the [details] links would open a new pop-up window (Figure 2) providing access to information about the student. The pop-up window provided information in three separate tabs: personal details (Figure 2); activity completion (Figure 3); and, blog posts (Figure 4). Know thy student provided the examiner with ubiquitous and embedded access to course specific information about each student enrolled in the course.


Figure 2 – Personal details

Across four offerings of the course in 2015 and 2016 the teacher used the tool 3,100 separate times to access information on 761 different students, representing 89.5% of the enrolled students. For one student, the tool was used 32 separate times. The median number of uses per student was three.

Initially, most of this use was generated when answering student questions on course discussion forums. However, the embedded and ubiquitous availability of the [details] link enabled other unplanned uses. For example, the course home page provided a list of all course participants who had recently logged into the course site. As designed, Know thy student would add a [details] link to this list. This modification to the learning environment encouraged the development of a practice where the teacher would use that link to proactively learn more about students. In turn, this led to an increase in engaging with students via their blog posts and other means. Since the tool was simple and easily within grasp, it provided a platform that encouraged more meaningful and unexpected connections with hundreds of students.

Implications and questions for learning analytics implementation

Analysis and discussion about the case have led the authors to suggest three implications about and three questions for the institutional implementation of learning analytics. Given the exploratory nature of this research these are tentative suggestions, and each implication and question in turn generates additional questions for further investigation.

Implication #1: Institutional learning analytics currently falls short of an important goal

Baker (2016) identifies a common goal shared by learning analytics systems, that “of getting key information to a human being who can use it” (p. 607). This case shows that at least one institution’s approach to learning analytics is falling short of this goal, and there are indications that this problem is not limited to a single institution. Almost 10 years ago, Dawson & McWilliam (2008) commented on how poor the LMS data aggregation and visualisation tools of the day were in helping academics understand student learning behaviour. In 2013, focus groups of academics from the University of Melbourne identified a common need to be better able to correlate data from different institutional systems (Corrin, Kennedy, & Mulder, 2013). A recent unpublished experiment at another institution by one of the co-authors of this paper identified that gathering relevant information for ten post-graduate students took over an hour and required the use of five separate information systems owned by three separate institutional departments. This reinforces the observation from Liu (2017) that academics “rarely have the data that they actually want in a place and form where it can actually be used”.

How widespread is this apparent failure? What are the factors contributing to this apparent failure? What can be done to address it?


Figure 3 – Activity completion

Implication #2: Embedded, ubiquitous, contextual learning analytics enable emergent practice

Experience from this case suggests that providing useful contextual data, embedded appropriately and ubiquitously throughout the learning environment, can enable unplanned and effective interventions. In this case, being able to access student and course specific information throughout the learning environment enabled the teacher to adopt the unplanned practice of proactively connecting with students. Arguably, this may fit with characterisations of teachers as bricoleurs focused on making do with and creatively repurposing the tools that are at hand (Hatton, 1989). Providing contextually appropriate tools, however, is difficult given the sheer diversity involved in education where “there is no single technological solution that applies for every teacher, every course, or every view of teaching” (Mishra & Koehler, 2006, p. 1029).

Does the provision of embedded, ubiquitous and contextual learning analytics increase and encourage greater adoption and bricolage by teachers with learning analytics? What impact would that have on the learning experience? Given the inherent diversity in education, how can institutional learning analytics provide contextually appropriate learning analytics?


Figure 4 – Sentiment analysis of blog posts

Implication #3: Teacher DIY learning analytics is possible

This case shows that technically literate academics are able to leverage available technologies to implement and use teacher DIY learning analytics. The notion of end-user development is not new, with “[m]ost programs today … written not by professional software developers, but by people with expertise in other domains working towards goals for which they need computational support” (Ko et al., 2011, p. 21). Such work can be seen as undesirable due to concerns about inefficiency, error, support, scalability, privacy and security. However, it can also address limitations and flaws in provided systems (Koopman & Hoffman, 2003).

How is DIY learning analytics viewed in relation to the institutional implementation of learning analytics? Is it something to be prevented, or enabled and encouraged? Given technology trends, can it be prevented?

Question #1: Does institutional learning analytics have an incomplete focus?

The common response to seeing the Know thy student tool is to ask if and how it can be reused in other courses. Such a response aims to understand if and how this particular learning analytics tool can “make the leap from the focused and particular to the broad and general” (Lonn et al., 2013, p. 235). This echoes what is seen as the core goal for most learning analytics projects: “to move from small-scale research towards broader institutional implementation” (Ferguson, Macfadyen, et al., 2014, p. 120). However, if “there is no single technological solution that applies for every teacher, every course, or every view of teaching” (Mishra & Koehler, 2006, p. 1029), then how can a broad and general focus effectively respond to diverse contextual requirements? How can the institutional implementation of learning analytics address concerns that it is focused at an “institutional scale rather than a human scale” (Kruse & Pongsajapan, 2012)? Should and can its focus be expanded to include both the human and institutional scale?

Question #2: Does the institutional implementation of learning analytics have an indefinite postponement problem?

In seeking to move learning analytics beyond a research project to institutional scale, Lonn et al (2013) partnered with a university’s Information Technology (IT) service. A first step in their project involved the IT service performing a feasibility assessment of the project and placing “it in their timeline of priorities” (p. 236); subsequently the project “was delayed due to existing projects … that were a higher priority for the institution” (Lonn et al., 2013, p. 238). Given the typical prioritisation scheme used by a university IT service, a tool like Know thy student, which focuses on a need from a single course, is unlikely to ever be of sufficient priority to be actioned at the institutional level. It will be indefinitely postponed.

Would learning analytics that are specific to the learning designs within a single course ever be implemented by institutional IT? Would such a project be indefinitely postponed? What impact does this have on the institutional implementation of learning analytics? Should and can this problem be addressed?

Question #3: If and how do we enable teacher DIY learning analytics?

The above has suggested that teacher (and perhaps student) DIY learning analytics may make a useful contribution to institutional learning analytics implementation. However, there are numerous significant questions around if and how it can be achieved, including whether it can be integrated sustainably into institutional implementation, and whether teaching staff have sufficient data and technical literacy to contribute effectively.

In terms of institutional implementation, Colvin et al (2017) provide recommendations necessary for sustainable learning analytics adoption that could offer useful guidance. In addition, there are projects like that described by Liu et al (2017) that are actively using such recommendations to support a level of teacher DIY learning analytics. The challenge is that enabling and encouraging teacher DIY learning analytics appears to represent a mindset that is incommensurable with the assumptions underpinning the majority of contemporary institutional practices (Jones & Clark, 2014). There is also research suggesting that the convergent and generative characteristics of pervasive digital technology require the development of radically different approaches to corporate IT infrastructures and organisational strategic frameworks (Yoo, Boland, Lyytinen, & Majchrzak, 2012).

The low digital fluency of teaching staff has been identified as a significant challenge impeding the adoption of digital technology within higher education (Johnson, Adams Becker, Estrada, & Freeman, 2014). If low digital fluency is challenging the effective use of digital technologies by teaching staff, then it does raise questions about the likelihood of teacher DIY learning analytics. However, research in end-user development suggests that such DIY practices are already happening and that such practices have positive impacts on the quantity and quality of adoption of digital technologies (Ko et al., 2011; Koopman & Hoffman, 2003). Finally, Scanlon et al (2013) observe that the complexity of technology-enhanced learning – such as learning analytics – means that accepting “’user-driven’ contributions from both teachers and students” (p. 34) may be necessary “to allow for effective intervention” and in order to understand the complexity of practices that is the “context for any particular TEL innovation” (p. 34).

Conclusion

This paper has briefly described a single case of teacher DIY learning analytics, which raises a number of implications and questions for the institutional implementation of learning analytics. It is suggested that empirical research moving beyond those in charge of the institutional implementation of learning analytics to those living with such systems can deepen the understanding of current experience with such systems and subsequently contribute improvements. From this case it appears that current approaches are failing to meet a potentially important goal of “getting key information to a human being who can use it” (Baker, 2016, p. 607). The paper has asked whether or not this may be due to learning analytics over-emphasising the broad at the expense of the specific or contextual. It may also be due to the nature of how institutional IT projects are prioritised leading to indefinite postponement of contextually specific projects. The case illustrates that technological trends are making teacher DIY learning analytics possible, if only in very limited situations, and has provided an indication that ubiquitous, embedded and contextual learning analytics can enable and encourage positive and unplanned usage. This suggests that enabling and encouraging teacher DIY learning analytics, in the form of more generative institutional learning analytics implementations, may offer an interesting and fruitful direction.

References

Baker, R. (2016). Stupid Tutoring Systems, Intelligent Humans. International Journal of Artificial Intelligence in Education, 26(2), 600–614. https://doi.org/10.1007/s40593-016-0105-0

Colvin, C., Dawson, S., Wade, A., & Gašević, D. (2017). Addressing the Challenges of Institutional Adoption. In C. Lang, G. Siemens, A. F. Wise, & D. Gaševic (Eds.), The Handbook of Learning Analytics (1st ed., pp. 281–289). Alberta, Canada: Society for Learning Analytics Research (SoLAR).

Corrin, L., Kennedy, G., & Mulder, R. (2013). Enhancing learning analytics by understanding the needs of teachers. In Electric Dreams. Proceedings ascilite 2013 (pp. 201–205).

Dawson, S., & McWilliam, E. (2008). Investigating the application of IT generated data as an indicator of learning and teaching performance. Queensland University of Technology and the University of British Columbia.

Ferguson, R., Macfadyen, L. P., Clow, D., Tynan, B., Alexander, S., & Dawson, S. (2014). Setting Learning Analytics in Context: Overcoming the Barriers to Large-Scale Adoption. Journal of Learning Analytics, 1(3), 120–144. https://doi.org/10.18608/jla.2014.13.7

Hatton, E. (1989). Levi-Strauss’s Bricolage and Theorizing Teachers’ Work. Anthropology and Education Quarterly, 20(2), 74–96.

Johnson, L., Adams Becker, S., Estrada, V., & Freeman, A. (2014). NMC Horizon Report: 2014 Higher Education Edition (No. 9780989733557). Austin, Texas.

Johnson, L., Adams, S., & Cummins, M. (2012). Technology Outlook for Australian Tertiary Education 2012-2017: An NMC Horizon Report Regional Analysis (No. 9780984660155). Austin, Texas.

Jones, D., & Clark, D. (2014). Breaking BAD to bridge the reality/rhetoric chasm. In B. Hegarty, J. McDonald, & S. Loke (Eds.), Rhetoric and Reality: Critical perspectives on educational technology. Proceedings ascilite Dunedin 2014 (pp. 262–272).

Ko, A. J., Abraham, R., Beckwith, L., Blackwell, A., Burnett, M., Erwig, M., … Wiedenbeck, S. (2011). The State of the Art in End-user Software Engineering. ACM Computing Surveys, 43(3), 21:1–21:44. https://doi.org/10.1145/1922649.1922658

Koopman, P., & Hoffman, R. (2003). Work-arounds, make-work and kludges. Intelligent Systems, IEEE, 18(6), 70–75.

Kruse, A., & Pongsajapan, R. (2012). Student-Centered Learning Analytics (CNDLS Thought Papers). Georgetown University.

Liu, D. Y.-T. (2017). What do Academics really want out of Learning Analytics? – ASCILITE TELall Blog. Retrieved August 27, 2017

Liu, D. Y.-T., Bartimote-Aufflick, K., Pardo, A., & Bridgeman, A. J. (2017). Data-Driven Personalization of Student Learning Support in Higher Education. In A. Peña-Ayala (Ed.), Learning Analytics: Fundaments, Applications, and Trends (pp. 143–169). Springer International Publishing. https://doi.org/10.1007/978-3-319-52977-6_5

Lonn, S., Aguilar, S., & Teasley, S. D. (2013). Issues, Challenges, and Lessons Learned when Scaling Up a Learning Analytics Intervention. In Proceedings of the Third International Conference on Learning Analytics and Knowledge (pp. 235–239). New York, NY, USA: ACM. https://doi.org/10.1145/2460296.2460343

Macfadyen, L. P., Dawson, S., Pardo, A., & Gasevic, D. (2014). Embracing big data in complex educational systems: The learning analytics imperative and the policy challenge. Research and Practice in Assessment, 9(Winter), 17–28.

Mishra, P., & Koehler, M. (2006). Technological pedagogical content knowledge: A framework for teacher knowledge. Teachers College Record, 108(6), 1017–1054.

Scanlon, E., Sharples, M., Fenton-O’Creevy, M., Fleck, J., Cooban, C., Ferguson, R., … Waterhouse, P. (2013). Beyond prototypes: Enabling innovation in technology‐enhanced learning. London.

Yoo, Y., Boland, R. J., Lyytinen, K., & Majchrzak, A. (2012). Organizing for Innovation in the Digitized World. Organization Science, 23(5), 1398–1408.

Exploring options for teacher DIY learning analytics



A few of us recently submitted a paper to ALASI’2017 that examined a “case study” of a teacher (me) engaging in a bit of DIY learning analytics. The case was used to draw a few tentative conclusions and questions around the institutional implementation of learning analytics. The main conclusion is that teacher DIY learning analytics is largely ignored at the institutional level and that there appears to be a need and value to support it. The question is how (and then if supported, what happens)?

This post is the start of an exploration of some technologies that combined may offer some of the affordances necessary to supporting teacher DIY learning analytics. The collection of technologies and the approach owes a significant amount of inspiration to Tony Hirst, especially in this post in which he writes

What I care about are some of the features that Docker has, and how I can use those features to make my own life easier, … supporting personal, DIY, BYOA (“bring your own app”) IT that works at an individual level in the form of end-user applications, or personal digital workbenches

The plan/hope here is that Docker combined with some other technologies can provide a platform to enable a useful combination of do-it-with (DIW) and do-it-yourself (DIY) paths for the institutional implementation of learning analytics. The following is mostly documenting ad hoc exploration of the technologies.

In the end, I’ve been able to get a Jupyter notebook working as a JSON API and started exploring Docker containers. This lays the groundwork for the next step, which will be to explore how and if some of this can be combined to integrate some of the work Hazel is doing with some of the Indicators work from earlier in the year.

Learning more – Jupyter notebook JSON API

Tony provides a description of using Jupyter Notebooks to provide a JSON API. Potentially this provides a way for DIY teachers to create their own MAV-like server.

Tony’s exploration is informed by this post from IBM introducing the Jupyter kernel gateway (github repo)

The README.md from the github repo mentions serving HTTP requests from “annotated notebook cells”, suggesting that the method of annotation will be important. The IBM example code shows that each API call is handled by a particular cell starting with an appropriately formatted comment, i.e.

single-line comments containing a HTTP verb … followed by a parameterised URL path

Have a simple example working.
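
To give a flavour of what an annotated cell looks like, here’s a minimal, untested sketch based on my reading of the kernel gateway’s notebook-http mode. The annotation format and the REQUEST variable come from the IBM example and the kernel gateway README; the student lookup itself is entirely hypothetical.

# GET /student/:id
# In notebook-http mode the kernel gateway injects the incoming request
# into the REQUEST variable as a JSON string
import json

request = json.loads(REQUEST)
student_id = request.get('path', {}).get('id', '')

# A real TDIY server would query Moodle or another institutional system here;
# this simply echoes a canned response for the requested student
print(json.dumps({'student': student_id, 'clicks': 42}))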

Deploying – user experience

The IBM post then goes on to use Docker to deploy this API. But before I do that, let’s get some experience at the user end with Tony’s example.

  1. Install VirtualBox
    Question: Is this something a standard user can do?
  2. Install vagrant
  3. Use the command line to install a vagrant plugin

    Question: Too much? But can probably be worked around.

  4. Download the repo as a zip file.

    Had to figure out that I needed to go back to the repo “home” to get the download option (long time between drinks doing this).

  5. Run the vagrant file

    Ok, it’s downloading the file from the vagrant server (from the ouseful area on Vagrant).

    It’s a 1.66GB file. That size could potentially be an issue, suggesting the need for a local copy, especially given the slow download.

    An hour or two later and it is up and running. There’s a GUI linux box running on my Mac.

Don’t know a great deal about the application that is the focus, but it appears to work. It’s a 3D application, so the screen refresh isn’t all that fast. But as a personal server for DIY teacher analytics, it should work fine, at least in terms of speed.

Running it a second time includes a check to see if it’s up to date and then up it pops.

The box appears to have Perl, Python and Jupyter installed.

Deploying – developing a Docker container/image

This raises the question of the best option for creating and sharing a docker/container/insert appropriate term – I’ll go with image – that has Jupyter notebooks and the kernel_gateway tool running. At this stage, this purpose seems best served by a headless virtual machine, with browser-based communication as the method for interacting with Jupyter notebooks.

Tony appears to do exactly this (using OpenRefine) using Kitematic in this post. Later in the post the options appear to include

  • Sharing images publicly via the Dockerhub registry
  • Using a private Dockerhub repository (one comes with the free plan)
  • Keeping images locally on your own computer
  • Running your own image registry
  • And, I assume, using an alternative registry.

Tony sees using the command line as a drawback for running your own. Perhaps not the biggest problem in my case. But what is the best approach?

Dockerhub and its ilk do appear to provide extra help (e.g. official repositories you can build upon).

One set of alternatives appears largely focused on supporting central IT, not the end user, echoing a concern expressed by Tony.

The intro from another alternative suggests that Docker is becoming more generic. Time to look and read further afield.

Intro to containers

From Medium

  • Containers abstract the OS etc to make it simple to deploy
  • Containers are usually measured in tens of megabytes
  • Big distinction made between containers and virtual machines, perhaps boils down to “containers virtualise the OS; virtual machines the hardware”

    Though interesting, the one tried above required the downloading of a virtual machine first. Update: That appears to be because I’m running Mac OS X. If I were on a Linux box, I probably wouldn’t have needed that.

  • The following seem to resonate most with the needs of teacher DIY learning analytics
    • Using containers can decrease the time needed for development, testing, and deployment of applications and services.
    • Testing and bug tracking also become less complicated since there is no difference between running your application locally, on a test server, or in production.
    • Container-based virtualization is a great option for microservices, DevOps, and continuous deployment.
  • Docker, which is Linux-based and open source, is the big player.
  • Spends some attention on container orchestration – appears to be focused on enterprise IT.

The following offers a creative intro to Kubernetes

Starts with the case for containers (Docker), but then moves on to orchestration and the need for Kubernetes. Puts containers into a pod, perhaps with more than one if tightly coupled. Goes on to explain the other features provided by Kubernetes.

And an intro to Docker

Rolling my own

Possible technology options

Do the following and I have a web server running in Docker that I can access from my Mac OS browser.

AA17-00936:docker david$ docker run -d -p 80:80 --name webserver nginx
Unable to find image 'nginx:latest' locally
latest: Pulling from library/nginx
afeb2bfd31c0: Pull complete 
7ff5d10493db: Pull complete 
d2562f1ae1d0: Pull complete 
Digest: sha256:af32e714a9cc3157157374e68c818b05ebe9e0737aac06b55a09da374209a8f9
Status: Downloaded newer image for nginx:latest
f1f6925acc31f80faf726358f8de5712458ff3649d2c0626bf3bb37f11d1b070
AA17-00936:docker david$

Dig into tutorials and have a play

Docker shares a git repo for tutorials and labs, which are quite good and useful.

Getting set up with some advice above.

Running your first container includes some simple commands, e.g. to show details of installed images, showing that they can be quite small.

Question: To have folk install Docker, or do the VM route as above?

AA17-00936:docker david$ docker images
REPOSITORY          TAG                 IMAGE ID            CREATED             SIZE
ubuntu              latest              2d696327ab2e        11 days ago         122MB
nginx               latest              da5939581ac8        2 weeks ago         108MB
alpine              latest              76da55c8019d        2 weeks ago         3.97MB
hello-world         latest              05a3bd381fc2        2 weeks ago         1.84kB

Web apps with docker, which also starts looking at the process of rolling your own.

This is where discussion of different types of images commences

  • Base images (e.g. an OS) and child images, which add functionality to a base image
  • Official images – sanctioned by Docker
  • User images

The process can be summarised as:

  • Create the app (example is using a Python web framework – Flask)
  • Add in a Dockerfile – a text file of commands for the Docker daemon to use when creating an image
  • Build the image

    Does require an account on the Docker cloud

    And there it goes getting all the pre-reqs etc. Quite quick.

And it runs successfully.
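
For context, the “app” created in the first step is just a small Flask application along the following lines. This is a generic sketch of my own rather than the tutorial’s actual code.

# app.py - the kind of minimal Flask app the tutorial containerises
from flask import Flask

app = Flask(__name__)

@app.route('/')
def index():
    return 'Hello from inside a container'

if __name__ == '__main__':
    # Listen on all interfaces so the port mapped by Docker is reachable
    app.run(host='0.0.0.0', port=5000)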

Docker Swarm runs multiple copies, including on the cloud. Given the use case I’m interested in is people running their own, this is not a priority.

It does provide a look at Docker Compose files and a more complex application – multiple containers and two networks. Given my focus on using Jupyter Notebooks and perhaps the kernel gateway, this may be simplified a bit.

Seems we’re at the stage of actually trying to do something real.

Create a Docker image – TDIY

Jupyter Notebook, kernel gateway and a simple collection of notebooks – perhaps with greasemonkey script

Misc. related stuff

Bit on microservices (microservice architectural style) pointing out the focus on

principles of loose coupling and high cohesion of services

and in turn a number of characteristics

  • Applications are made up of small independent services

    Is TDIY LA about allowing teachers to create applications by combining these services?

  • Services are independently modifiable and (re)deployable

    But by whom?

  • Decentralised data management: each service can have its own database

    What about each user?

Goes on to list a range of advantages, but the disadvantages include

  • inefficiency – remote calls, network latency, potential duplication etc.

    But going local might help address some of this.

  • Developing a use case could need the cooperation of multiple teams

    This is the biggest barrier to implementation within an institution. But it raises the spectre of shadow systems, kludges etc.

  • complications in debugging, communication

Microservices and containers covers some of the alternatives.

Seems Docker is the place – it has bought Kitematic and apparently not loved it, which is a risk for basing the DIY approach on it.

Another part of the story is that you can build your own images and either share them publicly via the Dockerhub registry, keep them locally on your own computer, post them to a private Dockerhub repository (you get a single private repository as part of the Dockerhub free plan, or can pay for more…), or run your own image registry.

Dockerhub is probably the option I want to use here because of the focus on being open, being cross-institutional etc.

Learning analytics, quality indicators and meso-level practitioners

When it comes to research I’ve been a bit of a failure, especially when measured against some of the more recent strategic and managerial expectations. Where are those quartile 1 journal articles? Isn’t your h-index showing a downward trajectory?

The concern generated by these quantitative indicators not only motivated the following ideas for a broad research topic, but is also one of the issues to explore within the topic. The following outlines early attempts to identify a broader research topic that is relevant enough for current sector and institutional concerns; provides sufficient space for interesting research and contribution; aligns nicely (from one perspective) with my day job; and, will likely provide a good platform for a program of collaborative research.

The following:

  1. explains the broad idea for research topic within the literature; and,
  2. describes the work we’ve done so far including two related examples of the initial analytics/indicators we’ve explored.

The aim here is to be generative. We want to do something that generates mutually beneficial collaborations with others. If you’re interested, let us know.

Research topic

As currently defined the research topic is focused around the design and critical evaluation of the use and value of a learning analytics platform to support meso-level practitioners in higher education to engage with quality indicators of learning and teaching.

Amongst the various aims, are an intent to:

  • Figure out how to design and implement an analytics platform that is useful for meso-level practitioners.
  • Develop design principles for that platform informed by the analytics research, but also ideas from reproducible research and other sources.
  • Use and encourage the use by others of the platform to:
    1. explore what value (if any) can be extracted from a range of different quality indicators;
    2. design interventions that can help improve L&T; and,
    3. enable a broader range of research – especially critical research – around the use of quality indicators and learning analytics for learning and teaching.

Quality Indicators

The managerial turn in higher education has increased the need for and use of various indicators of quality, especially numeric indicators (e.g. the number of Q1 journal articles published, or not). Kinash et al (2015) state that quantifiable performance indicators are important to universities because they provide “explicit descriptions of evidence against which quality is measured” (p. 410). Chalmers (2008) offers the following synthesised definition of performance indicators

measures which give information and statistics context; permitting comparisons between fields, over time and with commonly accepted standards. They provide information about the degree to which teaching and learning quality objectives are being met within the higher education sector and institutions. (p. 10)

However, the generation and use of these indicators is not without issues.

There is also a problem with a tendency to rely on quantitative indicators. Quantitative indicators provide insight into “how much or how many, but say little about quality” (Chalmers & Gardiner, 2015, p. 84). Ferguson and Clow (2017) – writing in the context of learning analytics – argue that good-quality qualitative research needs to support good-quality quantitative research because “we cannot understand the data unless we understand the context”. Similarly, Kustra et al (2014) suggest that examining the quality of teaching requires significant qualitative indicators to “provide deeper interpretation and understanding of the measured variable”. Qualitative indicators are used by universities to measure performance in terms of processes and outcomes; however, “because they are more difficult to measure and often produce tentative results, are used less frequently” (Chalmers & Gardiner, 2015, p. 84).

Taking a broader perspective there are problems such as Goodhart’s law and performativity. As restated by Strathern (1997), Goodhart’s Law is ‘When a measure becomes a target, it ceases to be a good measure’ (p. 308). Elton (2004) describes Goodhart’s Law as “a special case of Heisenberg’s Uncertainty Principle in Sociology, which states that any observation of a social system affects the system both before and after the observation, and with unintended and often deleterious consequences” (p. 121). When used for control and comparison purposes (e.g. league tables) indicators “distort what is measured, influence practice towards what is being measured and cause unmeasured parts to get neglected” (Elton, 2004, p. 121).

And then there’s the perception that quality indicators, and potentially this whole research project, become an unquestioning part of performativity and all of the issues that generates. Ball (2003) outlines the issues and influence of the performative turn in institutions. He describes performativity as

a technology, a culture and a mode of regulation that employs judgements, comparisons and displays as means of incentive, control, attrition and change – based on rewards and sanctions (both material and symbolic). The performances (of individual subjects or organizations) serve as measures of productivity or output, or displays of ‘quality’, or ‘moments’ of promotion or inspection (Ball, 2003, p. 216)

All of the above (and I expect much more) points to there being interesting and challenging questions to explore and answer around quality indicators and beyond. I do hope that any research we do around this topic engages with the necessary critical approach to this research. As I re-read this post now I can’t help but see echoes of a previous discussion Leigh and I have had around inside out, outside in, or both. This approach is currently framed as an inside out approach. An approach where those inside the “system” are aware of the constraints and work to address those. The question remains whether this is possible.

Learning analytics

Siemens and Long (2011) define LA as “the measurement, collection, analysis and reporting of data about learners and their contexts, for purposes of understanding and optimizing learning and the environments in which it occurs” (p. 34). The dominant application of learning analytics has focused on “predicting student learning success and providing proactive feedback” (Gasevic, Dawson and Siemens, 2015), often driven by an interest in increasing student retention and success. Colvin et al (2016) found two distinct trajectories of activity in learning analytics within Australian higher education. The first, ultimately motivated by measurement and retention, implemented specific retention-related learning analytics programs. The second saw retention as a consequence of the broader learning and teaching experience and “viewed learning analytics as a process to bring understanding to learning and teaching practices” (Colvin et al, 2016, p. 2).

Personally, I’m a fan of the second trajectory and see supporting that trajectory as a major aim for this project.

Not all that surprisingly, learning analytics has been applied to the question of quality indicators. Dawson and McWilliam (2008) explored the use of “academic analytics” to

address the need for higher education institutions (HEIs) to develop and adopt scalable and automated measures of learning and teaching performance in order to evaluate the student learning experience (p. 1)

Their findings included (emphasis added):

  • “LMS data can be used to identify significant differences in pedagogical approaches adopted at school and faculty levels”
  • “provided key information for senior management for identifying levels of ICT adoption across the institution, ascertaining the extent to which teaching approaches reflect the strategic institutional priorities and thereby prioritise the allocation of staff development resources”
  • refining the analysis can identify “further specific exemplars of online teaching” and subsequently identify “‘hotspots’ of student learning engagement”; “provide lead indicators of student online community and satisfaction”; and, identify successful teaching practices “for the purposes of staff development activities and peer mentoring”

Macfadyen and Dawson (2012) provide examples of how learning analytics can reveal data that offer “benchmarks by which the institution can measure its LMS integration both over time, and against comparable organizations” (p. 157). However, the availability of such data does not ensure use in decision making. Macfadyen and Dawson (2012) also report that the availability of patterns generated by learning analytics did not generate critical debate and consideration of the implications of such data by the responsible organisational committee and thus apparently failed to influence institutional decision-making.

A bit more surprising, however, is that in my experience there doesn’t appear to have been a concerted effort to leverage learning analytics for these purposes. Perhaps this is related to findings from Colvin et al (2016) that even with all the attention given to learning analytics there continues to be: a lack of institutional exemplars; limited resources to guide implementation; and perceived challenges in how to effectively scale learning analytics across an institution. There remains little evidence that learning analytics has been helpful in closing the loop between research and practice, and made an impact on university-wide practice (Rogers et al, 2016).

Even if analytics is used, there are other questions such as the role of theory and context. Gasevic et al (2015) argue that while counting clicks may provide indicators of tool use it is unlikely to reveal insights of value for practice or the development of theory. If learning analytics is to achieve a lasting impact on student learning and teaching practice it will be necessary to draw on appropriate theoretical models (Gasevic et al, 2015). Rogers et al (2016) illustrate how such an approach “supports an ever-deepening ontological engagement that refines our understanding and can inform actionable recommendations that are sensitive to the situated practice of educators” (p. 245). If learning analytics aims to enhance learning and teaching, it is crucial that it engages with teachers and their dynamic contexts (Sharples et al., 2013). Accounting for course and context specific instructional conditions and learning designs is increasingly seen as an imperative for the use of learning analytics (Gasevic et al, 2015; Lockyer et al, 2013).

There remain many other questions about learning analytics. Many of those questions are shared with the use of quality indicators. There is also the question of how learning analytics can be harnessed via means that are sustainable, scale up, and at the same time provide contextually appropriate support. How can the tensions between the need for institutional-level quality indicators of learning and teaching and the inherently contextually specific nature of learning and teaching be resolved?

Meso-level practitioners

The limited evidence of impact from learning analytics on learning and teaching practice may simply be a mirror of the broader difficulty that universities have had with other institutional learning technologies. Hannon (2013) explains that when framed as a technology project the implementation of institutional learning technologies “risks achieving technical goals accompanied by social breakdowns or failure, and with minimal effect on teaching and learning practices” (p. 175). These breakdowns arise, in part, from the established view of enterprise technologies – a view that sees enterprise technologies as unable to be changed, so that “instead of optimizing our machines for humanity – or even the benefit of some particular group – we are optimizing humans for machinery” (Rushkoff, 2010, p. 15).

Jones et al (2006) use the term meso-level to describe the “level that was intermediate between small scale, local interaction, and large-scale policy and institutional processes” (p. 37). Hannon (2013) describes meso-level practitioners as the “teaching academics, learning technologists, and academic developers” (p. 175) working between the learning and teaching coal-face and the institutional context defined by an institution’s policies and technological systems. These are the people who can see themselves as trying to bridge the gaps between the institutional/technological vision (macro-level) and the practical coal-face realities (micro-level). These are the people who are often required to help “optimise humans for machinery”, but who would generally prefer to do the reverse. Hannon (2013) also observes that even though there has been significant growth in the meso-level within contemporary higher education, research has continued to focus largely on the macro or micro levels.

My personal experience suggests that the same can be said about the design and use of learning analytics. Most institutional attempts are focused at either the macro or micro level. The macro level focused largely on large-scale student retention efforts. The micro level focused on the provision of learning analytics dashboards and other tools to teaching staff and students. There has been some stellar work by meso-level practitioners in developing supports for the micro-level (e.g. Liu, Bartimote-Aufflick, Pardo, & Bridgeman, 2017). However, much of this work has been in spite of the affordances and support offered by the macro-level. Not enough of the work, beyond the exceptions already cited, appears to have actively attempted to help optimise the machinery for the humans. In addition, there doesn’t appear to be a great deal of work – beyond the initial work from almost 10 years ago – focused on if and how learning analytics can help meso-level practitioners in the work that they do.

As a result there are sure to be questions to explore about meso-level practitioners, their experience and impact on higher education. Leigh Blackall has recently observed that the growth in meso-level practitioners in the form of “LMS specialists and ed tech support staff” comes with the instruction that they “focus their attentions on a renewed sense of managerial oversight”. This implicates meso-level practitioners in questions related to performativity etc. Leigh also positions these meso-level practitioners as examples of disabling professions. Good pointers to some of the more critical questions to be asked about this type of work.

Can meso-level practitioners break out, or are we doomed to be instruments of performativity? What might it take to break free? How can learning analytics be implemented in a way that allows it to be optimised for the contextually specific needs of the human beings involved, rather than require the humans to be optimised for the machinery? Would such a focus improve the quality of L&T?

What have we done so far?

Initial work has focused on developing an open, traceable, cross-institutional platform for exploring learning analytics. In particular, exploring how recent ideas such as reproducible research and insights from learning analytics might help design a platform that enables meso-level practitioners to break some of the more concerning limitations of current practice.

We’re particularly interested in ideas from Elton (2004) where bottom-up approaches might “be considerably less prone to the undesirable consequences of Goodhart’s Law” (p. 125) – a perspective that resonates with our four paths idea for learning analytics, i.e. that it’s more desirable and successful to follow the do-it-with learners and teachers or learner/teacher DIY paths.

The “platform” is seen as an enabler for the rest of the research program. Without a protean technological platform – a platform we’re able to tailor to our requirements – it’s difficult to see how we’d be able to effectively support the deeply contextual nature of learning and teaching or escape broader constraints such as performativity. This also harks back to my disciplinary background as a computer scientist. In particular, the computer scientist as envisioned by Brooks (1996) as a toolsmith whose delight “is to fashion powertools and amplifiers for minds” (p. 64) and who “must partner with those who will use our tools, those whose intelligences we hope to amplify” (p. 64).

First steps

As a first step, we’re revisiting our earlier use of Malikowski, Thompson & Theis (2007) to look at LMS usage (yea, not that exciting, but you have to start somewhere). We’ve developed a set of Python classes that enable the use of the Malikowski et al (2007) LMS research model. That set of classes has been used to develop a collection of Jupyter notebooks that help explore LMS usage in a variety of ways.

The theory is that these technologies (and the use of github to share the code openly) should allow anyone else to perform the same analysis with their LMS/institution. So far, the code is limited to working only with Moodle. However, we have been successful in sharing code between two different installations of Moodle, i.e. one of us can develop some new code, share it via github, and the other can run that code over their data. A small win.

The Malikowski et al (2007) model groups LMS features by the following categories: Content, Communication, Assessment, Evaluation and Computer-Based Instruction. It also suggests that tool use occurs in a certain order and with a certain frequency. The following figure (click on it to see a larger version) is a representation of the Malikowski model.

Malikowski Flow Chart
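
To give a sense of how the Python classes use the model, the following is a simplified, hypothetical sketch (the class, table and function names in the actual github code differ). It maps Moodle module types onto the Malikowski categories and totals the clicks per category for a course.

# Hypothetical sketch: map Moodle module types to Malikowski (2007) categories
MALIKOWSKI_CATEGORIES = {
    'content': ['resource', 'book', 'page', 'folder', 'url'],
    'communication': ['forum', 'chat'],
    'assessment': ['quiz', 'assign', 'workshop'],
    'evaluation': ['feedback', 'survey'],
    'cbi': ['lesson', 'scorm'],
}

def clicks_by_category(events):
    """events is an iterable of (module, clicks) pairs extracted from Moodle logs."""
    totals = {category: 0 for category in MALIKOWSKI_CATEGORIES}
    for module, clicks in events:
        for category, modules in MALIKOWSKI_CATEGORIES.items():
            if module in modules:
                totals[category] += clicks
    return totals

# e.g. clicks_by_category([('forum', 120), ('quiz', 40), ('resource', 300)])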

Looking for engagement?

Dawson and McWilliam (2008) suggested that academic analytics could be used to identify “potential “hotspots” of student learning engagement” (p. 1). Assuming that the number of times students click within an LMS course is a somewhat useful proxy for engagement (a big question), then this platform might allow you to:

  1. Select a collection of courses.

    This might be all the courses in a discipline that scored well (or poorly) on some other performance indicator, all courses in a semester, all large first year courses, all courses in a discipline etc.

  2. Visualise the total number of student clicks on LMS functionality within each course, for each of the Malikowski categories.
  3. Visualise the number of clicks per student within each course in each Malikowski category.

These visualisations might then provide a useful indication of something that is (or isn’t) happening. An indication that would not have been visible otherwise and is worthy of further exploration via other means (e.g. qualitative).
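
As a concrete illustration of steps 2 and 3, a plotly version of the grouped bar chart might be produced along the following lines. The click counts below are illustrative only; the real notebooks pull them from the Moodle database and also produce the per-student variant.

# Sketch: grouped bar chart of total student clicks per Malikowski category
import plotly.graph_objs as go
from plotly.offline import plot

# Illustrative data only: course -> clicks per category
courses = {
    'Michael (n=451)': {'content': 120000, 'communication': 45000, 'assessment': 9000},
    'Marilyn (n=90)': {'content': 183000, 'communication': 27600, 'assessment': 5659},
}
categories = ['content', 'communication', 'assessment']

bars = [go.Bar(name=course, x=categories,
               y=[counts.get(c, 0) for c in categories])
        for course, counts in courses.items()]

fig = go.Figure(data=bars,
                layout=go.Layout(barmode='group',
                                 title='Student clicks by Malikowski category'))
plot(fig, filename='malikowski_clicks.html')  # writes a standalone HTML page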

The following two graphs were generated by our platform and are included here to provide a concrete example of the above process. Some features of the platform that the following illustrates include:

  • It generates artefacts (e.g. graphs, figures) that can be easily embedded anywhere on the web (e.g. this blog post). You don’t have to be using our analytics platform to see the artefacts.
  • It can anonymise data for external display. For example, courses in the following artefacts have been randomly given people’s names rather than course codes/names.

Number of total student clicks

The first graph shows a group of 7 courses. It shows the number of students enrolled in each course (e.g. the course Michael has n=451) and the bars represent the total number of clicks by enrolled students on the course website. The clicks are grouped according to the Malikowski categories. If you roll your mouse over one of the bars, then you should see displayed the exact number of clicks for each category.

For example, the course Marilyn with 90 students had

  • 183,000+ clicks on content resources
  • 27,600+ clicks on communication activities
  • 5,659 clicks on assessment activities
  • and 0 clicks for evaluation or CBI

Total number of clicks isn’t all that useful for course comparisons. Normalising to clicks per enrolled student might be useful.


Clicks per student

The following graph uses the same data as above; however, the number of clicks is now divided by the number of enrolled students – a simple change in analysis that highlights differences between courses.
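
In terms of the earlier sketch, the per-student figure is simply the category totals divided by the enrolment count (again, hypothetical names):

# totals comes from clicks_by_category() in the earlier sketch
enrolled = 90  # e.g. the Marilyn course
per_student = {category: total / enrolled
               for category, total in totals.items()}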

2000+ clicks on content per student certainly raises some questions about the Marilyn course. Whether that number is good, bad, or meaningless would require further exploration.

What’s next?

We’ll keep refining the approach; some likely work could include:

  • Using different theoretical models to generate indicators.
  • Exploring how to effectively supplement the quantitative with qualitative.
  • Exploring how engaging with this type of visualisation might be useful as part of professional learning.
  • Exploring if these visualisations can be easily embedded within the LMS, allowing staff and students to see appropriate indicators in the context of use.
  • Exploring various relationships between features quantitatively.

    For example, is there any correlation between results on student evaluation and Malikowski or other indicators? Correlations between disciplines or course design?

  • Combining the Malikowski model with additional analysis to see if it’s possible to identify significant changes in the evolution of LMS usage over time.

    e.g. to measure the impact of organisational policies.

  • Refine the platform itself.

    e.g. can it be modified to support other LMS?

  • Working with a variety of people to explore what different questions they might wish to answer with this platform.
  • Using the platform to enable specific research projects.

And a few more.

Want to play? Let me know. The more the merrier.

References

Ball, S. J. (2003). The teacher’s soul and the terrors of performativity. Journal of Education Policy, 18(2), 215–228. https://doi.org/10.1080/0268093022000043065

Brooks, F. (1996). The Computer Scientist as Toolsmith II. Communications of the ACM, 39(3), 61–68.

Chalmers, D. (2008). Indicators of university teaching and learning quality.

Chalmers, D., & Gardiner, D. (2015). An evaluation framework for identifying the effectiveness and impact of academic teacher development programmes. Studies in Educational Evaluation, 46, 81–91. https://doi.org/10.1016/j.stueduc.2015.02.002

Colvin, C., Wade, A., Dawson, S., Gasevic, D., Buckingham Shum, S., Nelson, K., … Fisher, J. (2016). Student retention and learning analytics : A snapshot of Australian practices and a framework for advancement. Canberra, ACT: Australian Government Office for Learning and Teaching. Retrieved from http://he-analytics.com/wp-content/uploads/SP13-3249_-Master17Aug2015-web.pdf

Dawson, S., & McWilliam, E. (2008). Investigating the application of IT generated data as an indicator of learning and teaching performance. Queensland University of Technology and the University of British Columbia.

Elton, L. (2004). Goodhart’s Law and Performance Indicators in Higher Education. Evaluation & Research in Education, 18(1–2), 120–128. https://doi.org/10.1080/09500790408668312

Ferguson, R., & Clow, D. (2017). Where is the Evidence?: A Call to Action for Learning Analytics. In Proceedings of the Seventh International Learning Analytics & Knowledge Conference (pp. 56–65). New York, NY, USA: ACM. https://doi.org/10.1145/3027385.3027396

Gašević, D., Dawson, S., & Siemens, G. (2015). Let’s not forget: Learning analytics are about learning. TechTrends, 59(1), 64–71. https://doi.org/10.1007/s11528-014-0822-x

Gašević, D., Dawson, S., Rogers, T., & Gasevic, D. (2015). Learning analytics should not promote one size fits all: The effects of instructional conditions in predicating learning success. The Internet and Higher Education, 28, 68–84. https://doi.org/10.1016/j.iheduc.2015.10.002

Hannon, J. (2013). Incommensurate practices: sociomaterial entanglements of learning technology implementation. Journal of Computer Assisted Learning, 29(2), 168–178. https://doi.org/10.1111/j.1365-2729.2012.00480.x

Jones, C., Dirckinck‐Holmfeld, L., & Lindström, B. (2006). A relational, indirect, meso-level approach to CSCL design in the next decade. International Journal of Computer-Supported Collaborative Learning, 1(1), 35–56. https://doi.org/10.1007/s11412-006-6841-7

Kinash, S., Naidu, V., Knight, D., Judd, M.-M., Nair, C. S., Booth, S., … Tulloch, M. (2015). Student feedback: a learning and teaching performance indicator. Quality Assurance in Education, 23(4), 410–428. https://doi.org/10.1108/QAE-10-2013-0042

Liu, D. Y.-T., Bartimote-Aufflick, K., Pardo, A., & Bridgeman, A. J. (2017). Data-Driven Personalization of Student Learning Support in Higher Education. In A. Peña-Ayala (Ed.), Learning Analytics: Fundaments, Applications, and Trends (pp. 143–169). Springer International Publishing.

Lockyer, L., Heathcote, E., & Dawson, S. (2013). Informing Pedagogical Action: Aligning Learning Analytics With Learning Design. American Behavioral Scientist, 57(10), 1439–1459. https://doi.org/10.1177/0002764213479367

Macfadyen, L., & Dawson, S. (2012). Numbers Are Not Enough. Why e-Learning Analytics Failed to Inform an Institutional Strategic Plan. Educational Technology & Society, 15(3), 149–163.

Malikowski, S., Thompson, M., & Theis, J. (2007). A model for research into course management systems: bridging technology and learning theory. Journal of Educational Computing Research, 36(2), 149-173.

Rogers, T., Dawson, S., & Gašević, D. (2016). Learning Analytics and the Imperative for Theory-Driven Research. In The SAGE Handbook of E-learning Research (2nd ed., pp. 232–250).

Rushkoff, D. (2010). Program or be programmed: Ten commands for a digital age. New York: OR Books.

Sharples, M., Mcandrew, P., Weller, M., Ferguson, R., Fitzgerald, E., & Hirst, T. (2013). Innovating Pedagogy 2013: Open University Innovation Report 2 (No. 9781780079370). Milton Keynes: UK. Retrieved from http://www.open.ac.uk/blogs/innovating/

Siemens, G., & Long, P. (2011). Penetrating the Fog: Analytics in Learning and Education. EDUCAUSE Review, 46(5). Retrieved from http://moourl.com/j6a5d

Strathern, M. (1997). “Improving ratings”: audit in the British University system. European Review, 5(3), 305–321. https://doi.org/10.1002/(SICI)1234-981X(199707)5:3<305::AID-EURO184>3.0.CO;2-4

Embedding plotly graphs in WordPress posts


Last year I started using Perl to play with analytics around Moodle Book usage. This year, @beerc and I have started playing with Jupyter Notebooks and Python to explore analytics for meso-level practitioners (Hannon, 2013). Plotly provides a fairly useful platform for generating and sharing graphs of various types, and it works well with a range of languages, including in Jupyter Notebooks.

The question here is how well it works with WordPress. WordPress has some (understandable) constraints around embedding external HTML in posts/pages, but there is a large set of community-contributed plugins that help with this, including a couple that apparently work with Plotly.

  • wp-plotly is designed to embed a Plotly-hosted graph by providing the Plotly URL. It doesn’t appear to work with the latest version of WordPress. No go.
  • Plot.wp provides a WordPress shortcode for Plotly (plotly and /plotly in square brackets) into which you place Plotly JSON data and, hey presto, a graph. It has a GitHub repo and actually works with the latest version of WordPress.

How to produce JSON from Python

I’m a Python newbie and don’t really grok it the way I did Perl. I assumed it should be possible to auto-generate the JSON from the Python code, but how?

The following seems to work in a notebook, though the resulting output needs its single quotes converted into double quotes and a couple of sets of extra double quotes removed before it is acceptable JSON.

# Python code to produce the JSON for a plotly figure
# fig is assumed to have been created earlier in the notebook
import json

jsonData = {}
jsonData['data'] = json.dumps(fig['data'])
jsonData['layout'] = json.dumps(fig['layout'])
jsonData

For the graph I’m currently playing with, this ends up with

{"layout": {"yaxis": {"range": [0, 100], "title": "% response rate"}, "title": "EDC3100 Semester 2 MyOpinion % Response Rate", "xaxis": {"ticktext": ["2014 (n=106)", "2015 (n=88)nLeaderboard", "2016 (n=100)nLeaderboard"], "title": "Year", "tickvals": ["2014", "2015", "2016"]}}, 
  "data": [{"type": "bar", "name": "EDC3100", "x": ["2014", "2015", "2016"], "y": [34, 48, 49]}, {"type": "scatter", "name": "USQ average", "x": ["2015", "2016"], "y": [26.83, 23.52]}]}

And the matching graph produced by plotly follows. Roll over the graph to see some “tooltips”.

References

Hannon, J. (2013). Incommensurate practices: sociomaterial entanglements of learning technology implementation. Journal of Computer Assisted Learning, 29(2), 168–178. https://doi.org/10.1111/j.1365-2729.2012.00480.x

Understanding systems conditions for sustainable uptake of learning analytics

My current institution is – like most other universities – attempting to make some use of learning analytics. The following uses a model of system conditions for sustainable uptake of learning analytics from Colvin et al (2016) to think about how/if those attempts might be enhanced. This is done by

  1. summarising the model;
  2. explaining how the model is “wrong”; and,
  3. offering some ideas for future work.

My aim here is mainly a personal attempt to make sense of what I might be able to do around learning analytics (LA) given the requirements of my current position. Requirements that include:

  • to better know my “learner”;

    In my current role I’m part of a team responsible for providing professional learning for teaching staff. My belief is that the better we know what the teaching staff (our “learners”) are doing and experiencing, the better we can help. A large part of the learning and teaching within our institution is supported by digital technologies. Meaning that learning analytics (LA) is potentially an important tool.

    How can we adopt LA to better understand teaching staff?

  • to help teaching staff use LA;

    A part of my work also involves helping teaching academics develop the knowledge/skills to modify their practice to improve student learning. A part of that will be developing knowledge/skills around LA.

    How can we better support the adoption of/development of knowledge/skills around LA by teaching staff?

  • increasing and improving research.

As academics we’re expected to do research. Increasingly, we’re expected to be very pragmatic about how we achieve outcomes. LA is still (at least for now?) a buzz word. Since we have to engage with LA anyway, we may as well do research around it. I’ve also done a bit of this in the past, which needs building upon.

    How can we best make a contribution to research around LA?

The model

The following uses work performed by an OLT funded project looking at student retention and learning analytics. That project took a broader view of learning analytics and, among other outcomes, produced a model of the system conditions for sustainable uptake of LA.

Given the questions I asked in the previous section and my current conceptions, it appears that much of my work will need to focus on helping encourage the sustainable uptake of LA within my institution. Hence the focus here on that model.

The model looks like this.

Model of system conditions for sustainable uptake of LA (Colvin et al, 2016)

At some level the aim here is to understand what’s required to encourage educator uptake of learning analytics in a sustainable way. The authors define educator as (Colvin et al, 2016, p. 19)

all those charged with the design and delivery of the ‘products’ of the system, chiefly courses/subjects, encompassing administrative, support and teaching roles

The model identifies two key capabilities that drive “the flow rate that pushes and pulls educators along the educator uptake pipeline from ‘interested‘ to ‘implementing‘”. These are

  1. Strategic capability “that orchestrates the setting for learning analytics”, and
  2. Implementation capability “that integrates actionable data and tools with educator practices”.

There are two additional drivers of the “flow rate”

  1. Tool/data quality – the “tool or combination of tools that manage data inputs and generate outputs in the form of actionable feedback” (Colvin et al, 2016, p. 30).
  2. Research/learning – “the organisational learning capacity to monitor implementations and improve the quality of tools, the identification and extraction of underlying data and the ease of usability of the feedback interface” (Colvin et al, 2016, p. 30)

The overall aim/hope is to create a “reinforcing feedback loop” (Colvin et al, 2016, p. 30) between the elements, acting in concert, that drives uptake. Uptake is accelerated by LA meeting “the real needs of learners and educators”.

How the model is “wrong”

All models are wrong, but some are useful (one explanation for why there are so many frameworks and models within education research). At the moment, I see the above model as useful for framing my thinking, but it’s also a little wrong – which is to be expected.

After all, Box (1979) thought

it would be very remarkable if any system existing in the real world could be exactly represented by any simple model. (p. 202)

Consequently, given that Colvin et al (2016) identify the implementation of LA as a complex phenomenon “shaped by multiple interrelated dimensions traversing conceptual, operational and temporal domains…as a non-linear, recursive, and dynamic process..” (p. 22), it’s no great surprise that there are complexities not captured by the model (or by my understanding and representation of it in this post).

The aim here is not to argue that (or how) the model is wrong. Nor is it to suggest places where the model should be expanded. Rather, the aim is to identify the complexities around implementation that aren’t visible in the model (but which may be in the report) and to use those to identify important/interesting/challenging areas for understanding and action, i.e. for me to think about the areas that interest me the most.

“Complexifying” educator uptake

The primary focus (shown within a green box) of the model appears to be encouraging the sustainable uptake of LA by educators. There are at least two ways to make this representation a bit more complex.

Uptake

Uptake is represented as a two-step process moving from Interested to Implementing. There seems to be scope to explore more broadly than just those two steps.

What about awareness? Arguably, LA is a buzz word and just about everyone may be aware of LA. But are they? If they are aware, what is their conceptualisation of LA? Is it just a predictive tool? Is it even a tool?

Assuming they are aware, how many are actually already in the interested state?
I think @hazelj59 has done some research that might provide some answers about this.

Then there’s the 4 paths work that identifies at least two paths for implementing LA that aren’t captured here. These two paths involve doing it with (DIW) the educator, and enabling educator DIY. Rather than simply implementing LA, these paths see the teacher being involved in the construction of different LA, moving into the tool/data quality and research/learning elements of the model.

educator

The authors define “educator” to include administrative, support and teaching roles. Yet the above model includes all educators in the one uptake process. The requirements/foci/capabilities of these different types of teaching roles are going to be very different. Some of these types of educators are largely invisible in discussions around LA. e.g. there are currently no moves to provide the type of LA that would be useful to my team.

And of course, this doesn’t even mention the question of the learner. The report does explicitly mention Supporting student empowerment, with a conception of learners that includes their need to develop agency, where LA’s role is to help students take responsibility for their own learning.

Institutional data foundation: enabling ethics, privacy, multiple tools, and rapid innovation

While ethics isn’t mentioned in the model, the report does highlight discussion around ethical and privacy considerations as important.

When discussing tool/data quality the report mentions “an analytic tool or combination of tools that manage data inputs and generate outputs in the form of actionable feedback”. Given the complexity of LA implementation (see the above discussion) and the current realities of digital learning within higher education, it would seem unlikely that a single tool would ever be sufficient.

The report also suggests (Colvin et al, 2016, p. 22)

that the mature foundations for LA implementations were identified in institutions that adopted a rapid innovation cycle whereby small scale projects are initiated and outcomes quickly assessed within short time frames

Combined with the increasing diversity of data sources within an institution, these factors seem to suggest that having an institutional data foundation is a key enabler. Such a foundation could provide a common source for all relevant data to the different tools that are developed as part of a rapid innovation cycle. It might be possible to design the foundation so that it embeds institutional ethical, privacy, and other considerations.

Echoing the model, such a foundation wouldn’t need to be provided by a single tool. It might be a suite of different tools. However, the focus would be on encouraging the provision of a common data foundation used by tools that seek to manipulate that data into actionable insights.

Rapid innovation cycle and responding to context

The report argues that the successful adoption of LA (Colvin et al, 2016, pp. 22–23)

is dependent on an institution’s ability to rapidly recognise and respond to organisational culture and the concerns of all stakeholders

and argues that

the sector can further grow its LA capacity by encouraging institutions to engage in similarly diffuse, small-scale projects with effective evaluation that quickly identifies sites of success and potential impact (p. 22)

This appears to be key, but how do you do it? How does an institution create an environment that actively encourages and enables this type of “small-scale projects with effective evaluation”?

My institution currently has the idea of Technology Demonstrators that appears to resonate somewhat with this idea. However, I’m not sure that this project has yet solved the problem of “effective evaluation” or of how/when to scale beyond the initial project.

Adding in theory/educational research

In discussing LA, Rogers et al (2016, p. 233) argue

that effective interventions rely on data that is sensitive to context, and that the application of a strong theoretical framework is required for contextual interpretation

Where does the “strong theoretical framework” come from, if not educational and related literature/research? How do you include this?

Is this where someone (or some group) needs to take on the role of data wrangler to support this process?

How do you guide/influence uptake?

The report assumes that once the elements in the above model are working in concert to form a reinforcing feedback loop, LA will increasingly meet the real needs of learners and educators, which will in turn accelerate organisational uptake.

At least for me, this raises the question: how do they know – let alone respond to – the needs of learners and educators?

For me, this harks back to why I perceive that the Technology Acceptance Model (TAM) is useless. TAM views an individual’s intention to adopt a particular digital technology as being most heavily influenced by two factors: perceived usefulness, and perceived ease of use. i.e. if the LA is useful and easy to use, then uptake will happen.

The $64K question is: what combination of features of an LA tool will be widely perceived by educators as useful and easy to use? Islam (2014, p. 25) identifies the problem as

…despite the huge amount of research…not in a position to pinpoint…what attributes…are necessary in order to build a high level of satisfaction and which…generate dissatisfaction

I’ve suggested one possible answer but there are sure to be alternatives and they need to be developed and tested.

The “communities of transformation” approach appears likely to have important elements of a solution. Especially if combined with an emphasis on the DIW and DIY paths for implementing learning analytics.

The type of approach suggested in Mor et al (2015) might also be interesting.

Expanding beyond a single institution

Given that the report focuses on uptake of LA within an institution, the model focuses on factors within the institution. However, no institution is an island.

There are questions around how an institution’s approach to LA can be usefully influenced and influence what is happening within the literature and at other institutions.

Future work

Framing this future work as research questions:

  1. How/can you encourage improvement in the strategic capability without holding up uptake?
  2. How can an institution develop a data foundation for LA?
  3. How to support rapid innovation cycles, including effective evaluation, that quickly identify sites of success and potential impact?
  4. Can the rapid innovation cycles be done in a distributed way across multiple teams?
  5. Can a combination of technology demonstrators and an institutional data foundation provide a way forward?
  6. How to support/encourage DIW and DIY approaches to uptake?
  7. Might an institutional data foundation and rapid innovation cycles be fruitfully leveraged to create an environment that helps combine learning design, student learning, and learning analytics? What impact might this have?

References

Box, G. E. P. (1979). Robustness in the Strategy of Scientific Model Building. In R. Launer & G. Wilkinson (Eds.), Robustness in Statistics (pp. 201–236). Academic Press.

Colvin, C., Wade, A., Dawson, S., Gasevic, D., Buckingham Shum, S., Nelson, K., … Fisher, J. (2016). Student retention and learning analytics: A snapshot of Australian practices and a framework for advancement. Canberra, ACT: Australian Government Office for Learning and Teaching. Retrieved from http://he-analytics.com/wp-content/uploads/SP13-3249_-Master17Aug2015-web.pdf

Helping teachers "know thy students"

The first key takeaway from Motz, Teague and Shepard (2015) is

Learner-centered approaches to higher education require that instructors have insight into their students’ characteristics, but instructors often prepare their courses long before they have an opportunity to meet the students.

The following illustrates one of the problems teaching staff (at least in my institution) face when trying to “know thy student”, and ponders whether learner experience design (LX design) plus learning analytics (LA) might help. It also shows off one example of what I’m currently doing to address this problem and ponders some future directions for development.

The problem

One of the problems I identified in this talk was what it took for me to “know thy student” during semester. For example, the following is a question asked by a student on my course website earlier this year (in an offering that included 300+ students).

Question on a forum

To answer this question, it would be useful to “know thy student” in the following terms:

  1. Where is the student located?
    My students are distributed throughout Australia and the world. For this assignment they should be using curriculum documents specific to their location. It’s useful to know if the student is using the correct curriculum documents.
  2. What specialisation is the student working on?
    As a core course in the Bachelor of Education degree, my course includes all types of pre-service teachers, ranging from students studying to be Early Childhood teachers, Primary school teachers, Secondary teachers, and even some looking to be VET teachers/trainers.
  3. What activities and resources has the student engaged with on the course site?
    The activities and resources on the site are designed to help students learn. There is an activity focused on this question: has this student completed it? When did they complete it?
  4. What else has the student written and asked about?
    In this course, students are asked to maintain their own blog for reflection. What the student has written on that blog might help provide more insight. Ditto for other forum posts.

To “know thy student” in the terms outlined above and limited to the tools provided by my institution requires:

  • the use of three different systems;
  • the use of a number of different reports/services within those systems; and,
  • at least 10 minutes to click through each of these.

Norman on affordances

Given Norman’s (1993) observations, is it any wonder that I might not spend 10 minutes on that task every time I respond to a question from one of the 300+ students?

Can learner experience (LX) design help?

Yesterday, Joyce (@catspyjamasnz) and I spent some time exploring if and how learner experience design (Joyce’s expertise) and learning analytics (my interest) might be combined.

As I’m currently working on a proposal to help make it easier for teachers to “know thy students”, this was uppermost in my mind. And, as Joyce pointed out, “know the students” is a key step in LX design. And, as Motz et al (2015) illustrate, there appears to be some value in using learning analytics to help teachers “know thy students”. And, beyond Motz et al’s (2015) focus on planning, learning analytics has been suggested to help with the orchestration of learning in the form of process analytics (Lockyer et al, 2013) – a link I was thinking about before our talk.

Out of all this a few questions

  1. Can LX design practices be married with learning analytics in ways that enhance and transform the approach used by Motz et al (2015)?
  2. Learning analytics can be critiqued as being driven more by the available data and the algorithms (and the expertise of the “data scientists”) available to analyse it. Some LA work is driven by educational theories/ideas. Does LX design offer a different set of “purposes” to inform the development of LA applications?
  3. Can LX design practices + learning analytics be used to translate what Motz et al (2015) see as “relatively rare and special” into more common practice?

    Exceptionally thoughtful, reflective instructors do exist, who customize and adapt their course after the start of the semester, but it’s our experience that these instructors are relatively rare and special, and these efforts at learning about students requires substantial time investment.

  4. Can this type of practice be done in a way that doesn’t require “data analysts responsible for developing and distributing” (Motz et al, 2015) the information?
  5. What type of affordances can and should such an approach provide?
  6. What ethical/privacy issues would need to be addressed?
  7. What additional data should be gathered and how?

    e.g. in the past I’ve used the course barometer idea to gather data about the student experience during a course. Might something like this usefully be added?

More student details

“More student details” is the kludge that I’ve put in place to solve the problem at the top of this post. I couldn’t live with the current systems and had to scratch that itch.

The technical implementation of this scratch involves

  1. Extracting data from various institutional systems via manually produced reports and screen scraping and placing that data into a database on my laptop.
  2. Adapting the MAV architecture to create a Greasemonkey script that talks to a server on my laptop that in turn extracts data from the database.
  3. Installing the Greasemonkey script in the browser I use on my laptop.
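
As a rough illustration only – the database, table, and column names below are invented, and the real implementation follows the MAV approach rather than this exact code – the laptop server in step 2 could be as simple as a small Flask + SQLite service that the Greasemonkey script queries for a given Moodle user id.

# Hypothetical sketch of the local "more student details" server.
# The database file, table and column names are assumptions, not the real schema.
import sqlite3
from flask import Flask, jsonify

app = Flask(__name__)
DB = 'student_details.db'   # populated by the manual extraction in step 1

@app.route('/details/<int:user_id>')
def details(user_id):
    conn = sqlite3.connect(DB)
    conn.row_factory = sqlite3.Row
    # Personal details (tab 1 of the popup)
    personal = conn.execute(
        'SELECT name, email, specialisation, campus, gpa, location '
        'FROM students WHERE moodle_id = ?', (user_id,)).fetchone()
    # Activity completion (tab 2) and blog posts (tab 3)
    activities = conn.execute(
        'SELECT activity, completed_at FROM completion WHERE moodle_id = ?',
        (user_id,)).fetchall()
    posts = conn.execute(
        'SELECT title, url, posted_at FROM blog_posts WHERE moodle_id = ?',
        (user_id,)).fetchall()
    conn.close()
    return jsonify({
        'personal': dict(personal) if personal else {},
        'activity_completion': [dict(r) for r in activities],
        'blog_posts': [dict(r) for r in posts],
    })

if __name__ == '__main__':
    # Local only: the Greasemonkey script calls http://localhost:5000/details/<id>
    app.run(port=5000)

The Greasemonkey script then only has to spot Moodle user profile links, add the [details] link, and render the returned JSON as the three tabs described below.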

As a result, when I use that browser to view the forum post at the top of this post, I actually see the following (click on the image to see a larger version). The red arrows have been added to the image to highlight what’s changed: the addition of [details] links.

Forum post + more student details

Whenever the Greasemonkey script sees a Moodle user profile link, it adds a [details] link, regardless of which page of my Moodle course sites I’m on. The following image shows an excerpt from the results page for a Quiz. It has the [details] links as well.

Quiz results + more student details

It’s not beautiful, but it’s something only I currently use and I was after utility.

Clicking on a [details] link results in a popup window appearing – a window that helps me “know thy student”. The window has three tabs. The first is labelled “Personal Details” and is visible below. It provides information from the institutional student records system, including name, email address, age, specialisation, which campus or mode the student is enrolled in, the number of prior units they’ve completed, their GPA, and their location and phone numbers.

Student background

The second tab on “more student details” shows details of the student’s activity completion. This is a Moodle idea where it tracks if and when a student has completed an activity or resource. My course site is designed as a collection of weekly “learning paths”. Each path is a series of activities and resources designed to help the student learn. Each week belongs to one of three modules.

The following image shows part of the “Activity Completion” tab for “more student details”. It shows that Module 2 starts with week 4 (Effective planning: a first step) and week 5 (Developing your learning plan). Each week has a series of activities and resources.

For each activity the student has completed, it shows when they completed that activity. This student completed the “Welcome to Module 2” activity 2 months ago. If I hold the mouse over “2 months ago” it will display the exact time and date it was completed.

I did mention above that it’s useful, rather than beautiful.

Student activity completion

The “Blog posts” tab shows details of all the posts the student has written on their blog for this course. Each entry includes a link to the blog post and shows how long ago the post was made.

Student blog posts

With this tool available, when I answer a question on a discussion forum I can quickly refresh what I know about the student and their progress before answering. When I consider a request for an assignment extension, I can check on the student’s progress so far. Without spending 10+ minutes doing so.

API implementation and flexibility

As currently implemented, this tool relies on a number of manual steps and my personal technology infrastructure. To scale this approach will require addressing these problems.

The traditional approach to doing this might involve making modifications to Moodle to add this functionality into Moodle. I think this is the wrong way to do it. It’s too heavyweight, largely because Moodle is a complex bit of software used by huge numbers of people across the world, and because most of the really useful information here is going to be unique to different courses. For example, not many courses at my institution currently use activity completion in the way my course does. Almost none of the courses at my institution use BIM and student blogs the way my course does. Beyond this, the type of information required to “know thy student” extends beyond what is available in Moodle.

To “know thy student”, especially when thinking of process analytics that are unique to the specific learning design used, it will be important that any solution be flexible. It should allow individual courses to adapt and modify the data required to fit the specifics of the course and its learning design.

Which is why I plan to continue the use of augmented browsing as the primary mechanism, and why I’ve started exploring Moodle’s API. It appears to provide a way to develop a flexible and customisable approach that allows “know thy student” to respond to the full diversity of learning and teaching.
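
For the record, a first Python sketch of what that exploration looks like. The URL and token below are placeholders, and the web service functions used (core_user_get_users_by_field and core_completion_get_activities_completion_status) are only available if they have been enabled for the token the institution provides.

# Sketch: pulling "know thy student" data via Moodle's web services REST API
# MOODLE_URL, TOKEN, and the ids used are placeholders
import requests

MOODLE_URL = 'https://moodle.example.edu/webservice/rest/server.php'
TOKEN = 'replace-with-a-web-service-token'

def call(function, **params):
    # Every Moodle REST call needs the token, function name and response format
    payload = {'wstoken': TOKEN, 'wsfunction': function,
               'moodlewsrestformat': 'json'}
    payload.update(params)
    return requests.get(MOODLE_URL, params=payload).json()

# Basic profile details for one student
user = call('core_user_get_users_by_field', field='id', **{'values[0]': 12345})

# Activity completion for that student in a given course
completion = call('core_completion_get_activities_completion_status',
                  courseid=6789, userid=12345)

Whether this sort of call is made from a local server (as above) or something more institutional is part of what needs exploring.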

Now, I wonder how LX design might help?

The perceived uselessness of the Technology Acceptance Model (TAM) for e-learning

Below you will find the slides, abstract, and references for a talk given to folk from the University of South Australia on 1 October, 2015. A later blog post outlines core parts of the argument.

Slides

Abstract

In a newspaper article (Laxon, 2013), Professor Mark Brown described e-learning as

a bit like teenage sex. Everyone says they’re doing it but not many people are and those that are doing it are doing it very poorly.

This is not a new problem: there is a long litany of publications spread over decades bemoaning the limited adoption of new technology-based pedagogical practices (e-learning). The dominant theoretical model used in research seeking to understand the adoption decisions of both staff and students has been the Technology Acceptance Model (TAM) (Šumak, Heričko, & Pušnik, 2011). TAM views an individual’s intention to adopt a particular digital technology as being most heavily influenced by two factors: perceived usefulness, and perceived ease of use. This presentation will explore and illustrate the perceived uselessness of TAM for understanding and responding to e-learning’s “teenage sex” problem using the BAD/SET mindsets (Jones & Clark, 2014) and experience from four years of teaching large, e-learning “rich” courses. The presentation will also seek to offer initial suggestions and ideas for addressing e-learning’s “teenage sex” problem.

References

Bichsel, J. (2012). Analytics in Higher Education: Benefits, Barriers, Progress and Recommendations. Louisville, CO. Retrieved from http://net.educause.edu/ir/library/pdf/ERS1207/ers1207.pdf

Box, G. E. P. (1979). Robustness in the Strategy of Scientific Model Building. In R. Launer & G. Wilkinson (Eds.), Robustness in Statistics (pp. 201–236). Academic Press.

Burton-Jones, A., & Hubona, G. (2006). The mediation of external variables in the technology acceptance model. Information & Management, 43(6), 706–717. doi:10.1016/j.im.2006.03.007

Ciborra, C. (1992). From thinking to tinkering: The grassroots of strategic information systems. The Information Society, 8(4), 297–309.

Corrin, L., Kennedy, G., & Mulder, R. (2013). Enhancing learning analytics by understanding the needs of teachers. In Electric Dreams. Proceedings ascilite 2013 (pp. 201–205).

Davis, F. D. (1986). A Technology Acceptance Model for empirically testing new end-user information systems: Theory and results. MIT.

Davis, F. D. (1989). Perceived usefulness, perceived ease of use and user acceptance of information technology. MIS Quarterly, 13(3), 319.

Davis, F. D., Bagozzi, R. P., & Warshaw, P. R. (1989). User acceptance of computer technology: A comparison of two theoretical models. Management Science, 35(8), 982–1003.

Dawson, S., & McWilliam, E. (2008). Investigating the application of IT generated data as an indicator of learning and teaching performance. Canberra: Australian Learning and Teaching Council. Retrieved from http://moourl.com/hpds8

Ferguson, R., Clow, D., Macfadyen, L., Essa, A., Dawson, S., & Alexander, S. (2014). Setting Learning Analytics in Context: Overcoming the Barriers to Large-Scale Adoption. Journal of Learning Analytics, 1(3), 120–144. doi:10.1145/2567574.2567592

Hannafin, M., McCarthy, J., Hannafin, K., & Radtke, P. (2001). Scaffolding performance in EPSSs: Bridging theory and practice. In World Conference on Educational Multimedia, Hypermedia and Telecommunications (pp. 658–663). Retrieved from http://www.editlib.org/INDEX.CFM?fuseaction=Reader.ViewAbstract&paper_id=8792

Holt, D., Palmer, S., Munro, J., Solomonides, I., Gosper, M., Hicks, M., … Hollenbeck, R. (2013). Leading the quality management of online learning environments in Australian higher education. Australasian Journal of Educational Technology, 29(3), 387–402. Retrieved from http://www.ascilite.org.au/ajet/submission/index.php/AJET/article/view/84

Introna, L. (2013). Epilogue: Performativity and the Becoming of Sociomaterial Assemblages. In F.-X. de Vaujany & N. Mitev (Eds.), Materiality and Space: Organizations, Artefacts and Practices (pp. 330–342). Palgrave Macmillan.

Jasperson, S., Carter, P. E., & Zmud, R. W. (2005). A Comprehensive Conceptualization of Post-Adaptive Behaviors Associated with Information Technology Enabled Work Systems. MIS Quarterly, 29(3), 525–557.

Jones, D., & Clark, D. (2014). Breaking BAD to bridge the reality/rhetoric chasm. In B. Hegarty, J. McDonald, & S. Loke (Eds.), Rhetoric and Reality: Critical perspectives on educational technology. Proceedings ascilite Dunedin 2014 (pp. 262–272). Dunedin.

Kay, A. (1984). Computer Software. Scientific American, 251(3), 53–59.

Kunin, V., Goldovsky, L., Darzentas, N., & Ouzounis, C. a. (2005). The net of life: Reconstructing the microbial phylogenetic network. Genome Research, 15(7), 954–959. doi:10.1101/gr.3666505

Laxon, A. (2013, September 14). Exams go online for university students. The New Zealand Herald.

Lee, Y., Kozar, K. A., & Larsen, K. R. T. (2003). The Technology Acceptance Model: Past, Present, and Future. Communications of the AIS, 12. Retrieved from http://aisel.aisnet.org/cais/vol12/iss1/50

Lockyer, L., Heathcote, E., & Dawson, S. (2013). Informing Pedagogical Action: Aligning Learning Analytics With Learning Design. American Behavioral Scientist, 57(10), 1439–1459. doi:10.1177/0002764213479367

Müller, M. (2015). Assemblages and Actor-networks: Rethinking Socio-material Power, Politics and Space. Geography Compass, 9(1), 27–41. doi:10.1111/gec3.12192

Najmul Islam, A. K. M. (2014). Sources of satisfaction and dissatisfaction with a learning management system in post-adoption stage: A critical incident technique approach. Computers in Human Behavior, 30, 249–261. doi:10.1016/j.chb.2013.09.010

Nistor, N. (2014). When technology acceptance models won’t work: Non-significant intention-behavior effects. Computers in Human Behavior, pp. 299–300. Elsevier Ltd. doi:10.1016/j.chb.2014.02.052

Stead, D. R. (2005). A review of the one-minute paper. Active Learning in Higher Education, 6(2), 118–131. doi:10.1177/1469787405054237

Sturgess, P., & Nouwens, F. (2004). Evaluation of online learning management systems. Turkish Online Journal of Distance Education, 5(3). Retrieved from http://tojde.anadolu.edu.tr/tojde15/articles/sturgess.htm

Šumak, B., Heričko, M., & Pušnik, M. (2011). A meta-analysis of e-learning technology acceptance: The role of user types and e-learning technology types. Computers in Human Behavior, 27(6), 2067–2077. doi:10.1016/j.chb.2011.08.005

Venkatesh, V., & Bala, H. (2008). Technology acceptance model 3 and a research agenda on interventions. Decision Sciences, 39(2), 273–315. doi:10.1111/j.1540-5915.2008.00192.x

Venkatesh, V., & Davis, F. D. (2000). A theoretical extension of the Technology Acceptance Model: Four longitudinal field studies. Management Science, 46(2), 186–204.

Venkatesh, V., Morris, M., Davis, G. B., & Davis, F. D. (2003). User acceptance of information technology: Toward a unified view. MIS Quarterly, 27(3), 425–478.

It’s not how bad you start, but how quickly you get better

Woods & Hollnagel (2006) start by presenting the Bounded Rationality syllogism

All cognitive systems are finite (people, machines, or combinations).
All finite cognitive systems in uncertain changing situations are fallible.
Therefore, machine cognitive systems (and joint systems across people and machines) are fallible. (p. 2)

From this they suggest that

The question, then, is not fallibility or finite resources of systems, but rather the development of strategies that handle the fundamental tradeoffs produced by the need to act in a finite, dynamic, conflicted, and uncertain world.

The core ideas of Cognitive Systems Engineering (CSE) shift the question from overcoming limits to supporting adaptability and control. (p. 2)

Which has obvious links to my last post, “All models are wrong”.

This is why organisations annoy me with their fetish for developing the one correct model (or system) and requiring that everyone can and should follow that one correct model.

Refining a visualisation

Time to refine the visualisation of students by postcode started earlier this week. I have another set of data to work with.

  1. Remove the identifying data.
  2. Clean the data.
    I had to remind myself of the options for the sort command – I’m losing it. The following provides some idea of the mess (a Python/pandas version of this clean-up is sketched after this list).
    [code lang="sh"]
    :1,$s/"* Sport,Health&PE+Secondry.*"/HPE_Secondary/
    :1,$s/"* Sport, Health & PE+Secondry.*"/HPE_Secondary/
    :1,$s/Health & PE Secondary/HPE_Secondary/
    :1,$s/* Secondary.*/Secondary/
    :1,$s/* Secondry.*/Secondary/
    :1,$s/* Secondy.*/Secondary/
    :1,$s/Secondary.*/Secondary/
    :1,$s/* Secdary.*/Secondary/
    :1,$s/* TechVocEdu.*/TechVocEdu/
    [/code]
  3. Check columns
    Relying on a visual check in Excel – also to get a better feel for the data.

  4. Check other countries
    Unlike the previous visualisation, the plan here is to recognise that we actually have students in other countries. The problem is that the data I’ve been given doesn’t include country information, hence I have to enter that data manually. For one of the programs, this gives the following counts.

    4506 Australia
    8 United Kingdom
    3 Vietnam
    3 South Africa
    3 China
    2 Singapore
    2 Qatar
    2 Japan
    2 Hong Kong
    2 Fiji
    2 Canada
    1 United States of America
    1 Taiwan
    1 Sweden
    1 Sri Lanka
    1 Philippines
    1 Papua New Guinea
    1 New Zealand
    1 Kenya
    1 Ireland

And all good.
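
As flagged above, the same clean-up could also be scripted rather than done by hand. The following is a minimal Python/pandas sketch only: the file name, the specialisation and country column names, and the handful of patterns shown are assumptions standing in for the real export.

# Sketch of the specialisation clean-up in pandas
# 'students.csv', 'specialisation' and 'country' are assumed names
import pandas as pd

df = pd.read_csv('students.csv')

# Each (pattern, replacement) pair mirrors one of the :s substitutions above;
# only a representative few are shown
rules = [
    (r'.*Sport,?\s*Health\s*&\s*PE\+Secondry.*', 'HPE_Secondary'),
    (r'Health & PE Secondary', 'HPE_Secondary'),
    (r'Secondary.*', 'Secondary'),
    (r'.*TechVocEdu.*', 'TechVocEdu'),
]
for pattern, replacement in rules:
    df['specialisation'] = df['specialisation'].str.replace(
        pattern, replacement, regex=True)

# If a country column is added during the manual step, the per-program counts
# above fall out of a simple value_counts()
print(df['country'].value_counts())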

Does learning about teaching in formal education match this?

Riel and Polin (2004) talk about a view of learning that sees learning occurring

through engagement in authentic experiences involving the active manipulation and experimentation with ideas and artifacts – rather than through an accumulation of static knowledge (p. 17)

They cite people such as Bruner and Dewey supporting that observation.

When I read that, I can’t help but reflect on what passes for “learning about teaching” within universities.

Authentic experience

Does such learning about teaching occur “through engagement in authentic experiences”?

No.

Based on my experiences at two institutions, it largely involves

  • Accessing face-to-face and online instructions on how to use a specific technology.
  • Attending sessions talking about different teaching methods or practices.
  • Being told about the new institutionally mandated technology or practice.
  • For a very lucky few, engaging with an expert in instructional design or instructional technology about the design of the next offering of a course.

Little learning actually takes place in the midst of teaching – the ultimate authentic experience.

Active manipulation

Does such learning allow and enable the “active manipulation and experimentation with ideas and artifacts”?

No.

Based on my experience, the processes, policies, and tools used to teach within universities are increasingly set in stone. Clever folk have identified the correct solutions and you shall use them as intended.

Active manipulation and experimentation is frowned upon as inefficient and likely to impact equity and equality.

Most of the technological environments (whether they be open source or proprietary) are fixed. Any notion of using some technology that is not officially approved, or modifying an existing technology is frowned upon.

Does this contribute to the limitations of university e-learning?

If learning occurs through authentic experience and active manipulation, and the university approach to learning about teaching (especially with e-learning) doesn’t effectively support either of these requirements, then is it any wonder that the quality of university e-learning is seen as having a few limitations?

References

Riel, M., & Polin, L. (2004). Online learning communities: Common ground and critical differences in designing technical environments. In S. A. Barab, R. Kling, & J. Gray (Eds.), Designing for Virtual Communities in the Service of Learning (pp. 16–50). Cambridge: Cambridge University Press.
