Assembling the heterogeneous elements for (digital) learning

Are learning analytics leading us towards a utopian or dystopian future, and what can we as practitioners do to influence this?

What follows will eventually be a summary of my contribution to the ASCILITE’2017 panel titled Are learning analytics leading us towards a utopian or dystopian future, and what can we as practitioners do to influence this? Below you’ll find a summary of my prediction and the argument that underpins it, suggestions for more reading, the slides, and references.

Argument

The argument made in the following is that Vision #3 from the LACE Visions for the Future of Learning Analytics captures a likely future for learning analytics in Australian Higher Education. That is,

In 2025, analytics are rarely used in education

I don’t necessarily agree with some of the features of this particular vision. However, it does include predictions of many problems, poor implementation and limited use of learning analytics. Based on how Australian higher education currently approaches the use of digital technology in learning and teaching, this future appears likely (to me).

This is because the approach universities take to implementing learning analytics will focus on the development/adoption of an institutional learning analytics tool(s) and encouraging/managing the adoption of that tool within the institution. It is an approach that tends toward what Cavallo (2004) described as “explicitly topdown and hierarchical, and implicitly view education as a series of depersonalised, decontextualised steps carried out by willing, receptive, non-transforming agents” (p. 96). An approach that assumes there is the one learning analytics tool that can be scaled across the institution. One size fits all.

It’s an approach that fails to effectively engage with what Gasevic et al (2015) describe as a significant tension between course (unit) specific models and general models. A tension that echoes the reusability paradox (Wiley, n.d.) in that general models “represent a cost effective &…efficient approach” (Gasevic et al, 2015, p. 83), but at the cost of pedagogical value. In learning and teaching, one size does NOT fit all.

In terms of what can be done, the suggestion is to focus on an approach designed to help find the right size for each context. An approach that effectively engages with the significant tension of the reusability paradox and aims to work toward maximising pedagogical value.

In terms of specifics, the following offers some early suggestions. First, avoid taking a deficit model of teachers around both digital technology and learning and teaching. Adopt alternative ontological perspectives as the basis for planning. Focus on creating an environment and digital technology platforms that encourage the co-development of contextually specific, embedded and protean learning analytics interventions. Preferably linked with activities focused on helping provide teaching staff with the opportunity to “experience powerful personal experiences” (Cavallo, 2004, p. 102) around how teaching as design combined with learning analytics can respond to their problems and desires.

In addition, the approach should focus on enabling and encouraging teacher DIY learning analytics. DIY learning analytics involves teachers customising learning analytics in different ways to fit their context. Not only is this a way to increase the pedagogical value of learning analytics, it may also be the only way to achieve learning analytics at scale, as Gunn et al (2005) write

…only when the…end users of technology add their requirements, experience and professional practice that mainstream integration is achieved (p. 190)

More reading

Some suggestions for more reading include

  • How to organise a child’s birthday party is a YouTube video sharing a learning story that examines how different perspectives influence how you might go about organising one.
  • Cavallo (2004) makes the case for the limitations of the traditional approach in the context of schooling.
  • Gunn et al (2005) make the case that supporting teachers in repurposing learning objects is essential to ensuring adoption and sustainability of learning objects.
  • Jones and Clark (2014) outline the two different mindsets and illustrate the difference in the context of learning analytics.
  • Jones et al (2017) report on an example of teacher DIY learning analytics (originally described in Jones and Clark, 2014) and draw some implications for the institutional implementation of learning analytics.
  • Learning analytics, complex adaptive systems and meso-level practitioners: A way forward offers early plans for using an alternative ontology to address the question of learning analytics within higher education.

Slides

View below or download the Powerpoint slides.

References

Cavallo, D. (2004). Models of growth – Towards fundamental change in learning environments. BT Technology Journal, 22(4), 96–112.

Dede, C. (2008). Theoretical perspectives influencing the use of information technology in teaching and learning. In J. Voogt & G. Knezek (Eds.), International Handbook of Information Technology in Primary and Secondary Education (pp. 43–62). New York: Springer.

Gašević, D., Dawson, S., Rogers, T., & Gasevic, D. (2015). Learning analytics should not promote one size fits all: The effects of instructional conditions in predicating learning success. The Internet and Higher Education, 28, 68–84. https://doi.org/10.1016/j.iheduc.2015.10.002

Gunn, C., Woodgate, S., & O’Grady, W. (2005). Repurposing learning objects: a sustainable alternative? ALT-J, 13(3), 189–200.

Mishra, P., & Koehler, M. (2006). Technological pedagogical content knowledge: A framework for teacher knowledge. Teachers College Record, 108(6), 1017–1054.

Tyack, D., & Cuban, L. (1995). Tinkering towards utopia: A century of public school reform. Cambridge, MA: Harvard University Press.

Wiley, D. (n.d.). The Reusability Paradox. Retrieved from http://cnx.org/content/m11898/latest/

Yoo, Y., Boland, R. J., Lyytinen, K., & Majchrzak, A. (2012). Organizing for Innovation in the Digitized World. Organization Science, 23(5), 1398–1408.

Teacher DIY learning analytics – implications & questions for institutional learning analytics

The following provides a collection of information and resources associated with a paper and presentation given at ALASI 2017 – the Australian Learning Analytics Summer Institute in Brisbane on 30 November, 2017. Below you’ll find an abstract, a recording of a version of the presentation, the presentation slides and the references.

The paper examines the DIY development and use of a particular application of learning analytics (known as Know thy student) within a single course during 2015 and 2016. The paper argues that, given the limits of what is known about the institutional implementation of learning analytics, examining teacher DIY learning analytics can reveal some interesting insights. The paper identifies three implications and three questions.

Three implications

  1. Institutional learning analytics currently falls short of an important goal.

    If the goal of learning analytics is that “of getting key information to a human being who can use it” (Baker, 2016, p. 607) then institutional learning analytics is falling short, and not just at a specific institution.

  2. Embedded, ubiquitous, contextual learning analytics encourages greater use and enables emergent practice.

    This case suggests that learning analytics interventions designed to provide useful contextual data, embedded ubiquitously throughout the learning environment, can enable significant levels of usage, including usage that was unplanned, emerged from experience, and changed practice.

    In this case, Know thy student was used by the teacher on 666 different days (~91% of the days that the tool was available) to find out more about ~90% of the enrolled students. Graphical representations below.

  3. Teacher DIY learning analytics is possible.

    Know thy student was implemented by a single academic using a laptop, widely available software (including some coding), and existing institutional data sources.

Three questions

  1. Does institutional learning analytics have an incomplete focus?

    Research and practice around the institutional implementation of learning analytics appears to focus on “at scale”: learning analytics that can be used across multiple courses or an entire institution. That focus appears to come at the expense of course or learning design specific analytics, which appear to be more useful.

  2. Does the institutional implementation of learning analytics have an indefinite postponement problem?

    Aspects of Know thy student are specific to the particular learning design within a single course. The implementation of such a specific requirement appears unlikely to have ever been undertaken by an existing institutional learning analytics implementation. It would have been indefinitely postponed.

  3. If and how do we enable teacher DIY learning analytics?

    This case suggests that teacher DIY learning analytics is possible and potentially overcomes limitations in current institutional implementation of learning analytics. However, it’s also not without its challenges and limitations. Should institutions support teacher DIY learning analytics? How might that be done?

Usage

The following heat map shows the number of times Know thy student was used on each day during 2015 and 2016.

Know thy student usage clicks per day

The following bar graph contains 761 “bars”. Each bar represents a unique student enrolled in this course. The size of the bar shows the number of times Know thy student was used for that particular student. (One student record was obviously used for testing purposes during the development of the tool.)

Know thy student usage clicks per student

Abstract

The paper on which the presentation is based has the following abstract.

Learning analytics promises to provide insights that can help improve the quality of learning experiences. Since the late 2000s it has inspired significant investments in time and resources by researchers and institutions to identify and implement successful applications of learning analytics. However, there is limited evidence of successful at scale implementation, somewhat limited empirical research investigating the deployment of learning analytics, and subsequently concerns about the insight that guides the institutional implementation of learning analytics. This paper describes and examines the rationale, implementation and use of a single example of teacher do-it-yourself (DIY) learning analytics to add a different perspective. It identifies three implications and three questions about the institutional implementation of learning analytics that appear to generate interesting research questions for further investigation.

Presentation recording

The following is a recording of a talk given at CQUni a couple of weeks after ALASI. It uses the same slides as the original ALASI presentation, however, without a time limit the description is a little expanded.

Slides

Also view and download here.

References

Baker, R. (2016). Stupid Tutoring Systems, Intelligent Humans. International Journal of Artificial Intelligence in Education, 26(2), 600–614. https://doi.org/10.1007/s40593-016-0105-0

Behrens, S. (2009). Shadow systems: the good, the bad and the ugly. Communications of the ACM, 52(2), 124–129.

Colvin, C., Dawson, S., Wade, A., & Gašević, D. (2017). Addressing the Challenges of Institutional Adoption. In C. Lang, G. Siemens, A. F. Wise, & D. Gaševic (Eds.), The Handbook of Learning Analytics (1st ed., pp. 281–289). Alberta, Canada: Society for Learning Analytics Research (SoLAR).

Corrin, L., Kennedy, G., & Mulder, R. (2013). Enhancing learning analytics by understanding the needs of teachers. In Electric Dreams. Proceedings ascilite 2013 (pp. 201–205).

Díaz, O., & Arellano, C. (2015). The Augmented Web: Rationales, Opportunities, and Challenges on Browser-Side Transcoding. ACM Trans. Web, 9(2), 8:1–8:30. https://doi.org/10.1145/2735633

Dron, J. (2014). Ten Principles for Effective Tinkering (pp. 505–513). Presented at the E-Learn: World Conference on E-Learning in Corporate, Government, Healthcare, and Higher Education, Association for the Advancement of Computing in Education (AACE).

Ferguson, R. (2014). Learning analytics FAQs. Education. Retrieved from https://www.slideshare.net/R3beccaF/learning-analytics-fa-qs

Gašević, D., Dawson, S., Rogers, T., & Gasevic, D. (2015). Learning analytics should not promote one size fits all: The effects of instructional conditions in predicating learning success. The Internet and Higher Education, 28, 68–84. https://doi.org/10.1016/j.iheduc.2015.10.002

Germonprez, M., Hovorka, D., & Collopy, F. (2007). A theory of tailorable technology design. Journal of the Association of Information Systems, 8(6), 351–367.

Grover, S., & Pea, R. (2013). Computational Thinking in K-12: A Review of the State of the Field. Educational Researcher, 42(1), 38–43. https://doi.org/10.3102/0013189X12463051

Hatton, E. (1989). Levi-Strauss’s Bricolage and Theorizing Teachers’ Work. Anthropology and Education Quarterly, 20(2), 74–96.

Johnson, L., Adams Becker, S., Estrada, V., & Freeman, A. (2014). NMC Horizon Report: 2014 Higher Education Edition (No. 9780989733557). Austin, Texas. Retrieved from http://www.nmc.org/publications/2014-horizon-report-higher-ed

Jones, D., & Clark, D. (2014). Breaking BAD to bridge the reality/rhetoric chasm. In B. Hegarty, J. McDonald, & S. Loke (Eds.), Rhetoric and Reality: Critical perspectives on educational technology. Proceedings ascilite Dunedin 2014 (pp. 262–272).

Kay, A., & Goldberg, A. (1977). Personal Dynamic Media. Computer, 10(3), 31–41.

Ko, A. J., Abraham, R., Beckwith, L., Blackwell, A., Burnett, M., Erwig, M., … Wiedenbeck, S. (2011). The State of the Art in End-user Software Engineering. ACM Comput. Surv., 43(3), 21:1–21:44. https://doi.org/10.1145/1922649.1922658

Kruse, A., & Pongsajapan, R. (2012). Student-Centered Learning Analytics (CNDLS Thought Papers). Georgetown University. Retrieved from https://cndls.georgetown.edu/m/documents/thoughtpaper-krusepongsajapan.pdf

Levi-Strauss, C. (1966). The Savage Mind. Weidenfeld and Nicolson.

Liu, D. Y.-T. (2017). What do Academics really want out of Learning Analytics? – ASCILITE TELall Blog. Retrieved August 27, 2017, from http://blog.ascilite.org/what-academics-really-want-out-of-learning-analytics/

Liu, D. Y.-T., Bartimote-Aufflick, K., Pardo, A., & Bridgeman, A. J. (2017). Data-Driven Personalization of Student Learning Support in Higher Education. In A. Peña-Ayala (Ed.), Learning Analytics: Fundaments, Applications, and Trends (pp. 143–169). Springer International Publishing. https://doi.org/10.1007/978-3-319-52977-6_5

Lonn, S., Aguilar, S., & Teasley, S. D. (2013). Issues, Challenges, and Lessons Learned when Scaling Up a Learning Analytics Intervention. In Proceedings of the Third International Conference on Learning Analytics and Knowledge (pp. 235–239). New York, NY, USA: ACM. https://doi.org/10.1145/2460296.2460343

MacLean, A., Carter, K., Lövstrand, L., & Moran, T. (1990). User-tailorable Systems: Pressing the Issues with Buttons. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems (pp. 175–182). New York, NY, USA: ACM. https://doi.org/10.1145/97243.97271

Mishra, P., & Koehler, M. (2006). Technological pedagogical content knowledge: A framework for teacher knowledge. Teachers College Record, 108(6), 1017–1054.

Norman, D. A. (1993). Things that make us smart: defending human attributes in the age of the machine. Cambridge, Mass: Perseus.

Repenning, A., Webb, D., & Ioannidou, A. (2010). Scalable Game Design and the Development of a Checklist for Getting Computational Thinking into Public Schools. In Proceedings of the 41st ACM Technical Symposium on Computer Science Education (pp. 265–269). New York, NY, USA: ACM. https://doi.org/10.1145/1734263.1734357

Scanlon, E., Sharples, M., Fenton-O’Creevy, M., Fleck, J., Cooban, C., Ferguson, R., … Waterhouse, P. (2013). Beyond prototypes: Enabling innovation in technology‐enhanced learning. London. Retrieved from http://tel.ioe.ac.uk/wp-content/uploads/2013/11/BeyondPrototypes.pdf

Sinha, R., & Sudhish, P. S. (2016). A principled approach to reproducible research: a comparative review towards scientific integrity in computational research. In 2016 IEEE International Symposium on Ethics in Engineering, Science and Technology (ETHICS) (pp. 1–9). https://doi.org/10.1109/ETHICS.2016.7560050

Wiley, D. (n.d.). The Reusability Paradox. Retrieved from http://cnx.org/content/m11898/latest/

Wiliam, D. (2006). Assessment: Learning communities can use it to engineer a bridge connecting teaching and learning. JSD, 27(1).

Yoo, Y., Boland, R. J., Lyytinen, K., & Majchrzak, A. (2012). Organizing for Innovation in the Digitized World. Organization Science, 23(5), 1398–1408.

Zittrain, J. L. (2006). The Generative Internet. Harvard Law Review, 119(7), 1974–2040.

Improving teacher awareness, action and reflection on learner activity

The following post contains the content from a poster designed for the 2017 USQ Toowoomba L&T celebration event. It provides some rationale for a technology demonstrator at USQ based on the Moodle Activity Viewer.

What is the problem?

Learner engagement is a key to learner success. Most definitions of learner engagement include “actively participating, interacting, and collaborating with students, faculty, course content and members of the community” (Angelino & Natvig, 2009, p. 3).

70% of USQ students study online. By mid-November 2017, 26,754 students had been active in USQ’s Moodle LMS.

In online learning, the absence of visual cues makes teacher awareness of student activity difficult (Govaerts, Verbert, & Duval, 2011).  Richardson (2011) identifies “the role which teaching staff play in inspiring, challenging and engaging students” as “perhaps the most woefully neglected aspect of quality in higher education” (p. 2)

Learning analytics (LA) is the “use of (big) data to provide actionable intelligence for learners and teachers” (Ferguson, 2014). However, current tools provide poor data aggregation, poor visualisation capabilities and have other limitations that inhibit teachers’ ability to: understand student activity; respond appropriately; and, reflect on course design (Dawson & McWilliam, 2008; Corrin et al, 2013; Jones, & Clark, 2014).

How will it be addressed?

Teachers can be supported through tools that help them “analyse, appraise and improve practices in their everyday activity systems” (Knight, Tait, & Yorke, 2006, p. 337).

This Technology Demonstrator has implemented and will customise and scaffold the use of the Moodle Activity Viewer (MAV) within the USQ activity system.

The MAV is a useful and easy to use tool that provides representations of student activity from within all Moodle learning spaces. It provides affordances to support teacher intervention and further analysis.

MAV - How many students

MAV’s overlay answering the question how many and what percentage of students have accessed each Moodle activity & resource?

What are the expected outcomes?

The project aims to explore two questions:

  1. If and how does the provision of contextual, useful, and easy to use representations of online learner activity help teachers analyse, appraise and improve their practices?
  2. If and how does this change in teacher activity influence learner activity and learning outcomes?

MAV - How many clicks

MAV’s overlay answering the question how many times have those students clicked on each Moodle activity & resource?

Want to learn more?

Ask for a demonstration of MAV during the poster session.

USQ staff can learn more* about and start using MAV from http://tiny.cc/aboutmav and http://tiny.cc/installmav

* (Only from a USQ campus or via the USQ VPN)

MAV - How many students in a forum

MAV’s overlay answering the question how many and what percentage of students have read posts in this introductory activity?

MAV - Who accessed and how to contact them

MAV’s student access dialog providing details of, and enabling teacher contact with, the students who have accessed the “Fix my class IWB” forum.

References

Angelino, L. M., & Natvig, D. (2009). A Conceptual Model for Engagement of the Online Learner. Journal of Educators Online, 6(1), 1–19.

Corrin, L., Kennedy, G., & Mulder, R. (2013). Enhancing learning analytics by understanding the needs of teachers. In Electric Dreams. Proceedings ascilite 2013 (pp. 201–205).

Dawson, S., & McWilliam, E. (2008). Investigating the application of IT generated data as an indicator of learning and teaching performance. Queensland University of Technology and the University of British Columbia.

Ferguson, R. (2014). Learning analytics FAQs. Education. Retrieved from https://www.slideshare.net/R3beccaF/learning-analytics-fa-qs

Govaerts, S., Verbert, K., & Duval, E. (2011). Evaluating the Student Activity Meter: Two Case Studies. In Advances in Web-Based Learning – ICWL 2011 (pp. 188–197). Springer, Berlin, Heidelberg. https://doi.org/10.1007/978-3-642-25813-8_20

Jones, D., & Clark, D. (2014). Breaking BAD to bridge the reality/rhetoric chasm. In B. Hegarty, J. McDonald, & S.-K. Loke (Eds.), Proceedings of the 31st Annual Conference of the Australasian Society for Computers in Learning in Tertiary Education (ASCILITE 2014) (pp. 262–272). Sydney, Australia: Macquarie University.

Knight, P., Tait, J., & Yorke, M. (2006). The professional learning of teachers in higher education. Studies in Higher Education, 31(3), 319–339. https://doi.org/10.1080/03075070600680786

Introducing the Moodle Activity Viewer (MAV) & digital reno

What follows are the resources associated with a workshop being run at the University of Southern Queensland. As the title suggests, the aim is to get USQ folk started using the Moodle Activity Viewer to explore usage of Moodle activities and resources, and to briefly introduce the idea of digital renovation.

Apart from the presentation slides and references below, other related resources include:

  • Instructions for installing the MAV for USQ staff.

    Note: can only be accessed when on a USQ campus network (or the USQ VPN).

  • Additional details on other USQ digital reno tools

    Note: can only be accessed when on a USQ campus network (or the USQ VPN).

Slides

References

Jones, D., & Clark, D. (2014). Breaking BAD to bridge the reality/rhetoric chasm. In B. Hegarty, J. McDonald, & S. Loke (Eds.), Rhetoric and Reality: Critical perspectives on educational technology. Proceedings ascilite Dunedin 2014 (pp. 262–272).

Goodyear, P., & Dimitriadis, Y. (2013). In medias res: reframing design for learning. Research in Learning Technology, 21, 1–13. https://doi.org/10.3402/rlt.v21i0.19909

Norman, D. A. (1993). Things that make us smart: defending human attributes in the age of the machine. Cambridge, Mass: Perseus.

Implications and questions for institutional learning analytics implementation arising from teacher DIY learning analytics

David Jones, Hazel Jones, Colin Beer, Celeste Lawson, Implications and questions for institutional learning analytics implementation arising from teacher DIY learning analytics, To appear in the proceedings of the 2017 Australian Learning Analytics Summer Institute (ALASI 2017)

Abstract

Learning analytics promises to provide insights that can help improve the quality of learning experiences. Since the late 2000s it has inspired significant investments in time and resources by researchers and institutions to identify and implement successful applications of learning analytics. However, there is limited evidence of successful at scale implementation, somewhat limited empirical research investigating the deployment of learning analytics, and subsequently concerns about the insight that guides the institutional implementation of learning analytics. This paper describes and examines the rationale, implementation and use of a single example of teacher do-it-yourself (DIY) learning analytics to add a different perspective. It identifies three implications and three questions about the institutional implementation of learning analytics that appear to generate interesting research questions for further investigation.

Introduction

Learning analytics has been receiving attention since the late noughties. The promise of data driven decision making and the nature of the higher education environment – decreasing funding, increasing focus on quality, increasing use of technology enhanced learning (TEL) – is seen as making the institutional adoption of learning analytics an imperative for institutions of higher education (Macfadyen, Dawson, Pardo, & Gasevic, 2014, p. 17). By 2017, there appears to have been sufficient time and resources invested to realise the affordances learning analytics offers to education at the whole-of-institution scale (Colvin, Dawson, Wade, & Gašević, 2017), especially given predictions in 2012 that it was one year away from mainstream adoption within the Australian Higher Education sector (Johnson, Adams, & Cummins, 2012). However, there are only a small number of institutions that have demonstrated impact on learning and teaching outcomes through large-scale learning analytics programs (Ferguson, Clow, et al., 2014) and there are concerns that there remains limited evidence of the effectiveness of learning analytics at scale, or sufficient understanding to guide successful implementation (Colvin et al., 2017; Ferguson, Macfadyen, et al., 2014).

To address this concern there is a growing conceptual literature offering various models and frameworks to guide learning analytics adoption. Colvin et al. (2017) categorise and analyse this literature and argue that “while the models afford insight, they do not capture the breadth of factors that shape LA implementations” (p. 284). As a result, these models are unable to provide those responsible for institutional implementation of learning analytics “the nuanced, situated, fine-grained insight they require to guide them through learning analytics implementation” (Colvin et al., 2017, p. 284). Such a restriction could be addressed through empirical research that examines the “burgeoning, albeit nascent implementations found across higher education institutions” (Colvin et al., 2017, p. 285). Research by Colvin et al. (2015) offers one valuable contribution, however, there are limitations. One such limitation is the focus on the perspectives from one set of participants involved in learning analytics projects: senior leaders charged with responsibility for implementation. While an important source of insight, this focus perhaps echoes the lack of human-centeredness that pervades learning analytics implementation (Liu, Bartimote-Aufflick, Pardo, & Bridgeman, 2017) and tends “to privilege the administrator rather than the student – or even the instructor” (Kruse & Pongsajapan, 2012, p. 4). This limitation raises questions such as:

What is the experience of students and teachers using institutional learning analytics? How might an understanding of their experience inform the institutional implementation of learning analytics?

It is these questions that this paper seeks to explore, with a particular focus on the experience of teaching staff. To do this, it describes a single teacher’s experience developing and using a do-it-yourself (DIY) approach to learning analytics. The paper starts by describing this approach and then draws from it three implications and three questions for institutional implementation of learning analytics.

Know thy student

During 2015 and 2016 one of the authors developed and used a DIY learning analytics tool (Know thy student) within a third-year Bachelor of Education course. Offered twice a year, the course had an annual enrolment of 400+ students. Two-thirds of these students studied online only, and less than 15% were ever likely to meet the course examiner in person. The design of the course focused explicitly on making significant use of a Moodle course site and sought to encourage: significant active student online engagement; formative assessment; student reflection via individual blogs; and, use of social bookmarking. Know thy student was developed to address limitations in existing institutional systems and enable more meaningful responses to student queries. The tool was inspired by and built on top of the Moodle Activity Viewer (MAV) developed at CQUniversity (Jones and Clark, 2014). While the tool interacted with, and extracted information from, a number of institutional systems, it could only be used via the implementer’s laptop to interact with the specific course site.

When in use, Know thy student modified every page of the course site viewed by the teacher. It added a [details] link wherever a link to a user profile appeared, as illustrated in Figure 1.

Forum post + more student details

Figure 1 – Modified course page

Clicking on one of the [details] links would open a new pop-up window (Figure 2) providing access to information about the student. The pop-up window provided information in three separate tabs: personal details (Figure 2); activity completion (Figure 3); and, blog posts (Figure 4). Know thy student provided the examiner with ubiquitous and embedded access to course specific information about each student enrolled in the course.

Student background

Figure 2 – Personal details

Across four offerings of the course in 2015 and 2016 the teacher used the tool 3,100 separate times to access information on 761 different students, representing 89.5% of the enrolled students. For one student, the tool was used 32 separate times. The median number of uses per student was three.

Initially, most of this use was generated when answering student questions on course discussion forums. However, the embedded and ubiquitous availability of the [details] link enabled other unplanned uses. For example, the course home page provided a list of all course participants who had recently logged into the course site. As designed, Know thy student would add a [details] link to this list. This modification to the learning environment encouraged the development of a practice where the teacher would use that link to proactively learn more about students. In turn, this led to an increase in engaging with students via their blog posts and other means. Since the tool was simple and easily within grasp, it provided a platform that encouraged more meaningful and unexpected connections with hundreds of students.

Implications and questions for learning analytics implementation

Analysis and discussion about the case have led the authors to suggest three implications about, and three questions for, the institutional implementation of learning analytics. Given the exploratory nature of this research these are tentative suggestions, and each implication and question in turn generates additional questions for further investigation.

Implication #1: Institutional learning analytics currently falls short of an important goal

Baker (2016) identifies a common goal shared by learning analytics systems, that “of getting key information to a human being who can use it” (p. 607). This case shows that at least one institution’s approach to learning analytics is falling short of this goal, and there are indications that this problem is not limited to a single institution. Almost 10 years ago, Dawson & McWilliam (2008) commented on how poor the LMS data aggregation and visualisation tools of the day were in helping academics understand student learning behaviour. In 2013, focus groups of academics from the University of Melbourne identified a common need to be better able to correlate data from different institutional systems (Corrin, Kennedy, & Mulder, 2013). A recent unpublished experiment at another institution by one of the co-authors of this paper identified that gathering relevant information for ten post-graduate students took over an hour and required the use of five separate information systems owned by three separate institutional departments. This reinforces the observation from Liu (2017) that academics “rarely have the data that they actually want in a place and form where it can actually be used”.

How widespread is this apparent failure? What are the factors contributing to this apparent failure? What can be done to address it?

Student activity completion

Figure 3 – Activity completion

Implication #2: Embedded, ubiquitous, contextual learning analytics enable emergent practice

Experience from this case suggests that providing useful contextual data, appropriately and ubiquitously embedded throughout the learning environment, can enable unplanned and effective interventions. In this case, being able to access student and course specific information throughout the learning environment enabled the teacher to adopt the unplanned practice of proactively connecting with students. Arguably, this may fit with characterisations of teachers as bricoleurs focused on making do with and creatively repurposing the tools that are at hand (Hatton, 1989). Providing contextually appropriate tools, however, is difficult given the sheer diversity involved in education where “there is no single technological solution that applies for every teacher, every course, or every view of teaching” (Mishra & Koehler, 2006, p. 1029).

Does the provision of embedded, ubiquitous and contextual learning analytics increase and encourage greater adoption and bricolage by teachers with learning analytics? What impact would that have on the learning experience? Given the inherent diversity in education, how can institutional learning analytics provide contextually appropriate learning analytics?

Sentiment analysis of blog posts

Figure 4 – Sentiment analysis of blog posts

Implication #3: Teacher DIY learning analytics is possible

This case shows that technically literate academics are able to leverage available technologies to implement and use teacher DIY learning analytics. The notion of end-user development is not new with “[m]ost programs today … written not by professional software developers, but by people with expertise in other domains working towards goals for which they need computational support” (Ko et al., 2011, p. 21). Such work can be seen as undesirable due to concerns about inefficiency, error, support, scalability, privacy and security. However, it can also address limitations and flaws in provided systems (Koopman & Hoffman, 2003).

How is DIY learning analytics viewed in relation to the institutional implementation of learning analytics? Is it something to be prevented, or enabled and encouraged? Given technology trends, can it be prevented?

Question #1: Does institutional learning analytics have an incomplete focus?

The common response to seeing the Know thy student tool is to ask if and how it can be reused in other courses. Such a response aims to understand if and how this particular learning analytics tool can “make the leap from the focused and particular to the broad and general” (Lonn et al., 2013, p. 235). This echoes what is seen as the core goal for most learning analytics projects: “to move from small-scale research towards broader institutional implementation” (Ferguson, Macfadyen, et al., 2014, p. 120). However, if “there is no single technological solution that applies for every teacher, every course, or every view of teaching” (Mishra & Koehler, 2006, p. 1029), then how can a broad and general focus effectively respond to diverse contextual requirements? How can the institutional implementation of learning analytics address concerns that it is focused at an “institutional scale rather than a human scale” (Kruse & Pongsajapan, 2012)? Should and can its focus be expanded to include both the human and institutional scale?

Question #2: Does the institutional implementation of learning analytics have an indefinite postponement problem?

In seeking to move learning analytics beyond a research project to institutional scale, Lonn et al (2013) partnered with a university’s Information Technology (IT) service. A first step in their project involved the IT service performing a feasibility assessment of the project and placing “it in their timeline of priorities” (p. 236); subsequently the project “was delayed due to existing projects … that were a higher priority for the institution” (Lonn et al., 2013, p. 238). Given the typical prioritisation scheme used by a university IT service, a tool like Know thy student, which focuses on a need from a single course, is unlikely to ever be of sufficient priority to be actioned at the institutional level. It will be indefinitely postponed.

Would learning analytics that are specific to the learning designs within a single course ever be implemented by institutional IT? Would such a project be indefinitely postponed? What impact does this have on the institutional implementation of learning analytics? Should and can this problem be addressed?

Question #3: If and how do we enable teacher DIY learning analytics?

The above has suggested that teacher (and perhaps student) DIY learning analytics may make a useful contribution to institutional learning analytics implementation. However, there are numerous significant questions around if and how it can be achieved, including: whether or not it can be integrated sustainably into institutional implementation; and whether or not teaching staff have sufficient data and technical literacy to contribute effectively.

In terms of institutional implementation, Colvin et al (2017) provide recommendations necessary for sustainable learning analytics adoption that could offer useful guidance. In addition, there are projects like that described by Liu et al (2017) that are actively using such recommendations to support a level of teacher DIY learning analytics. The challenge is that enabling and encouraging teacher DIY learning analytics appears to represent a mindset that is incommensurable with the assumptions underpinning the majority of contemporary institutional practices (Jones & Clark, 2014). There is also research suggesting that the convergent and generative characteristics of pervasive digital technology requires the development of radically different approaches to corporate IT infrastructures and organisational strategic frameworks (Yoo, Boland, Lyytinen, & Majchrzak, 2012).

The low digital fluency of teaching staff has been identified as a significant challenge impeding the adoption of digital technology within higher education (Johnson, Adams Becker, Estrada, & Freeman, 2014). If low digital fluency is challenging the effective use of digital technologies by teaching staff, then it does raise questions about the likelihood of teacher DIY learning analytics. However, research in end user development suggests that such DIY practices are already happening and that such practices have positive impacts on the quantity and quality of adoption of digital technologies (Ko et al., 2011; Koopman & Hoffman, 2003). Finally, Scanlon et al (2013) observes that the complexity of technology-enhanced learning – such as learning analytics – means that accepting “’user-driven’ contributions from both teachers and students” (p. 34) may be necessary “to allow for effective intervention” and in order to understand the complexity of practices that is the “context for any particular TEL innovation” (p. 34).

Conclusion

This paper has briefly described a single case of teacher DIY learning analytics, which raises a number of implications and questions for the institutional implementation of learning analytics. It is suggested that empirical research moving beyond those in charge of the institutional implementation of learning analytics to those living with such systems can deepen the understanding of current experience with such systems and subsequently contribute improvements. From this case it appears that current approaches are failing to meet a potentially important goal of “getting key information to a human being who can use it” (Baker, 2016, p. 607). The paper has asked whether or not this may be due to learning analytics over-emphasising the broad at the expense of the specific or contextual. It may also be due to the nature of how institutional IT projects are prioritised, leading to indefinite postponement of contextually specific projects. The case illustrates that technological trends are making teacher DIY learning analytics possible, if only in very limited situations, and has provided an indication that ubiquitous, embedded and contextual learning analytics can enable and encourage positive and unplanned usage. This suggests that enabling and encouraging teacher DIY learning analytics, in the form of more generative institutional learning analytics implementations, may offer an interesting and fruitful direction.

References

Baker, R. (2016). Stupid Tutoring Systems, Intelligent Humans. International Journal of Artificial Intelligence in Education, 26(2), 600–614. https://doi.org/10.1007/s40593-016-0105-0

Colvin, C., Dawson, S., Wade, A., & Gašević, D. (2017). Addressing the Challenges of Institutional Adoption. In C. Lang, G. Siemens, A. F. Wise, & D. Gaševic (Eds.), The Handbook of Learning Analytics (1st ed., pp. 281–289). Alberta, Canada: Society for Learning Analytics Research (SoLAR).

Corrin, L., Kennedy, G., & Mulder, R. (2013). Enhancing learning analytics by understanding the needs of teachers. In Electric Dreams. Proceedings ascilite 2013 (pp. 201–205).

Dawson, S., & McWilliam, E. (2008). Investigating the application of IT generated data as an indicator of learning and teaching performance. Queensland University of Technology and the University of British Columbia.

Ferguson, R., Macfadyen, L. P., Clow, D., Tynan, B., Alexander, S., & Dawson, S. (2014). Setting Learning Analytics in Context: Overcoming the Barriers to Large-Scale Adoption. Journal of Learning Analytics, 1(3), 120–144. https://doi.org/10.18608/jla.2014.13.7

Hatton, E. (1989). Levi-Strauss’s Bricolage and Theorizing Teachers’ Work. Anthropology and Education Quarterly, 20(2), 74–96.

Johnson, L., Adams Becker, S., Estrada, V., & Freeman, A. (2014). NMC Horizon Report: 2014 Higher Education Edition (No. 9780989733557). Austin, Texas.

Johnson, L., Adams, S., & Cummins, M. (2012). Technology Outlook for Australian Tertiary Education 2012-2017: An NMC Horizon Report Regional Analysis (No. 9780984660155). Austin, Texas.

Jones, D., & Clark, D. (2014). Breaking BAD to bridge the reality/rhetoric chasm. In B. Hegarty, J. McDonald, & S. Loke (Eds.), Rhetoric and Reality: Critical perspectives on educational technology. Proceedings ascilite Dunedin 2014 (pp. 262–272).

Ko, A. J., Abraham, R., Beckwith, L., Blackwell, A., Burnett, M., Erwig, M., … Wiedenbeck, S. (2011). The State of the Art in End-user Software Engineering. ACM Computing Surveys, 43(3), 21:1–21:44. https://doi.org/10.1145/1922649.1922658

Koopman, P., & Hoffman, R. (2003). Work-arounds, make-work and kludges. Intelligent Systems, IEEE, 18(6), 70–75.

Kruse, A., & Pongsajapan, R. (2012). Student-Centered Learning Analytics (CNDLS Thought Papers). Georgetown University.

Liu, D. Y.-T. (2017). What do Academics really want out of Learning Analytics? – ASCILITE TELall Blog. Retrieved August 27, 2017, from http://blog.ascilite.org/what-academics-really-want-out-of-learning-analytics/

Liu, D. Y.-T., Bartimote-Aufflick, K., Pardo, A., & Bridgeman, A. J. (2017). Data-Driven Personalization of Student Learning Support in Higher Education. In A. Peña-Ayala (Ed.), Learning Analytics: Fundaments, Applications, and Trends (pp. 143–169). Springer International Publishing. https://doi.org/10.1007/978-3-319-52977-6_5

Lonn, S., Aguilar, S., & Teasley, S. D. (2013). Issues, Challenges, and Lessons Learned when Scaling Up a Learning Analytics Intervention. In Proceedings of the Third International Conference on Learning Analytics and Knowledge (pp. 235–239). New York, NY, USA: ACM. https://doi.org/10.1145/2460296.2460343

Macfadyen, L. P., Dawson, S., Pardo, A., & Gasevic, D. (2014). Embracing big data in complex educational systems: The learning analytics imperative and the policy challenge. Research and Practice in Assessment, 9(Winter), 17–28.

Mishra, P., & Koehler, M. (2006). Technological pedagogical content knowledge: A framework for teacher knowledge. Teachers College Record, 108(6), 1017–1054.

Scanlon, E., Sharples, M., Fenton-O’Creevy, M., Fleck, J., Cooban, C., Ferguson, R., … Waterhouse, P. (2013). Beyond prototypes: Enabling innovation in technology‐enhanced learning. London.

Yoo, Y., Boland, R. J., Lyytinen, K., & Majchrzak, A. (2012). Organizing for Innovation in the Digitized World. Organization Science, 23(5), 1398–1408.

Exploring options for teacher DIY learning analytics


A few of us recently submitted a paper to ALASI’2017 that examined a “case study” of a teacher (me) engaging in a bit of DIY learning analytics. The case was used to draw a few tentative conclusions and questions around the institutional implementation of learning analytics. The main conclusion is that teacher DIY learning analytics is largely ignored at the institutional level and that there appears to be a need for, and value in, supporting it. The question is how (and then, if supported, what happens)?

This post is the start of an exploration of some technologies that combined may offer some of the affordances necessary to supporting teacher DIY learning analytics. The collection of technologies and the approach owes a significant amount of inspiration to Tony Hirst, especially in this post in which he writes

What I care about are some of the features that Docker has, and how I can use those features to make my own life easier, … supporting personal, DIY, BYOA (“bring your own app”) IT that works at an individual level in the form of end-user applications, or personal digital workbenches

The plan/hope here is that Docker combined with some other technologies can provide a platform to enable a useful combination of do-it-with (DIW) and do-it-yourself (DIY) paths for the institutional implementation of learning analytics. The following mostly documents an ad hoc exploration of the technologies.

In the end, I’ve been able to get a Jupyter notebook working as a JSON API and have started exploring Docker containers. This lays the groundwork for the next step, which will be to explore how and if some of this can be combined to integrate some of the work Hazel is doing with some of the Indicators work from earlier in the year.

Learning more – Jupyter notebook JSON API

Tony provides a description of using Jupyter Notebooks to provide a JSON API. Potentially this provides a way for DIY teachers to create their own MAV-like server.

Tony’s exploration is informed by this piece from IBM that aims to introduce the Jupyter kernel gateway (github repo).

The README.md from the github repo mentions serving HTTP requests from “annotated notebook cells”, suggesting that the method of annotation will be important. The IBM example code shows that each API call is handled by a particular block starting with an appropriately formatted comment, i.e.

single-line comments containing a HTTP verb … followed by a parameterised URL path

Have a simple example working.
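
For the record, here’s a minimal sketch of what such an annotated cell might look like when the notebook is served via the kernel gateway’s notebook-http mode. The /usage/:student path and the response shape are my own invention for illustration.

# GET /usage/:student
import json

# The kernel gateway injects the incoming HTTP request into REQUEST as a JSON
# string; fall back to a dummy value so the cell can also be run interactively.
req = json.loads(REQUEST) if 'REQUEST' in globals() else {'path': {'student': 'test'}}
student_id = req['path'].get('student')

# Hypothetical lookup - a real TDIY tool would query course clickstream data here.
usage = {'student': student_id, 'clicks': 42}

# Whatever the cell prints to stdout becomes the HTTP response body.
print(json.dumps(usage))

Something like jupyter kernelgateway --KernelGatewayApp.api=kernel_gateway.notebook_http --KernelGatewayApp.seed_uri=usage.ipynb should then serve the cell as a JSON endpoint (flags from memory, so check the README).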

Deploying – user experience

The IBM piece then goes about using Docker to deploy this API. But before doing that, let’s get some experience at the user end with Tony’s example.

  1. Install VirtualBox
    Question: Is this something a standard user can do?
  2. Install vagrant
  3. Use the command line to install a vagrant plugin

    Question: Too much? But can probably be worked around.

  4. Download the repo as a zip file.

    Had to figure out that I needed to go back to the repo “home” page to get the download option (long time between drinks doing this).

  5. Run the vagrant file

    Ok, it’s downloading the file from the vagrant server (from the ouseful area on Vagrant).

    It’s a 1.66Gb file. That size could potentially be an issue, suggesting the need for a local copy. Especially given the slow download.

    An hour or two later and it is up and running. There’s a GUI linux box running on my Mac.

Don’t know a great deal about the application that is the focus, but it appears to work. It’s a 3D application, so the screen refresh isn’t all that fast. But as a personal server for DIY teacher analytics, it should work fine, at least in terms of speed.

Running it a second time includes a check to see if it’s up to date and then up it pops.

The box appears to have Perl, Python and Juypter installed.

Deploying – developing a docker/container/images

This raises the question of the best option for creating and sharing a docker/container/insert appropriate term – I’ll go with images – that has Jupyter notebooks and the kernel_gateway tool running. At this stage, this purpose seems best served by a headless virtual machine with browser-based communication the method for interacting with Jupyter notebooks.

Tony appears to do exactly this (using OpenRefine) using Kitematic in this post. Later in the post the options appear to include

  • Sharing images publicly via the Dockerhub registry
  • Using a private Dockerhub registry (one comes with the free plan)
  • Keeping images on a local computer
  • Running your own image registry
  • And, I assume, using an alternative.

Tony sees using the command line as a drawback for running your own registry. Perhaps not the biggest problem in my case. But what is the best approach?

Dockerhub and its ilk do appear to provide extra help (e.g. official repositories you can build upon).

One set of alternatives appears largely focused on supporting central IT, not the end user. Echoing a concern expressed by Tony.

The intro from another alternative suggests that Docker is becoming more generic. Time to look and read further afield.

Intro to containers

From Medium

  • Containers abstract the OS etc to make it simple to deploy
  • Containers usually measured in 10s of megabytes
  • Big distinction made between containers and virtual machines, perhaps boils down to “containers virtualise the OS; virtual machines the hardware”

    Though interesting, the one tried above required the downloading of a virtual machine first. Update: That appears to be because I’m running Mac OS X. If I were on a Linux box, I probably wouldn’t have needed that.

  • The following seem to resonate most with the needs of teacher DIY learning analytics
    • Using containers can decrease the time needed for development, testing, and deployment of applications and services.
    • Testing and bug tracking also become less complicated since there is no difference between running your application locally, on a test server, or in production.
    • Container-based virtualization is a great option for microservices, DevOps, and continuous deployment.
  • Docker, which is based on Linux and open source, is the big player.
  • Spends some attention on container orchestration – appears to be focused on enterprise IT.

The following offers a creative intro to Kubernetes

Starts with the case for containers (Docker), but then moves onto orchestration and the need for Kubernetes. Puts containers into a pod, perhaps with more than one container if tightly coupled. Goes on to explain the other features provided by Kubernetes.

And an intro to Docker

Rolling my own

Possible technology options

Do the following and I have a web server running in Docker that I can access from my Mac OS browser.

AA17-00936:docker david$ docker run -d -p 80:80 --name webserver nginx
Unable to find image 'nginx:latest' locally
latest: Pulling from library/nginx
afeb2bfd31c0: Pull complete 
7ff5d10493db: Pull complete 
d2562f1ae1d0: Pull complete 
Digest: sha256:af32e714a9cc3157157374e68c818b05ebe9e0737aac06b55a09da374209a8f9
Status: Downloaded newer image for nginx:latest
f1f6925acc31f80faf726358f8de5712458ff3649d2c0626bf3bb37f11d1b070
AA17-00936:docker david$

Dig into tutorials and have a play

Docker shares a git repo for tutorials and labs, which are quite good and useful.

Getting set up with some advice above.

Running your first container includes some simple commands, e.g. to show details of installed images, showing that they can be quite small.

Question: To have folk install Docker, or do the VM route as above?

AA17-00936:docker david$ docker images
REPOSITORY          TAG                 IMAGE ID            CREATED             SIZE
ubuntu              latest              2d696327ab2e        11 days ago         122MB
nginx               latest              da5939581ac8        2 weeks ago         108MB
alpine              latest              76da55c8019d        2 weeks ago         3.97MB
hello-world         latest              05a3bd381fc2        2 weeks ago         1.84kB

Web apps with Docker also starts looking at the process of rolling your own.

This is where discussion of different types of images commences:

  • Base images (e.g. an OS) and child images, which add functionality to a base image
  • Official images – sanctioned by Docker
  • User images

Process can be summarised as

  • Create the app (example is using a Python web framework – Flask)
  • Add in a Dockerfile – text file of commands for the Docker daemon when creating an image
  • Build the image

    Does require an account on the Docker cloud

    And there it goes getting all the pre-reqs etc. Quite quick.

And successful running.
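
As an aside, the same build-and-run step could also be scripted rather than typed at the command line, which might suit a TDIY workflow driven from a notebook. A rough sketch using the Docker SDK for Python – the path, tag and Flask port are assumptions:

# Sketch: build and run an image with the Docker SDK for Python (pip install docker).
# Assumes the Docker daemon is running and ./app contains the Flask app and a Dockerfile.
import docker

client = docker.from_env()

# Build an image from the Dockerfile in ./app and tag it.
image, build_logs = client.images.build(path="./app", tag="tdiy/flask-demo")

# Run a container from the image, publishing the Flask default port 5000 to the host.
container = client.containers.run("tdiy/flask-demo", detach=True,
                                  ports={"5000/tcp": 5000})
print(container.short_id)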

Docker Swarm is about running multiple copies, including on the cloud. Given the use case I’m interested in is people running their own… not a priority.

It does provide a look at Docker Compose files and a more complex application – multiple containers and two networks. Given my focus on using Jupyter Notebooks and perhaps the kernel gateway, this may be simplified a bit.

Seems we’re at the stage of actually trying to do something real.

Create a Docker image – TDIY

Jupyter Notebook, kernel gateway and a simple collection of notebooks – perhaps with a greasemonkey script
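
Once such an image is up and running, the consuming side could be as simple as an HTTP request – whether from a greasemonkey script in the browser or from another notebook. A hypothetical Python smoke test (the host, port and endpoint follow the earlier sketch and are not real):

# Hypothetical check of the TDIY API exposed by the running container.
import requests

response = requests.get("http://localhost:8888/usage/test")
response.raise_for_status()
print(response.json())  # e.g. {'student': 'test', 'clicks': 42}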

Misc. related stuff

A bit on microservices (the microservice architectural style), pointing out the focus on

principles of loose coupling and high cohesion of services

and in turn a number of characteristics

  • Applications are made up of small independent services

    Is TDIY LA about allowing teachers to create applications by combining these services?

  • Services are independently modifiable and (re)deployable

    But by whom?

  • Decentralised data management: each service can have its own database

    What about each user?

Goes on to list a range of advantages, but the disadvantages include

  • inefficiency – remote calls, network latency, potential duplication etc.

    But going local might help address some of this.

  • Developing a use case could need the cooperation of multiple teams

    This is the biggest barrier to implementation within an institution. But it raises the spectre of shadow systems, kludges etc.

  • complications in debugging, communication

Microservices and containers covers some of the alternatives.

Seems Docker is the place — it’s bought Kitematic and apparently not loved it, which is a risk for basing the DIY approach on it.

Another part of the story is that you can build your own images and either share them publicly via the Dockerhub registry, keep them locally on your own computer, post them to a private Dockerhub repository (you get a single private repository as part of the Dockerhub free plan, or can pay for more…), or run your own image registry.

Dockerhub is probably the option I want to use here because of the focus on being open, of being cross institutional etc.

Learning, learning analytics and multiple levels: The problem of starvation

In which I play with some analytics and use some literature in an attempt to understand the institutional implementation of learning analytics as a starvation problem (like most institutional attempts to leverage digital technologies). In this context, I’m using the definition of starvation from computer science.

Multiple time scales of human behaviour and appropriate methods

In a section titled “Data from learning on multiple levels: learning is complex”, Reimann (2016) writes

Nathan & Alibali (2010) distinguish between learning in milliseconds and below (biological), seconds (cognitive), minutes to hours (rational), days to months (socio-cultural), and years and beyond (organizational) (p. 134)

Digging into Nathan & Alibali (2010) reveals the following table titled “Time scales of human behaviour and the corresponding areas of study and research methods” (which is in turn adapted from elsewhere).

Time Scales of Human Behavior and the Corresponding Areas of Study and Research Methods

What I find interesting about this is that it places the types of activities a teacher would do at a very different time scale than what an organisation would do. It also suggests that there are very different methods to be used at these very different levels, and that using a method at the wrong scale will be less than appropriate.

To perhaps draw a long bow, might this suggest that the methods currently being used to support institutional implementation of learning analytics might work fine at the organisational level, but perhaps a little less so at the level of teaching practice? Or perhaps, the limitation arises from the inability to move up and down levels quickly enough.

(Aside: Especially when you consider that the table above doesn’t (for me) capture the full complexity of reality. Capturing the time scales is important, but I think it fails to capture the fact that as you move down the level of study you are increasing the quantity and diversity of units of study. i.e. At a system level you might be talking about the Australian higher education sector and its use of learning analytics. There are ~38 universities in Australia that could be studied at the organisational level. Within those organisations there are likely to be 1000s of courses, at least 100s of teaching staff, and 10000s of students. Each very diverse.)

A case in point

A team I’m a part of has created an online resource that has been used by staff. From the institutionally provided system I can grab the raw logs and perhaps generate the odd image. But I can’t easily do the sort of analysis I’d like to do. What works at the institutional level doesn’t work at the individual course/case level.

But I can engage in some DIY. Jupyter notebooks, Python, plotly, a bit of faffing around with CSV files (a rough sketch follows the list below) and….

  • Get a list of the people who accessed the resources and how many times they did and turn it into a graph.
  • Do the same with the different parts of the resource.

    Though it’s a bit more difficult given the limited data in the CSV file

  • And plot the usage against days.
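
A rough sketch of the first of these (the CSV file name and column heading below are assumptions, the actual institutional export will differ):

import pandas as pd
import plotly.graph_objs as go
from plotly.offline import plot

# Load the raw access log exported from the institutional system
# (file name and column name are assumptions about that export)
logs = pd.read_csv('resource_access_log.csv')

# Count how many times each person accessed the resource
counts = logs['Name'].value_counts()

# Turn the counts into a simple plotly bar graph and write it to an HTML file
fig = go.Figure(
    data=[go.Bar(x=list(counts.index), y=list(counts.values))],
    layout=go.Layout(title='Accesses per person', yaxis=dict(title='Number of accesses')))
plot(fig, filename='accesses_per_person.html')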

None of this is technically difficult. It’s also something that could be useful to others. But they don’t have the knowledge and the institution doesn’t appear to provide the service.

This particular need is unlikely to receive direct attention. It may well get solved when some enterprise bit of kit gets new functionality. But, if that happens, chances are it will go under-used as users aren’t aware of the capability and the technical folk aren’t aware of the need.

A solution?

What if there were methods by which the institution could move up and down the layers? Dip down into the level of hours and days, build some stuff, see it used and then scale up the useful stuff so it’s “enterprise”?

References

Reimann, P. (2016). Connecting learning analytics with learning research: the role of design-based research. Learning: Research and Practice, 2(2), 130–142. https://doi.org/10.1080/23735082.2016.1210198

Nathan, M. J., & Wagner Alibali, M. (2010). Learning sciences. Wiley Interdisciplinary Reviews: Cognitive Science, 1(3), 329–345. https://doi.org/10.1002/wcs.54

Further developing research workflow

An attempt to briefly document an exploration into possibilities for enhancing my digital workflow.

Main personal outcome is the modification of my research workflow to incorporate Zotfile to improve file naming and also to allow sharing of PDF files with my mobile. Sharing is done via Google Drive to the PDF Expert app and back again. Zotfile very nicely auto-extracts annotations and highlights from the PDF into a Zotero note.

Still need some further refinement, but good enough to get me reading and highlighting more.

Why?

I’m reasonably happy with the move to Zotero for citation management, but have some weaknesses and challenges in my workflow I’d like to address, including:

  • Need to be more active in annotating, sensemaking and sharing thinking on papers.
  • Add the ability to do this on both mobile and laptop platforms.
  • Scaffold/guide the use of the tools.

Changes to make?

From the below, the following might be useful changes to make

  • Modify file naming of PDFs to a fixed style.

    Zotfile appears to do this for Zotero.

  • Have files (and Zotero information, and annotations) available via a cloud service

    Zotfile will allow this with whichever cloud service you prefer.

    This post has more specific advice. Includes mention of being able to explicitly send specific files to the tablet, as well as changing the location where files are stored

    Upgrade to the 5.0 version of Zotero, download the Zotfile xpi file and install using Zotero. Change preferences.

    Nice – renaming existing attachments works nicely.

    Send to tablet uses a Zotero category, configured to a Google drive.

    Annotation done on the phone. Used Zotfile to bring it back into Zotero and it automatically extracted the annotations into Zotero notes. Very nice.

    And the annotations and highlights I made are visible in Adobe on the laptop. Win all around.

  • Keep the previous setup or another service as a form of backup
  • Find an iOS PDF viewer/annotator

    PDF Expert 6 seems to be thought highly of. Apparently it supports WebDAV. Opening up the question of whether or not this could be used with the Zotero support for WebDAV? Article talking about setting up your own Webdav Zotero server, using PHP code last modified 6 years ago.

    Trying Google Drive using the Zotero send to tablet option. Configuring the app with a Google account works smoothly. Now to try it out with a journal article I wanted to read – the size of the iPhone might be an issue here.

    That worked okay, I could get used to it. The size of the phone is an issue.

  • Experiment with extract annotations from PDFs into Zotero

    Zotfile again.

  • Pay more attention to auto imported citation information

    Develop/find a cheatsheet of changes to make *** need to find this ***

  • Pay more attention to how to annotate/highlight

What are others doing?

First task is to look at what others are doing.

Introducing digital workflows for academic research on Mac

First in a series of blog posts by Francis Hittinger. Quite detailed, though Mac-focused and from 3 years ago; interesting to see how well the suggested technologies have aged.

  • PDF chaos – Digital workflow basics

    Uses GTD movement to argue the need for a system for capturing unfinished tasks, to lighten the load. Resonates with me. Starts with PDF workflow.

    Makes some case for naming, folders etc. But I have a feeling that there’s too much focus on computer abstractions and not enough on abstractions meaningful to the task/human.

    1. Always OCR PDFs
    2. Split dual-paged PDFs
  • Sente for PDF management

    This is where the specific technology focus might prove a challenge for me. Sente is a Mac specific tool similar to Zotero. I’m not likely to change ATM. Maybe lessons I can learn here though.

    Largely looks at getting the PDF and the citation into citation management software. Suggestions for alternatives to Google for the citation might be useful.

  • Sente management #2

    More discussion of Sente’s capabilities, with some generally useful practices abstracted, including a status in the research pipeline for papers and the use of tags.

  • Sente #3

    More ways to get references into Sente

  • Sente #4

    Focuses on the Sente annotation feature, apparently powerful and cross platform. Some good material on the importance of note taking and even mentions Xanadu and a pandoc/markdown workflow. Identifies some limitations of analog note taking and then launches into using Sente for notetaking, including links to suggestions for note taking workflow.

And it appears that the series was never finished. It never got to the potentially interesting stuff.

Pandoc and Zotero

Github tutorial. A more open approach. This post offers a description of a specific researcher’s approach. Offers this rationale for markdown

Microsoft Word uses the paradigm of the word processor: printed words on a sheet of paper. Markdown treats writing as code: iterative and malleable.

Leads up to a post explaining the benefits of using github when writing. The post includes various applications etc. that help.

Another post linking markdown with Scrivener and Zotero via pandoc.

As someone who uses vim to write code and HTML documents, there’s much in this that resonates with me. But I still wonder if I have the time to make this radical a change? If someone with my background pauses, it does seem to suggest that this approach is not for everyone. Which, given a recent attempt to focus on collaboration with others, decreases the likelihood of adopting this approach.

Correct auto citation import

Google is flaky (e.g. forgetting the place of publication for books). Better to use Worldcat (OCLC) or an official academic library catalogue – Stanford and University of Wisconsin are suggested.

  • Right sort of publication
  • pages
  • DOI
  • Sentence case

Zotero workflows

Post explaining a workflow that uses Zotero and Emacs enabled by orgmode and Zotxt. Some interesting stuff, especially Zotxt which appears to provide a potential API to your local Zotero library.

Workflow – Zotero – ZotFile – Dropbox – ZotPad – Notability (PDF viewer/annotator)

Academic note taking

A take on an academic notetaking workflow.

Page that mentions use of Zotfile for extracting annotations from PDF documents and into Zotero notes. Might be useful with Zotxt.

More detailed look at Zotero and ZotFile, covers installing, configuring and then using it to rename and organise PDFs and extracting annotations.

Using Zotfile seems key to moving PDFs to a cloud service.

Content outlines

Blog post defining and explaining content outlines, in particular getting the structure right.

Available tools

Zotero

Zotero has a “for mobile” page.

Includes mention of Papership as a PDF annotation tool that integrates with a Zotero account. Suggesting a need to have more storage on my Zotero account. Wondering how to make use of other cloud services I already have? This post mentions using WebDAV to sync with Google Drive (what about OneDrive?)

Zotfile (open source) may help with that. It can link with a PDF reader app; the focus here seems to be on getting an app that will sync with the particular cloud storage service you’re using. Zotfile also appears likely to help with naming of PDF files.

Some information from Cornell University library on zotero syncing/storage management

What’s changed in academic staff development?

The following is my initial response to this exercise from the week 3 learning path. It’s an exercise intended to get folk thinking about what practices, if any, have emerged in their disciplinary teaching context from when they were undergraduates until now. It asks them to consider some of the emerging practices mentioned in the Horizon and New Generation Pedagogy reports. It also asks them to consider if any of them are visible in “good practice” within the discipline.

As per the exercise instructions, the following is not a formal academic document. It’s a bit of writing to think. The exercise is intended to encourage folk to start framing thoughts that will become the basis for an assessment task.

The following also tends to be specific to my context.

My Discipline?

I’m currently onto my third or fourth discipline. My journey in higher education has gone through computer science/information technology; information systems; teacher education; and what I’ll call academic staff development (i.e. helping other academic staff teach).

I’ll stick with my current “discipline” – academic staff development.

What was it like?

When I first started teaching in higher education (in the Information Technology discipline) back in the early 1990s, I was teaching in a dual-mode university, i.e. my students studied via two modes (on-campus and via distance education). In those days, distance education meant the production of slabs of print-based material that was posted out to students before the semester started. In the early 1990s this relied on a production-line type process for generating the print material.

My recollections of academic staff development in those days mostly involved the distance education folk running sessions or distributing print-based material designed to help academics develop the knowledge/skills to develop good print-based material.  I don’t remember too many workshops or presentations, but I remember huge folders of print material.

There was the occasional presentation on a teaching-related topic and there were even some early forays into what might be characterised as communities of practice. e.g. I was involved with a computer-mediated communications working group in the early 1990s (pre Internet Service Provider days) that eventually developed some print material to help staff and students using CMC in learning and teaching.

There were also grants to fund innovative developments associated with L&T (I got one of those) and there were also teaching awards (I got one of those).

What’s changed?

To be brutally honest, not much. Perhaps the major change is that there are no longer any big sets of folders of print material. All that is now online. The nature of the online material and how you access it has changed somewhat. There’s been a recent move to more contextual material. But it’s still fairly kludgy and much of it is still in a print format (i.e. PDF documents).

There is still a reliance on presentations and workshops. Though these are increasingly available via Zoom and a couple of weeks ago a remote participant did engage with the institutional L&T orientation using a Kubi telepresence robot.

There are still L&T grants (some announced last week) and awards (announcing real soon).

However, there has been a shift in focus away from “academic staff development”, seen as something done to teaching staff, towards the idea of professional learning and professional learning opportunities. This moves the focus toward designing contexts/environments/opportunities for teaching staff to engage in professional learning.

What about ideas from the Next Generation Pedagogy report?

The Next Generation Pedagogy report offers five signposts on the roadmap to innovative pedagogy

  • Intelligent pedagogy – using technology to enhance learning, including beyond institutional confines.
    Technology use in academic staff development (in my context, but in a lot of others as well) is still somewhat limited. There’s no use of learning analytics to understand the teaching experience. Technology is largely used to supplement existing face-to-face approaches, rather than do something radically different. Though aspects of this might be coming. The idea of untethered faculty development is indicative of early moves in this space. On the other hand, the academic staff who are our learners now have access to the abundance of resources that are on the Internet. There are staff drawing heavily on these, but there appear to be many who are not.
  • Distributed pedagogy – ownership of learning is shared amongst different stakeholders allowing students to source learning from competing providers
    There are aspects of this happening in how learning and teaching operates. e.g. TurnitIn is external and offers some staff development. This is happening more to support University students in their learning, than to support University teaching staff.
  • Engaging pedagogy – encouraging active participation from learners.
    There are early signs of this – e.g. the shift away from academic staff development in the broader field.  Locally, the approach used in our L&T orientation has moved away from experts leading sessions to participative, co-construction/solving of problems. But more could be done.
  • Agile pedagogy – flexibility/customisation of the student experience.
    There are attempts to do this, but not directly supported by systems and processes.
  • Situated pedagogy – contextualisation to maximise real-world relevance.
    There are signs of this (e.g. how workshops are run) and approaches like Teaching@Sydney allow for more contextualisation. As do some moves to contextualising access to resources. But it is still fairly limited. Currently much of it relies on someone doing the customising/situating/personalising for the learner.

And the Horizon report

The 2017 Horizon report is the other source examined. It offers the following key trends

  • Advancing cultures of innovation
    Not so much. Innovation is suggested to be a good thing, but a “culture that promotes experimentation” it is not yet.
  • Deeper learning approaches – project-based, inquiry learning
    There are glimmers of this, but there’s also a strong pragmatic need amongst teaching staff: “I need to know how to do X now.”
  • Growing focus on measuring learning
    In terms of external quality indicators (such as QILT) and quantitative measures such as pass/fail rates and results on student evaluation of teaching, this is increasing. Perhaps increasing beyond where it should be. However, there remains little use of learning analytics and other more interesting approaches for measuring the learning and learning needs of teaching staff.
  • Redesigning learning spaces
    Moves around this for students, but not so much for teaching staff.
  • Blended learning designs
    Much of staff development appears to stick with the face-to-face methods. Even when it moves online it is to video-conferencing in an attempt to continue with face-to-face, rather than explore the blend of affordances that both online and face-to-face might offer.
  • Collaborative learning
    One of the Horizon Report “predictions” that Audrey Watters labels as not even wrong. Communities of Practice and Learning Communities have been a feature of academic staff development, more broadly and locally (even back in the early 1990s). However, I’m not sure how truly collaborative those approaches have been.

What’s relevant now?

Many of the above offer interesting possibilities, some are inevitable, and some have always been a feature.

Institutional academic staff development has yet to scratch the surface in terms of how digital technology could be used. It does appear to be increasingly “strategic” in its intent. This may make it more difficult to be agile, situated and engaging – three signposts that could be very relevant.

Situating staff development within the context of the member of teaching staff strikes me as very relevant. Expanding upon the idea of professional learning opportunities and encouraging active participation from teaching staff seems very relevant. Providing examples and scaffolds around how to do this.

 

My current context and some initial issues

Semester is about to start and I’m back teaching. This semester I’m part of a team of folk designing and teaching a brand new, never been taught course – EDU8702 – Scholarship in Higher Education: Reflection and Evaluation. The course is part of the Graduate Certificate in Tertiary Teaching.

In the course, we are asking the participants to focus on a specific context into which they are (or will be) teaching. That context will form part of a teacher-led inquiry into learning and teaching that will underpin the whole course. Early on in the course we are asking them to briefly summarise the context they’ll focus on and generate an initial set of issues of interest that might form the basis for their inquiry. Get them thinking and sharing, and provide a foundation for refinement over the semester.

The plan is that we’ll model what we ask, hence this blog post is my example.

Context

My current context is within a central learning and teaching unit at a University. My role is charged with helping teaching staff at the institution work toward and be recognised for “educational excellence and innovation”. i.e. we’re part of a team helping teaching staff become better teachers and thus improve the quality of student learning. To that end we, amongst other things:

  • Teach into the institution’s Graduate Certificate in Tertiary Teaching.
  • Develop a range of professional learning opportunities (PLO), including L&T orientation, workshops, small group sessions, online resources etc.
  • Develop and support programs of L&T scholarships and awards.

Issues

As a group that’s still forming a bit, there are a range of practical issues.

However, there are also a collection of issues that arise from the “discipline” of professional learning for teaching staff, some of these include:

  • Preaching to the choir.

    A perception that the people who engage with the professional learning opportunities we provide are perhaps not those who might benefit most.

  • Difficulty of demonstrating impact.

    It can be very hard to prove that what is done, improves the quality of learning and teaching.

  • Perceived relevance of what we offer

    Often the focus can be on developing well-designed workshops and resources, rather than trying to understand authentic, contextual needs.

  • A tendency to focus on designing a learning intervention when performance support might suit better.
  • How best to modify what we do to respond to an era of information abundance.

    A lot of traditional professional development arose from a time of scarce information. Developing a workshop/resource on topic X specifically for institution Y made sense, because there was no other way to get access. Chances are today you could find a long list of workshop/resources on topic X. Should you still develop yet another resource on topic X?

There are also some issues around the course we’re teaching

  • Limited insight into who the participants are, their backgrounds and reasons for enrolling.
  • The current small number of participants.
  • How to design an effective course within this context and within current constraints.

Learning analytics, quality indicators and meso-level practitioners

When it comes to research I’ve been a bit of a failure, especially when measured against some of the more recent strategic and managerial expectations. Where are those quartile 1 journal articles? Isn’t your h-index showing a downward trajectory?

The concern generated by these quantitative indicators not only motivated the following ideas for a broad research topic, but is also one of the issues to explore within the topic. The following outlines early attempts to identify a broader research topic that is relevant enough for current sector and institutional concerns; provides sufficient space for interesting research and contribution; aligns nicely (from one perspective) with my day job; and will likely provide a good platform for a program of collaborative research.

The following:

  1. explains the broad idea for research topic within the literature; and,
  2. describes the work we’ve done so far including two related examples of the initial analytics/indicators we’ve explored.

The aim here is to be generative. We want to do something that generates mutually beneficial collaborations with others. If you’re interested, let us know.

Research topic

As currently defined the research topic is focused around the design and critical evaluation of the use and value of a learning analytics platform to support meso-level practitioners in higher education to engage with quality indicators of learning and teaching.

Amongst the various aims, are an intent to:

  • Figure out how to design and implement an analytics platform that is useful for meso-level practitioners.
  • Develop design principles for that platform informed by the analytics research, but also ideas from reproducible research and other sources.
  • Use and encourage the use by others of the platform to:
    1. explore what value (if any) can be extracted from a range of different quality indicators;
    2. design interventions that can help improve L&T; and,
    3. to enable for a broader range of research – especially critical research – around the use of quality indicators and learning analytics for learning and teaching.

Quality Indicators

The managerial turn in higher education has increased the need for and use of various indicators of quality, especially numeric indicators (e.g. the number of Q1 journal articles published, or not). Kinash et al (2015) state that quantifiable performance indicators are important to universities because they provide “explicit descriptions of evidence against which quality is measured” (p. 410). Chalmers (2008) offers the following synthesised definition of performance indicators

measures which give information and statistics context; permitting comparisons between fields, over time and with commonly accepted standards. They provide information about the degree to which teaching and learning quality objectives are being met within the higher education sector and institutions. (p. 10)

However, the generation and use of these indicators is not without issues.

There is also a problem with a tendency to rely on quantitative indicators. Quantitative indicators provide insight into “how much or how many, but say little about quality” (Chalmers & Gardiner, 2015, p. 84). Ferguson and Clow (2017) – writing in the context of learning analytics – argue that good-quality qualitative research needs to support good-quality quantitative research because “we cannot understand the data unless we understand the context”. Similarly, Kustra et al (2014) suggest that examining the quality of teaching requires significant qualitative indicators to “provide deeper interpretation and understanding of the measured variable”. Qualitative indicators are used by universities to measure performance in terms of processes and outcomes, however, “because they are more difficult to measure and often produce tentative results, are used less frequently” (Chalmers & Gardiner, 2015, p. 84).

Taking a broader perspective there are problems such as Goodhart’s law and performativity. As restated by Strathern (1997), Goodhart’s Law is ‘When a measure becomes a target, it ceases to be a good measure’ (p. 308). Elton (2004) describes Goodhart’s Law as “a special case of Heisenberg’s Uncertainty Principle in Sociology, which states that any observation of a social system affects the system both before and after the observation, and with unintended and often deleterious consequences” (p. 121). When used for control and comparison purposes (e.g. league tables) indicators “distort what is measured, influence practice towards what is being measured and cause unmeasured parts to get neglected” (Elton, 2004, p. 121).

And then there’s the perception that quality indicators, and potentially this whole research project, become an unquestioning part of performativity and all of the issues that generates. Ball (2003) outlines the issues and influence of the performative turn in institutions. He describes performativity as

a technology, a culture and a mode of regulation that employs judgements, comparisons and displays as means of incentive, control, attrition and change – based on rewards and sanctions (both material and symbolic). The performances (of individual subjects or organizations) serve as measures of productivity or output, or displays of ‘quality’, or ‘moments’ of promotion or inspection (Ball, 2003, p. 216)

All of the above (and I expect much more) point to there being interesting and challenging questions to explore and answer around quality indicators and beyond. I do hope that any research we do around this topic engages with the necessary critical approach. As I re-read this post now I can’t help but see echoes of a previous discussion Leigh and I have had around inside out, outside in, or both. This approach is currently framed as an inside out approach: an approach where those inside the “system” are aware of the constraints and work to address them. The question remains whether this is possible.

Learning analytics

Siemens and Long (2011) define LA as “the measurement, collection, analysis and reporting of data about learners and their contexts, for purposes of understanding and optimizing learning and the environments in which it occurs” (p. 34). The dominant application of learning analytics has focused on “predicting student learning success and providing proactive feedback” (Gasevic, Dawson and Siemens, 2015), often driven by an interest in increasing student retention and success. Colvin et al (2016) found two distinct trajectories of activity in learning analytics within Australian higher education. The first was ultimately motivated by measurement and retention and implemented specific retention-related learning analytics programs. The second saw retention as a consequence of the broader learning and teaching experience and “viewed learning analytics as a process to bring understanding to learning and teaching practices” (Colvin et al, 2016, p. 2).

Personally, I’m a fan of the second trajectory and see supporting that trajectory as a major aim for this project.

Not all that surprisingly, learning analytics has been applied to the question of quality indicators. Dawson and McWilliam (2008) explored the use of “academic analytics” to

address the need for higher education institutions (HEIs) to develop and adopt scalable and automated measures of learning and teaching performance in order to evaluate the student learning experience (p. 1)

Their findings included (emphasis added):

  • “LMS data can be used to identify significant differences in pedagogical approaches adopted at school and faculty levels”
  • “provided key information for senior management for identifying levels of ICT adoption across the institution, ascertaining the extent to which teaching approaches reflect the strategic institutional priorities and thereby prioritise the allocation of staff development resources”
  • refining the analysis can identify “further specific exemplars of online teaching” and subsequently identify “‘hotspots’ of student learning engagement”; “provide lead indicators of student online community and satisfaction”; and, identify successful teaching practices “for the purposes of staff development activities and peer mentoring”

Macfadyen and Dawson (2012) provide examples of how learning analytics can reveal data that offer “benchmarks by which the institution can measure its LMS integration both over time, and against comparable organizations” (p. 157). However, the availability of such data does not ensure use in decision making. Macfadyen and Dawson (2012) also report that the availability of patterns generated by learning analytics did not generate critical debate and consideration of the implications of such data by the responsible organisational committee and thus apparently failed to influence institutional decision-making.

A bit more surprising, however, is that in my experience there doesn’t appear to have been a concerted effort to leverage learning analytics for these purposes. Perhaps this is related to findings from Colvin et al (2016) that even with all the attention given to learning analytics there continues to be: a lack of institutional exemplars; limited resources to guide implementation; and perceived challenges in how to effectively scale learning analytics across an institution. There remains little evidence that learning analytics has been helpful in closing the loop between research and practice, and made an impact on university-wide practice (Rogers et al, 2016).

Even if analytics is used, there are other questions such as the role of theory and context. Gasevic et al (2015) argue that while counting clicks may provide indicators of tool use, it is unlikely to reveal insights of value for practice or the development of theory. If learning analytics is to achieve a lasting impact on student learning and teaching practice it will be necessary to draw on appropriate theoretical models (Gasevic et al, 2015). Rogers et al (2016) illustrate how such an approach “supports an ever-deepening ontological engagement that refines our understanding and can inform actionable recommendations that are sensitive to the situated practice of educators” (p. 245). If learning analytics aims to enhance learning and teaching, it is crucial that it engages with teachers and their dynamic contexts (Sharples et al., 2013). Accounting for course and context specific instructional conditions and learning designs is increasingly seen as an imperative for the use of learning analytics (Gasevic et al, 2015; Lockyer et al, 2013).

There remain many other questions about learning analytics. Many of those questions are shared with the use of quality indicators. There is also the question of how learning analytics can be harnessed via means that are sustainable, scale up, and at the same time provide contextually appropriate support. How can the tension between the need for institutional-level quality indicators of learning and teaching and the inherently contextually specific nature of learning and teaching be resolved?

Meso-level practitioners

The limited evidence of impact from learning analytics on learning and teaching practice may simply be a mirror of the broader difficulty that universities have had with other institutional learning technologies. Hannon (2013) explains that when framed as a technology project the implementation of institutional learning technologies “risks achieving technical goals accompanied by social breakdowns or failure, and with minimal effect on teaching and learning practices” (p. 175). Breakdowns that arise, in part, from the established view of enterprise technologies. A view that sees enterprise technologies as unable to be changed, and “instead of optimizing our machines for humanity – or even the benefit of some particular group – we are optimizing humans for machinery” (Rushkoff, 2010, p. 15).

Jones et al (2006) use the term meso-level to describe the “level that was intermediate between small scale, local interaction, and large-scale policy and institutional processes” (p. 37). Hannon (2013) describes meso-level practitioners as the “teaching academics, learning technologists, and academic developers” (p. 175) working between the learning and teaching coal-face and the institutional context defined by an institution’s policies and technological systems. These are the people who can see themselves as trying to bridge the gaps between the institutional/technological vision (macro-level) and the practical coal-face realities (micro-level). These are the people who are often required to help “optimise humans for machinery”, but who would generally prefer to do the reverse. Hannon (2013) also observes that even though there has been significant growth in the meso-level within contemporary higher education, research has continued to focus largely on the macro or micro levels.

My personal experience suggests that the same can be said about the design and use of learning analytics. Most institutional attempts are focused at either the macro or micro level. The macro level focused largely on large-scale student retention efforts. The micro level focused on the provision of learning analytics dashboards and other tools to teaching staff and students. There has been some stellar work by meso-level practitioners in developing supports for the micro-level (e.g. Liu, Bartimote-Aufflick, Pardo, & Bridgeman, 2017). However, much of this work has been in spite of the affordances and support offered by the macro-level. Not enough of the work, beyond the exceptions already cited, appears to have actively attempted to help optimise the machinery for the humans. In addition, there doesn’t appear to be a great deal of work – beyond the initial work from almost 10 years ago – focused on if and how learning analytics can help meso-level practitioners in the work that they do.

As a result there are sure to be questions to explore about meso-level practitioners, their experience and impact on higher education. Leigh Blackall has recently observed that the growth in meso-level practitioners in the form of “LMS specialists and ed tech support staff” comes with the instruction that they “focus their attentions on a renewed sense of managerial oversight”. Implicating meso-level practitioners in questions related to performativity etc. Leigh also positions these meso-level practitioners as examples of disabling professions. Good pointers to some of the more critical questions to be asked about this type of work.

Can meso-level practitioners break out, or are we doomed to be instruments of performativity? What might it take to break free? How can learning analytics be implemented in a way that allows it to be optimised for the contextually specific needs of the human beings involved, rather than require the humans to be optimised for the machinery? Would such a focus improve the quality of L&T?

What have we done so far?

Initial work has focused on developing an open, traceable, cross-institutional platform for exploring learning analytics. In particular, exploring how recent ideas such as reproducible research and insights from learning analytics might help design a platform that enables meso-level practitioners to break some of the more concerning limitations of current practice.

We’re particularly interested in ideas from Elton (2004) where bottom-up approaches might “be considerably less prone to the undesirable consequences of Goodhart’s Law” (p. 125). A perspective that resonates with our four paths idea for learning analytics. i.e. That it’s more desirable and successful to follow the do-it-with learners and teachers or learner/teacher DIY paths.

The “platform” is seen as an enabler for the rest of the research program. Without a protean technological platform – a platform we’re able to tailor to our requirements – it’s difficult to see how we’d be able to effectively support the deeply contextual nature of learning and teaching or escape broader constraints such as performativity. This also harks back to my disciplinary background as a computer scientist. In particular, the computer scientist as envisioned by Brooks (1996) as a toolsmith whose delight “is to fashion powertools and amplifiers for minds” (p. 64) and who “must partner with those who will use our tools, those whose intelligences we hope to amplify” (p. 64).

First steps

As a first step, we’re revisiting our earlier use of Malikowski, Thompson & Theis (2007) to look at LMS usage (yea, not that exciting, but you have to start somewhere). We’ve developed a set of Python classes that enable the use of the Malikowski et al (2007) LMS research model. That set of classes has been used to develop a collection of Jupyter notebooks that help explore LMS usage in a variety of ways.

The theory is that these technologies (and the use of github to share the code openly) should allow anyone else to perform the same analyses with their LMS/institution. So far, the code is limited to working only with Moodle. However, we have been successful in sharing code between two different installations of Moodle. i.e. one of us can develop some new code, share it via github, and the other can run that code over their data. A small win.

The Malikowski et al (2007) model groups LMS features by the following categories: Content, Communication, Assessment, Evaluation and Computer-Based Instruction. It also suggests that tool use occurs in a certain order and with a certain frequency. The following figure (click on it to see a larger version) is a representation of the Malikowski model.

Malikowski Flow Chart
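
To give a flavour of the approach (the names and mapping below are an illustrative sketch, not the actual code in the github repository), the core idea is mapping Moodle module types onto the Malikowski categories and summing clicks:

# Illustrative mapping from Moodle module types to Malikowski categories
# (the real mapping used by the platform is more complete)
MALIKOWSKI_CATEGORY = {
    'resource': 'Content', 'book': 'Content', 'page': 'Content',
    'forum': 'Communication', 'chat': 'Communication',
    'assign': 'Assessment', 'quiz': 'Assessment',
    'feedback': 'Evaluation',
    'lesson': 'Computer-Based Instruction',
}

def clicks_by_category(module_clicks):
    """Sum clicks per Malikowski category from (module type, clicks) pairs."""
    totals = {}
    for module, clicks in module_clicks:
        category = MALIKOWSKI_CATEGORY.get(module)
        if category:
            totals[category] = totals.get(category, 0) + clicks
    return totals

# e.g. clicks_by_category([('resource', 1830), ('forum', 276), ('quiz', 56)])
# -> {'Content': 1830, 'Communication': 276, 'Assessment': 56}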

Looking for engagement?

Dawson and McWilliam (2008) suggested that academic analytics could be used to identify “potential “hotspots” of student learning engagement” (p. 1). Assuming that the number of times students click within an LMS course is a somewhat useful proxy for engagement (a big question), then this platform might allow you to:

  1. Select a collection of courses.

    This might be all the courses in a discipline that scored well (or poorly) on some other performance indicator, all courses in a semester, all large first year courses, all courses in a discipline etc.

  2. Visualise the total number of student clicks on LMS functionality within each course, grouped by Malikowski category.
  3. Visualise the number of clicks per student within each course in each Malikowski category.

These visualisations might then provide a useful indication of something that is (or isn’t) happening. An indication that would not have been visible otherwise and is worthy of further exploration via other means (e.g. qualitative).

The following two graphs were generated by our platform and are included here to provide a concrete example of the above process. They also illustrate some features of the platform:

  • It generates artefacts (e.g. graphs, figures) that can be easily embedded anywhere on the web (e.g. this blog post). You don’t have to be using our analytics platform to see the artefacts.
  • It can anonymise data for external display. For example, courses in the following artefacts have been randomly given people’s names rather than course codes/names.

Number of total student clicks

The first graph shows a group of 7 courses. It shows the number of students enrolled in each course (e.g. the course Michael has n=451) and the bars represent the total number of clicks by enrolled students on the course website. The clicks are grouped according to the Malikowski categories. If you roll your mouse over one of the bars, then you should see displayed the exact number of clicks for each category.

For example, the course Marilyn with 90 students had

  • 183,000+ clicks on content resources
  • 27,600+ clicks on communication activities
  • 5,659 clicks on assessment activities
  • and 0 clicks for evaluation or CBI

Total number of clicks isn’t all that useful for course comparisons. Normalising to clicks per enrolled student might be useful.
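
As a rough indication of how such a normalised graph might be generated with plotly (the data structure here is an assumption for illustration, not the platform’s actual code; the Marilyn figures come from above):

import plotly.graph_objs as go
from plotly.offline import plot

# Clicks per Malikowski category and enrolments for each (anonymised) course;
# only the Marilyn figures from above are shown, other courses would be added here
course_clicks = {'Marilyn': {'Content': 183000, 'Communication': 27600, 'Assessment': 5659}}
enrolments = {'Marilyn': 90}
categories = ['Content', 'Communication', 'Assessment', 'Evaluation', 'CBI']

# One bar trace per course, with each category's clicks divided by enrolled students
traces = [go.Bar(name=course,
                 x=categories,
                 y=[clicks.get(cat, 0) / enrolments[course] for cat in categories])
          for course, clicks in course_clicks.items()]
fig = go.Figure(data=traces,
                layout=go.Layout(barmode='group',
                                 title='Clicks per enrolled student by Malikowski category'))
plot(fig, filename='clicks_per_student.html')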



Clicks per student

The following graph uses the same data as above, however, the number of clicks is now divided by the number of enrolled students. A simple change in analysis that highlights differences between courses.

2000+ clicks on content per student certainly raises some questions about the Marilyn course. Whether that number is good, bad, or meaningless would require further exploration.


What’s next?

We’ll keep refining the approach, some likely work could include

  • Using different theoretical models to generate indicators.
  • Exploring how to effectively supplement the quantitative with qualitative.
  • Exploring how engaging with this type of visualisation might be useful as part of professional learning.
  • Exploring if these visualisations can be easily embedded within the LMS, allowing staff and students to see appropriate indicators in the context of use.
  • Exploring various relationships between features quantitatively.

    For example, is there any correlation between results on student evaluation and Malikowski or other indicators? Correlations between disciplines or course design?

  • Combining the Malikowski model with additional analysis to see if it’s possible to identify significant changes in the evolution of LMS usage over time.

    e.g. to measure the impact of organisational policies.

  • Refine the platform itself.

    e.g. can it be modified to support other LMS?

  • Working with a variety of people to explore what different questions they might wish to answer with this platform.
  • Using the platform to enable specific research projects.

And a few more.

Want to play? Let me know. The more the merrier.

References

Ball, S. J. (2003). The teacher’s soul and the terrors of performativity. Journal of Education Policy, 18(2), 215–228. https://doi.org/10.1080/0268093022000043065

Brooks, F. (1996). The Computer Scientist as Toolsmith II. Communications of the ACM, 39(3), 61–68.

Chalmers, D. (2008). Indicators of university teaching and learning quality.

Chalmers, D., & Gardiner, D. (2015). An evaluation framework for identifying the effectiveness and impact of academic teacher development programmes. Studies in Educational Evaluation, 46, 81–91. https://doi.org/10.1016/j.stueduc.2015.02.002

Colvin, C., Wade, A., Dawson, S., Gasevic, D., Buckingham Shum, S., Nelson, K., … Fisher, J. (2016). Student retention and learning analytics : A snapshot of Australian practices and a framework for advancement. Canberra, ACT: Australian Government Office for Learning and Teaching. Retrieved from http://he-analytics.com/wp-content/uploads/SP13-3249_-Master17Aug2015-web.pdf

Dawson, S., & McWilliam, E. (2008). Investigating the application of IT generated data as an indicators of learning and teaching performance. Queensland University of Technology and the University of British Columbia.

Elton, L. (2004). Goodhart’s Law and Performance Indicators in Higher Education. Evaluation & Research in Education, 18(1–2), 120–128. https://doi.org/10.1080/09500790408668312

Ferguson, R., & Clow, D. (2017). Where is the Evidence?: A Call to Action for Learning Analytics. In Proceedings of the Seventh International Learning Analytics & Knowledge Conference (pp. 56–65). New York, NY, USA: ACM. https://doi.org/10.1145/3027385.3027396

Gašević, D., Dawson, S., & Siemens, G. (2015). Let’s not forget: Learning analytics are about learning. TechTrends, 59(1), 64–71. https://doi.org/10.1007/s11528-014-0822-x

Gašević, D., Dawson, S., Rogers, T., & Gasevic, D. (2015). Learning analytics should not promote one size fits all: The effects of instructional conditions in predicating learning success. The Internet and Higher Education, 28, 68–84. https://doi.org/10.1016/j.iheduc.2015.10.002

Hannon, J. (2013). Incommensurate practices: sociomaterial entanglements of learning technology implementation. Journal of Computer Assisted Learning, 29(2), 168–178. https://doi.org/10.1111/j.1365-2729.2012.00480.x

Jones, C., Dirckinck‐Holmfeld, L., & Lindström, B. (2006). A relational, indirect, meso-level approach to CSCL design in the next decade. International Journal of Computer-Supported Collaborative Learning, 1(1), 35–56. https://doi.org/10.1007/s11412-006-6841-7

Kinash, S., Naidu, V., Knight, D., Judd, M.-M., Nair, C. S., Booth, S., … Tulloch, M. (2015). Student feedback: a learning and teaching performance indicator. Quality Assurance in Education, 23(4), 410–428. https://doi.org/10.1108/QAE-10-2013-0042

Liu, D. Y.-T., Bartimote-Aufflick, K., Pardo, A., & Bridgeman, A. J. (2017). Data-Driven Personalization of Student Learning Support in Higher Education. In A. Peña-Ayala (Ed.), Learning Analytics: Fundaments, Applications, and Trends (pp. 143–169). Springer International Publishing.

Lockyer, L., Heathcote, E., & Dawson, S. (2013). Informing Pedagogical Action: Aligning Learning Analytics With Learning Design. American Behavioral Scientist, 57(10), 1439–1459. https://doi.org/10.1177/0002764213479367

Macfadyen, L., & Dawson, S. (2012). Numbers Are Not Enough. Why e-Learning Analytics Failed to Inform an Institutional Strategic Plan. Educational Technology & Society, 15(3), 149–163.

Malikowski, S., Thompson, M., & Theis, J. (2007). A model for research into course management systems: bridging technology and learning theory. Journal of Educational Computing Research, 36(2), 149-173.

Rogers, T., Dawson, S., & Gašević, D. (2016). Learning Analytics and the Imperative for Theory-Driven Research. In The SAGE Handbook of E-learning Research (2nd ed., pp. 232–250).

Rushkoff, D. (2010). Program or be programmed: Ten commands for a digital age. New York: OR Books.

Sharples, M., Mcandrew, P., Weller, M., Ferguson, R., Fitzgerald, E., & Hirst, T. (2013). Innovating Pedagogy 2013: Open University Innovation Report 2 (No. 9781780079370). Milton Keynes: UK. Retrieved from http://www.open.ac.uk/blogs/innovating/

Siemens, G., & Long, P. (2011). Penetrating the Fog: Analytics in Learning and Education. EDUCAUSE Review, 46(5). Retrieved from http://moourl.com/j6a5d

Strathern, M. (1997). “Improving ratings”: audit in the British University system. European Review, 5(3), 305–321. https://doi.org/10.1002/(SICI)1234-981X(199707)5:3<305::AID-EURO184>3.0.CO;2-4

Nudging up MyOpinion response rates using a gamified leaderboard

The following information is taken from and adds to the contents of a poster by Alice Brown and me for a USQ L&T Celebration Event. It describes the need for a Student Evaluation of Teaching leaderboard, how it works, and the results of some early applications (12 to 15% increases in response rates in individual courses, resulting in response rates that are double the institutional average – around 50%, rather than 25%).

Challenge – Course evaluation ‘buy-in’ and responding to MyOpinion

Many academics struggle with getting ‘buy-in’ from students in terms of providing feedback via institutional Student Evaluation of Teaching (SET) surveys (labelled MyOpinion at USQ). While efforts to prompt students to respond might include various forms of communication, announcements, reference to how a course has reflected and acted on feedback in future updates, and other promotion features (including USQ’s use of a big yellow button), increasing response rates still tends to be a challenge, with average SET response rates fluctuating between 30% and 50% (Bennett & De Bellis, 2010; Spooren et al, 2013).

Nudging, gamification & leaderboards

  • Nudge – “any aspect of the choice architecture that alters people’s behaviour in a predictable way without forbidding any options or significantly changing their economic incentives” (Thaler & Sunstein, 2008, p. 6).
  • Deterding et al (2011) define gamification as “the use of game design elements in non-game contexts” (p. 10) with the intent to “motivate and increase user activity” (p. 9)
  • Leaderboards are amongst the most popular of games mechanics found in case studies using gamification in education (Dicheva et al, 2015).

A leaderboard is a game design element consisting of a visual display that ranks players according to their accomplishments; when used in an educational setting it serves as a way for students to directly compare their own performance with that of others (Christy & Fox, 2014, p. 67)

Question – Can a MyOpinion leaderboard help increase student response rates?

Initially trialled in 2015 in the course EDC3100, and then extended to several other courses in 2016, the leaderboard was used by two academics to explore whether providing a ‘nudge’ through the integration of a MyOpinion leaderboard could increase response rates.

How does a MyOpinion leaderboard work?

The following image (click on it to see a bigger version) illustrates how the leaderboard works. See below for more explanation.

How a MyOpinion leaderboard works

The MyOpinion leaderboard works like this:

  1. Course examiner checks response rates.

    While the MyOpinion survey is live, the course examiner checks the MyOpinion site every day or so. At this stage, the course examiner can only see the number of responses. Nothing more.

  2. Course examiner updates a Google spreadsheet

    If the current response rate increases, the course examiner updates a previously configured Google spreadsheet. The spreadsheet contains data about all the relevant offerings of the courses, including: the number of enrolled students; and, the number of MyOpinion responses. The spreadsheet automatically calculates the percentage response rate. The current course offering is indicated by a yes in the appropriate column.

  3. Visitors to the course home page see the leaderboard

    Every time someone visits the course home page, the data in the Google Spreadsheet is transformed into a table that ranks different course offerings based on the percentage response rate. This is done via a small bit of common Javascript that can be embedded into almost any web page (a rough sketch of the underlying calculation follows this list).

  4. Additional nudges are employed

    The visibility of the leaderboard may not be sufficient. Typically course examiners have used other means to nudge students to complete MyOpinion. Often using the leaderboard data to spark any ‘competitive nature’.

  5. Students complete the MyOpinion survey

    Thereby changing the response rate and starting the cycle all over again.
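
The gist of the calculation behind the leaderboard, sketched here in Python with clearly made-up placeholder numbers (the real implementation is the small piece of Javascript reading the Google spreadsheet):

# Placeholder (offering, enrolled students, MyOpinion responses) rows standing in
# for the rows of the Google spreadsheet; the numbers are illustrative only
offerings = [
    ('Offering A', 88, 40),
    ('Offering B', 100, 45),
    ('Offering C (current)', 95, 30),
]

# Calculate percentage response rates and rank offerings from highest to lowest
leaderboard = sorted(
    ((name, 100.0 * responses / enrolled) for name, enrolled, responses in offerings),
    key=lambda row: row[1], reverse=True)

for rank, (name, rate) in enumerate(leaderboard, start=1):
    print(f"{rank}. {name}: {rate:.0f}%")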

Results

As the graphs below show, each of the courses that have trialled the leaderboard have:

  1. Increased response rates by at least 12-15%; and,
  2. Achieved response rates around or more than double the institutional average.

    (Institutional average currently only available for 2015 and 2016)

Roll your mouse pointer over the graph elements to see additional information.

What’s next?

  • Improve support for other types of leaderboard.

    A single USQ course may have 3 or 4 different groups of students based on campus. At least one staff member has experimented with using the leaderboard to rank response rates from different student groups within the current offering of the course.

  • Improve advice for using the leaderboard

    Current advice is accessible via the more information section below. There are a number of ways this could be improved.

  • Promote more broadly to academics

    Currently the leaderboard has largely been promoted within School of Teacher Education and Early Childhood.

  • Explore options to automate leaderboard updating

    In theory, if an API were available for MyOpinion response rates, there would be no need for manual updating of the Google spreadsheet (or perhaps for the Google spreadsheet at all).

  • Integrate a range of other communication strategies
  • Evaluate

More information

  • How to implement the leaderboard in your USQ course?
    • There are instructions that USQ course examiners can use to implement a MyOpinion leaderboard.
    • There is also a video (only visible to USQ staff) where Alice and I walk through the process of setting up the leaderboard in her course.
  • Background on the origins of the leaderboard

    This post provides some background on how and why the semi-automated leaderboard approach was created.

References

Bennett, T., & De Bellis, D. (2010). The Move to a System of Flexible Delivery Mode (Online v Paper) Unit of Study Student Evaluations at Flinders University. Management Issues and the Study of Initial Changes in Survey Volume, Response Rate and Response Level. Journal of Institutional Research, 15(1), 41–53.

Christy, K. R., & Fox, J. (2014). Leaderboards in a virtual classroom: A test of stereotype threat and social comparison explanations for women’s math performance. Computers & Education, 78, 66–77. https://doi.org/10.1016/j.compedu.2014.05.005

Deterding, S., Dixon, D., Khaled, R., & Nacke, L. (2011). From Game Design Elements to Gamefulness: Defining “Gamification.” In Proceedings of the 15th International Academic MindTrek Conference: Envisioning Future Media Environments (pp. 9–15). New York, NY, USA: ACM. https://doi.org/10.1145/2181037.2181040

Dicheva, D., Dichev, C., Agre, G., & Angelova, G. (2015). Gamification in Education: A Systematic Mapping Study. Journal of Educational Technology & Society, 18(3), 75–88.

Spooren, P., Brockx, B., & Mortelmans, D. (2013). On the Validity of Student Evaluation of Teaching: The State of the Art. Review of Educational Research, 83(4), 598–642. https://doi.org/10.3102/0034654313496870

Thaler, R., & Sunstein, C. (2008). Nudge: Improving decisions about health, wealth and happiness. New York: Penguin.


Embedding plotly graphs in WordPress posts



Last year I started using Perl to play with analytics around Moodle Book usage. This year, @beerc and I have been starting to play with Jupyter Notebooks and Python to play with analytics for meso-level practitioners (Hannon, 2013). Plotly provides a fairly useful platform for generating graphs of various types and sharing the data. It works well with a range of languages and Jupyter Notebooks.

The question here is how well it works with WordPress. WordPress has some (understandable) constraints around embedding external HTML in posts/pages, but there is a large set of community-contributed plugins that help with this, including a couple that apparently work with Plotly.

  • wp-plotly is designed to embed a Plotly-hosted graph by providing the Plotly URL. It doesn’t appear to work with the latest version of WordPress. No go.
  • Plot.wp provides a WordPress shortcode for Plotly ([plotly] and [/plotly]) into which you place Plotly JSON data and hey presto, a graph. It has a GitHub repo and actually works with the latest version of WordPress.

How to produce JSON from Python

I’m a Python newbie. I don’t really grok it the way I did Perl. I assumed it should be possible to auto-generate the JSON from the Python code, but how?

Looks like the following will work in a notebook, though the resulting single quotes need to be converted into double quotes and two sets of double quotes removed for it to be acceptable JSON.

#.. Python code to produce plotly figure ready to be plotted
import json

# Collect the figure's data and layout as JSON strings for pasting into
# the WordPress shortcode (the dict needs to exist before assigning to it)
jsonData = {}
jsonData['data'] = json.dumps(fig['data'])
jsonData['layout'] = json.dumps(fig['layout'])
jsonData

For the graph I’m currently playing with, this ends up with

{"layout": {"yaxis": {"range": [0, 100], "title": "% response rate"}, "title": "EDC3100 Semester 2 MyOpinion % Response Rate", "xaxis": {"ticktext": ["2014 (n=106)", "2015 (n=88)nLeaderboard", "2016 (n=100)nLeaderboard"], "title": "Year", "tickvals": ["2014", "2015", "2016"]}}, 
  "data": [{"type": "bar", "name": "EDC3100", "x": ["2014", "2015", "2016"], "y": [34, 48, 49]}, {"type": "scatter", "name": "USQ average", "x": ["2015", "2016"], "y": [26.83, 23.52]}]}

And the matching graph produced by plotly follows. Roll over the graph to see some “tooltips”.

References

Hannon, J. (2013). Incommensurate practices: sociomaterial entanglements of learning technology implementation. Journal of Computer Assisted Learning, 29(2), 168–178. https://doi.org/10.1111/j.1365-2729.2012.00480.x

Bye, Bye Mendeley?

So, I have a problem. What was a wonderful free product was bought by a big publisher. I’m an open kind of guy so this has always been a bit disquieting. It recently got a bit worse.

https://twitter.com/djplaner/status/863934554062573568

Time to start actively searching.

@matbury recommended the alternative I was most aware of

https://twitter.com/matbury/status/863963530235703300

Then I stumbled across this comparison of citation management software from the University of Toronto. What struck me particularly about this comparison was the following entry under the Zotero column

Also works with Google Docs

As it happens, I’m increasingly using Google docs for collaborative creation for both work and research. Working with Google docs is a big plus. The question here is how well does Zotero work with Google docs? How much time/agony will be involved in moving?

The following documents on-going experimentation in doing so. It suggests the move won’t be too painful and that, since I’m in the early days of a couple of group writing projects, now is probably a good time to make the move and test it out on some authentic tasks.

Looking increasingly like bye bye Mendeley. Would probably prefer an approach that didn’t rely on a single server for syncing/group sharing, but Zotero looks good enough for now.

Install Zotero

The first step is to install Zotero. Apparently there is now both a standalone version (which works with browser extensions) and a Firefox version. I downloaded the standalone version for the Mac.

Installing the standalone version picked up an old Zotero data directory from the days of early experimentation in the noughties.

Installation also takes you back to the Zotero site to create an account on the service, because

It’s a free way to sync and access your library from anywhere, and it lets you join groups and back up all your attached files.

I’ll do that (not without some minor reservations; I wonder if anyone is working on a distributed method for sharing citations/references?). Now time to configure the browser extension and the standalone Zotero with that account (and restart Word for that integration to work).

That seems to be working. There is a surprising amount of material in the old Zotero directory, back from the thesis days.

How to import my Mendeley library? A Google search reveals this advice. Let’s try that.

The export creates a 1.6Mb file for 3808 documents. The advice is that

Unfortunately, Zotero won’t retain your Mendeley folders or groups, but it does import the PDFs of any attached journal articles.

Losing the folders is a bit of a bugger. But I suppose I can retain Mendeley for when I need to find something.

It’s probably taken somewhere around 10 minutes to import that collection into Zotero.

All seems to be there, but of course there are some things that have been lost, including

  • As mentioned, the folder/group structure that I’d used in Mendeley to organise references into structures relevant to the work I was doing.
  • The annotations and notes that I’d started making using Mendeley’s internal features. I had been worried about that. It appears Zotero doesn’t support annotations on PDFs, but there are the Adobe methods for that, which might be a better long-term approach.

Citation management and Google docs

According to this page working with Google docs is via the same mechanism used for plain-text documents.

  • Add a bibliography by selecting the list of items and dragging them into the document.

    This suggests a need to create a folder or similar for every reference you want to use in a Google doc, and that collaborative use would require all the collaborators to agree on processes, perhaps using a common resource owned by a Zotero group.

  • Add a citation by holding the shift key before dragging.

Apparently there are requests in to Google to allow a better integration.

Let’s see how the existing method (more detail here) works. The following image shows the result. The first line shows the results of adding a bibliography, the second adding a citation. The citation format is not what I’d prefer. So the question is whether it can be changed.

Google docs / Zotero usage

There is some other advice on how to do this a bit more effectively, but it doesn’t quite meet expectations of what citation management really working with Google docs might look like. Still, it is a step up. What about other features?

What’s it like with Word?

I still end up writing most of my formal publications in Word, so that experience is important.

New Word document – the Add-ins tab has now been modified to include icons for Zotero. Insert new citation pops up a dialog box to configure document preferences; I assume this is because it’s the first time I’ve used Zotero with this document.

Much like Mendeley, another dialog box pops up; you enter an author name or other search term, choose from some options and hey presto, the citation is inserted.

Insert bibliography and done.

All very similar. There might be a few wrinkles of difference, but I don’t imagine (given the size of the Zotero community) there is anything that can’t be figured out.

Online library, groups etc.

With the Zotero account comes an online library, as shown in the following image. Seems the current situation is

  • 300Mb for free and never (big call) expires
  • 2Gb for $USD20 a year.
  • 6Gb for $USD60 a year.
  • Unlimited for $USD120 a year.

Zotero library online

There are options to control the privacy of the online library: make the entire library public; make notes public; hide from search engines. I’ve just changed those settings so that the library (but not the notes) is public. You can see it here

The group functionality supports collaboration and explicitly mentions web-based bibliographies for classes. This could be interesting.

The group page mentions that there are no limits on groups and outlines a range of features and uses. One of the limits, I assume will be the amount of storage space online.

Early steps in developing a design system/model for Professional Learning Opportunities

A big responsibility for the new team I work with is the design, implementation and revision of Professional Learning Opportunities (PLOs) for teaching staff at our current institution. The PLO term has been gifted to us as part of the restructure process/documents that created the team. It’s a term I quite like since I’ve chosen to interpret it as covering a huge range of possibilities beyond just face-to-face, synchronous, physical professional development. This is good because the team has been charged with doing something different.

This post is part of the process of coming up with something different. It links our thinking with some work being done elsewhere and is an attempt to think what else can we add. This post is also an example of the team walking the walk. i.e. if we’re aiming to help teaching staff become open and connected educators, then we need to be operating in ways that are open and connected.

Untethered Faculty Development – a starting “recipe”

A few weeks ago I stumbled across the idea of untethered faculty development from the folk at Teaching and Learning Innovations at CSU Channel Islands. It has some strong resonances with what we’d been talking about, but actually provided a concrete example. (For further inspiration, it appears that the whole institution had adopted a Domain of One’s Own approach with Reclaim Hosting that was embedded within professional learning practice.)

Then this week I stumbled across this 12 minute online presentation that offered some further insight into the why and how of untethered faculty development. A presentation that included an explanation of the following table of what they’ve done to untether faculty development from the constraints of synchronous and face-to-face.

The table is described as a “recipe” that is provided to all facilitators.

Before

  • Thick invitation: an email with links & additional information for those that can’t attend. It’s a PLO in itself.
  • Develop resource site: an online site with all resources for the PLO.
  • Create dynamic agenda: a Google doc where people can ask questions etc. prior to the start.
  • Offer remote participation: use of Zoom to allow remote participation.

During

  • All materials digital.
  • Engage remote participation.
  • Use dynamic agenda.
  • Collaborative notes: can be combined with the dynamic agenda; a place where participants can share their notes/thoughts.
  • Record where possible.

After

  • Finalise recording.
  • Refresh resource site: use the collaborative notes and other discussions to improve the resource site.
  • Write & share blog post: all facilitators are asked to write a blog post that links to the resource site.
  • Follow up communication: almost a “thick conclusion” to the session.

What else?

We’ve already started doing aspects of this. For example, here’s the resource site for the 2017 teaching orientation. (It’s hosted on my blog because we didn’t have a space. We have just taken ownership of a WordPress site within our institution where we’ll be starting work.)

Even this limited practice has a few additional steps in it, for example

  1. Create a short URL for each resource site.

    e.g. http://bit.ly/2017orient is the short URL for the 2017 teaching orientation. This is so people can write down a URL and find the resource site.

    With a thick invitation, this might not be needed. But then again people forget and perhaps during the session they want to visit the resource site.

  2. Add an evaluation step to after.

    e.g. the 2017 teaching orientation resource site links to the results of a simple evaluation of the session.

    I imagine the CSU-CI folk evaluate their PLOs. The absence of this step probably says more about its connection to the idea of untethering faculty development.

But what else could be added? What else should we do? What shouldn’t we do? These are questions being answered in this Google document by the team and anyone else who’ll want to. Feel free to add to the document.

Understanding systems conditions for sustainable uptake of learning analytics

My current institution is – like most other universities – attempting to make some use of learning analytics. The following uses a model of system conditions for sustainable uptake of learning analytics from Colvin et al (2016) to think about how/if those attempts might be enhanced. This is done by

  1. summarising the model;
  2. explaining how the model is “wrong”; and,
  3. offering some ideas for future work.

My aim here is mainly a personal attempt to make sense of what I might be able to do around learning analytics (LA) given the requirements of my current position. Requirements that include:

  • to better know my “learner”;

    In my current role I’m part of a team responsible for providing professional learning for teaching staff. My belief is that the better we know what the teaching staff (our “learners”) are doing and experiencing, the better we can help. A large part of the learning and teaching within our institution is supported by digital technologies. Meaning that learning analytics (LA) is potentially an important tool.

    How can we adopt LA to better understand teaching staff?

  • to help teaching staff use LA;

    A part of my work also involves helping teaching academics develop the knowledge/skills to modify their practice to improve student learning. A part of that will be developing knowledge/skills around LA.

    How can we better support the adoption of/development of knowledge/skills around LA by teaching staff?

  • increasing and improving research.

    As academics we’re expected to do research. Increasingly, we’re expected to be very pragmatic about how we achieve outcomes. LA is still (at least for now?) a buzz word. Since we have to engage with LA anyway, we may as well do research. I’ve also done a bit of this in the past, which needs building upon.

    How can we best make a contribution to research around LA?

The model

The following uses work performed by an OLT-funded project looking at student retention and learning analytics. A project that took a broader view, one outcome of which is the model of system conditions for sustainable uptake used below.

Given the questions I asked in the previous section and my current conceptions it appears that much of my work will need to focus on helping encourage the sustainable uptake of LA within my institution. Hence the focus here on that model.

The model looks like this.

Model of system conditions for sustainable uptake of LA (Colvin et al, 2016)

At some level the aim here is to understand what’s required to encourage educator uptake of learning analytics in a sustainable way. The authors define educator as (Colvin et al, 2016, p. 19)

all those charged with the design and delivery of the ‘products’ of the system, chiefly courses/subjects, encompassing administrative, support and teaching roles

The model identifies two key capabilities that drive “the flow rate that pushes and pulls educators along the educator uptake pipeline from ‘interested’ to ‘implementing’”. These are

  1. Strategic capability “that orchestrates the setting for learning analytics”, and
  2. Implementation capability “that integrates actionable data and tools with educator practices”.

There are two additional drivers of the “flow rate”

  1. Tool/data quality – the “tool or combination of tools that manage data inputs and generate outputs in the form of actionable feedback” (Colvin et al, 2016, p. 30).
  2. Research/learning – “the organisational learning capacity to monitor implementations and improve the quality of tools, the identification and extraction of underlying data and the ease of usability of the feedback interface” (Colvin et al, 2016, p. 30)

The overall aim/hope is to create a “reinforcing feedback loop” (Colvin et al, 2016, p. 30) between the elements acting in concert that drives uptake. Uptake is accelerated by LA meeting “the real needs of learners and educators”.

How the model is “wrong”

All models are wrong, but some are useful (one explanation for why there are so many frameworks and models within education research). At the moment, I see the above model as useful for framing my thinking, but it’s also a little wrong. That’s to be expected.

After all, Box (1979) thought

it would be very remarkable if any system existing in the real world could be exactly represented by any simple model. (p. 202)

Consequently, given that Colvin et al (2016) identify the implementation of LA as a complex phenomenon “shaped by multiple interrelated dimensions traversing conceptual, operational and temporal domains…as a non-linear, recursive, and dynamic process…” (p. 22), it’s no great surprise that there are complexities not captured by the model (or my understanding and representation of it in this post).

The aim here is not to argue that (or how) the model is wrong, nor to suggest places where the model should be expanded. Rather, the aim is to identify the complexities around implementation that aren’t visible in the model (but which may be in the report) and to use that to identify important/interesting/challenging areas for understanding and action, i.e. for me to think about the areas that interest me the most.

“Complexifying” educator uptake

The primary focus (shown within a green box) of the model appears to be encouraging the sustainable uptake of LA by educators. There are at least two ways to make this representation a bit more complex.

Uptake

Uptake is represented as a two-step process moving from Interested to Implementing. There seems to be scope to explore more broadly than just those two steps.

What about awareness? Arguably, LA is a buzz word and just about everyone may be aware of it. But are they? If they are aware, what is their conceptualisation of LA? Is it just a predictive tool? Is it even a tool?

Assuming they are aware, how many are actually already in the interested state?
I think @hazelj59 has done some research that might provide some answers about this.

Then there’s the 4 paths work that identifies at least two paths for implementing LA that aren’t captured here. These two paths involve doing it with (DIW) the educator, and enabling educator DIY. Rather than simply implementing LA, these paths see the teacher being involved in the construction of different LA, moving into the tool/data quality and research/learning elements of the model.

educator

The authors define “educator” to include administrative, support and teaching roles, yet the above model includes all educators in the one uptake process. The requirements/foci/capabilities of these different types of roles are going to be very different, and some of these types of educators are largely invisible in discussions around LA, e.g. there are currently no moves to provide the type of LA that would be useful to my team.

And of course, this doesn’t even mention the question of the learner. The report does explicitly mention a focus on Supporting student empowerment, drawing on a conception of learners that includes their need to develop agency, with LA’s role being to help students take responsibility for their learning.

Institutional data foundation: enabling ethics, privacy, multiple tools, and rapid innovation

While ethics isn’t mentioned in the model, the report does highlight ethical and privacy considerations as important.

When discussing tool/data quality the report mentions “an analytic tool or combination of tools that manage data inputs and generate outputs in the form of actionable feedback”. Given the complexity of LA implementation (see the above discussion) and the current realities of digital learning within higher education, it would seem unlikely that a single tool would ever be sufficient.

The report also suggests (Colvin et al, 2016, p. 22)

that the mature foundations for LA implementations were identified in institutions that adopted a rapid innovation cycle whereby small scale projects are initiated and outcomes quickly assessed within short time frames

Combined with the increasing diversity of data sources within an institution, these factors seem to suggest that having an institutional data foundation is a key enabler. Such a foundation could provide a common source for all relevant data to the different tools that are developed as part of a rapid innovation cycle. It might be possible to design the foundation so that it embeds institutional ethical, privacy, and other considerations.

Echoing the model, such a foundation wouldn’t need to be provided by a single tool. It might be a suite of different tools. However, the focus would be on encouraging the provision of a common data foundation used by tools that seek to manipulate that data into actionable insights.
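To make that a little more concrete, here is a minimal sketch (entirely hypothetical, not something that exists at my institution) of what the access layer of such a data foundation might look like: every LA tool asks for data through the one function, so privacy rules, here pseudonymising student identifiers, are applied in a single place rather than re-implemented in each tool. The source names, row format, and salt are all invented for illustration.

import hashlib

# Hypothetical sketch of a common institutional data foundation:
# all LA tools request data through one access layer, so privacy rules
# (here, pseudonymising student identifiers) live in a single place.

SALT = "institutional-secret"   # placeholder; would be an institutionally managed secret


def pseudonymise(student_id):
    """Replace a student identifier with a stable, non-reversible pseudonym."""
    return hashlib.sha256((SALT + str(student_id)).encode()).hexdigest()[:12]


def fetch_from_source(source, course, week):
    """Stand-in for the institution-specific plumbing (LMS database,
    lecture capture, etc.). Returns raw rows with identifiable student ids."""
    return [
        {"student_id": 1234, "activity": "book_view", "count": 12},
        {"student_id": 5678, "activity": "forum_post", "count": 3},
    ]


def get_activity(course, week, source="moodle"):
    """What any LA tool would call: same shape, already pseudonymised."""
    return [
        {"student": pseudonymise(row["student_id"]),
         "activity": row["activity"],
         "count": row["count"]}
        for row in fetch_from_source(source, course, week)
    ]


print(get_activity("EDC3100", week=3))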

Rapid innovation cycle and responding to context

The report argues that the successful adoption of LA (Colvin et al, 2016, pp. 22-23)

is dependent on an institution’s ability to rapidly recognise and respond to organisational culture and the concerns of all stakeholders

and argues that

the sector can further grow its LA capacity by encouraging institutions to engage in similarly diffuse, small-scale projects with effective evaluation that quickly identifies sites of success and potential impact (p. 22)

This appears to be key, but how do you do it? How does an institution create an environment that actively encourages and enables this type of “small-scale projects with effective evaluation”?

My current institution has the idea of Technology Demonstrators, which appears to resonate somewhat with this idea. However, I’m not sure that this project has yet solved the problem of “effective evaluation” or of how/when to scale beyond the initial project.

Adding in theory/educational research

In discussing LA, Rogers et al (2015, p. 233) argue

that effective interventions rely on data that is sensitive to context, and that the application of a strong theoretical framework is required for contextual interpretation

Where does the “strong theoretical framework” come from, if not educational and related literature/research? How do you include this?

Is this where someone (or some group) needs to take on the role of data wrangler to support this process?

How do you guide/influence uptake?

The report assumes that once the elements in the above model are working in concert to form a reinforcing feedback loop, LA will increasingly meet the real needs of learners and educators, which will in turn accelerate organisational uptake.

At least for me, this begs the question: How do they know – let alone respond to – the needs of learners and educators?

For me, this harks back to why I perceive that the Technology Acceptance Model (TAM) is useless. TAM views an individual’s intention to adopt a particular digital technology as being most heavily influenced by two factors: perceived usefulness, and perceived ease of use. i.e. if the LA is useful and easy to use, then uptake will happen.

The $64K question is what combination of features of an LA tool will be widely perceived by educators to be useful and easy to use? Islam (2014, p. 25) identifies the problem as

…despite the huge amount of research…not in a position to pinpoint…what attributes…are necessary in order to build a high level of satisfaction and which…generate dissatisfaction

I’ve suggested one possible answer but there are sure to be alternatives and they need to be developed and tested.

The “communities of transformation” approach appears likely to have important elements of a solution. Especially if combined with an emphasis on the DIW and DIY paths for implementing learning analytics.

The type of approach suggested in Mor et al (2015) might also be interesting.

Expanding beyond a single institution

Given that the report focuses on uptake of LA within an institution, the model focuses on factors within the institution. However, no institution is an island.

There are questions around how an institution’s approach to LA can usefully influence, and be influenced by, what is happening within the literature and at other institutions.

Future work

This future work can be framed as the following research questions:

  1. How/can you encourage improvement in the strategic capability without holding up uptake?
  2. How can an institution develop a data foundation for LA?
  3. How to support rapid innovation cycles, including effective evaluation, that quickly identifies sites of success and potential impact?
  4. Can the rapid innovation cycles be done in a distributed way across multiple teams?
  5. Can a combination of technology demonstrators and an institutional data foundation provide a way forward?
  6. How to support/encourage DIW and DIY approaches to uptake?
  7. Might an institutional data foundation and rapid innovation cycles be fruitfully leveraged to create an environment that helps combine learning design, student learning, and learning analytics? What impact might this have?

References

Box, G. E. P. (1979). Robustness in the Strategy of Scientific Model Building. In R. Launer & G. Wilkinson (Eds.), Robustness in Statistics (pp. 201–236). Academic Press.

Colvin, C., Wade, A., Dawson, S., Gasevic, D., Buckingham Shum, S., Nelson, K., … Fisher, J. (2016). Student retention and learning analytics : A snapshot of Australian practices and a framework for advancement. Canberra, ACT: Australian Government Office for Learning and Teaching. Retrieved from http://he-analytics.com/wp-content/uploads/SP13-3249_-Master17Aug2015-web.pdf
