Assembling the heterogeneous elements for (digital) learning


Are learning analytics leading us towards a utopian or dystopian future, and what can we as practitioners do to influence this?

What follows will eventually be a summary of my contribution to the ASCILITE’2017 panel titled Are learning analytics leading us towards a utopian or dystopian future, and what can we as practitioners do to influence this? Below you’ll find a summary of my prediction and the argument that underpins it, suggestions for more reading, the slides, and references.

Argument

The argument made in the following is that Vision #3 from the LACE Visions for the Future of Learning Analytics captures a likely future for learning analytics in Australian Higher Education. That is,

In 2025, analytics are rarely used in education

I don’t necessarily agree with some of the features of this particular vision. However, it does include predictions of many problems, poor implementation and limited use of learning analytics. Based on how Australian higher education currently approaches the use of digital technology in learning and teaching, this future appears likely (to me).

This is because the approach universities take to implementing learning analytics will focus on the development/adoption of an institutional learning analytics tool (or tools) and encouraging/managing the adoption of that tool within the institution. An approach that tends toward what Cavallo (2004) described as “explicitly top-down and hierarchical, and implicitly view[ing] education as a series of depersonalised, decontextualised steps carried out by willing, receptive, non-transforming agents” (p. 96). An approach that assumes there is one learning analytics tool that can be scaled across the institution. One size fits all.

It’s an approach that fails to effectively engage with what Gašević et al. (2015) describe as a significant tension between course (unit) specific models and general models. A tension that echoes the reusability paradox (Wiley, n.d.) in that general models “represent a cost effective &…efficient approach” (Gašević et al., 2015, p. 83), but at the cost of pedagogical value. In learning and teaching, one size does NOT fit all.

In terms of what can be done, the suggestion is to focus on an approach designed to help find the right size for each context. An approach that effectively engages with the significant tension of the reusability paradox and aims to work toward maximising pedagogical value.

In terms of specifics, the following offers some early suggestions. First, avoid taking a deficit model of teachers around both digital technology and learning and teaching. Adopt alternative ontological perspectives as the basis for planning. Focus on creating an environment and digital technology platforms that encourage the co-development of contextually specific, embedded and protean learning analytics interventions. Preferably linked with activities focused on helping provide teaching staff with the opportunity to “experience powerful personal experiences” (Cavallo, 2004, p. 102) around how teaching as design combined with learning analytics can respond to their problems and desires.

In addition, the approach should focus on enabling and encouraging teacher DIY learning analytics. DIY learning analytics involves teachers customising learning analytics in different ways to fit their context. Not only is this a way to increase the pedagogical value of learning analytics, but it may be the only way to achieve learning analytics at scale, as Gunn et al (2005) write:

…only when the…end users of technology add their requirements, experience and professional practice that mainstream integration is achieved (p. 190)

More reading

Some suggestions for more reading include

  • How to organise a child’s birthday party is a YouTube video sharing a learning story that examines how different perspectives influence how you might organise such a party.
  • Cavallo (2004) makes the case for the limitations of the traditional approach in the context of schooling.
  • Gunn et al (2005) make the case that supporting teachers in repurposing learning objects is essential to ensuring adoption and sustainability of learning objects.
  • Jones and Clark (2014) outline the two different mindsets and illustrate the difference in the context of learning analytics.
  • Jones et al (2017) report on an example of teacher DIY learning analytics (originally described in Jones and Clark, 2014) and draw some implications for the institutional implementation of learning analytics.
  • Learning analytics, complex adaptive systems and meso-level practitioners: A way forward offers early plans for using an alternative ontology to address the question of learning analytics within higher education.

Slides

View below or download the PowerPoint slides.

References

Cavallo, D. (2004). Models of growth – Towards fundamental change in learning environments. BT Technology Journal, 22(4), 96–112.

Dede, C. (2008). Theoretical perspectives influencing the use of information technology in teaching and learning. In J. Voogt & G. Knezek (Eds.), International Handbook of Information Technology in Primary and Secondary Education (pp. 43–62). New York: Springer.

Gašević, D., Dawson, S., Rogers, T., & Gasevic, D. (2015). Learning analytics should not promote one size fits all: The effects of instructional conditions in predicting learning success. The Internet and Higher Education, 28, 68–84. https://doi.org/10.1016/j.iheduc.2015.10.002

Gunn, C., Woodgate, S., & O’Grady, W. (2005). Repurposing learning objects: a sustainable alternative? ALT-J, 13(3), 189–200.

Mishra, P., & Koehler, M. (2006). Technological pedagogical content knowledge: A framework for teacher knowledge. Teachers College Record, 108(6), 1017–1054.

Tyack, D., & Cuban, L. (1995). Tinkering towards utopia: A century of public school reform. Cambridge, MA: Harvard University Press.

Wiley, D. (n.d.). The Reusability Paradox. Retrieved from http://cnx.org/content/m11898/latest/

Yoo, Y., Boland, R. J., Lyytinen, K., & Majchrzak, A. (2012). Organizing for Innovation in the Digitized World. Organization Science, 23(5), 1398–1408.

Teacher DIY learning analytics – implications & questions for institutional learning analytics

The following provides a collection of information and resources associated with a paper and presentation given at ALASI 2017 – the Australian Learning Analytics Summer Institute in Brisbane on 30 November, 2017. Below you’ll find an abstract, a recording of a version of the presentation, the presentation slides and the references.

The paper examines the DIY development and use of a particular application of learning analytics (known as Know thy student) within a single course during 2015 and 2016. The paper argues that, given the limitations of what is known about the institutional implementation of learning analytics, examining teacher DIY learning analytics can reveal some interesting insights. The paper identifies three implications and three questions.

Three implications

  1. Institutional learning analytics currently falls short of an important goal.

    If the goal of learning analytics is that “of getting key information to a human being who can use it” (Baker, 2016, p. 607) then institutional learning analytics is falling short, and not just at a specific institution.

  2. Embedded, ubiquitous, contextual learning analytics encourages greater use and enables emergent practice.

    This case suggests that learning analytics interventions designed to provide useful contextual data, embedded ubiquitously throughout the learning environment, can enable significant levels of usage, including usage that was unplanned, emerged from experience, and changed practice.

    In this case, Know thy student was used by the teacher on 666 different days (~91% of the days that the tool was available) to find out more about ~90% of the enrolled students. Graphical representations of this usage are provided below.

  3. Teacher DIY learning analytics is possible.

    Know thy student was implemented by a single academic using a laptop, widely available software (including some coding), and existing institutional data sources.

Three questions

  1. Does institutional learning analytics have an incomplete focus?

    Research and practice around the institutional implementation of learning analytics appears to focus on “at scale”: learning analytics that can be used across multiple courses or an entire institution. That focus appears to come at the expense of course- or learning-design-specific analytics, which appear to be more useful.

  2. Does the institutional implementation of learning analytics have an indefinite postponement problem?

    Aspects of Know thy student are specific to the particular learning design within a single course. Such a course-specific requirement would appear unlikely ever to have been implemented through existing institutional learning analytics processes. It would have been indefinitely postponed.

  3. If and how do we enable teacher DIY learning analytics?

    This case suggests that teacher DIY learning analytics is possible and potentially overcomes limitations in current institutional implementation of learning analytics. However, it’s also not without its challenges and limitations. Should institutions support teacher DIY learning analytics? How might that be done?

Usage

The following heat map shows the number of times Know thy student was used on each day during 2015 and 2016.

Know thy student usage clicks per day

The following bar graph contains 761 “bars”. Each bar represents a unique student enrolled in this course. The size of the bar shows the number of times Know thy student was used for that particular student. (One “student” was obviously used for testing purposes during the development of the tool.)

Know thy student usage clicks per student

Abstract

The paper on which the presentation is based has the following abstract.

Learning analytics promises to provide insights that can help improve the quality of learning experiences. Since the late 2000s it has inspired significant investments in time and resources by researchers and institutions to identify and implement successful applications of learning analytics. However, there is limited evidence of successful at scale implementation, somewhat limited empirical research investigating the deployment of learning analytics, and subsequently concerns about the insight that guides the institutional implementation of learning analytics. This paper describes and examines the rationale, implementation and use of a single example of teacher do-it-yourself (DIY) learning analytics to add a different perspective. It identifies three implications and three questions about the institutional implementation of learning analytics that appear to generate interesting research questions for further investigation.

Presentation recording

The following is a recording of a talk given at CQUni a couple of weeks after ALASI. It uses the same slides as the original ALASI presentation; however, without a time limit, the description is a little more expansive.

Slides

Also view and download here.

References

Baker, R. (2016). Stupid Tutoring Systems, Intelligent Humans. International Journal of Artificial Intelligence in Education, 26(2), 600–614. https://doi.org/10.1007/s40593-016-0105-0

Behrens, S. (2009). Shadow systems: the good, the bad and the ugly. Communications of the ACM, 52(2), 124–129.

Colvin, C., Dawson, S., Wade, A., & Gašević, D. (2017). Addressing the Challenges of Institutional Adoption. In C. Lang, G. Siemens, A. F. Wise, & D. Gaševic (Eds.), The Handbook of Learning Analytics (1st ed., pp. 281–289). Alberta, Canada: Society for Learning Analytics Research (SoLAR).

Corrin, L., Kennedy, G., & Mulder, R. (2013). Enhancing learning analytics by understanding the needs of teachers. In Electric Dreams. Proceedings ascilite 2013 (pp. 201–205).

Díaz, O., & Arellano, C. (2015). The Augmented Web: Rationales, Opportunities, and Challenges on Browser-Side Transcoding. ACM Trans. Web, 9(2), 8:1–8:30. https://doi.org/10.1145/2735633

Dron, J. (2014). Ten Principles for Effective Tinkering (pp. 505–513). Presented at the E-Learn: World Conference on E-Learning in Corporate, Government, Healthcare, and Higher Education, Association for the Advancement of Computing in Education (AACE).

Ferguson, R. (2014). Learning analytics FAQs. Education. Retrieved from https://www.slideshare.net/R3beccaF/learning-analytics-fa-qs

Gašević, D., Dawson, S., Rogers, T., & Gasevic, D. (2015). Learning analytics should not promote one size fits all: The effects of instructional conditions in predicting learning success. The Internet and Higher Education, 28, 68–84. https://doi.org/10.1016/j.iheduc.2015.10.002

Germonprez, M., Hovorka, D., & Collopy, F. (2007). A theory of tailorable technology design. Journal of the Association of Information Systems, 8(6), 351–367.

Grover, S., & Pea, R. (2013). Computational Thinking in K-12: A Review of the State of the Field. Educational Researcher, 42(1), 38–43. https://doi.org/10.3102/0013189X12463051

Hatton, E. (1989). Levi-Strauss’s Bricolage and Theorizing Teachers’ Work. Anthropology and Education Quarterly, 20(2), 74–96.

Johnson, L., Adams Becker, S., Estrada, V., & Freeman, A. (2014). NMC Horizon Report: 2014 Higher Education Edition (No. 9780989733557). Austin, Texas. Retrieved from http://www.nmc.org/publications/2014-horizon-report-higher-ed

Jones, D., & Clark, D. (2014). Breaking BAD to bridge the reality/rhetoric chasm. In B. Hegarty, J. McDonald, & S. Loke (Eds.), Rhetoric and Reality: Critical perspectives on educational technology. Proceedings ascilite Dunedin 2014 (pp. 262–272).

Kay, A., & Goldberg, A. (1977). Personal Dynamic Media. Computer, 10(3), 31–41.

Ko, A. J., Abraham, R., Beckwith, L., Blackwell, A., Burnett, M., Erwig, M., … Wiedenbeck, S. (2011). The State of the Art in End-user Software Engineering. ACM Comput. Surv., 43(3), 21:1–21:44. https://doi.org/10.1145/1922649.1922658

Kruse, A., & Pongsajapan, R. (2012). Student-Centered Learning Analytics (CNDLS Thought Papers). Georgetown University. Retrieved from https://cndls.georgetown.edu/m/documents/thoughtpaper-krusepongsajapan.pdf

Levi-Strauss, C. (1966). The Savage Mind. Weidenfeld and Nicolson.

Liu, D. Y.-T. (2017). What do Academics really want out of Learning Analytics? – ASCILITE TELall Blog. Retrieved August 27, 2017, from http://blog.ascilite.org/what-academics-really-want-out-of-learning-analytics/

Liu, D. Y.-T., Bartimote-Aufflick, K., Pardo, A., & Bridgeman, A. J. (2017). Data-Driven Personalization of Student Learning Support in Higher Education. In A. Peña-Ayala (Ed.), Learning Analytics: Fundaments, Applications, and Trends (pp. 143–169). Springer International Publishing. https://doi.org/10.1007/978-3-319-52977-6_5

Lonn, S., Aguilar, S., & Teasley, S. D. (2013). Issues, Challenges, and Lessons Learned when Scaling Up a Learning Analytics Intervention. In Proceedings of the Third International Conference on Learning Analytics and Knowledge (pp. 235–239). New York, NY, USA: ACM. https://doi.org/10.1145/2460296.2460343

MacLean, A., Carter, K., Lövstrand, L., & Moran, T. (1990). User-tailorable Systems: Pressing the Issues with Buttons. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems (pp. 175–182). New York, NY, USA: ACM. https://doi.org/10.1145/97243.97271

Mishra, P., & Koehler, M. (2006). Technological pedagogical content knowledge: A framework for teacher knowledge. Teachers College Record, 108(6), 1017–1054.

Norman, D. A. (1993). Things that make us smart: defending human attributes in the age of the machine. Cambridge, Mass: Perseus.

Repenning, A., Webb, D., & Ioannidou, A. (2010). Scalable Game Design and the Development of a Checklist for Getting Computational Thinking into Public Schools. In Proceedings of the 41st ACM Technical Symposium on Computer Science Education (pp. 265–269). New York, NY, USA: ACM. https://doi.org/10.1145/1734263.1734357

Scanlon, E., Sharples, M., Fenton-O’Creevy, M., Fleck, J., Cooban, C., Ferguson, R., … Waterhouse, P. (2013). Beyond prototypes: Enabling innovation in technology‐enhanced learning. London. Retrieved from http://tel.ioe.ac.uk/wp-content/uploads/2013/11/BeyondPrototypes.pdf

Sinha, R., & Sudhish, P. S. (2016). A principled approach to reproducible research: a comparative review towards scientific integrity in computational research. In 2016 IEEE International Symposium on Ethics in Engineering, Science and Technology (ETHICS) (pp. 1–9). https://doi.org/10.1109/ETHICS.2016.7560050

Wiley, D. (n.d.). The Reusability Paradox. Retrieved from http://cnx.org/content/m11898/latest/

Wiliam, D. (2006). Assessment: Learning communities can use it to engineer a bridge connecting teaching and learning. JSD, 27(1).

Yoo, Y., Boland, R. J., Lyytinen, K., & Majchrzak, A. (2012). Organizing for Innovation in the Digitized World. Organization Science, 23(5), 1398–1408.

Zittrain, J. L. (2006). The Generative Internet. Harvard Law Review, 119(7), 1974–2040.

Improving teacher awareness, action and reflection on learner activity

The following post contains the content from a poster designed for the 2017 USQ Toowoomba L&T celebration event. It provides some rationale for a technology demonstrator at USQ based on the Moodle Activity Viewer.

What is the problem?

Learner engagement is a key to learner success. Most definitions of learner engagement include “actively participating, interacting, and collaborating with students, faculty, course content and members of the community” (Angelino & Natvig, 2009, p. 3).

70% of USQ students study online. By mid-November 2017, 26,754 students had been active in USQ’s Moodle LMS.

In online learning, the absence of visual cues makes teacher awareness of student activity difficult (Govaerts, Verbert, & Duval, 2011). Richardson (2011) identifies “the role which teaching staff play in inspiring, challenging and engaging students” as “perhaps the most woefully neglected aspect of quality in higher education” (p. 2).

Learning analytics (LA) is the “use of (big) data to provide actionable intelligence for learners and teachers” (Ferguson, 2014). However, current tools provide poor data aggregation, poor visualisation capabilities and have other limitations that inhibit teachers’ ability to: understand student activity; respond appropriately; and, reflect on course design (Dawson & McWilliam, 2008; Corrin et al, 2013; Jones, & Clark, 2014).

How will it be addressed?

Teachers can be supported through tools that help them “analyse, appraise and improve practices in their everyday activity systems” (Knight et al., 2006, p. 337).

This Technology Demonstrator has implemented and will customise and scaffold the use of the Moodle Activity Viewer (MAV) within the USQ activity system.

The MAV is a useful and easy-to-use tool that provides representations of student activity from within all Moodle learning spaces, along with affordances to support teacher intervention and further analysis.

MAV - How many students

MAV’s overlay answering the question how many and what percentage of students have accessed each Moodle activity & resource?
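
Conceptually, the overlay is a simple aggregation over Moodle’s click logs. The following is a rough Python sketch of that aggregation, purely illustrative – it is not MAV’s actual implementation, which pairs a browser add-on with a separate server.

from collections import defaultdict

def access_summary(clicks, enrolled_count):
    """clicks: iterable of (student_id, activity_id) pairs taken from the Moodle logs.
    Returns {activity_id: (number_of_students, percentage_of_enrolled)}."""
    students_per_activity = defaultdict(set)
    for student_id, activity_id in clicks:
        students_per_activity[activity_id].add(student_id)
    return {
        activity: (len(students), 100.0 * len(students) / enrolled_count)
        for activity, students in students_per_activity.items()
    }

# e.g. access_summary([(1, 'forum'), (2, 'forum'), (1, 'book')], enrolled_count=4)
# -> {'forum': (2, 50.0), 'book': (1, 25.0)}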

What are the expected outcomes?

The project aims to explore two questions:

  1. If and how does the provision of contextual, useful, and easy to use representations of online learner activity help teachers analyse, appraise and improve their practices?
  2. If and how does this change in teacher activity influence learner activity and learning outcomes?

MAV - How many clicks

MAV’s overlay answering the question how many times have those students clicked on each Moodle activity & resource?

Want to learn more?

Ask for a demonstration of MAV during the poster session.

USQ staff can learn more* about and start using MAV from http://tiny.cc/aboutmav and http://tiny.cc/installmav

* (Only from a USQ campus or via the USQ VPN)

MAV - How many students in a forum

MAV’s overlay answering the question how many and what percentage of students have read posts in this introductory activity?

MAV - Who accessed and how to contact them

MAV’s student access dialog providing details of, and enabling teacher contact with, the students who have accessed the “Fix my class IWB” forum.

References

Angelino, L. M., & Natvig, D. (2009). A Conceptual Model for Engagement of the Online Learner. Journal of Educators Online, 6(1), 1–19.

Corrin, L., Kennedy, G., & Mulder, R. (2013). Enhancing learning analytics by understanding the needs of teachers. In Electric Dreams. Proceedings ascilite 2013 (pp. 201–205).

Dawson, S., & McWilliam, E. (2008). Investigating the application of IT generated data as an indicator of learning and teaching performance. Queensland University of Technology and the University of British Columbia.

Ferguson, R. (2014). Learning analytics FAQs. Education. Retrieved from https://www.slideshare.net/R3beccaF/learning-analytics-fa-qs

Govaerts, S., Verbert, K., & Duval, E. (2011). Evaluating the Student Activity Meter: Two Case Studies. In Advances in Web-Based Learning – ICWL 2011 (pp. 188–197). Springer, Berlin, Heidelberg. https://doi.org/10.1007/978-3-642-25813-8_20

Jones, D., & Clark, D. (2014). Breaking BAD to bridge the reality/rhetoric chasm. In B. Hegarty, J. McDonald, & S.-K. Loke (Eds.), Proceedings of the 31st Annual Conference of the Australasian Society for Computers in Learning in Tertiary Education (ASCILITE 2014) (pp. 262–272). Sydney, Australia: Macquarie University.

Knight, P., Tait, J., & Yorke, M. (2006). The professional learning of teachers in higher education. Studies in Higher Education, 31(3), 319–339. https://doi.org/10.1080/03075070600680786

Introducing the Moodle Activity Viewer (MAV) & digital reno

What follows are the resources associated with a workshop being run at the University of Southern Queensland. As the title suggests, the aim is to get USQ folk started using the Moodle Activity Viewer to explore usage of Moodle activities and resources, and to briefly introduce the idea of digital renovation.

Apart from the presentation slides and references below, other related resources include:

  • Instructions for installing the MAV for USQ staff.

    Note: can only be accessed when on a USQ campus network (or the USQ VPN).

  • Additional details on other USQ digital reno tools

    Note: can only be accessed when on a USQ campus network (or the USQ VPN).

Slides

References

Jones, D., & Clark, D. (2014). Breaking BAD to bridge the reality/rhetoric chasm. In B. Hegarty, J. McDonald, & S. Loke (Eds.), Rhetoric and Reality: Critical perspectives on educational technology. Proceedings ascilite Dunedin 2014 (pp. 262–272).

Goodyear, P., & Dimitriadis, Y. (2013). In medias res: reframing design for learning. Research in Learning Technology, 21, 1–13. https://doi.org/10.3402/rlt.v21i0.19909

Norman, D. A. (1993). Things that make us smart: defending human attributes in the age of the machine. Cambridge, Mass: Perseus.

Implications and questions for institutional learning analytics implementation arising from teacher DIY learning analytics

David Jones, Hazel Jones, Colin Beer, Celeste Lawson, Implications and questions for institutional learning analytics implementation arising from teacher DIY learning analytics, To appear in the proceedings of the 2017 Australian Learning Analytics Summer Institute (ALASI 2017)

Abstract

Learning analytics promises to provide insights that can help improve the quality of learning experiences. Since the late 2000s it has inspired significant investments in time and resources by researchers and institutions to identify and implement successful applications of learning analytics. However, there is limited evidence of successful at scale implementation, somewhat limited empirical research investigating the deployment of learning analytics, and subsequently concerns about the insight that guides the institutional implementation of learning analytics. This paper describes and examines the rationale, implementation and use of a single example of teacher do-it-yourself (DIY) learning analytics to add a different perspective. It identifies three implications and three questions about the institutional implementation of learning analytics that appear to generate interesting research questions for further investigation.

Introduction

Learning analytics has been receiving attention since the late noughties. The promise of data driven decision making and the nature of the higher education environment – decreasing funding, increasing focus on quality, increasing use of technology enhanced learning (TEL) – is seen as making the institutional adoption of learning analytics an imperative for institutions of higher education (Macfadyen, Dawson, Pardo, & Gasevic, 2014, p. 17). By 2017, there appears to have been sufficient time and resources invested to realise the affordances learning analytics offers to education at the whole-of-institution scale (Colvin, Dawson, Wade, & Gašević, 2017), especially given predictions in 2012 that it was one year away from mainstream adoption within the Australian Higher Education sector (Johnson, Adams, & Cummins, 2012). However, there are only a small number of institutions that have demonstrated impact on learning and teaching outcomes through large-scale learning analytics programs (Ferguson, Clow, et al., 2014) and there are concerns that there remains limited evidence of the effectiveness of learning analytics at scale, or sufficient understanding to guide successful implementation (Colvin et al., 2017; Ferguson, Macfadyen, et al., 2014).

To address this concern there is a growing conceptual literature offering various models and frameworks to guide learning analytics adoption. Colvin et al. (2017) categorise and analyse this literature and argue that “while the models afford insight, they do not capture the breadth of factors that shape LA implementations” (p. 284). As a result these models are unable to provide those responsible for institutional implementation of learning analytics “the nuanced, situated, fine-grained insight they require to guide them through learning analytics implementation” (Colvin et al., 2017, p. 284). Such a restriction could be addressed through empirical research that examines the “burgeoning, albeit nascent implementations found across higher education institutions” (Colvin et al., 2017, p. 285). Research by Colvin et al. (2015) offers one valuable contribution; however, there are limitations. One such limitation is the focus on the perspectives from one set of participants involved in learning analytics projects: senior leaders charged with responsibility for implementation. While an important source of insight, this focus perhaps echoes the lack of human-centeredness that pervades learning analytics implementation (Liu, Bartimote-Aufflick, Pardo, & Bridgeman, 2017) and tends “to privilege the administrator rather than the student – or even the instructor” (Kruse & Pongsajapan, 2012, p. 4). This limitation raises questions such as:

What is the experience of students and teachers using institutional learning analytics? How might an understanding of their experience inform the institutional implementation of learning analytics?

It is these questions that this paper seeks to explore, with a particular focus on the experience of teaching staff. To do this, it describes a single teacher’s experience developing and using a do-it-yourself (DIY) approach to learning analytics. The paper starts by describing this approach and then draws from it three implications and three questions for institutional implementation of learning analytics.

Know thy student

During 2015 and 2016 one of the authors developed and used a DIY learning analytics tool (Know thy student) within a third-year Bachelor of Education course. Offered twice a year, the course had an annual enrolment of 400+ students. Two-thirds of these students studied online only, and less than 15% were ever likely to meet the course examiner in person. The design of the course focused explicitly on making significant use of a Moodle course site and sought to encourage: significant active student online engagement; formative assessment; student reflection via individual blogs; and, use of social bookmarking. Know thy student was developed to address limitations in existing institutional systems and enable more meaningful responses to student queries. The tool was inspired by and built on top of the Moodle Activity Viewer (MAV) developed at CQUniversity (Jones and Clark, 2014). While the tool interacted with, and extracted information from, a number of institutional systems, it could only be used from the implementer’s laptop to interact with the specific course site.

When in use, Know thy student modified every page of the course site viewed by the teacher. It added a [details] link wherever a link to a user profile appeared, as illustrated in Figure 1.

Forum post + more student details

Figure 1 – Modified course page

Clicking on one of the [details] links would open a new pop-up window (Figure 2) to provide access to information about the student. The pop-up window provided information in three separate tabs, including: personal details (Figure 2); activity completion (Figure 3); and, blog posts (Figure 4). Know thy student provided the examiner with ubiquitous and embedded access to course specific information about each student enrolled in the course.

Student background

Figure 2 – Personal details

Across four offerings of the course in 2015 and 2016, the teacher used the tool 3,100 separate times to access information on 761 different students, representing 89.5% of the enrolled students. For one student, the tool was used 32 separate times. The median number of uses per student was three.

Initially, most of this use was generated when answering student questions on course discussion forums. However, the embedded and ubiquitous availability of the [details] link enabled other unplanned uses. For example, the course home page provided a list of all course participants who had recently logged into the course site. As designed, Know thy student would add a [details] link to this list. This modification to the learning environment encouraged the development of a practice where the teacher would use that link to proactively learn more about students. In turn, this led to an increase in engaging with students via their blog posts and other means. Since the tool was simple and easily within grasp, it provided a platform that encouraged more meaningful and unexpected connections with hundreds of students.

Implications and questions for learning analytics implementation

Analysis and discussion about the case have led the authors to suggest three implications about and three questions for the institutional implementation of learning analytics. Given the exploratory nature of this research these are tentative suggestions, and each implication and question in turn generates additional questions for further investigation.

Implication #1: Institutional learning analytics currently falls short of an important goal

Baker (2016) identifies a common goal shared by learning analytics systems, that “of getting key information to a human being who can use it” (p. 607). This case shows that at least one institution’s approach to learning analytics is falling short of this goal, and there are indications that this problem is not limited to a single institution. Almost 10 years ago, Dawson & McWilliam (2008) commented on how poor the LMS data aggregation and visualisation tools of the day were in helping academics understand student learning behaviour. In 2013, focus groups of academics from the University of Melbourne identified a common need to be better able to correlate data from different institutional systems (Corrin, Kennedy, & Mulder, 2013). A recent unpublished experiment at another institution by one of the co-authors of this paper identified that gathering relevant information for ten post-graduate students took over an hour and required the use of five separate information systems owned by three separate institutional departments. This reinforces the observation from Liu (2017) that academics “rarely have the data that they actually want in a place and form where it can actually be used”.

How widespread is this apparent failure? What are the factors contributing to this apparent failure? What can be done to address it?

Student activity completion

Figure 3 – Activity completion

Implication #2: Embedded, ubiquitous, contextual learning analytics enable emergent practice

Experience from this case suggests that providing useful contextual data appropriately embedded ubiquitously throughout the learning environment can enable unplanned and effective interventions. In this case, being able to access student and course specific information throughout the learning environment enabled the teacher to adopt the unplanned practice of proactively connecting with students. Arguably, this may fit with characterisations of teachers as bricoleurs focused on making do with and creatively repurposing the tools that are at hand (Hatton, 1989). Providing contextually appropriate tools, however, is difficult given the sheer diversity involved in education where “there is no single technological solution that applies for every teacher, every course, or every view of teaching” (Mishra & Koehler, 2006, p. 1029).

Does the provision of embedded, ubiquitous and contextual learning analytics increase and encourage greater adoption and bricolage by teachers with learning analytics? What impact would that have on the learning experience? Given the inherent diversity in education, how can institutional learning analytics provide contextually appropriate learning analytics?

Sentiment analysis of blog posts

Figure 4 – Sentiment analysis of blog posts

Implication #3: Teacher DIY learning analytics is possible

This case shows that technically literate academics are able to leverage available technologies to implement and use teacher DIY learning analytics. The notion of end-user development is not new with “[m]ost programs today … written not by professional software developers, but by people with expertise in other domains working towards goals for which they need computational support” (Ko et al., 2011, p. 21). Such work can be seen as undesirable due to concerns about inefficiency, error, support, scalability, privacy and security. However, it can also address limitations and flaws in provided systems (Koopman & Hoffman, 2003).

How is DIY learning analytics viewed in relation to the institutional implementation of learning analytics? Is it something to be prevented, or enabled and encouraged? Given technology trends, can it be prevented?

Question #1: Does institutional learning analytics have an incomplete focus?

The common response to seeing the Know thy student tool is to ask if and how it can be reused in other courses. Such a response aims to understand if and how this particular learning analytics tool can “make the leap from the focused and particular to the broad and general” (Lonn et al., 2013, p. 235). This echoes what is seen as the core goal for most learning analytics projects: “to move from small-scale research towards broader institutional implementation” (Ferguson, Macfadyen, et al., 2014, p. 120). However, if “there is no single technological solution that applies for every teacher, every course, or every view of teaching” (Mishra & Koehler, 2006, p. 1029), then how can a broad and general focus effectively respond to diverse contextual requirements? How can the institutional implementation of learning analytics address concerns that it is focused at an “institutional scale rather than a human scale” (Kruse & Pongsajapan, 2012)? Should and can its focus be expanded to include both the human and institutional scale?

Question #2: Does the institutional implementation of learning analytics have an indefinite postponement problem?

In seeking to move learning analytics beyond a research project to institutional scale, Lonn et al. (2013) partnered with a university’s Information Technology (IT) service. A first step in their project involved the IT service assessing the feasibility of the project and placing “it in their timeline of priorities” (p. 236); subsequently the project “was delayed due to existing projects … that were a higher priority for the institution” (Lonn et al., 2013, p. 238). Given the typical prioritisation scheme used by a university IT service, a tool like Know thy student, which focuses on a need from a single course, is unlikely to ever be of sufficient priority to be actioned at the institutional level. It will be indefinitely postponed.

Would learning analytics that are specific to the learning designs within a single course ever be implemented by institutional IT? Would such a project be indefinitely postponed? What impact does this have on the institutional implementation of learning analytics? Should and can this problem be addressed?

Question #3: If and how do we enable teacher DIY learning analytics?

The above has suggested that teacher (and perhaps student) DIY learning analytics may make a useful contribution to institutional learning analytics implementation. However, there are numerous significant questions around if and how it can be achieved, including: whether or not it can be integrated sustainably into institutional implementation; and whether or not teaching staff have sufficient data and technical literacy to contribute effectively.

In terms of institutional implementation, Colvin et al (2017) provide recommendations necessary for sustainable learning analytics adoption that could offer useful guidance. In addition, there are projects like that described by Liu et al (2017) that are actively using such recommendations to support a level of teacher DIY learning analytics. The challenge is that enabling and encouraging teacher DIY learning analytics appears to represent a mindset that is incommensurable with the assumptions underpinning the majority of contemporary institutional practices (Jones & Clark, 2014). There is also research suggesting that the convergent and generative characteristics of pervasive digital technology requires the development of radically different approaches to corporate IT infrastructures and organisational strategic frameworks (Yoo, Boland, Lyytinen, & Majchrzak, 2012).

The low digital fluency of teaching staff has been identified as a significant challenge impeding the adoption of digital technology within higher education (Johnson, Adams Becker, Estrada, & Freeman, 2014). If low digital fluency is challenging the effective use of digital technologies by teaching staff, then it does raise questions about the likelihood of teacher DIY learning analytics. However, research in end user development suggests that such DIY practices are already happening and that such practices have positive impacts on the quantity and quality of adoption of digital technologies (Ko et al., 2011; Koopman & Hoffman, 2003). Finally, Scanlon et al. (2013) observe that the complexity of technology-enhanced learning – such as learning analytics – means that accepting “‘user-driven’ contributions from both teachers and students” (p. 34) may be necessary “to allow for effective intervention” and in order to understand the complexity of practices that is the “context for any particular TEL innovation” (p. 34).

Conclusion

This paper has briefly described a single case of teacher DIY learning analytics, which raises a number of implications and questions for the institutional implementation of learning analytics. It is suggested that empirical research moving beyond those in charge of the institutional implementation of learning analytics to those living with such systems can deepen the understanding of current experience with such systems and subsequently contribute improvements. From this case it appears that current approaches are failing to meet a potentially important goal of “getting key information to a human being who can use it” (Baker, 2016, p. 607). The paper has asked whether or not this may be due to learning analytics over-emphasising the broad at the expense of the specific or contextual. It may also be due to the nature of how institutional IT projects are prioritised leading to indefinite postponement of contextually specific projects. The case illustrates that technological trends are making teacher DIY learning analytics possible, if only in very limited situations, and has provided an indication that ubiquitous, embedded and contextual learning analytics can enable and encourage positive and unplanned usage. This suggests that enabling and encouraging teacher DIY learning analytics, in the form of more generative institutional learning analytics implementations, may offer an interesting and fruitful direction.

References

Baker, R. (2016). Stupid Tutoring Systems, Intelligent Humans. International Journal of Artificial Intelligence in Education, 26(2), 600–614. https://doi.org/10.1007/s40593-016-0105-0

Colvin, C., Dawson, S., Wade, A., & Gašević, D. (2017). Addressing the Challenges of Institutional Adoption. In C. Lang, G. Siemens, A. F. Wise, & D. Gaševic (Eds.), The Handbook of Learning Analytics (1st ed., pp. 281–289). Alberta, Canada: Society for Learning Analytics Research (SoLAR).

Corrin, L., Kennedy, G., & Mulder, R. (2013). Enhancing learning analytics by understanding the needs of teachers. In Electric Dreams. Proceedings ascilite 2013 (pp. 201–205).

Dawson, S., & McWilliam, E. (2008). Investigating the application of IT generated data as an indicator of learning and teaching performance. Queensland University of Technology and the University of British Columbia.

Ferguson, R., Macfadyen, L. P., Clow, D., Tynan, B., Alexander, S., & Dawson, S. (2014). Setting Learning Analytics in Context: Overcoming the Barriers to Large-Scale Adoption. Journal of Learning Analytics, 1(3), 120–144. https://doi.org/10.18608/jla.2014.13.7

Hatton, E. (1989). Levi-Strauss’s Bricolage and Theorizing Teachers’ Work. Anthropology and Education Quarterly, 20(2), 74–96.

Johnson, L., Adams Becker, S., Estrada, V., & Freeman, A. (2014). NMC Horizon Report: 2014 Higher Education Edition (No. 9780989733557). Austin, Texas.

Johnson, L., Adams, S., & Cummins, M. (2012). Technology Outlook for Australian Tertiary Education 2012-2017: An NMC Horizon Report Regional Analysis (No. 9780984660155). Austin, Texas.

Jones, D., & Clark, D. (2014). Breaking BAD to bridge the reality/rhetoric chasm. In B. Hegarty, J. McDonald, & S. Loke (Eds.), Rhetoric and Reality: Critical perspectives on educational technology. Proceedings ascilite Dunedin 2014 (pp. 262–272).

Ko, A. J., Abraham, R., Beckwith, L., Blackwell, A., Burnett, M., Erwig, M., … Wiedenbeck, S. (2011). The State of the Art in End-user Software Engineering. ACM Computing Surveys, 43(3), 21:1–21:44. https://doi.org/10.1145/1922649.1922658

Koopman, P., & Hoffman, R. (2003). Work-arounds, make-work and kludges. Intelligent Systems, IEEE, 18(6), 70–75.

Kruse, A., & Pongsajapan, R. (2012). Student-Centered Learning Analytics (CNDLS Thought Papers). Georgetown University.

Liu, D. Y.-T. (2017). What do Academics really want out of Learning Analytics? – ASCILITE TELall Blog. Retrieved August 27, 2017

Liu, D. Y.-T., Bartimote-Aufflick, K., Pardo, A., & Bridgeman, A. J. (2017). Data-Driven Personalization of Student Learning Support in Higher Education. In A. Peña-Ayala (Ed.), Learning Analytics: Fundaments, Applications, and Trends (pp. 143–169). Springer International Publishing. https://doi.org/10.1007/978-3-319-52977-6_5

Lonn, S., Aguilar, S., & Teasley, S. D. (2013). Issues, Challenges, and Lessons Learned when Scaling Up a Learning Analytics Intervention. In Proceedings of the Third International Conference on Learning Analytics and Knowledge (pp. 235–239). New York, NY, USA: ACM. https://doi.org/10.1145/2460296.2460343

Macfadyen, L. P., Dawson, S., Pardo, A., & Gasevic, D. (2014). Embracing big data in complex educational systems: The learning analytics imperative and the policy challenge. Research and Practice in Assessment, 9(Winter), 17–28.

Mishra, P., & Koehler, M. (2006). Technological pedagogical content knowledge: A framework for teacher knowledge. Teachers College Record, 108(6), 1017–1054.

Scanlon, E., Sharples, M., Fenton-O’Creevy, M., Fleck, J., Cooban, C., Ferguson, R., … Waterhouse, P. (2013). Beyond prototypes: Enabling innovation in technology‐enhanced learning. London.

Yoo, Y., Boland, R. J., Lyytinen, K., & Majchrzak, A. (2012). Organizing for Innovation in the Digitized World. Organization Science, 23(5), 1398–1408.

Embedding plotly graphs in WordPress posts


Last year I started using Perl to play with analytics around Moodle Book usage. This year, @beerc and I have started playing with Jupyter Notebooks and Python to explore analytics for meso-level practitioners (Hannon, 2013). Plotly provides a fairly useful platform for generating graphs of various types and sharing the data. It works well with a range of languages and Jupyter Notebooks.

The question here is how well it works with WordPress. WordPress has some (understandable) constraints around embedding external HTML in WordPress posts/pages. But there is a large set of community-contributed plugins for WordPress that help with this, including a couple that apparently work with Plotly.

  • wp-plotly is designed to embed a Plotly-hosted graph by providing the Plotly URL. It doesn’t appear to work with the latest version of WordPress. No go.
  • Plot.wp provides a WordPress shortcode for Plotly (plotly and /plotly in square brackets) into which you place Plotly JSON data and, hey presto, a graph. It has a GitHub repo and actually works with the latest version of WordPress.

How to produce JSON from Python

I’m a Python newbie. I don’t really grok it the way I did Perl. I assumed it should be possible to auto-generate the JSON from the Python code, but how?

Looks like this will work in a notebook, though it does appear to need the resulting single quotes converted into double quotes and two sets of double quotes removed to be acceptable JSON.

# ... Python code to produce the plotly figure (fig) ready to be plotted
import json

jsonData = {}  # dict to hold the JSON-ified parts of the figure
jsonData['data'] = json.dumps(fig['data'])
jsonData['layout'] = json.dumps(fig['layout'])
jsonData

For the graph I’m currently playing with, this ends up with

{"layout": {"yaxis": {"range": [0, 100], "title": "% response rate"}, "title": "EDC3100 Semester 2 MyOpinion % Response Rate", "xaxis": {"ticktext": ["2014 (n=106)", "2015 (n=88)nLeaderboard", "2016 (n=100)nLeaderboard"], "title": "Year", "tickvals": ["2014", "2015", "2016"]}}, 
  "data": [{"type": "bar", "name": "EDC3100", "x": ["2014", "2015", "2016"], "y": [34, 48, 49]}, {"type": "scatter", "name": "USQ average", "x": ["2015", "2016"], "y": [26.83, 23.52]}]}

And the matching graph produced by plotly follows. Roll over the graph to see some “tooltips”.

References

Hannon, J. (2013). Incommensurate practices: sociomaterial entanglements of learning technology implementation. Journal of Computer Assisted Learning, 29(2), 168–178. https://doi.org/10.1111/j.1365-2729.2012.00480.x

Preparing my digital "learning space"

The following documents the (hopefully) last bit of extra work I have to undertake to prepare the digital “learning space” for EDC3100, ICT and Pedagogy. It’s work that has taken most of my working day. At a time when I can’t really afford it.  But it’s time I have to spend if I want to engage effectively in one of the most fundamental activities in teaching – know thy student.

End result

The work I’ve done today allows me to easily access from within the main digital learning space for EDC3100 (the Moodle course site) three different types of additional information about individual students.

It’s also an example of how the BAD mindset is able to work around the significant constraints caused by the SET mindset and in the process create shadow systems, which in turn illustrates the presence of a gap (i.e. yawning chasm) between what is provided and what is required.

The shadow system gap (adapted from Behrens and Sedera, 2004)

What are they studying? What have they done before?

This student is studying Early Childhood education. They’ve completed 21 prior courses, but 5 of those were exemptions. I can see their GPA (blurred out below). They are studying via the online mode and are located in Queensland.

Screen Shot 2016-03-04 at 1.17.07 pm

How much of the course activities they’ve completed and when

This particular student is about halfway through the first week’s material. They made that progress about 5 days ago. Looks like the “sharing, reflecting and connecting” resource took a while for them to complete. More so than the others – almost two hours.

Screen Shot 2016-03-04 at 1.17.15 pm

What they’ve written on their blog and how they are “feeling”?

This student has written two blog posts. Both are fairly positive in the sentiment they express, though the second is a little less positive in outlook.

Screen Shot 2016-03-04 at 1.26.04 pm

Reasons for the post

There are a number of reasons for this post:

  1. Reinforce the point about the value of an API infrastructure for sharing information between systems (and one that’s open to users).
  2. Document the huge gap that exists between the digital learning spaces universities are providing and what is actually required to implement useful pedagogies – especially when it comes to what Goodyear and Dimitriadis (2013) call “design for orchestration” – providing support for the teacher’s work at learn time.
  3. Make sure I document the process to reduce the amount of work I have to do next time around.
  4. Demonstrate to the EDC3100 participants some of the possibilities with digital technologies, make them aware of some of what happens in the background of the course, and illustrate the benefits that can come from manipulating digital technologies for pedagogical purposes.
  5. Discover all the nasty little breaks in the routine caused by external changes (further illustrating the unstable nature of digital technologies).

What will I be doing

I’ll be duplicating a range of institutional data sources (student records and Moodle) so that I can implement a range of additional pedagogical supports, including:

Hopefully, I’ll be able to follow the process vaguely outlined from prior offerings. (Yep, that’s right. I have to repeat this process for every course offering, would be nice to automate).

Create new local Moodle course

I have a version of Moodle running on my laptop. I need to create a new course on that Moodle which will the local store for information about the students in my course.

Need to identify:

  • USQ moodle course id – 8036
  • local course id – 15
    Create the course in Moodle and get the id
  • group id – 176
    Create the group in the course
  • context id – 1635
    select * from mdl_context where instanceid=local_course_id  and contextlevel=50
  • course label – EDC3100_2016_S1
    One of the values defined when creating the course.
  • Update MoodleUsers::TRANSLATE_PARAMETERS
  • Update ActivityMapping::TRANSLATE_PARAMETERS
  • enrolid – 37
    select * from mdl_enrol where courseid=local_course_id and enrol=’manual’;
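
As an aside, these id lookups could in principle be scripted rather than run by hand. The following is a rough sketch, assuming the laptop Moodle sits on MySQL and the MySQLdb driver is installed; the connection details are hypothetical, and the actual process remains manual queries plus the Perl scripts mentioned below.

import MySQLdb  # assumption: the laptop Moodle uses MySQL and mysqlclient/MySQLdb is installed

# Hypothetical connection details for the laptop-hosted Moodle database
db = MySQLdb.connect(host="localhost", user="moodle", passwd="secret", db="moodle")
cursor = db.cursor()

local_course_id = 15  # from the list above

# context id for the course (in Moodle, contextlevel 50 is the course context)
cursor.execute(
    "SELECT id FROM mdl_context WHERE instanceid = %s AND contextlevel = 50",
    (local_course_id,))
context_id = cursor.fetchone()[0]

# enrolment instance id for manual enrolments in the course
cursor.execute(
    "SELECT id FROM mdl_enrol WHERE courseid = %s AND enrol = 'manual'",
    (local_course_id,))
enrol_id = cursor.fetchone()[0]

print(context_id, enrol_id)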

Create BIM activity in new course

Need to identify

  • bim id – 9

Enrol students in the course

Ahh, returning to Webfuse scripts, the sad, depleted remnants of my PhD.

~/webfuse/lib/BAM/3100/3100_support/participants/parse.pl is a script that will parse the Moodle participants web page, extract data about the enrolled users, and insert them appropriately into the database for my local Moodle course.

Initial test, no-one showing up as a participant. But add myself as teacher.

  1. Figure out that the “show all participants” option is hidden down the very bottom of the page.
  2. Save the page to my laptop
  3. Edit the parse.pl script to update course details
  4. Test that it parses the HTML file (in case changes have been made by the institution or by the new version of Moodle) – looking good.
  5. The finding of old students appears to be working.
    Oh nice, easy way to identify repeating students.  Need to save that data.
  6. Run the script
  7. Fix the errors
    • Duplicate key inserting into groups
    • missing required parameter COURSE_ID 111
      Complaint from MoodleUsers class – need to update TRANSLATE_PARAMETERS above
    • Participants still not appearing, something missing — have to update the script. Done.

Took a while, but that should further automate the process for next time.

Add some extras

The above step only adds in some basic information about the student (USQ Moodle ID, email address). To be useful I need to be able to know the sector/specialisation of the student, their postal code etc.

This information comes from a spreadsheet generated from the student records. And the data added into a “special” table in the Moodle database. This year I’m using a different method to obtain the spreadsheet, meaning that the format is slightly different. The new process was going to be automated to update each night, but that doesn’t appear to be working yet. But I have a version, will start with that.

  1. Compare the new spreadsheet content
    Some new fields: transferred_units, acad_load. Missing phone number.
  2. Add columns to extras table.
  3. Update the parsing of the file

Seems to be working
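
As with the id lookups earlier, the column comparison in step 1 above is the kind of thing that could be scripted. A rough sketch, assuming pandas and a hypothetical file name and set of expected columns; the real process tweaks a Perl parser.

import pandas as pd

# Hypothetical file name and expected columns - the real spreadsheet comes
# from the student records system and is parsed by a Perl script.
expected = {"sector", "specialisation", "postal_code", "phone"}
new = pd.read_csv("EDC3100_2016_S1_students.csv")

print("New fields:", set(new.columns) - expected)       # e.g. transferred_units, acad_load
print("Missing fields:", expected - set(new.columns))   # e.g. phone number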

Activity data

This is to identify what activities are actually on the study desk.

Another script that parses a Moodle web page to extract data. I’m currently re-writing some of the activities; wonder how that will work. Actually, I seem to have designed for it. It does a replace of the list, not an update.

~/activities/parseActivity.pl

  1. Add in the course id for the new course
  2. ??? maybe update the script to handle the parameterised section titles

Seems to be working

Activity completion data

Now to find out which activities each student has completed. Another script, this time parsing a CSV file produced by Moodle.

~/activities/parseCompletion.pl

  1. Update the script with new course data
  2. Unable to find course id – update ActivityMapping.pm
  3. Having problems again with matching activity names
    1. EDC3100 Springfield resources
      it shouldn’t be there. Turn off activity completion and get new CSV file
    2. For “.”???
      The first field is a “.” when it should be empty. May need to watch this.
  4. Parses okay – try checkStudents
    Getting a collection of missing students.

    1. Are they in the local database at all? – no
    2. Have they withdrawn, but still in activity completion – yes.
  5. Seems to have worked

Student blog data

Yet another scraping of a Moodle web page.   ~/BIM/parseBIM.pl

  1. Update the config
  2. Check the parsing of the file
    1. Only showing a single student – the last one in the list
      For some reason, the table rows are missing a class. Only the lastrow has a class. Given I wrote the BIM code, this might be me. The parsing code assumes no class means it’s the header row.  But seems to work.
  3. Check the conversion process
    1. Crashed and burned at me – no Moodle id – hard code my exclusion
  4. Check insertion
  5. Do insertion
  6. Check BIM activity
  7. Check mirror for individual student – done
  8. Run them all – looks like there might be a proxy problem with the cron version.  Will have to do this at home – at least wait until it finishes.

Greasemonkey script

This is the user interface end of the equation.  What transforms all of the above into something useful.

/usr/local/www/mav

  • gmdocs/moreStudentDetails.user.js
    • Add the Moodle course id – line 331
  • phpdocs/api/getUserDetails.php
    • map the USQ and local Moodle ids
    • map USQ course id to BIM
    • add in the hard coded week data
    • Modify the module mapping (hard coded to the current course) — actually probably don’t need to do this.
  • Download the modified version of the greasemonkey client – http://localhost:8080/fred/mav/moreStudentDetails.user.js
  • Test it
    • Page is being updated with details link
    • Personal details being displayed
    • Activity completion not showing anything
      • Check server
        • Getting called – yes
        • Activity completion string is being produced
        • But the completion HTML is empty – problem in displayActivityStructure
        • That’s because the structure to display (from updateActivityStructure) is empty – which is actually from getActivityMapping
        • getActivityMapping
          • **** course id entered incorrectly
    • Blog posts showing error message
      Problem with the type of the course id
  • Can I add in the extra bits of information – load, transferred courses
    • Client

Sentiment analysis

This is the new one: run the blog posts through Indico sentiment analysis.

~/BIM/sentiment.pl

  • update the BIM id
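
A heavily hedged sketch of what sentiment.pl presumably does: pull the mirrored blog posts for the BIM activity, send the text to a sentiment analysis web service, and store the 0–1 score. The endpoint URL, request/response shape and table names here are placeholders for illustration, not the actual Indico API calls.

```perl
#!/usr/bin/perl
# Minimal sketch of the sentiment step. Endpoint, response shape and
# table/column names are assumptions, not the real integration.
use strict;
use warnings;
use DBI;
use LWP::UserAgent;
use JSON qw(encode_json decode_json);

my $bim_id = 42;    # hypothetical BIM activity id
my $dbh = DBI->connect( "DBI:mysql:database=analytics;host=localhost",
                        "user", "password", { RaiseError => 1 } );
my $ua  = LWP::UserAgent->new;

my $posts = $dbh->selectall_arrayref(
    "SELECT id, post FROM blog_posts WHERE bim = ?", { Slice => {} }, $bim_id );
my $update = $dbh->prepare("UPDATE blog_posts SET sentiment = ? WHERE id = ?");

for my $p (@$posts) {
    ( my $text = $p->{post} ) =~ s/<[^>]+>/ /g;    # crude tag strip
    my $res = $ua->post(
        "https://example.com/sentiment",            # hypothetical endpoint
        'Content-Type' => 'application/json',
        Content => encode_json( { data => $text, api_key => $ENV{SENTIMENT_API_KEY} } ),
    );
    next unless $res->is_success;
    my $score = decode_json( $res->decoded_content )->{results};  # assumed response shape
    $update->execute( $score, $p->{id} );
}
```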

 

 


Sentiment analysis of student blog posts

In June last year I started an exploration into the value of sentiment analysis of student blog posts. This morning I’ve actually gotten it to work. There may be some value, but further exploration is required. Here’s the visible representation of what I’ve done.

The following is a screen shot of the modified “know thy student” kludge I’ve implemented for my course. The window shows some details for an individual student from second semester last year (I’ve blurred out identifying elements). The current focus is on the blog posts the student has written.
Sentiment analysis of blog posts

Each row in the above corresponds to an individual blog post. It used to show how long ago the post was written, the post’s title, and provide a link to the blog post. The modified version has the background colour for the cell modified to represent the sentiment of the blog post content. A red background indicates a negative post, a green background indicates a positive post, and a yellow background indicates somewhere in the middle.

The number between 0 and 1 shown next to the post title is the result provided by the Indico sentiment analysis function, the method used to perform the sentiment analysis.
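
The colour mapping is essentially a threshold function on that 0–1 score. A tiny sketch, with thresholds that are my assumption (the post doesn’t say where red, yellow and green switch over):

```perl
# Map a sentiment score to a background colour. Thresholds are assumed.
sub sentiment_colour {
    my ($score) = @_;                 # 0 (negative) .. 1 (positive)
    return 'red'    if $score < 0.4;  # negative post
    return 'yellow' if $score < 0.6;  # somewhere in the middle
    return 'green';                   # positive post
}
```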

Does this help?

Does this provide any help? Can it be useful?

An initial quick skim of posts from different students seemed to indicate mostly all green. Was the sentiment analysis revealing anything useful? Was it working?

In the following I examine what is revealed by the sentiment analysis by paying close attention to an individual student, the one shown in the image above.

Red blog post – reveal target for intervention?

The “red” blog post from the image above included words like “epic fail”. It tells the story of how the student had problems getting the new software for the course working. It shows as the third post the student made in the semester. The start of this course can be frustrating for students due to technical problems. This particular student didn’t report any of these problems on the course discussion forums.

Given that the course is totally online and there are ~100 students in this offering, there’s little chance for me to have known about these problems otherwise. Had the sentiment analysis been in place during the offering and if it was represented effectively, I might have been able to respond and that response might have been helpful.

Yellow blog post – a problem to address?

The yellow post above is a reflection on the student’s experience on Professional Experience: in a school, in front of a classroom, actually teaching. It is a reflection on how the student went through an emotional roller coaster on prac (not unusual), how her mentor really helped (also not unusual, but a little less so), but also how the various exemptions she received contributed to her problems.

Very positive blog posts – loved resources?

A number of the posts from this student are as positive as they can get – 1.0. Interestingly, almost all of them are descriptions of useful resources and include phrases like

what a wonderful learning tool …lovely resource…wonderful resource for teachers

What’s next?

Appears that the following are required/might be useful

  1. Explore different representations and analysis
    So far I’ve only looked at the student by student representation. Other forms of analysis/representation would seem potentially useful. Are there differences/patterns across the semester, between students that are the same/different on certain characteristics, between different offerings of the course etc.? How can and should this representation be made visible to the students?
  2. Set this in place for Semester 1.
    In a couple of weeks the 300+ student version of this course runs. Having the sentiment analysis working live during that semester could be useful.
  3. Explore useful affordances.
    One of the points of the PIRAC framework is that this form of learning analytics is only as useful as the affordances for action that it supports. What functionality can be added to this to help me and the students take action in response?

Reflection

I’ve been thinking about doing this for quite some time. But the business of academic life has contributed to a delay.  Getting this to work actually only required three hours of free time. But perhaps more importantly, it required the breathing space to get it done. That said, I still did the work on a Sunday morning and probably would not have had the time to do it within traditional work time.

 

What might a project combining LX Design and Analytics look like?

In a bit more than an hour I’ll be talking to @catspyjamasnz trying to nut out some ideas for a project around LX Design and Learning Analytics. The following is me thinking out loud and working through “my issues”.

What is LX Design

I’ve got some vague ideas which I need to work on. Obviously start with a Google search.

Oh dear, the top result is for Learning Experience Design TRADEMARK which is apparently

a synthesis of instructional design, educational pedagogy, neuroscience, social sciences, design thinking, and UI/UX—is critical for any organization looking to compete in the modern educational marketplace.

While I won’t dwell on this particular approach, it does link to some of my vague qualms about LX design. First, there’s a danger of it becoming just another collection of meaningless buzzwords used to label the same old practice as conforming to the latest fashion, mainly because the people adopting it don’t fully understand it and fail to transform their practice. Old wine, new bottles.

Second, there’s the problem of the “product focus” in learning. Where the focus is on building the best product, which troubles me. Perhaps this says more about my biases, but I worry that LX Design will become just another tool (perhaps a very good tool) applied within the dominant SET mindset within institutional e-learning (which is my context). Which not surprisingly is one of my concerns about the direction of learning analytics.

And talking about old wine in new bottles, this post suggests that

Although LXD is a relatively new term in the field of design, there are some established best practices emerging as applied to creating online learning interfaces:

Mmm, not much there that I’d class as something that LXD has provided to the world. e.g. Donald Clark’s current sequence of “10” posts, including “10 essential rules on use of GRAPHICS in online learning”.

Needs and wants of the user?

This overview of User Experience Design (UX Design) – the foundation on which LX design is built – suggests

The term “user experience” was coined by Dr. Donald Norman, a cognitive science researcher who was also the first to describe the importance of user-centered design (the notion that design decisions should be based on the needs and wants of users).

As I wrote last week I’m not convinced that the “needs and wants of users” is always the best approach. Especially if we’re talking about something very new that the user doesn’t yet understand.

Which begs the question:

Who is the user in a learning experience?

The obvious answer from an LX design perspective is that the user is the learner. That the focus should be on the learner has been broadly accepted in higher education for some time now. But then, all models are wrong, though some are useful. In critiquing the rise of the term Technology Enhanced Learning, Bayne (2014) draws on a range of publications by Biesta to critique the focus on learning and learners. I’ve only skimmed this argument for this post, but there is potentially something interesting and useful here.

Beyond this more theoretical question about the value of a “learner focus”, I’d also like to mention something a little closer to home. The context in which I’m framing this post is within higher education’s practice of formal learning. A practice that currently still assumes that there is some value in having a teacher involved in the learning experience. Where “teacher” may not be a single individual, but actually be a small team with diverse roles. Which leads me to the proposition that the “teacher” is also a user within a learning experience.

As I’m employed as a teacher within higher education, I can speak to the negative impact of the blindingly obvious, almost complete lack of user experience design around the tools and systems teachers are required to engage with around learning and teaching. Given the low quality of those tools, it’s no surprise to me that most learning in higher education has some flaws.

This is one of the reasons behind the 4 paths for learning analytics focusing on the teacher (as designer of learning, if you must) and not the learner.

Increasingly, I wonder if the focus on being learner centered is arising from a frustration with the perceived lack of quality of the learning experiences produced by teachers combined with a deficit model of teachers. Which brings me to this quote from Bayne (2014)

points us toward a need to move beyond anthropocentrism and the focus on the individual, toward a greater concern with the networks, ecologies and sociomaterial contexts of our engagement with education and technology.

Impact of LX design for teachers?

What would happen to the quality of learning overall, if LX design were applied to the systems and processes that teachers use to design, implement, support, and revise learning and teaching? Would this help teachers learn more about how to teach better?

Learning analytics

I assume the link between LX design and learning analytics is that learning analytics can provide the data to better inform LX design. In particular, what Lockyer et al (2013) call “process analytics” would be useful

These data and analyses provide direct insight into learner information processing and knowledge application (Elias, 2011) within the tasks that the student completes as part of a learning design. (p. 1448)

One of the problems @beerc and I have with learning analytics is that it really only ever focuses on two bits of the PIRAC framework, i.e. information and representation. It hardly ever does anything about affordances or change. This is why dashboards suck and are a broken metaphor. A dashboard without the ability to do anything to control the car is of no value whatsoever.

My questions about LXD

  1. Just another FAD? Old wine in new bottles?
  2. Another tool reinforcing the SET mindset? Especially the product focus.
  3. Does LX design have a problem because it doesn’t include complex adaptive systems theory? It appears to treat learner experience design as a complicated problem, rather than a complex problem.
  4. The “meta-learning” problem – can it be applied to teachers learning how to teach?
  5. Where does it fit on the spectrum of: sage on the stage, guide on the side, and meddler in the middle?
  6. How to make it useful for the majority of teachers and learners?
  7. What type of affordances can/should analytics provide LX design to help all involved?

References

Bayne, S. (2014). What’s the matter with Technology Enhanced Learning? Learning, Media & Technology, 40(1), 5–20. doi:10.1080/17439884.2014.915851

Reading – Embracing Big Data in Complex Educational Systems: The Learning Analytics Imperative and the Policy Challenge

The following is a summary and ad hoc thoughts on Macfadyen et al (2014).

There’s much to like in the paper. But the basic premise I see in the paper is that the fix for the problems of the current inappropriate teleological processes used in institutional strategic planning and policy setting is an enhanced/adaptive teleological process. The impression I take from the paper is that it’s still missing the need for institutions to enable actors within them to integrate greater use of ateleological processes (see Clegg, 2002). Of course, Clegg goes on to do the obvious and develop a “dialectical approach to strategy” that merges the two extremes.

Is my characterisation of the adaptive models presented here appropriate?

I can see very strong connections between the arguments made in this paper about institutions and learning analytics and the reasons why I think e-learning is a bit like teenage sex.

But given the problems with “e-learning” (i.e. most of it isn’t much good in pedagogical terms) what does that say about the claim that we’re in an age of “big data” in education. If the pedagogy of most e-learning is questionable, is the data being gathered any use?

Conflating “piecemeal” and “implementation of new tools”

The abstract argues that there must be a shift “from assessment-for-accountability to assessment-for-learning” and suggests that it won’t be achieved “through piecemeal implementation of new tools”.

It seems to me that this is conflating two separate ideas; they are

  1. piecemeal; and,

    i.e. unsystematic or partial measures. It can’t happen bit-by-bit, instead it has to proceed at the whole of institutional level. This is the necessary step in the argument that institutional change is (or must be) involved.

    One of the problems I have with this is that if you are thinking of educational institutions as complex adaptive systems, then that means they are the type of system where a small (i.e. piecemeal) change could potentially (but not always) have a large impact. In a complex system, a few very small, well directed changes may have a large impact. Alternatively, and picking up on ideas I’ve heard from Dave Snowden, implementing large numbers of very small projects and observing the outcomes may be the only effective way forward. By definition, a complex system is one where being anything but piecemeal may be an exercise in futility, as you can never fully understand a complex system, let alone guess the likely impacts of proposed changes.

    The paper argues that systems of any type are stable and resistant to change. There’s support for this argument. I need to look for dissenting voices and evaluate.

  2. implementation of new tools.

    i.e. the build it and they will come approach won’t work. Which I think is the real problem and is indicative of the sort of simplistic planning processes that the paper argues against.

These are two very different ideas. I’d also argue that while these alone won’t enable the change, they are both necessary for the change. I’d also argue that institutional change (by itself) is also unlikely to achieve the type of cultural change required. The argument presented in seeking to explain “Why e-learning is a bit like teenage sex” is essentially this: that institutional attempts to enable and encourage change in learning practice toward e-learning fail because they are too focused on institutional concerns (large scale strategic change) and not enough on enabling elements of piecemeal growth (i.e. bricolage).

The Reusability Paradox and “at scale”

I also wonder about considerations raised by the reusability paradox in connection with statements like (emphasis added) “learning analytics (LA) offer the possibility of implementing real–time assessment and feedback systems and processes at scale“. Can the “smart algorithms” of LA marry the opposite ends of the spectrum – pedagogical value and large scale reuse? Can the adaptive planning models bridge that gap?

Abstract

In the new era of big educational data, learning analytics (LA) offer the possibility of implementing real–time assessment and feedback systems and processes at scale that are focused on improvement of learning, development of self–regulated learning skills, and student success. However, to realize this promise, the necessary shifts in the culture, technological infrastructure, and teaching practices of higher education, from assessment–for–accountability to assessment–for–learning, cannot be achieved through piecemeal implementation of new tools. We propose here that the challenge of successful institutional change for learning analytics implementation is a wicked problem that calls for new adaptive forms of leadership, collaboration, policy development and strategic planning. Higher education institutions are best viewed as complex systems underpinned by policy, and we introduce two policy and planning frameworks developed for complex systems that may offer institutional teams practical guidance in their project of optimizing their educational systems with learning analytics.

Introduction

First para is a summary of all the arguments for learning analytics

  • awash in data (I’m questioning)
  • now have algorithms/methods that can extract useful stuff from the data
  • using these methods can help make sense of complex environments
  • education is increasingly complex – increasing learner diversity, reducing funding, increasing focus on quality and accountability, increasing competition
  • it’s no longer an option to use the data

It also includes a quote from a consulting company promoting FOMO/falling behind if you don’t use it. I wonder how many different fads they’ve said that about?

Second para explains what the article is about – “new adaptive policy and planning approaches….comprehensive development and implementation of policies to address LA challenges of learning design, leadership, institutional culture, data access and security, data privacy and ethical dilemmas, technology infrastructure, and a demonstrable gap in institutional LA skills and capacity”.

But based on the idea of Universities as complex adaptive systems. That “simplistic approaches to policy development are doomed to fail”.

Assessment practices: A wicked problem in a complex system

Assessment is important. Demonstrates impact – positive and negative – of policy. Assessment still seen too much as focused on accountability and not for learning. Diversity of stakeholders and concerns around assessment make substantial change hard.

“Assessment practice will continue to be intricately intertwined both with learning and with program accreditation and accountability measures.” (p. 18). NCLB used as an example of the problems this creates and mentions Goodhart’s law.

Picks up the on-going focus on “high-stakes snapshot testing” to provide comparative data. Mentions

Wall, Hursh and Rodgers (2014) have argued, on the other hand, that the perception that students, parents and educational leaders can only obtain useful comparative information about learning from systematized assessment is a false one.

But also suggests that learning analytics may offer a better approach – citing (Wiliam, 2010).

Identifies the need to improve assessment practices at the course level. Various references.

Touches on the difficulties in making these changes. Mentions wicked problems and touches on complex systems

As with all complex systems, even a subtle change may be perceived as difficult, and be resisted (Head & Alford, 2013).

But doesn’t pick up the alternate possibility that a subtle change that might not be seen as difficult could have large ramifications.

Learning analytics and assessment-for-learning

This paper is part of a special issue on LA and assessment. Mentions other papers that have shown the contribution LA can make to assessment.

Analytics can add distinct value to teaching and learning practice by providing greater insight into the student learning process to identify the impact of curriculum and learning strategies, while at the same time facilitating individual learner progress (p. 19)

The argument is that LA can help both assessment tasks: quality assurance, and learning improvement.

Technological components of the educational system and support of LA

The assumption is that there is a technological foundation for storing, managing, visualising and processing big educational data. There is a need for more than just the LMS. Need to mix it all up, hence “institutions are recognizing the need to re–assess the concept of teaching and learning space to encompass both physical and virtual locations, and adapt learning experiences to this new context (Thomas, 2010)” (p. 20). Add to that the rise of multiple devices etc.

Identifies the following requirements for LA tools (p. 21) – emphasis added

  1. Diverse and flexible data collection schemes: Tools need to adapt to increasing data sources, distributed in location, different in scope, and hosted in any platform.
  2. Simple connection with institutional objectives at different levels: information needs to be understood by stakeholders with no extra effort. Upper management needs insight connected with different organizational aspects than an educator. User–guided design is of the utmost importance in this area.
  3. Simple deployment of effective interventions, and an integrated and sustained overall refinement procedure allowing reflection

Some nice overlaps with the IRAC framework here.

It does raise interesting questions about what institutional objectives actually are. Even more importantly, how easy is it (or isn’t it) to identify what those are and what they mean at the various levels of the institution?

Interventions: an inset talks about the sociotechnical infrastructure for LA. It mentions the requirement for interventions (p. 21).

The third requirement for technology supporting learning analytics is that it can facilitate the deployment of so–called interventions, where intervention may mean any change or personalization introduced in the environment to support student success, and its relevance with respect to the context. This context may range from generic institutional policies, to pedagogical strategy in a course. Interventions at the level of institution have been already studied and deployed to address retention, attrition or graduation rate problems (Ferguson, 2012; Fritz, 2011; Tanes, Arnold, King, & Remnet, 2011). More comprehensive frameworks that widen the scope of interventions and adopt a more formal approach have been recently proposed, but much research is still needed in this area (Wise, 2014).

And then this (pp. 21-22) which contains numerous potential implications (emphasis added)

Educational institutions need technological solutions that are deployed in a context of continuous change, with an increasing variety of data sources, that convey the advantages in a simple way to stakeholders, and allow a connection with the underpinning pedagogical strategies.

But what happens when the pedagogical strategies are very, very limited?

Then makes this point as a segue into the next section (p. 22)

Foremost among these is the question of access to data, which needs must be widespread and open. Careful policy development is also necessary to ensure that assessment and analytics plans reflect the institution’s vision for teaching and strategic needs (and are not simply being embraced in a panic to be seen to be doing something with data), and that LA tools and approaches are embraced as a means of engaging stakeholders in discussion and facilitating change rather than as tools for measuring performance or the status quo.

The challenge: Bringing about institutional change in complex systems

“the real challenges of implementation are significant” (p. 22). The above identifies “only two of the several and interconnected socio-technical domains that need to be addressed by comprehensive institutional policy and strategic planning”

  1. influencing stakeholder understanding of assessment in education
  2. developing the necessary institutional technological infrastructure to support the undertaking

And this has to be done whilst attending to business as usual.

Hence it is not surprising that education lags other sectors in adopting analytics. Identifies barriers

  • lack of practical, technical and financial capacity to mine big data

    A statement from the consulting firm who also just happens to be in the market of selling services to help.

  • perceived need for expensive tools

Cites various studies showing education institutions stuck at gathering and basic reporting.

And of course even if you get it right…

There is recognition that even where technological competence and data exist, simple presentation of the facts (the potential power of analytics), no matter how accurate and authoritative, may not be enough to overcome institutional resistance (Macfadyen & Dawson, 2012; Young & Mendizabal, 2009).

Why policy matters for LA

Starts with establishing higher education institutions as a “superb example of complex adaptive systems” but then suggests that (p. 22)

policies are the critical driving forces that underpin complex and systemic institutional problems (Corvalán et al., 1999) and that shape perceptions of the nature of the problem(s) and acceptable solutions.

I struggle a bit with that observation and even more with this argument (p. 22)

we argue that it is therefore only through implementation of planning processes driven by new policies that institutional change can come about.

Expands on the notion of CAS and wicked problems. Makes this interesting point

Like all complex systems, educational systems are very stable, and resistant to change. They are resilient in the face of perturbation, and exist far from equilibrium, requiring a constant input of energy to maintain system organization (see Capra, 1996). As a result, and in spite of being organizations whose business is research and education, simple provision of new information to leaders and stakeholders is typically insufficient to bring about systemic institutional change.

Now talks about the problems more specific to LA and the “lack of data-driven mind-set” from senior management. Links this to an earlier example of institutional research to inform institutional change (McIntosh, 1979) and to a paper by Ferguson applying those findings to LA. From there and other places, the factors identified include

  • academics don’t want to act on findings from other disciplines;
  • disagreements over qualitative vs quantitative approaches;
  • researchers & decision makers speak different languages;
  • lack of familiarity with statistical methods
  • data not presented/explained to decision makers well enough.
  • researchers tend to hedge and qualify conclusions.
  • valorized education/faculty autonomy and resisted any administrative efforts perceived to interfere with T&L practice

Social marketing and change management are drawn upon to suggest that “social and cultural change” isn’t brought about simply by giving access to data – “scientific analyses and technical rationality are insufficient mechanisms for understanding and solving complex problems” (p. 23). Returns to

what is needed are comprehensive policy and planning frameworks to address not simply the perceived shortfalls in technological tools and data management, but the cultural and capacity gaps that are the true strategic issues (Norris & Baer, 2013).

Policy and planning approaches for wicked problems in complex systems

Sets about defining policy. Includes this which resonates with me

Contemporary critics from the planning and design fields argue, however, that these classic, top–down, expert–driven (and mostly corporate) policy and planning models are based on a poor and homogenous representation of social systems mismatched with our contemporary pluralistic societies, and that implementation of such simplistic policy and planning models undermines chances of success (for review, see Head & Alford, 2013).

Draws on wicked problem literature to expand on this. Then onto systems theory.

And this is where the argument about piecemeal growth being insufficient arises (p. 24)

These observations not only illuminate why piecemeal attempts to effect change in educational systems are typically ineffective, but also explains why no one–size–fits–all prescriptive approach to policy and strategy development for educational change is available or even possible.

and perhaps more interestingly

Usable policy frameworks will not be those which offer a to do list of, for example, steps in learning analytics implementation. Instead, successful frameworks will be those which guide leaders and participants in exploring and understanding the structures and many interrelationships within their own complex system, and identifying points where intervention in their own system will be necessary in order to bring about change

One thought: is this a view that strikes “management” as “researchers hedging their bets”, the problem mentioned above?

Moves on to talking about “adaptive management strategies” (Head and Alford, 2013), which offer new means for policy and planning that “can allow institutions to respond flexibly to ever-changing social and institutional contexts and challenges”. These talk about

  • role of cross-institutional collaboration
  • new forms of leadership
  • development of enabling structures and processes (budgeting, finance, HR etc)

Interesting that notions of technology don’t get a mention.

Two “sample policy and planning models” are discussed.

  1. Rapid Outcome Mapping Approach (ROMA) – from international development

    “focused on evidence-based policy change”. An iterative model. I wonder about this

    Importantly, the ROMA process begins with a systematic effort at mapping institutional context (for which these authors offer a range of tools and frameworks) – the people, political structures, policies, institutions and processes that may help or hinder change.

    Perhaps a step up, but isn’t this still big up front design? Assumes you can do this? But then some is better than none?

    Apparently this approach is used more in Ferguson et al (2014)

  2. “cause-effect framework” – DPSEEA framework

    Driving force, Pressure, State, Exposure, Effect (DPSEEA): a way of identifying linkages between forces underpinning complex systems.

Ferguson et al (2014) apparently show that “apparently successful institutional policy and planning processes have pursued change management approaches that map well to such frameworks”. So not yet informed by these frameworks? Of course, there’s always the question of the people driving those systems reporting on their own work.

I do like this quote (p. 25)

To paraphrase Head and Alford (2013), when it comes to wicked problems in complex systems, there is no one– size–fits–all policy solution, and there is no plan that is not provisional.

References

Ferguson, R., Clow, D., Macfadyen, L., Essa, A., Dawson, S., & Alexander, S. (2014). Setting Learning Analytics in Context : Overcoming the Barriers to Large-Scale Adoption. Journal of Learning Analytics, 1(3), 120–144. doi:10.1145/2567574.2567592

Macfadyen, L. P., Dawson, S., Pardo, A., & Gasevic, D. (2014). Embracing big data in complex educational systems: The learning analytics imperative and the policy challenge. Research and Practice in Assessment, 9(Winter), 17–28.

The four paths for implementing learning analytics and enhancing the quality of learning and teaching

The following is a place holder for two presentations that are related. They are:

  1. “Four paths for learning analytics: Moving beyond a management fashion”; and,

    An extension of Beer et al (2014) (e.g. there are four paths now, rather than three) that’s been accepted to Moodlemoot’AU 2015.

  2. “The four paths for implementing learning analytics and enhancing the quality of learning and teaching”;

    A USQ research seminar that is part a warm up of the Moot presentation, but also an early attempt to extend the 4 paths idea beyond learning analytics and into broader institutional attempts to improve learning and teaching.

Eventually the slides and other resources from the presentations will show up here. What follows is the abstract for the second talk.

Slides for the MootAU15 presentation

Only 15 minutes for this talk. Tried to distill the key messages. Thanks to @catspyjamasnz the talk was captured on Periscope

Slides for the USQ talk

Had the luxury of an hour for this talk. Perhaps too verbose.

Abstract

Baskerville and Myers (2009) define a management fashion as “a relatively transitory belief that a certain management technique leads rational management progress” (p. 647). Maddux and Cummings (2004) observe that “education has always been particularly susceptible to short-lived, fashionable movements that come suddenly into vogue, generate brief but intense enthusiasm and optimism, and fall quickly into disrepute and abandonment” (p. 511). Over recent years learning analytics has been looming as one of the more prominent fashionable movements in educational technology. Illustrated by the apparent engagement of every institution and vendor in some project badged with the label learning analytics. If these organisations hope to successfully harness learning analytics to address the challenges facing higher education, then it is important to move beyond the slavish adoption of the latest fashion and aim for more mindful innovation.

Building on an earlier paper (Beer, Tickner, & Jones, 2014) this session will provide a conceptual framework to aid in moving learning analytics projects beyond mere fashion. The session will identify, characterize, and explain the importance of four possible paths for learning analytics: “do it to” teachers; “do it for” teachers; “do it with” teachers; and, teachers “DIY”. Each path will be illustrated with concrete examples of learning analytics projects from a number of universities. Each of these example projects will be analysed using the IRAC framework (Jones, Beer, & Clark, 2013) and other lenses. That analysis will be used to identify the relative strengths, weaknesses, and requirements of each of the four paths. The analysis will also be used to derive implications for the decision-makers, developers, instructional designers, teachers, and other stakeholders involved in both learning analytics, and learning and teaching.

It will be argued that learning analytics projects that follow only one of the four paths are those most likely to be doomed to mere fashion. It will argue that moving a learning analytics project beyond mere fashion will require a much greater focus on the “do it with” and “DIY” paths. An observation that is particularly troubling when almost all organizational learning analytics projects appear focused primarily on either the “do it to” or “do it for” paths.

Lastly, the possibility of connections between this argument and the broader problem of enhancing the quality of learning and teaching will be explored. Which paths are used by institutional attempts to improve learning and teaching? Do the paths used by institutions inherently limit the amount and types of improvements that are possible? What implications might this have for both research and practice?

References

Baskerville, R. L., & Myers, M. D. (2009). Fashion waves in information systems research and practice. MIS Quarterly, 33(4), 647–662.

Beer, C., Tickner, R., & Jones, D. (2014). Three paths for learning analytics and beyond : moving from rhetoric to reality. In B. Hegarty, J. McDonald, & S. Loke (Eds.), Rhetoric and Reality: Critical perspectives on educational technology. Proceedings ascilite Dunedin 2014 (pp. 242–250).

Jones, D., Beer, C., & Clark, D. (2013). The IRAC framework: Locating the performance zone for learning analytics. In H. Carter, M. Gosper, & J. Hedberg (Eds.), Electric Dreams. Proceedings ascilite 2013 (pp. 446–450). Sydney, Australia.

Maddux, C., & Cummings, R. (2004). Fad, fashion, and the weak role of theory and research in information technology in education. Journal of Technology and Teacher Education, 12(4), 511–533.

Learning analytics is better when…..?

Trying to capture some thinking that arose during an institutional meeting re: learning analytics. The meeting was somewhat positive, but – as is not uncommon – there seemed to be some limitations around what learning analytics actually is and what it might look like. Wondering if the following framing might help. It draws on points made by numerous people about learning analytics and has some strong echoes of the (P)IRAC framework.

Learning analytics is better when it

  1. knows more about the learning environment;
    (learning environment includes learners, teachers, learning designs etc.)
  2. is accessible from within the learning environment;
    i.e. learners and teachers don’t need to remove themselves from the learning environment to access the learning analytics.
  3. provides affordances for action within the learning environment;
    If no change results from the learning analytics, then there is little value in it.
  4. can be changed by people within the learning environment.
    i.e. learners and teachers (and perhaps others) can modify the learning analytics for their own (new) purposes.

The problem is that I don’t think institutional considerations of learning analytics pay much attention to these four axes, and this may explain the limited usage and impact arising from the tools.

All four axes tend to require knowing a lot about the specifics of the learning environment and being able to respond to what you find in that environment in a contextually appropriate way.

The more learning analytics enables this, the more useful it is. The more useful it is, the more it used and the more impact it will have.

A few examples to illustrate.

Data warehouse

  1. What does it know about the learning environment? Limited
    Generally will know who the learners are, what they are studying, where they are from etc. May know what they have done within various institutional systems.
    Almost certainly knows nothing about the learning design.
    Probably knows who’s teaching and what they’ve taught before.
  2. Accessible from the learning environment? Probably not
    Access it via a dashboard tool which is separate from the learning environment, i.e. not going to be embedded within the discussion forum tool or the wiki tool.
    A knowledgeable user of the tool may well set up their own broader environment so that the data warehouse is integrated into it.
  3. Affordances for action? NONE
    It can display information, that’s it.
  4. Change? Difficult and typically the same for everyone
    Only the data warehouse people can change the representation of the information the warehouse provides. They probably can’t change the data that is included in the data warehouse without buy in from external system owners. IT governance structures need to be traversed.

Moodle reports

  1. What does it know about the learning environment? Limited
    Knows what the students have done within Moodle, but does not typically know of anything outside Moodle.
  2. Accessible from the learning environment? Somewhat
    If you’re learning within Moodle, you can get to the Moodle reports. But the Moodle reports are a separate module (functionality) and thus aspects of the Moodle reports cannot be easily included into other parts of the Moodle learning environment and certainly cannot be integrated into non-Moodle parts of the learning environment.
  3. Affordances for action? Limited
    The closest is that some reports provide the ability to digitally contact students who meet certain criteria. However, the difficulty of using the reports suggests that the actual “affordances” are somewhat more limited.
  4. Change? Difficult, limited to Moodle
    Need to have some level of Moodle expertise and some greater level of access to modify reports. Typically would need to go through some level of governance structure. Probably can’t be changed to access much outside of Moodle.

“MAV-enabled analytics”

A paper last year describes the development of MAV at CQU and some local tinkering I did using MAV i.e. “MAV-enabled analytics”.

  1. What does it know about the learning environment? Limited but growing
    As described, both MAV (student clicks on links in Moodle) and my tinkering (student records data) draw on low level information. In a month or so my on-going tinkering has the tool including information about student completion of activities in my course site and what the students have written on their individual blogs. Hopefully that will soon be extended with SNA and some sentiment analysis.
  2. Accessible from the learning environment? Yes
    Both analytics tools are embedded into the Moodle LMS – the prime learning environment for this context.
  3. Affordances for action? Limited but growing
    My tinkering offers little. MAV @ CQU is integrated with other systems to support a range of actions associated with contacting and tracking students. Both systems are very easy to use, hence increasing the affordances.
  4. Change? Slightly better than limited.
    MAV has arisen from tinkering and thus new functionality can be added. However, it requires someone who knows how MAV and its children work. It can’t be changed by learners/teachers. However, as I am the teacher using the results of my tinkering, I can change it. However, I’m constrained by time and system access.

Adding some learning process analytics to EDC3100

In Jones and Clark (2014) we drew on Damien’s (Clark) development of the Moodle Activity Viewer (MAV) as an example of how bricolage, affordances and distribution (the BAD mindset) can add some value to institutional e-learning. My empirical contribution to that paper was talking about how I’d extended MAV so that when I was answering a student query in a discussion forum I could quickly see relevant information about that student (e.g. their major, which education system they would likely be teaching into etc).

A major point of that exercise was that it was very difficult to actually get access to that data at all. Let alone get access to that data within the online learning environment for the course. At least if I had to wait upon the institutional systems and processes to lumber into action.

As this post evolved, it’s also become an early test to see if the IRAC framework can offer some guidance in designing the extension of this tool by adding some learning process analytics. As a result, this post

  1. Defines learning process analytics.
  2. Applies that definition to my course.
  3. Uses the IRAC framework to show off the current mockup of the tool and think about what other features might be added.

Very keen to hear some suggestions on the last point.

At this stage, the tool is working but only the student details are being displayed. The rest of the tool is simply showing the static mockup. This afternoon’s task is to start implementing the learning process analytics functionality.

Some ad hoc questions/reflections that arise from this post

  1. How is the idea of learning process analytics going to be influenced by the inherent tension between the tendency for e-learning systems to be generic and the incredible diversity of learning designs?
  2. Can generic learning process analytics tools help learners and teachers understand what’s going on in widely different learning designs?
  3. How can the diversity of learning designs (and contexts) be supported by learning process analytics?
  4. Can a bottom-up approach work better than a top-down?
  5. Do I have any chance of convincing the institution that they should provide me with
    1. Appropriate access to the Moodle and Peoplesoft database; and,
    2. A server on which to install and modify software?

Learning process analytics

The following outlines the planning and implementation of the extension of that tool through the addition of process analytics. Schneider et al (2012) (a new reference I’ve just stumbled across) define learning process analytics

as a collection of methods that allow teachers and learners to understand what is going on in a learning scenario, i.e. what participants work(ed) on, how they interact(ed), what they produced(ed), what tools they use(ed), in which physical and virtual location, etc. (p. 1632)

and a bit later on learning scenario and learning process analytics are defined as

as the measurement and collection of learner actions and learner productions, organized to provide feedback to learners, groups of learners and teachers during a teaching/learning situation. (p. 1632)

This is a nice definition in terms of what I want to achieve. My specific aim is to

collect, measure, organise and display learner actions and learner productions to provide feedback to the teacher during a teaching/learning situation

Two main reasons for the focus on providing this information to the teacher

  1. I don’t have the resources or the technology (yet) to easily provide this information to the learners.
    The method I’m using here relies on servers and databases residing on my computer (a laptop). Not something I can scale to the students in my class. I could perhaps look at using an external server (the institution doesn’t provide servers) but that would be a little difficult (I haven’t done it before) and potentially get me in trouble with the institution (not worth the hassle just yet).

    As it stands, I won’t even be able to provide this information to the other staff teaching into my course.

  2. It’s easier to see how I can (will?) use this information to improve my teaching and hopefully student learning.
    It’s harder to see how/if learners might use any sort of information to improve their learning.

Providing this information to me is the low hanging fruit. If it works, then I can perhaps reach for the fruit higher up.

Learner actions and productions

What are the learner actions and productions I’m going to generate analytics from?

The current course design means that students will be

  1. Using and completing a range of activities and resources contained on the course site and organised into weekly learning paths.
    These actions are in turn illustrated through a range of data including

    • Raw clicks around the course site stored in system logs.
    • Activity completion.
      i.e. if a student has viewed all pages in a resource, completed a quiz, or posted the required contributions to a discussion forum they are counted as completing an activity. Students get marks for completing activities.
    • Data specific to each activity.
      i.e. the content of the posts they contributed to a forum, the answers they gave on a quiz.
  2. Posting to their individual blog (external to institutional systems) for the course.
    Students get marks for # of posts, average word count and links to other students and external resources.
  3. Completing assignments.
  4. Contributing to discussions on various forms of social media.
    Some are officially associated with the course (e.g. Diigo) and others unofficially (student Facebook groups).

I can’t use some of the above as I do not have access to the data. Private student Facebook groups are one example, but more prevalent is institutional data that I’m unable to access. In fact, the only data I can easily get access to is

  • Student blog posts; and,
  • Activity completion data.

So that’s what I’ll focus on. Obviously there is a danger here that what I can measure (or in this case access) is what becomes important. On the plus side, the design of this course does place significant importance on the learning activities students undertake and the blog posts. It appears what I can measure is actually important.

Here’s where I’m thinking that the IRAC framework can scaffold the design of what I’m doing.

Information

Is all the relevant Information and only the relevant information available?

Two broad sources of information

  1. Blog posts.
    I’ll be running a duplicate version of the BIM module in a Moodle install running on my laptop. BIM will keep a mirror of all the posts students make to their blogs. The information in the database will include

    • Date, time, link and total for each post.
    • A copy of the HTML for the post.
    • The total number of posts made so far, the URL for the blog and its feed.
  2. Activity completion.
    I’ll have to set up a manual process for importing activity completion data into a database on my computer. For each activity I will have access to the date and time when the student completed the activity (if they have).

What type of analysis or manipulation can I perform on this information?

At the moment, not a lot. I don’t have a development environment that will allow me to run lots of complex algorithms over this data. This will have to evolve over time. What do I want to be able to do initially? An early, incomplete list of some questions (a couple of which are sketched as queries after the list):

  1. When was the last time the student posted to their blog?
  2. How many blog posts have they contributed? What were they titled? What is the link to those posts?
  3. Are the blog posts spread out over time?
  4. Who are the other students they’ve linked to?
  5. What activities have they completed? How long ago?
  6. Does it appear they’ve engaged in a bit of task corruption in completing the activities?
    e.g. is there a sequence of activities that were completed very quickly?
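
A rough sketch of how a couple of these questions (1 and 5) could be answered with straightforward SQL over the mirrored data. The table and column names (blog_posts, activity_completion) describe an assumed local mirror, not BIM’s or Moodle’s actual schema.

```perl
#!/usr/bin/perl
# Sketch of two of the questions above as queries against the local mirror.
# Table/column names and the student id are assumptions for illustration.
use strict;
use warnings;
use DBI;

my $dbh = DBI->connect( "DBI:mysql:database=analytics;host=localhost",
                        "user", "password", { RaiseError => 1 } );
my $student = 12345;    # hypothetical local student id

# 1. When was the last time the student posted to their blog, and how many posts?
my ( $last_post, $num_posts ) = $dbh->selectrow_array(
    "SELECT MAX(timeposted), COUNT(*) FROM blog_posts WHERE userid = ?",
    undef, $student );

# 5. What activities have they completed, and how long ago?
my $completed = $dbh->selectall_arrayref(
    "SELECT name, timecompleted FROM activity_completion
      WHERE userid = ? ORDER BY timecompleted DESC", { Slice => {} }, $student );

printf "Last blog post: %s (%d posts)\n",
    scalar localtime( $last_post // 0 ), $num_posts // 0;
printf "  %-40s %s\n", $_->{name}, scalar localtime( $_->{timecompleted} )
    for @$completed;
```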

Representation

Does the representation of the information aid the task being undertaken?

The task here is basically giving me some information about the student progress.

For now it’s going to be a simple extension to the approach talked about in the paper, i.e. whenever my browser sees a link to a user profile on a course website, it will add a [Details] link next to it. If I click on that link I see a popup showing information about that student. The following is a mockup (click on the images to see a larger version) of what is currently partially working.

001 - Personal Details

By default the student details are shown. There are two other tabs, one for activity completion and one for blog posts.

Requirement suggestion: Add into the title of each tab some initial information. e.g. Activity completion should include something like “(55%)” indicating the percentage of activities currently completed. Or perhaps it might be the percentage of the current week’s activities that have been completed (or perhaps the current module).

The activity completion tab is currently the most complicated and the ugliest. Moving the mouse over the Activity Completion tab brings up the following.

002 - Activity completion

The red, green and yellow colours are ugly, but are intended to indicate a simple traffic light representation. Green means all complete, red means not complete, and yellow means in progress on some scale.

The course is actually broken up into 3 modules. The image above shows each module being represented. Open up a module and you see the list of weeks for that module – also with the traffic light colours. Click on a particular week and you see the list of activities for that week. Also with colours, but also with the date when the student completed the activity.

Requirement suggestion: The title bars for the weeks and modules could show the first and last time the student completed an activity in that week/module.

Requirement suggestion: The date/time when an activity was completed could be a roll-over. Move the mouse over the date/time and it will change the date/time to how long ago that was.

Requirement suggestion: What about showing the percentage of students who have completed activities? Each activity could show the % of students who had completed it. Each week could show the percentage of students who had completed that week’s activities. Each module could….

Requirement suggestion: Find some better colours.

The blog post tab is the most under-developed. The mockup currently only shows some raw data that is used to generate the student’s mark.

003- blog posts

Update: The following screen shot shows progress on this tab; it is taken from the working tool.

BlogProcessAnalytics

Requirement suggestions:

  • Show a list of recent blog post titles that are also links to those posts.
    Knowing what the student has (or hasn’t) blogged recently may give some insight into their experience.
    Done: see above image.
  • Show the names of students where this student has linked to their blog posts.
  • Organise the statistics into Modules and show the interim mark they’d get.
    This would be of immediate interest to the students.

Affordances

Are there appropriate Affordances for action?

What functionality can this tool provide to me that will help?

Initially it may simply be the display of the information. I’ll be left to my own devices to do something with it.

Have to admit to being unable to think of anything useful, just yet.

Change

How will the information, representation and the affordances be Changed?

Some quick answers

  1. ATM, I’m the only one using this tool and it’s all running from my laptop. Hence no worry about impact on others if I make changes to what the tool does. Allows some rapid experimentation.
  2. Convincing the organisation to provide an API or some other form of access directly (and safely/appropriately) to the Moodle database would be the biggest/easiest way to change the information.
  3. Exploring additional algorithms that could reveal new insights and affordances is also a good source.
  4. Currently the design of the tool and its environment is quite kludgy. Some decent design could make this particularly flexible.
    e.g. simply having the server return JSON data rather than HTML, and having some capacity on the client side to format that data, could enable some experimentation and change (a rough sketch of the server side of this follows).
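
To illustrate that last point, a minimal sketch of a server-side endpoint that returns JSON rather than HTML, leaving the representation to the Greasemonkey client. The parameter name and the activity_completion table are assumptions for illustration, not the current getUserDetails.php.

```perl
#!/usr/bin/perl
# Sketch: return a student's activity completion summary as JSON and let
# the client decide how to display it. Names are assumptions.
use strict;
use warnings;
use CGI;
use DBI;
use JSON qw(encode_json);

my $q       = CGI->new;
my $student = $q->param('userid') // 0;

my $dbh  = DBI->connect( "DBI:mysql:database=analytics;host=localhost",
                         "user", "password", { RaiseError => 1 } );
my $rows = $dbh->selectall_arrayref(
    "SELECT name, timecompleted FROM activity_completion WHERE userid = ?",
    { Slice => {} }, $student );

print $q->header('application/json');
print encode_json( { userid => $student, activities => $rows } );
```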

References

Schneider, D. K., Class, B., Benetos, K., & Lange, M. (2012). Requirements for learning scenario and learning process analytics. In World Conference on Educational Multimedia, Hypermedia and Telecommunications (pp. 1632–1641).

Does GPA make any difference to #moodle course usage?

Summary

In short, there is definitely a pattern. In fact, there are two patterns evident:

  1. Students in higher GPA groups click on a range of different activities and resources at much higher rates than those in lower GPA groups.
  2. A greater percentage of students in higher GPA groups will click on resources.

There are a few exceptions to this. Another less explored pattern is a drop off in usage as the semester progresses.

This is in the context of a single offering of a course I teach with all students (n=~100) enrolled as online students.

The pattern seems to exist across different types of resources from across the semester. Though there does appear to be a drop off toward the end of semester.

This aligns with findings from prior work such as Dawson et al (2008) and Beer et al (2009).

My next step is to see what reaction presenting this pattern to the next crop of students will have.

Origins

In just over a week a new semester starts. Institutional requirements mean that course sites need to be available 2 weeks prior to the start of semester. Consequently there’s already been some activity on the new site for the course I teach. In response, I observed

To which @s_palm replied

Which got me wondering. Is there a link between accessing the course site and GPA? Do “good” students use the LMS more? What happens if students are aware of this pattern?

With the Moodle Activity Viewer installed, I have one way to explore the usage question for a past course site. To be clear

  1. This is just an initial look to see if there are any obvious patterns.
  2. As @s_palm has pointed out

To test this, I’m going to

  1. Create some new groups on my old Moodle course site based on student GPA.

    I could also do this based on the final grade in this course, which might be an interesting comparison.

    Glad I had access to the database; creating these groups through the Moodle interface would have been painful.

  2. I can then use MAV’s “select a group” feature to view how they’ve accessed the course site.

    MAV will show the number of clicks or number of students who have visited every link on a Moodle course site. I don’t expect the number of students to reveal too much – at least not on the home page – as completing activities/resources is part of the assessment. Comparing the number of links is not going to be straight forward given the different numbers in each group (and MAV not offering anyway to normalise this).
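
For what it’s worth, here is the rough shape of the database approach mentioned in point 1. It is a sketch only: it assumes direct write access to the Moodle database (standard mdl_ table prefix), a separate mapping of Moodle user ids to GPAs from the student records system, and made-up course ids and GPA bands. Using Moodle’s groups API would be a safer option in most contexts.

    # Sketch: create one Moodle group per GPA band and populate it.
    # Assumes a DB-API connection to the Moodle database (db) and a dict
    # of {moodle_user_id: gpa} obtained elsewhere (GPA isn't stored in Moodle).
    import time

    COURSE_ID = 1234  # hypothetical Moodle course id
    GPA_BANDS = [     # (group name, lower bound inclusive, upper bound exclusive)
        ("6 GPA", 6.0, 7.1),
        ("5 GPA", 5.0, 6.0),
        ("4 GPA", 4.0, 5.0),
        ("Less than 4 GPA", 0.1, 4.0),  # skips the handful with a GPA of 0
    ]

    def create_gpa_groups(db, gpas):
        cur = db.cursor()
        now = int(time.time())
        for name, low, high in GPA_BANDS:
            cur.execute(
                "INSERT INTO mdl_groups (courseid, name, description, "
                "timecreated, timemodified) VALUES (%s, %s, %s, %s, %s)",
                (COURSE_ID, name, "", now, now))
            group_id = cur.lastrowid
            for user_id, gpa in gpas.items():
                if low <= gpa < high:
                    cur.execute(
                        "INSERT INTO mdl_groups_members (groupid, userid, "
                        "timeadded) VALUES (%s, %s, %s)",
                        (group_id, user_id, now))
        db.commit()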

Explanation of the “analysis”

The quick and dirty comparison is between the following groups

  • 6 GPA (n=11) – all students with a GPA of 6 or above.
  • 5 GPA (n=49) – all students with a GPA of above 5, but less than 6.
  • 4 GPA (n=35) – GPA above 4, but less than 5.
  • Less than 4 GPA (n=28) – the remaining students, apart from a handful with a GPA of 0 (exemptions?)

The analysis will compare two usage “indicators” for a range of course resources/activities; a rough sketch of how these might be computed follows the list below.

The “indicators” being compared are

  • Clicks / Students – the total number of clicks on the resource/activity by all students in a group divided by the number of students in the group.
  • Percentage – the percentage of students in that group who clicked on the activity/resource.
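
As flagged above, here is a rough illustration (with data structures invented for the example) of how the two indicators could be computed from a log of click events:

    # Sketch: compute the two usage indicators for one resource and one
    # group of students, given a list of (student_id, resource_name) clicks.
    def indicators(clicks, group, resource):
        group = set(group)
        total_clicks = 0
        clickers = set()
        for student_id, clicked in clicks:
            if clicked == resource and student_id in group:
                total_clicks += 1
                clickers.add(student_id)
        clicks_per_student = total_clicks / float(len(group))
        percent_students = 100.0 * len(clickers) / len(group)
        return clicks_per_student, percent_students

    # e.g. indicators(click_log, six_gpa_group, "Assessment")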

Assessment and Study Schedule

The first resources compared are

  • Assessment – a Moodle book that contains all details of the assessment for the course.
  • Study Schedule – a page that gives an overall picture of the schedule of the course with links to each week’s block.
Study Schedule

Group          Clicks / Student   % Students
6 GPA          4.2                100.0
5 GPA          2.9                75.5
4 GPA          2.8                74.3
Less than 4    1.3                53.6

Assessment

Group          Clicks / Student   % Students
6 GPA          22.7               100.0
5 GPA          12.2               75.5
4 GPA          11.0               74.3
Less than 4    8.2                64.3

The pattern is established early. The higher GPA groups access these resources more.

Unsurprisingly, the assessment information is used more than the study schedule.

Forums

The next comparison involves two forums. Each assignment has its own forum, there is a general discussion forum and there are a range of forums used for specific learning activities during the semester. The two forums being compared here are

  • Q&A forum – a forum for general questions and discussion.
  • Assignment 3 and Professional Experience Forum – assignment 3 is wrapped around the students’ 3-week practical teaching period.
Q&A Forum

Group          Clicks / Student   % Students
6 GPA          19.3               90.9
5 GPA          7.9                65.3
4 GPA          7.3                54.3
Less than 4    1.6                35.7

A3 and PE forum

Group          Clicks / Student   % Students
6 GPA          16.0               100.0
5 GPA          8.4                73.5
4 GPA          5.5                68.6
Less than 4    1.2                35.7

The pattern continues. Particularly troubling is the significant reduction in use of the forums by the “Less than 4 GPA” group. Only about a third of them use the forums as opposed to over half accessing the study schedule and even more accessing the assessment.

I wonder how much of this percentage difference is due to students who have dropped out early?

Week 1 activities

In week 1 of the semester the students have to undertake a range of activities including the three compared here

  • Register their blog – they are required to create and use a personal blog throughout the semester. This activity has them register and be able to view the registered blogs of other students.
  • Share introductions – post an introduction of themselves and look at others. An activity that has been recently revisited for the coming semester.
  • PKM and reflection – a Moodle book introducing Personal Knowledge Management and reflection through a range of external resources. These two processes are positioned as key to the students’ learning in the course.
Register your blog

Group          Clicks / Student   % Students
6 GPA          12.9               100.0
5 GPA          9.2                75.5
4 GPA          10.8               77.1
Less than 4    6.6                60.7

Share introductions forum

Group          Clicks / Student   % Students
6 GPA          6.6                100.0
5 GPA          4.6                75.5
4 GPA          5.6                77.1
Less than 4    2.2                57.1

PKM and reflection

Group          Clicks / Student   % Students
6 GPA          3.8                100.0
5 GPA          2.3                75.5
4 GPA          2.1                74.3
Less than 4    1.4                53.6

Generally the pattern continues. The “4 GPA” group bucks this trend with the “Register your blog” activity. This generates at least two questions

  • Are the increased clicks / students due to troubles understanding the requirements?
  • Or is it due to wanting to explore the blogs of others?

Given that the percentage of students in the “4 GPA” group also bucks the trend, it might be the former.

Late semester resources

Finally, three resources from much later in the semester to explore how folk are keeping up. The three resources are

  • Overview and expectations – a Moodle book outlining what is expected of the students when they head out on their Professional Experience. There are still four weeks of theory left in the course, followed by three weeks of Professional Experience.
  • Your two interesting points – a Moodle forum in the last week of new content, the last week before the students go on Professional Experience. The students are asked to share in this forum the two points that resonated most with them from the previous reading, which was made up of reflections from prior students about Professional Experience.
  • Pragmatic advice on assignment 3 – another Moodle book with fairly specific advice about how to prepare and complete the 3rd assignment (should generate some interest, you’d think).
Overview and expectations

Group          Clicks / Student   % Students
6 GPA          1.7                90.9
5 GPA          2.4                75.5
4 GPA          1.6                65.7
Less than 4    0.8                50.0

Your two interesting points

Group          Clicks / Student   % Students
6 GPA          1.2                63.6
5 GPA          1.0                55.1
4 GPA          0.6                34.3
Less than 4    0.2                14.3

Pragmatic A3 advice

Group          Clicks / Student   % Students
6 GPA          1.5                90.9
5 GPA          1.4                73.5
4 GPA          1.0                60.0
Less than 4    0.6                42.9

The established pattern linking GPA with usage largely remains. However, the “5 GPA” students buck that pattern with the “Overview and Expectations” book. The “gap” between the top group and the others is also much lower with the other two resources (0.2 and 0.1 click / student) compared to some much larger margins with earlier resources.

There is also a drop off across groups toward the end of semester, as shown in the following table comparing the main Assessment link with the pragmatic A3 advice.

               Assessment                      Pragmatic A3 advice
Group          Clicks / Student   % Students   Clicks / Student   % Students
6 GPA          22.7               100.0        1.5                90.9
5 GPA          12.2               75.5         1.4                73.5
4 GPA          11.0               74.3         1.0                60.0
Less than 4    8.2                64.3         0.6                42.9

Some “warnings”: the 10% drop for the “6 GPA” group represents a single student. There’s also a chance that by the end of semester the students have worked out they can print out a Moodle book (it can be used to produce a PDF), so they visit it once, save the PDF and refer to that. This might explain the drop off in clicks / students.

References

Beer, C., Jones, D., & Clark, K. (2009). The indicators project identifying effective learning, adoption, activity, grades and external factors. Same places, different spaces. Proceedings ascilite Auckland 2009. Auckland, New Zealand. Retrieved from http://www.ascilite.org.au/conferences/auckland09/procs/beer.pdf

Dawson, S., McWilliam, E., & Tan, J. P. L. (2008). Teaching smarter: How mining ICT data can inform and improve learning and teaching practice. Melbourne. Retrieved from http://www.ascilite.org.au/conferences/melbourne08/procs/dawson.pdf

Challenges in employing complex e-learning strategies in campus-based universities

The following is a summary of McNaught et al (2009). This is one of three papers from the same institution, all around the LMS, that I’ve looked at recently.

The abstract for the paper is

Despite the existence of a significant number of established interactive e-learning tools and strategies, the overall adoption of e-learning is not high in many universities. It is thus important for us to identify and understand the challenges that face more complex e-learning projects. Using a qualitative method that gathered together the reflections of experienced practitioners in the field, this paper outlines many types of challenges that arise in the planning and development, implementation and evaluation stages of e-learning projects. Some of these challenges are related to human factors and some are associated with external factors such as technological infrastructure, university policy and support and the teaching and learning culture as a whole. A number of models are presented to assist our understanding of this situation – one on understanding the nature of innovation, a grounded model of the challenge factors we have encountered in our own experience and one to show possible future directions.

The paradox of e-learning

Lots of e-learning conferences are full of presentations about digital resources and tools, but the reality of institutional adoption of e-learning is very different. “..this paper was born out of a desire to ‘come clean’ and see if we can advance e-learning from its often mundane position as the repository for lecture notes and PowerPoints” (McNaught et al, 2009, p. 268).

The context of campus-based universities

The cases arise from campus-based universities in Hong Kong, though “we believe our ‘story’ is more generally applicable” (p. 268). The authors do suggest that the “dynamic of distance universities are quite different” given that distance may provide more of an incentive for better e-learning strategies.

Note: I really don’t think that the distance dynamic plays much of a role at an overall level. There is perhaps more thought, but I wonder how much of that translates into action?

Even writing in 2009, the authors suggest that most of the success stories arise from pioneering teachers. The early adopters. References a 1998 paper by Anderson et al as support. Gives some statistics from CUHK from 2004 to show limited use. This draws on some of the previous papers.

More interactive uses of technology are often “development-intensive and/or administrative-intensive. They require teachers to spend a great deal of time in planning and creating the online resources, and then usually sustained significant effort in monitoring and regulating the online strategies while they are in progress”. Cites Weaver’s (2006) challenge to “encourage quality teaching practices…that seamlessly integrates the technical skills with the pedagogical and curricular practices… and does not promote transfer of existing poor teaching practices to the online environment”

Examples of unsuccessful complex e-learning

Appears to draw on literature to give examples of complex e-learning projects that failed in various stages of the development process

  1. During development – “getting it finished” – Cheng et al (2006).
  2. “Getting it used”

A model to show why innovation is challenging

Going to present two ways of “representing the challenges that face innovation and change – in this case we are considering a complex interactive use of e-learning in campus-based universities”.

The first is the J Curve. i.e. things will get worse before they get better “because of the expenses and challenges that occur early on in the innovation cycle”.

Note: But like much of the innovation literature this simplification doesn’t capture the full complexity of life, innovation and e-learning. If innovation is in something that is rapidly changing (e.g. university e-learning) then is there ever an upward swing? Or does the need for a new innovation – and another downward spiral – occur before you get the chance to climb out of the trough? For example, does the regular LMS migration phase in most universities (or the next organisational restructure) prevent any ability to climb up the J curve?

The second is the S-curve, i.e. diffusion occurs through innovation, growth and maturity, with the transition from the “innovation” to the “growth” phase being the most important. And it’s hard:

Leading innovation through the bottom of the J-curve or through the transition from ‘innovation’ to ‘growth’ in the S-curve is not easy as this process often requires people to rethink their beliefs and reformulate their ways of working; this is difficult. (p. 271)

Now brings in Lewin’s ideas about conceptual change process as a way of thinking about the challenge of changing beliefs and practices (a model the authors have used previously). This process has three stages

  1. “a process for diagnosing existing conceptual frameworks and revealing them to those involved;
  2. a period of disequilibrium and conceptual conflict which makes the subject dissatisfied with existing conceptions; and
  3. a reforming or reconstruction phase in which a new conceptual framework is formed” [Kember et al. (2006), p. 83]

Note: A few years ago I expressed some reservations about the applicability of Lewin’s model. I think they still apply.

To some extent this quote from the authors gets at some of my reservations about this perspective on encouraging change with e-learning (emphasis added): “The process of demonstrating to teachers that there might be a better way forward with their use of e-learning requires evidence and this is why evaluation is so critical” (p. 271).

The assumption here is that there is a better way for the teacher to teach. We – the smart people who know better – just need to show them. Given the paucity of quality technology available within universities; the diversity of teachers, students and pedagogies; and the challenge from Weaver above, I wonder if there is always a better way to demonstrate that is – to employ some Vygotsky – within the Zone of Proximal Development of the particulars of the learning/teaching context.

The authors’ model for understanding the challenges facing e-learning, innovation and change is

  1. An understanding that change is not easy and always meets resistance (J-curve).
  2. An appreciation that there will be no significant gains unless significant numbers of teachers begin to adopt the innovation – in this case, complex e-learning (S-curve).
  3. A suggestion that the process of implementation should model the three stages of the conceptual change process. Evaluation is integral to this process.

Note: Are people really that change averse? Sure, we are/can be creatures of habit. However, when it comes to e-learning and that sort of “innovation”, the change is often done to students and staff, rather than with them. i.e. the particular tool (e.g. a new LMS) or pedagogical approach (e.g. MOOC, flipped classroom etc) is chosen centrally and pushed out. Systems that are developed with people to solve problems or provide functions that were identified as needed are different (I think).

Note: I find #1 interesting in that it takes the J-Curve to suggest that there will always be resistance. From their introduction to the J-Curve the point seems to be that innovation brings challenges and expense that mean ROI will drop initially. This doesn’t seem to be about resistance.

Note: #2 is also interesting. The requirement that there be significant levels of adoption prior to significant gains arising from an innovation is a problem if you accept good quality L&T being about contextually appropriate approaches. The sheer diversity of learners, teachers etc – not to mention the rapid on-going change – suggests that this model of “significant gains == significant levels of adoption” may not fit with the context. Or at least cause some problems.

The study

A qualitative method was used to collect the reflections of practitioners in the field “regarding the challenges in the various stages of development and use of complex e-learning strategies”. Of the 5 authors, 3 were from central L&T and 2 were pioneering teachers.

Note: Would appear that the sample is biased toward the “innovators”, involving other folk may have revealed very different insights.

Three sources of data

  1. Detailed interviews with teachers and programmers and analysis of email communication logs for projects that were never implemented.
  2. Publications about the work of one of the authors.
  3. Similar from another author.

Findings

Iterations of reflection and discussion led to a table of challenges.

The challenges are organised by stage and by source (teachers, students, supporting staff, technology/environment/culture). In summary:

  • Planning and development – limited time and resources; the necessity of new skills; miscommunication; different perceptions of tasks between teachers and the support team; limitations in resources and expertise; restrictions in university resources and support; inflexible technology; the idiosyncratic nature of development.
  • Implementation – being new to the strategies; unwillingness to learn differently; sustainability; cost-effectiveness.
  • Dissemination – unwillingness to share; lack of motivation to learn new technologies; strategies that do not match teaching styles; being contrary to existing T&L practice.
  • Evaluation – lack of cases; lack of appreciation; questions about effectiveness.

These are elaborated more with examples.

Discussion

Taking the four sources from the above table, the authors propose the idea of a “mutual comfort zone”. An e-learning project needs to have all of the factors in this MCZ for it to be successful. The paper illustrates this with the obligatory overlapping circle diagram.

Cases of successful complex e-learning strategies, thus, seem to be limited to the instances when all the factors noted in Figure 4 work in unison. It is therefore easy to see why successful cases of complex e-learning are not all that common and are restricted to highly motivated pioneering teachers who are comfortable with innovative technologies and may also be in an innovation-friendly environment. (p. 281)

Note: resonating with the mention of ZPD above.

Becoming more optimistic, the authors suggest the future is promising because

  • The tools are getting more “e-learning friendly”.
  • LMSs are “now more user-friendly and more flexible” – makes mention of open source LMSs like Moodle.

    Note: But doesn’t more flexibility bring complexity?

  • Teachers now have better IT skills and want to use technology.
  • Supporting services are improving based on accumulated experience.

    Note: I wonder about this. Organisational restructures and the movement of people aren’t necessarily helping with this. I can point to a number of situations where it has gone the other way.

  • Institutions are adopting e-learning, so the policy problem is solved.

    Note: Assumes that the policy is done well and actually can and does have an impact on practice. I’m not sure those conditions are met all the time.

Given all this “E-learning might then reach a critical mass and so that e-learning will progress beyond the valley bottom of the J-curve and will start climbing the growth phase in the S-curve”.

I wonder if this is evident? This links very nicely with some of the ideas in my last post.

References

McNaught, C., Lam, P., Cheng, K., Kennedy, D. M., & Mohan, J. B. (2009). Challenges in employing complex e-learning strategies in campus-based universities. International Journal of Technology Enhanced Learning, 1(4), 266–285.

The adoption and acceptance of learning analytics

Much earlier this year I was invited to participate with some folk much cleverer than I around the question of the adoption of learning analytics and a project to explore this using the Technology Acceptance Model (TAM). Going by the date embedded in the URL of this post, that was way back in August. It’s December and I’m now trying to get back to this post to capture some of my thinking.

If I had to summarise my thinking now, prior to completing the post below, it would consist of

  1. Based on the experience with business intelligence systems in the broader business world and the LMS/e-learning within universities, adoption of learning analytics is likely to be problematic in terms of both quantity and quality.
  2. The centrality in TAM of an individual’s perceptions of the usefulness and ease of use of an IT innovation on adoption panders to my beliefs and prejudices.
  3. I have some qualms (from both the literature but also my limitations) about the value of research based on TAM and surveys of intention to use.

And now some random thoughts.

Deja vu all over again

Based on my current observations, my fear is that learning analytics as implemented by universities is going to suffer similar problems to most prior applications of ICTs into university learning. For example, Geoghegan’s (1994) identification of the chasm as it applied to instructional technology, the findings 10+ years later that usage of the LMS by academics was limited in terms of both quantity and quality, and more recent reports that understanding the information provided by learning analytics is really hard.

The Technology Acceptance Model

For better or worse, the current research is looking to leverage the Technology Acceptance Model (TAM) to explore the likely acceptance of learning analytics. TAM is one of the “big theories” associated with the Information Systems discipline and has been widely used. TAM provides an instrument through which predictions can be made about whether or not some new technological tool is going to be adopted within a particular group or organisation. The idea is that based on the beliefs about the tool held by the individuals within that group, you can make predictions about whether or not the tool will be used. The particular beliefs that tend to be at the core are perceived usefulness (often the most influential) and perceived ease of use.
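
To make the shape of that idea concrete – and only as a toy, not the actual TAM instrumentation – the core relationship looks something like this:

    # Toy sketch of TAM's core claim: intention to use is driven mainly by
    # perceived usefulness and perceived ease of use. Real TAM studies use
    # validated Likert-scale survey items and estimate the weights from the
    # data (usefulness is usually the stronger predictor); the weights here
    # are invented purely for illustration.
    def predicted_intention(perceived_usefulness, perceived_ease_of_use,
                            w_pu=0.6, w_peou=0.3):
        return w_pu * perceived_usefulness + w_peou * perceived_ease_of_use

    # e.g. an educator who rates a learning analytics tool 6/7 for usefulness
    # but only 3/7 for ease of use:
    print(predicted_intention(6, 3))  # -> 4.5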

TAM is not without its criticisms, including Bagozzi (2007). It has evolved somewhat and is currently at TAM3 (Venkatesh & Bala, 2008). One of the criticisms of TAM has been that it doesn’t provide practitioners with “actionable guidance”, i.e. how do you increase the likelihood of adoption?

TAM work is traditionally survey based. Venkatesh and Bala (2008) identify three broad areas of TAM research

  1. Replication and testing of the constructs.
  2. Develop theoretical underpinnings for TAM constructs.
  3. The addition of new constructs as determinants of TAM constructs.

    Leading to four different types of determinant: individual differences, system characteristics, social influence, facilitating conditions.

The determinants above arose in the development of TAM2. In developing TAM3, Venkatesh and Bala (2008) suggested the following additions:

  • Perceived usefulness

    • Subjective norm
    • Image
    • Job relevance
    • Output quality
    • Result demonstrability
  • Perceived ease of use
    • Computer self-efficacy
    • Perceptions of external control
    • Computer anxiety
    • Computer playfulness
    • Perceived enjoyment
    • Objective usability

With experience and voluntariness as potential moderator factors. Perhaps the above illustrates Bagozzi’s (2007) suggestion that

On the other hand, recent extensions of TAM (e.g., the UTAUT) have been a patchwork of many largely unintegrated and uncoordinated abridgements

Bagozzi (2007) points out that there can be a “potentially infinite list of such moderators”, which has the result of making the broadenings of TAM “both unwieldy and conceptually impoverished”. The advice being that the introduction of these moderating variables should be theory based.

LAAM

As it happens, Ali et al (2012) have taken TAM and done some work around learning analytics described as

While many approaches and tools for learning analytics have been proposed, there is limited empirical insights of the factors influencing potential adoption of this new technology. To address this research gap, we propose and empirically validate a Learning Analytics Acceptance Model (LAAM), which we report in this paper, to start building research understanding how the analytics provided in a learning analytics tool affect educators’ adoption beliefs. (p. 131)

Factors examined

  1. Pedagogical knowledge and information design skills
  2. Perceived utility of a learning analytics tool
  3. Educators’ perceived ease-of-use of a learning analytics tool

Identifying what influences usefulness and ease of use

Back in 2006 a group of us used TAM to explore perceptions of an online assignment submission system (e.g. Behrens et al, 2005). However, rather than trying to predict levels of usage of a new system, this work explored perceptions of a system that was already being used. The intent was to explore what was making this particular system successful. TAM1 was used in a survey, which also included free-text responses for respondents to talk about what influenced their perceptions.

Having re-read this again, there’s probably some value in exploring this research again. Especially given that the institution has moved onto using another system.

Some thoughts on TAM and learning analytics

I see the need for identifying and exploring the factors that will make learning analytics tools likely to be used. I’m not sure TAM or its variants are the right approach. Some reasons follow.

Are there large groups of people actually using learning analytics?

How do you measure individual perceptions of something that many people haven’t used yet?

Ali et al (2012) got a group of educators together and had them experiment with a particular tool.

This approach raises a problem

Is there any commonality between learning analytics tools?

If the aim is to test this at different institutions, is each institution using the same set of learning analytics tools? I think not; currently most are doing their own thing.

Running TAM surveys on different tools would generate other problems.

Identifying the factors beforehand

The survey approach is based on the assumption that you can identify the model beforehand. i.e. you figure out what factors will influence adoption, incorporate them into a model (in this case integrating with TAM) and then test it. Ali et al (2012) included pedagogical knowledge and information design skills of educators.

You might be able to argue that, given the relative novelty (which is itself arguable) of learning analytics, you would want to explore these factors a bit more.

I think this comes back to my humble nature/stupidity and not thinking I can know everything up-front. Hence my preference for emergent/agile development.

Doesn’t offer tool developers/organisations guidance for intervention

There was a quote from the literature identifying this as a weakness of TAM. But as a wannabe developer of learning analytics enhanced tools, TAM appears to be of fairly limited use for another reason. As mentioned above TAM is focused on the internal beliefs, attitudes and intentions. Do you think this tool is easy to use? Do you think it’s useful? Or picking up on Ali et al (2012): what is your level of pedagogical knowledge or information design?

This doesn’t seem to provide me with any insight into how to make the learning analytics useful or easy to use – or at least not insight that I couldn’t gain from a bit of user-centred design. As a tool developer, how do I change the users’ perceptions of computer self-efficacy or anxiety? An organisation might think it can do this via training etc, but I have my doubts.

Teacher conceptions of teaching and learning

If a factor were to be added for using TAM and learning analytics, I do think that the conceptions of teaching and learning work would be a strong candidate. In fact, the introduction to (Steel, 2009) cites some research to indicate that “teacher beliefs about the value of technology use are a significant factor in predicting usage”.

Where to now?

Not sure and time to go home. More thinking and reading to do.

References

Ali, L., Asadi, M., Gašević, D., Jovanović, J., & Hatala, M. (2012). Factors influencing beliefs for adoption of a learning analytics tool: An empirical study. Computers & Education, 62, 130–148.

Bagozzi, R. (2007). The legacy of the Technology Acceptance Model and a proposal for a paradigm shift. Journal of the Association for Information Systems, 8(4), 244–254.

Behrens, S., Jamieson, K., Jones, D., & Cranston, M. (2005). Predicting system success using the Technology Acceptance Model: A case study. In 16th Australasian Conference on Information Systems. Sydney.

Geoghegan, W. (1994). Whatever happened to instructional technology? In S. Bapna, A. Emdad, & J. Zaveri (Eds.), (pp. 438–447). Baltimore, MD: IBM.

Venkatesh, V., & Bala, H. (2008). Technology acceptance model 3 and a research agenda on interventions. Decision sciences, 39(2), 273–315. doi:10.1111/j.1540-5915.2008.00192.x
