Assembling the heterogeneous elements for (digital) learning


What’s changed in academic staff development?

The following is my initial response to this exercise from the week 3 learning path. It’s an exercise intended to get folk thinking about what practices, if any, have emerged in their disciplinary teaching context from when they were undergraduates until now. It asks them to consider some of the emerging practices mentioned in the Horizon and Next Generation Pedagogy reports. It also asks them to consider whether any of these practices are visible in “good practice” within the discipline.

As per the exercise instructions, the following is not a formal academic document. It’s a bit of writing to think. The exercise is intended to encourage folk to start framing thoughts that will become the basis for an assessment task.

The following also tends to be specific to my context.

My Discipline?

I’m currently onto my third or fourth discipline. My journey in higher education has gone through computer science/information technology; information systems; teacher education; and what I’ll call academic staff development (i.e. helping other academic staff teach).

I’ll stick with my current “discipline” – academic staff development.

What was it like?

When I first started teaching in higher education (in the Information Technology discipline) back in the early 1990s I was teaching in a dual-mode university, i.e. my students studied via two modes (on-campus and via distance education). In those days, distance education meant the production of slabs of print-based material that was posted out to students before the semester started, a process that in the early 1990s relied on a production-line approach to generating the print material.

My recollections of academic staff development in those days mostly involved the distance education folk running sessions or distributing print-based material designed to help academics develop the knowledge and skills to produce good print-based material. I don’t remember too many workshops or presentations, but I remember huge folders of print material.

There was the occasional presentation on a teaching-related topic and there were even some early forays into what might be characterised as communities of practice. For example, I was involved with a computer-mediated communications working group in the early 1990s (pre-Internet Service Provider days) that eventually developed some print material to help staff and students use CMC in learning and teaching.

There were also grants to fund innovative developments associated with L&T (I got one of those) and there were also teaching awards (I got one of those).

What’s changed?

To be brutally honest. Not much. Perhaps the major change is that there are no longer any big sets of folders of print material. All that is now online. The nature of the online material and how you access it has changed somewhat. There’s been a recent move to more contextual material. But it’s still fairly kludgy and much of it is still in a print format (i.e. PDF documents).

There is still a reliance on presentations and workshops. Though these are increasingly available via Zoom and a couple of weeks ago a remote participant did engage with the institutional L&T orientation using a Kubi telepresence robot.

There are still L&T grants (some announced last week) and awards (announcing real soon).

However, there has been a shift in focus away from “academic staff development” (seen as something done to teaching staff) towards the idea of professional learning and professional learning opportunities, moving the focus toward designing contexts/environments/opportunities for teaching staff to engage in professional learning.

What about ideas from the Next Generation Pedagogy report?

The Next Generation Pedagogy report offers five signposts on the roadmap to innovative pedagogy:

  • Intelligent pedagogy – using technology to enhance learning, including beyond institutional confines.
    Technology use in academic staff development (in my context, but in a lot of others as well) is still somewhat limited. There’s no use of learning analytics to understand the teaching experience. Technology is largely used to supplement existing face-to-face approaches, rather than do something radically different, though aspects of this might be coming. The idea of untethered faculty development is indicative of early moves in this space. On the other hand, the academic staff who are our learners now have access to the abundance of resources that are on the Internet. There are staff drawing heavily on these, but there appear to be many who are not.
  • Distributed pedagogy – ownership of learning is shared amongst different stakeholders allowing students to source learning from competing providers.
    There are aspects of this happening in how learning and teaching operates, e.g. Turnitin is external and offers some staff development. This is happening more to support University students in their learning than to support University teaching staff.
  • Engaging pedagogy – encouraging active participation from learners.
    There are early signs of this – e.g. the shift away from academic staff development in the broader field.  Locally, the approach used in our L&T orientation has moved away from experts leading sessions to participative, co-construction/solving of problems. But more could be done.
  • Agile pedagogy – flexibility/customisation of the student experience.
    There are attempts to do this, but they are not directly supported by systems and processes.
  • Situated pedagogy – contextualisation to maximise real-world relevance.
    There are signs of this (e.g. how workshops are run) and approaches like Teaching@Sydney allow for more contextualisation. As do some moves to contextualising access to resources. But it’s still fairly limited. Currently much of it relies on someone doing the customising/situating/personalising for the learner.

And the Horizon report

The 2017 Horizon report is the other source examined. It offers the following key trends:

  • Advancing cultures of innovation
    Not so much. Innovation is suggested to be a good thing, but a “culture that promotes experimentation” it is not yet.
  • Deeper learning approaches – project-based, inquiry learning
    There are glimmers of this, but there’s also a strong pragmatic need amongst teaching staff: “I need to know how to do X now.”
  • Growing focus on measuring learning
    In terms of external quality indicators (such as QILT) and quantitative measures such as pass/fail rates and results on student evaluation of teaching, this is increasing. Perhaps increasing beyond where it should be. However, there remains little use of learning analytics and other more interesting approaches for measuring the learning and learning needs of teaching staff.
  • Redesigning learning spaces
    Moves around this for students, but not so much for teaching staff.
  • Blended learning designs
    Much of staff development appears to stick with the face-to-face methods. Even when it moves online it is to video-conferencing in an attempt to continue with face-to-face, rather than explore the blend of affordances that both online and face-to-face might offer.
  • Collaborative learning
    One of the Horizon Report “predictions” that Audrey Watters labels as not even wrong. Communities of Practice and Learning Communities have been a feature of academic staff development, more broadly and locally (even back in the early 1990s). However, I’m not sure how truly collaborative those approaches have been.

What’s relevant now?

Many of the above offer interesting possibilities, some are inevitable, and some have always been a feature.

Institutional academic staff development has yet to scratch the surface in terms of how digital technology could be used. It does appear to be increasingly “strategic” in its intent. This may make it more difficult to be agile, situated and engaging.  Three signposts that could be very relevant.

Situating staff development within the context of each member of teaching staff strikes me as very relevant, as does expanding upon the idea of professional learning opportunities and encouraging active participation from teaching staff, along with providing examples and scaffolds for how to do this.

 

My current context and some initial issues

Semester is about to start and I’m back teaching. This semester I’m part of a team of folk designing and teaching a brand new, never been taught course – EDU8702 – Scholarship in Higher Education: Reflection and Evaluation. The course is part of the Graduate Certificate in Tertiary Teaching.

In the course, we are asking the participants to focus on a specific context in which they are (or will be) teaching. That context will form part of a teacher-led inquiry into learning and teaching that will underpin the whole course. Early on in the course we are asking them to briefly summarise the context they’ll focus on and generate an initial set of issues of interest that might form the basis for their inquiry. The aim is to get them thinking and sharing and to provide a foundation for refinement over the semester.

The plan is that we’ll model what we ask, hence this blog post is my example.

Context

My current context is within a central learning and teaching unit at a University. My role is charged with helping teaching staff at the institution work toward and be recognised for “educational excellence and innovation”, i.e. we’re part of a team helping teaching staff become better teachers and thus improve the quality of student learning. To that end we, amongst other things:

  • Teach into the institution’s Graduate Certificate in Tertiary Teaching.
  • Develop a range of professional learning opportunities (PLO), including L&T orientation, workshops, small group sessions, online resources etc.
  • Develop and support programs of L&T scholarships and awards.

Issues

As a group that’s still forming a bit, there are a range of practical issues.

However, there are also a collection of issues that arise from the “discipline” of professional learning for teaching staff, some of these include:

  • Preaching to the choir.

    A perception that the people who engage with the professional learning opportunities we provide are perhaps not those who might benefit most.

  • Difficulty of demonstrating impact.

    It can be very hard to prove that what is done improves the quality of learning and teaching.

  • Perceived relevance of what we offer

    Often the focus can be on developing well-designed workshops and resources, rather than trying to understand authentic, contextual needs.

  • A tendency to focus on designing a learning intervention when performance support might suit better.
  • How best to modify what we do to respond to an era of information abundance.

    A lot of traditional professional development arose from a time of scarce information. Developing a workshop/resource on topic X specifically for institution Y made sense, because there was no other way to get access. Chances are today you could find a long list of workshops/resources on topic X. Should you still develop yet another resource on topic X?

There are also some issues around the course we’re teaching:

  • Limited insight into who the participants are, their backgrounds and reasons for enrolling.
  • The current small number of participants.
  • How to design an effective course within this context and within current constraints.

Learning analytics, quality indicators and meso-level practitioners

When it comes to research I’ve been a bit of a failure, especially when measured against some of the more recent strategic and managerial expectations. Where are those quartile 1 journal articles? Isn’t your h-index showing a downward trajectory?

The concern generated by these quantitative indicators not only motivated the following ideas for a broad research topic, but is also one of the issues to explore within the topic. The following outlines early attempts to identify a broader research topic that is relevant to current sector and institutional concerns; provides sufficient space for interesting research and contribution; aligns nicely (from one perspective) with my day job; and will likely provide a good platform for a program of collaborative research.

The following:

  1. explains the broad idea for research topic within the literature; and,
  2. describes the work we’ve done so far including two related examples of the initial analytics/indicators we’ve explored.

The aim here is to be generative. We want to do something that generates mutually beneficial collaborations with others. If you’re interested, let us know.

Research topic

As currently defined the research topic is focused around the design and critical evaluation of the use and value of a learning analytics platform to support meso-level practitioners in higher education to engage with quality indicators of learning and teaching.

Amongst the various aims is an intent to:

  • Figure out how to design and implement an analytics platform that is useful for meso-level practitioners.
  • Develop design principles for that platform informed by the analytics research, but also ideas from reproducible research and other sources.
  • Use and encourage the use by others of the platform to:
    1. explore what value (if any) can be extracted from a range of different quality indicators;
    2. design interventions that can help improve L&T; and,
    3. enable a broader range of research, especially critical research, around the use of quality indicators and learning analytics for learning and teaching.

Quality Indicators

The managerial turn in higher education has increased the need for and use of various indicators of quality, especially numeric indicators (e.g. the number of Q1 journal articles published, or not). Kinash et al (2015) state that quantifiable performance indicators are important to universities because they provide “explicit descriptions of evidence against which quality is measured” (p. 410). Chalmers (2008) offers the following synthesized definition of performance indicators:

measures which give information and statistics context; permitting comparisons between fields, over time and with commonly accepted standards. They provide information about the degree to which teaching and learning quality objectives are being met within the higher education sector and institutions. (p. 10)

However, the generation and use of these indicators is not without issues.

There is also a problem with a tendency to rely on quantitative indicators. Quantitative indicators provide insight into “how much or how many, but say little about quality” (Chalmers & Gardiner, 2015, p. 84). Ferguson and Clow (2017) – writing in the context of learning analytics – argue that good-quality qualitative research needs to support good-quality quantitative research because “we cannot understand the data unless we understand the context”. Similarly, Kustra et al (2014) suggest that examining the quality of teaching requires significant qualitative indicators to “provide deeper interpretation and understanding of the measured variable”. Qualitative indicators are used by Universities to measure performance in terms of processes and outcomes, however, “because they are more difficult to measure and often produce tentative results, are used less frequently” (Chalmers & Gardiner, 2015, p. 84).

Taking a broader perspective, there are problems such as Goodhart’s law and performativity. As restated by Strathern (1997), Goodhart’s Law is ‘When a measure becomes a target, it ceases to be a good measure’ (p. 308). Elton (2004) describes Goodhart’s Law as “a special case of Heisenberg’s Uncertainty Principle in Sociology, which states that any observation of a social system affects the system both before and after the observation, and with unintended and often deleterious consequences” (p. 121). When used for control and comparison purposes (e.g. league tables) indicators “distort what is measured, influence practice towards what is being measured and cause unmeasured parts to get neglected” (Elton, 2004, p. 121).

And then there’s the perception that quality indicators, and potentially this whole research project, become an unquestioning part of performativity and all of the issues that generates. Ball (2003) outlines the issues and influence of the performative turn in institutions. He describes performativity as

a technology, a culture and a mode of regulation that employs judgements, comparisons and displays as means of incentive, control, attrition and change – based on rewards and sanctions (both material and symbolic). The performances (of individual subjects or organizations) serve as measures of productivity or output, or displays of ‘quality’, or ‘moments’ of promotion or inspection. (Ball, 2003, p. 216)

All of the above (and I expect much more) point to there being interesting and challenging questions to explore and answer around quality indicators and beyond. I do hope that any research we do around this topic engages with the necessary critical approach. As I re-read this post now I can’t help but see echoes of a previous discussion Leigh and I have had around inside out, outside in, or both. This approach is currently framed as an inside out approach, an approach where those inside the “system” are aware of the constraints and work to address those. The question remains whether this is possible.

Learning analytics

Siemens and Long (2011) define LA as “the measurement, collection, analysis and reporting of data about learners and their contexts, for purposes of understanding and optimizing learning and the environments in which it occurs” (p. 34). The dominant application of learning analytics has focused on “predicting student learning success and providing proactive feedback” (Gasevic, Dawson and Siemens, 2015), often driven by an interest in increasing student retention and success. Colvin et al (2016) found two distinct trajectories of learning analytics activity within Australian higher education. The first was ultimately motivated by measurement and retention and implemented specific retention-related learning analytics programs. The second saw retention as a consequence of the broader learning and teaching experience and “viewed learning analytics as a process to bring understanding to learning and teaching practices” (Colvin et al, 2016, p. 2).

Personally, I’m a fan of the second trajectory and see supporting that trajectory as a major aim for this project.

Not all that surprisingly, learning analytics has been applied to the question of quality indicators. Dawson and McWilliam (2008) explored the use of “academic analytics” to

address the need for higher education institutions (HEIs) to develop and adopt scalable and automated measures of learning and teaching performance in order to evaluate the student learning experience (p. 1)

Their findings included (emphasis added):

  • “LMS data can be used to identify significant differences in pedagogical approaches adopted at school and faculty levels”
  • “provided key information for senior management for identifying levels of ICT adoption across the institution, ascertaining the extent to which teaching approaches reflect the strategic institutional priorities and thereby prioritise the allocation of staff development resources”
  • refining the analysis can identify “further specific exemplars of online teaching” and subsequently identify “‘hotspots’ of student learning engagement”; “provide lead indicators of student online community and satisfaction”; and, identify successful teaching practices “for the purposes of staff development activities and peer mentoring”

Macfadyen and Dawson (2012) provide examples of how learning analytics can reveal data that offer “benchmarks by which the institution can measure its LMS integration both over time, and against comparable organizations” (p. 157). However, the availability of such data does not ensure use in decision making. Macfadyen and Dawson (2012) also report that the availability of patterns generated by learning analytics did not generate critical debate and consideration of the implications of such data by the responsible organisational committee and thus apparently failed to influence institutional decision-making.

A bit more surprising, however, is that in my experience there doesn’t appear to have been a concerted effort to leverage learning analytics for these purposes. Perhaps this is related to findings from Colvin et al (2016) that even with all the attention given to learning analytics there continues to be: a lack of institutional exemplars; limited resources to guide implementation; and perceived challenges in how to effectively scale learning analytics across an institution. There remains little evidence that learning analytics has been helpful in closing the loop between research and practice, and made an impact on university-wide practice (Rogers et al, 2016).

Even if analytics is used, there are other questions such as the role of theory and context. Gasevic et al (2015) argue that while counting clicks may provide indicators of tool use it is unlikely to reveal insights of value for practice or the development of theory. If learning analytics is to achieve a lasting impact on student learning and teaching practice it will be necessary to draw on appropriate theoretical models (Gasevic et al, 2015). Rogers et al (2016) illustrate how such an approach “supports an ever-deepening ontological engagement that refines our understanding and can inform actionable recommendations that are sensitive to the situated practice of educators” (p. 245). If learning analytics aims to enhance learning and teaching, it is crucial that it engages with teachers and their dynamic contexts (Sharples et al., 2013). Accounting for course- and context-specific instructional conditions and learning designs is increasingly seen as an imperative for the use of learning analytics (Gasevic et al, 2015; Lockyer et al, 2013).

There remain many other questions about learning analytics. Many of those questions are shared with the use of quality indicators. There is also the question of how learning analytics can be harnessed via means that are sustainable, scale up, and at the same time provide contextually appropriate support. How can the tensions between the need for institutional-level quality indicators of learning and teaching and the inherently contextual nature of learning and teaching be resolved?

Meso-level practitioners

The limited evidence of impact from learning analytics on learning and teaching practice may simply mirror the broader difficulty that universities have had with other institutional learning technologies. Hannon (2013) explains that when framed as a technology project the implementation of institutional learning technologies “risks achieving technical goals accompanied by social breakdowns or failure, and with minimal effect on teaching and learning practices” (p. 175). These breakdowns arise, in part, from the established view of enterprise technologies, a view that sees enterprise technologies as unable to be changed, with the result that “instead of optimizing our machines for humanity – or even the benefit of some particular group – we are optimizing humans for machinery” (Rushkoff, 2010, p. 15).

Jones et al (2006) use the term meso-level to describe the “level that was intermediate between small scale, local interaction, and large-scale policy and institutional processes” (p. 37). Hannon (2013) describes meso-level practitioners as the “teaching academics, learning technologists, and academic developers” (p. 175) working between the learning and teaching coal-face and the institutional context defined by an institution’s policies and technological systems. These are the people who can see themselves as trying to bridge the gaps between the institutional/technological vision (macro-level) and the practical coal-face realities (micro-level). These are the people who are often required to help “optimise humans for machinery”, but who would generally prefer to do the reverse. Hannon (2013) also observes that even though there has been significant growth in the meso-level within contemporary higher education, research has continued to focus largely on the macro or micro levels.

My personal experience suggests that the same can be said about the design and use of learning analytics. Most institutional attempts are focused at either the macro or micro level: the macro level largely on large-scale student retention efforts, and the micro level on the provision of learning analytics dashboards and other tools to teaching staff and students. There has been some stellar work by meso-level practitioners in developing supports for the micro-level (e.g. Liu, Bartimote-Aufflick, Pardo, & Bridgeman, 2017). However, much of this work has been in spite of the affordances and support offered by the macro-level. Not enough of the work, beyond the exceptions already cited, appears to have actively attempted to help optimise the machinery for the humans. In addition, there doesn’t appear to be a great deal of work – beyond the initial work from almost 10 years ago – focused on whether and how learning analytics can help meso-level practitioners in the work that they do.

As a result there are sure to be questions to explore about meso-level practitioners, their experience and their impact on higher education. Leigh Blackall has recently observed that the growth in meso-level practitioners in the form of “LMS specialists and ed tech support staff” comes with the instruction that they “focus their attentions on a renewed sense of managerial oversight”, implicating meso-level practitioners in questions related to performativity and the like. Leigh also positions these meso-level practitioners as examples of disabling professions. These are good pointers to some of the more critical questions to be asked about this type of work.

Can meso-level practitioners break out, or are we doomed to be instruments of performativity? What might it take to break free? How can learning analytics be implemented in a way that allows it to be optimised for the contextually specific needs of the human beings involved, rather than require the humans to be optimised for the machinery? Would such a focus improve the quality of L&T?

What have we done so far?

Initial work has focused on developing an open, traceable, cross-institutional platform for exploring learning analytics. In particular, exploring how recent ideas such as reproducible research and insights from learning analytics might help design a platform that enables meso-level practitioners to break some of the more concerning limitations of current practice.

We’re particularly interested in ideas from Elton (2004) where bottom-up approaches might “be considerably less prone to the undesirable consequences of Goodhart’s Law” (p. 125). A perspective that resonates with our four paths idea for learning analytics, i.e. that it’s more desirable and successful to follow the do-it-with (learners and teachers) or learner/teacher DIY paths.

The “platform” is seen as an enabler for the rest of the research program. Without a protean technological platform – a platform we’re able to tailor to our requirements – it’s difficult to see how we’d be able to effectively support the deeply contextual nature of learning and teaching or escape broader constraints such as performativity. This also harks back to my disciplinary background as a computer scientist. In particular, the computer scientist as envisioned by Brooks (1996) as a toolsmith whose delight “is to fashion powertools and amplifiers for minds” (p. 64) and who “must partner with those who will use our tools, those whose intelligences we hope to amplify” (p. 64).

First steps

As a first step, we’re revisiting our earlier use of Malikowski, Thompson & Theis (2007) to look at LMS usage (yea, not that exciting, but you have to start somewhere). We’ve developed a set of Python classes that enable the use of the Malikowski et al (2007) LMS research model. That set of classes has been used to develop a collection of Jupyter notebooks that help explore LMS usage in a variety of ways.
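For the sake of illustration, a minimal sketch of the kind of object such classes might provide is below. The names and structure here are my shorthand for this post, not the repository’s actual API.

```python
from dataclasses import dataclass, field

@dataclass
class MalikowskiCounts:
    """Clicks on a single course site, grouped by Malikowski category."""
    course: str
    enrolled: int
    clicks: dict = field(default_factory=lambda: {
        "content": 0, "communication": 0, "assessment": 0,
        "evaluation": 0, "cbi": 0,
    })

    def clicks_per_student(self) -> dict:
        """Normalise each category's total by the course enrolment."""
        return {category: total / self.enrolled
                for category, total in self.clicks.items()}
```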

The theory is that these technologies (and the use of github to share the code openly) should allow anyone else to perform the same analyses with their LMS/institution. So far, the code is limited to working only with Moodle. However, we have been successful in sharing code between two different installations of Moodle, i.e. one of us can develop some new code, share it via github, and the other can run that code over their data. A small win.
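A minimal sketch of how that sharing might work, assuming institution-specific database details live in a local, uncommitted config file so that only the analysis code travels via github (the file and section names are assumptions, not the repository’s actual layout):

```python
import configparser

from sqlalchemy import create_engine

# Institution-specific details live in a local file that never goes into git
config = configparser.ConfigParser()
config.read("institution.ini")
db = config["moodle"]

# Each institution points the shared notebooks at its own Moodle database
engine = create_engine(
    f"mysql+pymysql://{db['user']}:{db['password']}@{db['host']}/{db['database']}"
)
```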

The Malikowski et al (2007) model groups LMS features by the following categories: Content, Communication, Assessment, Evaluation and Computer-Based Instruction. It also suggests that tool use occurs in a certain order and with a certain frequency. The following figure is a representation of the Malikowski model.

[Figure: Malikowski flow chart]
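For illustration, a hypothetical grouping of Moodle activity modules (the component values recorded in Moodle’s standard log) into the Malikowski categories might look like the following. The actual grouping used by the platform may well differ.

```python
# Hypothetical grouping of Moodle activity modules into Malikowski categories
MALIKOWSKI_CATEGORIES = {
    "content":       ["mod_resource", "mod_page", "mod_book", "mod_url", "mod_folder"],
    "communication": ["mod_forum", "mod_chat"],
    "assessment":    ["mod_assign", "mod_quiz", "mod_workshop"],
    "evaluation":    ["mod_feedback", "mod_choice"],
    "cbi":           ["mod_lesson", "mod_scorm"],
}

def category_for(component: str) -> str:
    """Return the Malikowski category for a Moodle component, or 'other'."""
    for category, components in MALIKOWSKI_CATEGORIES.items():
        if component in components:
            return category
    return "other"
```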

Looking for engagement?

Dawson and McWilliam (2008) suggested that academic analytics could be used to identify potential “hotspots” of student learning engagement (p. 1). Assuming that the number of times students click within an LMS course is a somewhat useful proxy for engagement (a big question), then this platform might allow you to:

  1. Select a collection of courses.

    This might be all the courses in a discipline that scored well (or poorly) on some other performance indicator, all courses in a semester, all large first year courses, all courses in a discipline etc.

  2. Visualise the total number of student clicks within each course on LMS functionality in each of the Malikowski categories.
  3. Visualise the number of clicks per student within each course in each Malikowski category.

These visualisations might then provide a useful indication of something that is (or isn’t) happening. An indication that would not have been visible otherwise and is worthy of further exploration via other means (e.g. qualitative).
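To make the process concrete, the following is a rough sketch of steps 2 and 3 using Plotly Express (chosen here because it gives the hover-over behaviour described below; the platform’s actual charting code may differ). Marilyn’s click counts come from the example further down; Michael’s are invented placeholders.

```python
import pandas as pd
import plotly.express as px

# Per-course, per-category click totals. Marilyn's figures come from the example
# below; Michael's click counts are invented placeholders for illustration.
clicks = pd.DataFrame({
    "course":   ["Marilyn"] * 3 + ["Michael"] * 3,
    "category": ["content", "communication", "assessment"] * 2,
    "clicks":   [183_000, 27_600, 5_659, 95_000, 12_000, 8_000],
    "enrolled": [90] * 3 + [451] * 3,
})

# Step 2: total student clicks per course, grouped by Malikowski category
fig = px.bar(clicks, x="course", y="clicks", color="category", barmode="group")
fig.write_html("total_clicks.html", include_plotlyjs="cdn")  # embeddable artefact

# Step 3: clicks per enrolled student
clicks["clicks_per_student"] = clicks["clicks"] / clicks["enrolled"]
fig2 = px.bar(clicks, x="course", y="clicks_per_student",
              color="category", barmode="group")
fig2.write_html("clicks_per_student.html", include_plotlyjs="cdn")
```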

The following two graphs were generated by our platform and are included here to provide a concrete example of the above process. They also illustrate some features of the platform:

  • It generates artefacts (e.g. graphs, figures) that can be easily embedded anywhere on the web (e.g. this blog post). You don’t have to be using our analytics platform to see the artefacts.
  • It can anonymise data for external display. For example, courses in the following artefacts have been randomly given people’s names rather than course codes/names (a rough sketch of this step follows below).
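A rough sketch of that anonymisation step, assuming a simple deterministic mapping from course codes to people’s names (the names, seed and function are illustrative assumptions, not the platform’s actual code):

```python
import random

def anonymise_courses(course_codes, seed=42):
    """Map real course codes to people's names for external display."""
    names = ["Michael", "Marilyn", "Robert", "Patricia", "James", "Linda", "David"]
    rng = random.Random(seed)   # fixed seed so the mapping is repeatable
    rng.shuffle(names)
    return {code: names[i % len(names)] for i, code in enumerate(course_codes)}

# each real course code now maps to one of the shuffled pseudonyms
pseudonyms = anonymise_courses(["COURSE1001", "COURSE2002"])
```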

Number of total student clicks

The first graph shows a group of 7 courses. It shows the number of students enrolled in each course (e.g. the course Michael has n=451) and the bars represent the total number of clicks by enrolled students on the course website. The clicks are grouped according to the Malikowski categories. If you roll your mouse over one of the bars, you should see the exact number of clicks for each category.

For example, the course Marilyn with 90 students had

  • 183,000+ clicks on content resources
  • 27,600+ clicks on communication activities
  • 5,659 clicks on assessment activities
  • and 0 for evaluation or CBI

Total number of clicks isn’t all that useful for course comparisons. Normalising to clicks per enrolled student might be useful.

[Embedded interactive graph: total student clicks for each course, grouped by Malikowski category]

Clicks per student

The following graph uses the same data as above; however, the number of clicks is now divided by the number of enrolled students. A simple change in analysis that highlights differences between courses.

2000+ clicks on content per student certainly raises some questions about the Marilyn course. Whether that number is good, bad, or meaningless would require further exploration.

[Embedded interactive graph: clicks per enrolled student for each course, grouped by Malikowski category]

What’s next?

We’ll keep refining the approach. Some likely work could include:

  • Using different theoretical models to generate indicators.
  • Exploring how to effectively supplement the quantitative with qualitative.
  • Exploring how engaging with this type of visualisation might be useful as part of professional learning.
  • Exploring if these visualisations can be easily embedded within the LMS, allowing staff and students to see appropriate indicators in the context of use.
  • Exploring various relationships between features quantitatively.

    For example, is there any correlation between results on student evaluation and Malikowski or other indicators? Correlations between disciplines or course design? (A rough sketch of this kind of check appears below, after this list.)

  • Combining the Malikowski model with additional analysis to see if it’s possible to identify significant changes in the evolution of LMS usage over time.

    e.g. to measure the impact of organisational policies.

  • Refine the platform itself.

    e.g. can it be modified to support other LMS?

  • Working with a variety of people to explore what different questions they might wish to answer with this platform.
  • Using the platform to enable specific research projects.

And a few more.
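As a taste of the correlation idea flagged above, a hypothetical sketch: join per-course Malikowski-derived indicators with student evaluation scores and look at rank correlations. All of the data and column names here are invented placeholders, not real results.

```python
import pandas as pd

# Invented per-course indicators and evaluation scores, purely for illustration
indicators = pd.DataFrame({
    "course": ["A", "B", "C", "D"],
    "content_clicks_per_student":       [266.0, 2033.0, 512.0, 147.0],
    "communication_clicks_per_student": [33.0, 306.0, 120.0, 12.0],
})
evaluations = pd.DataFrame({
    "course": ["A", "B", "C", "D"],
    "overall_satisfaction": [4.1, 3.8, 4.4, 3.9],
})

merged = indicators.merge(evaluations, on="course")
# Spearman rank correlations between Malikowski-derived indicators and SET scores
print(merged.drop(columns="course").corr(method="spearman"))
```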

Want to play? Let me know. The more the merrier.

References

Ball, S. J. (2003). The teacher’s soul and the terrors of performativity. Journal of Education Policy, 18(2), 215–228. https://doi.org/10.1080/0268093022000043065

Brooks, F. (1996). The Computer Scientist as Toolsmith II. Communications of the ACM, 39(3), 61–68.

Chalmers, D. (2008). Indicators of university teaching and learning quality.

Chalmers, D., & Gardiner, D. (2015). An evaluation framework for identifying the effectiveness and impact of academic teacher development programmes. Studies in Educational Evaluation, 46, 81–91. https://doi.org/10.1016/j.stueduc.2015.02.002

Colvin, C., Wade, A., Dawson, S., Gasevic, D., Buckingham Shum, S., Nelson, K., … Fisher, J. (2016). Student retention and learning analytics : A snapshot of Australian practices and a framework for advancement. Canberra, ACT: Australian Government Office for Learning and Teaching. Retrieved from http://he-analytics.com/wp-content/uploads/SP13-3249_-Master17Aug2015-web.pdf

Dawson, S., & McWilliam, E. (2008). Investigating the application of IT generated data as an indicators of learning and teaching performance. Queensland University of Technology and the University of British Columbia.

Elton, L. (2004). Goodhart’s Law and Performance Indicators in Higher Education. Evaluation & Research in Education, 18(1–2), 120–128. https://doi.org/10.1080/09500790408668312

Ferguson, R., & Clow, D. (2017). Where is the Evidence?: A Call to Action for Learning Analytics. In Proceedings of the Seventh International Learning Analytics & Knowledge Conference (pp. 56–65). New York, NY, USA: ACM. https://doi.org/10.1145/3027385.3027396

Gašević, D., Dawson, S., & Siemens, G. (2015). Let’s not forget: Learning analytics are about learning. TechTrends, 59(1), 64–71. https://doi.org/10.1007/s11528-014-0822-x

Gašević, D., Dawson, S., Rogers, T., & Gasevic, D. (2015). Learning analytics should not promote one size fits all: The effects of instructional conditions in predicating learning success. The Internet and Higher Education, 28, 68–84. https://doi.org/10.1016/j.iheduc.2015.10.002

Hannon, J. (2013). Incommensurate practices: sociomaterial entanglements of learning technology implementation. Journal of Computer Assisted Learning, 29(2), 168–178. https://doi.org/10.1111/j.1365-2729.2012.00480.x

Jones, C., Dirckinck‐Holmfeld, L., & Lindström, B. (2006). A relational, indirect, meso-level approach to CSCL design in the next decade. International Journal of Computer-Supported Collaborative Learning, 1(1), 35–56. https://doi.org/10.1007/s11412-006-6841-7

Kinash, S., Naidu, V., Knight, D., Judd, M.-M., Nair, C. S., Booth, S., … Tulloch, M. (2015). Student feedback: a learning and teaching performance indicator. Quality Assurance in Education, 23(4), 410–428. https://doi.org/10.1108/QAE-10-2013-0042

Liu, D. Y.-T., Bartimote-Aufflick, K., Pardo, A., & Bridgeman, A. J. (2017). Data-Driven Personalization of Student Learning Support in Higher Education. In A. Peña-Ayala (Ed.), Learning Analytics: Fundaments, Applications, and Trends (pp. 143–169). Springer International Publishing.

Lockyer, L., Heathcote, E., & Dawson, S. (2013). Informing Pedagogical Action: Aligning Learning Analytics With Learning Design. American Behavioral Scientist, 57(10), 1439–1459. https://doi.org/10.1177/0002764213479367

Macfadyen, L., & Dawson, S. (2012). Numbers Are Not Enough. Why e-Learning Analytics Failed to Inform an Institutional Strategic Plan. Educational Technology & Society, 15(3), 149–163.

Malikowski, S., Thompson, M., & Theis, J. (2007). A model for research into course management systems: bridging technology and learning theory. Journal of Educational Computing Research, 36(2), 149-173.

Rogers, T., Dawson, S., & Gašević, D. (2016). Learning Analytics and the Imperative for Theory-Driven Research. In The SAGE Handbook of E-learning Research (2nd ed., pp. 232–250).

Rushkoff, D. (2010). Program or be programmed: Ten commands for a digital age. New York: OR Books.

Sharples, M., Mcandrew, P., Weller, M., Ferguson, R., Fitzgerald, E., & Hirst, T. (2013). Innovating Pedagogy 2013: Open University Innovation Report 2 (No. 9781780079370). Milton Keynes: UK. Retrieved from http://www.open.ac.uk/blogs/innovating/

Siemens, G., & Long, P. (2011). Penetrating the Fog: Analytics in Learning and Education. EDUCAUSE Review, 46(5). Retrieved from http://moourl.com/j6a5d

Strathern, M. (1997). “Improving ratings”: audit in the British University system. European Review, 5(3), 305–321. https://doi.org/10.1002/(SICI)1234-981X(199707)5:3<305::AID-EURO184>3.0.CO;2-4
