Assembling the heterogeneous elements for (digital) learning


Making course activity more transparent: A proposed use of MAV

As part of the USQ Technology Demonstrator Project (a bit more here) we’ll soon be able to play with the Moodle Activity Viewer. As described by the VC, the Technology Demonstrator Project entails

The demonstrator process is 90 days and is a trial of a product that will improve an educator’s professional practice and ultimately motivate and provide significant enhancement to the student learning journey,

The process develops a case study which in turn is evaluated by the institution to determine if there is sufficient value to continue or perhaps scale up the project. As part of the process I need to “articulate what it is you hope to achieve/demonstrate by using MAV”.

The following provides some background/rationale/aim on the project and MAV. It concludes with an initial suggestion for how MAV might be used.

Rationale and aim

In short, it’s difficult to form a good understanding of which resources and activities students are engaging with (or not) on a Moodle course site, and harder still to understand how they are engaging within those resources and activities. Making it easier for teaching staff to visualise and explore student engagement with resources and activities will help improve their understanding of that engagement. This improved understanding could lead to re-thinking course and activity design. It could enhance the “student learning journey”.

It’s hard to visualise what’s happening

Digital technologies are opaque. Turkle (1995) talks about how what is going on within these technologies is hidden from the user. This is a problem that confronts university teaching staff using a Learning Management System. Identifying which resources and activities within a course website students are engaging with, which they are not, and which students are engaging can take a significant amount of time.

For example, testing at USQ in 2014 (for this presentation) found that, once you knew which reports to run on Moodle, you had to step through a number of different reports. Many of these reports involve waiting minutes (in 2016 the speed is better) at a blank page while the server responds to the request. After that delay, you can’t focus only on student activity (staff activity is included) and the reports don’t work for all modules. In addition, the visualisation provided is limited to tabular data – like the following.

EDC3100 2016 S1 - Week 0 activity

Other limitations of the standard reports include the inability to:

  • identify how many students (rather than clicks) have accessed each resource/activity;
  • identify which students have/haven’t accessed each resource/activity; and
  • generate the same report within an activity/resource to understand how students have engaged within it.

Michael de Raadt has developed the Heatmap block for Moodle (inspired by MAV) which addresses many of the limitations of the standard Moodle reports. However, it does not (yet) enable the generation of an activity report within an activity/resource.
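For what it’s worth, the two counts contrasted above (clicks versus distinct students per resource) are straightforward to derive once the activity log can be exported. A minimal pandas sketch, assuming a hypothetical CSV export with user_id and resource columns (the file and column names are illustrative only, not Moodle’s or MAV’s actual schema):

```python
import pandas as pd

# Hypothetical export of course activity: one row per click.
log = pd.read_csv("course_activity_log.csv")

# A real version would also need to exclude staff activity, as noted above.
clicks = log.groupby("resource").size()                  # clicks per resource
students = log.groupby("resource")["user_id"].nunique()  # distinct students per resource

summary = pd.DataFrame({"clicks": clicks, "students": students})
print(summary.sort_values("clicks", ascending=False))
```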

The alternative – Moodle Activity Viewer (MAV)

This particular project will introduce and scaffold the use of the Moodle Activity Viewer (MAV) by USQ staff. The following illustrates MAV’s advantages.

MAV modifies any standard Moodle page by overlaying a heat map on it. The following image shows part of a 2013 course site of mine with the addition of MAV’s heatmap. The “hotter” (more red) a link has been coloured, the more times it has been clicked upon. In addition, the number of clicks on any link has been added in brackets.

A switch of a MAV option will modify the heatmap to show the number of students, rather than clicks. If you visit this page, you will see an image of the entire course site with a MAV heatmap showing the number of students.

EDC3100 S2, 2013 - heat map

The current major advantage of MAV is that the heatmap works on any standard Moodle links that appear on any Moodle page. This means you can view a specific resource (e.g. a Moodle Book resource) or an activity (e.g. a discussion forum) and use the MAV heatmap to understand student engagement with that activity.

The following image (click on it to see larger versions) shows the MAV heatmap on a discussion forum from the 2013 course site above. This forum is the “introduce yourself” activity for the course. It shows that the most visited forum post was my introduction, visited by 87 students. Most of the other introductions were visited by significantly fewer students.

This illustrates a potential failure of this activity design: students aren’t reading many of the other introductions, which perhaps suggests a need to redesign the activity.
Forum students

Using MAV

At CQU, MAV is installed and teaching staff can choose to use it, or not. I’m unaware of how much shared discussion occurs around what MAV reveals. However, given that I’ve co-authored a paper titled “TPACK as shared practice: Toward a research agenda” (Jones, Heffernan, & Albion, 2015) I am interested in exploring if MAV can be leveraged in a way that is more situated, social and distributed.  Hence the following approach, which is all very tentative and initial.  Suggestions welcome.

The approach is influenced by the Visitor and Resident Mapping approach developed by Dave White and others. We (I believe I can speak for my co-authors) found using an adapted version of the mapping process for this paper to be very useful.

  1. Identify a group of teaching staff and have them identify courses of interest.
    Staff from within a program or other related group of courses would be one approach. But a diverse group of courses might help challenge assumptions.
  2. Prepare colour print outs of their course sites, both with and without the MAV heatmap.
  3. Gather them in a room/time and ask them to bring along laptops (or run it in a computer lab)
  4. Ask them to mark up the clear (no MAV heatmap) print out of their course site to represent their current thoughts on student engagement.
    This could include

    • Introducing them to the ideas of heatmaps and engagement.
    • Some group discussion about why and what students might engage with.
    • Development of shared predictions.
    • A show and tell of their highlighted maps.
  5. Hand out the MAV heatmap versions of their course site and ask them to analyse and compare.
    Perhaps including:

    • Specific tasks for them to respond to
      1. How closely aligned are the MAV map and your prediction?
      2. What are the major differences?
      3. Why do you think that might be?
      4. What else would you like to know to better explain?
    • Show and tell of the answers
  6. Show the use of MAV live on a course site
    Showing

    1. changing between # of clicks or # students
    2. focus on specific groups of students
    3. generating heatmaps on particular activities/resources and what that might reveal
  7. Based on this capability, engage in some group generation of questions that MAV might be able to help answer.
  8. Walk through the process of installing MAV on their computer(s) (if required)
  9. Allow time for them to start using MAV to answer questions that interest them.
  10. What did you find?
    Group discussion around what people found, what worked, what didn’t etc.  Including discussion of what might need to be changed about their course/learning design.
  11. Final reflections and evaluation

Learn to code for data analysis – step 1

An attempt to start another MOOC.  Learn to code for data analysis from FutureLearn/OUUK.  Interested in this one to perhaps start the migration from Perl to Python as my main vehicle for data munging; and, also to check out the use of Jupyter notebooks as a learning environment.

Reflections

  • The approach – not unexpectedly – resonates. Very much like the approach I use in my courses, but done much better.
  • The Jupyter notebooks work well for learning and could be useful in other contexts. A good example of the move toward a platform.
  • The bit of Python I’ve seen so far looks good. The question is whether or not I have the time to come up to speed.

Getting started

Intro video from a BBC journalist and now the software.  Following a sequential approach, pared down interface, quite different from the standard, institutional Moodle interface. It does have a very visible and simple “Mark as complete” interface for the information.  Similar to, but perhaps better than the Moodle book approach from EDC3100.

Option to install the software locally (using Anaconda) or use the cloud (SageMathCloud). Longer term, local installation would suit me better, but interested in the cloud approach. The instructions are not part of the course; they seem to be generic instructions used by the OUUK.

SageMathCloud

Intro using a video, which on my connection was a bit laggy. SageMathCloud allows connection with existing accounts, up and going.  Lots of warnings about this being a free service with degraded performance, and the start up process for the project is illustrating that nicely.  Offline might be the better option. Looks like the video is set up for the course.

The test notebook loads and runs. That’s nice.  Like I expected, will be interesting to see how it works in “anger”.

Python 3 is the go for this course, apparently.

Anaconda

Worried a little about installing another version of python.  Hoping it won’t trash what I have installed, looks like it might not.  Looks like the download is going to take a long time – 30 min+.  Go the NBN!

Course design

Two notebooks a week: exercise and project.  Encouraged to extend project. Exercises based on data from WHO, World Bank etc.  Quizzes to check knowledge and use of glossaries.  Comments/discussions on each page.  Again embedded in the interface, unlike Moodle.  Discussion threads expand into RHS of page.

Course content

Week 1

Start with a question – point about data analysis illustrated with a personal story. Has prompts to expand and share related to that story.  Encouraging connections.

Ahh, now the challenge of how to segue into first steps in programming and supporting the wide array of prior knowledge there must be. Variables and assignment, and a bit of Jupyter syntax. Wonder how the addition of Jupyter impacts cognitive load?

Variable naming and also starting to talk about syntax, errors etc. camelCase is the go apparently.
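For the record, the sort of first cell being described looks something like this (my reconstruction with made-up values, not the course’s actual example):

```python
# Variables and assignment, using the camelCase naming style the course recommends.
numberOfDeaths = 1200
populationInMillions = 64.1

# A variable can be assigned the result of an expression involving other variables.
deathsPerMillion = numberOfDeaths / populationInMillions
print(deathsPerMillion)
```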

And now for some coding. Mmm, the video is using Anaconda. Could see that causing some problems for some learners. And the discussion seems to illustrate aspects of that. Seems installing Anaconda was more of a problem. Hence the advantages of a cloud service if it is available.

Mmm, notebooks consist of cells. These can be edited and run. Useful possibilities.

Expressions. Again Jupyter adds its own little behavioural wrinkle that could prove interesting. If the last line in a cell is an expression, its value will be output. Can see that being a practice people try when writing standalone Python code.
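A minimal illustration of that wrinkle (made-up numbers, run as a notebook cell):

```python
# In a Jupyter notebook cell every line is evaluated, but only the value of the
# final expression is displayed as the cell's output.
deaths = 1200
populationInMillions = 64.1

deaths / populationInMillions          # evaluated, but not displayed
deaths / populationInMillions * 100    # last expression: its value appears below the cell
# In a standalone Python script neither line would print anything without print().
```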

Functions. Using established functions.

Onto a quiz.  Comments on given answers include an avatar of the teaching staff.

Values and units.  With some discussion to connect to real examples.

Pandas. The transition to working with large amounts of data. And another quiz, connected to the notebook.  That’s a nice connection.  Works well.

Range of pages and exercises looking at the pandas module. Some nice stuff here.
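The flavour of those pandas exercises, as best I can reconstruct it (the file and column names are made up, not the course’s actual data):

```python
import pandas as pd

# A hypothetical WHO-style dataset: one row per country.
data = pd.read_csv("who_country_data.csv")

# The kinds of questions the exercises pose (in a notebook each result displays itself).
print(data.describe())                                         # summary statistics
print(data.sort_values("Population", ascending=False).head())  # most populous countries
data["Deaths per 100,000"] = data["Deaths"] / data["Population"] * 100000
print(data[["Country", "Deaths per 100,000"]].head())
```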

Do I bother with the practice project?  Not now.  But nice to see the notebooks can be exported.

Week 2 – Cleaning up our act

The BBC journalist giving an intro and doing an interview. Nodding head and all.

Ahh weather data.  Becoming part of the lefty conspiracy that is climate change?  🙂

Comparison operators, with the addition of data frames, which appear to be a very useful abstraction.

Bitwise operators. I’ve always called these logical or boolean operators. Booleans aren’t given a lot of intro yet.

Ahh, the first bit of “don’t worry about the syntax, just use it as a template” advice. Looks like it’s using the equivalent of a hash that hasn’t yet been covered.
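A sketch of the sort of thing being described, on hypothetical weather data (the column names, and my guess at the “template” idiom, are mine, not the course’s):

```python
import pandas as pd

# Hypothetical weather data: one row per day.
weather = pd.read_csv("weather.csv")

# A comparison operator applied to a column gives a column of booleans.
hot = weather["Max temperature (C)"] > 25

# The "bitwise" operators & and | combine boolean columns element-wise;
# Python's plain 'and'/'or' won't work on columns.
hot_and_dry = hot & (weather["Rainfall (mm)"] == 0)

# My guess at the "use it as a template" idiom: keep only the rows where
# the condition is True.
hot_dry_days = weather[hot_and_dry]
print(hot_dry_days.head())
```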

 

 

Designing a collection of analytics to explore "engagement"

I’m working with a group of fellow teacher educators here at USQ to explore what is happening around student engagement with our online courses. It’s driven by the apparently less-than-stellar responses on the QILT site from our prior students around “engagement”. It’s also driven by some disquiet about the limitations of aggregated and de-contextualised data like that reported on the QILT site, and also that arising from most learning analytics (e.g. as found by Gašević et al. (2015)).

Hence our idea is to do something like

  1. Take a collection of teacher education courses.
  2. Iteratively apply a range of increasingly specific learning analytics to reveal what’s happening around engagement in our course sites.
  3. Reflect on what we learn and what that might have to say about
    • the use of aggregated and de-contextualised data/analytics;
    • what data/analytics might be useful for ourselves and our students; and
    • what’s happening in our courses, and how that compares to what we thought was going on.

As the person most familiar with technology and learning analytics, I’m tasked with identifying the sequence of “increasingly specific learning analytics” that we’ll use.

What follows is a first draft.  I’m keen to hear suggestions and criticisms. Fire away.

Specific questions to be to be answered

  1. Does the sequence below make sense?
  2. Are there other types of analytics that could be usefully added to the following and would help explore student/staff engagement and/or perhaps increase contextualisation?
  3. What literature exists around each of these analytics, where did they apply the analytics, and what did they find?

Process overview

Each of the ovals in the following diagram is intended to represent a cycle where some analytics are presented. We’ll reflect on what is revealed and generate thoughts and questions. The labels for the ovals are short-hand for a specific type of analytics; these are described in more detail below.

The sequence is meant to capture the increasing contextualisation. The first four cycles would use fairly generic analytics, but analytics that reveal different and perhaps more specific detail. The last two cycles – learning design and course specific – are very specific to each course. The course specific cycle would be aimed at exploring any of the questions we identified for our individual courses as we worked through the other cycles.
Version 1 of process
It won’t be quite as neat as the above. There will be some iteration and refinement of existing and previous cycles, but the overall trend would be down.

The analytics below could also be compared and analysed in a variety of ways, most of which would respond to details of our context, e.g. comparisons against mode and specialisation etc.

Click/grade & Time/grade

This cycle replicates some of the patterns from Beer et al (2010) (somewhat shameless, but relevant self-citation) and related work. It’s aimed at just getting the toe in the water and getting the process set up. It’s also arguably as removed from student learning/engagement as you can get. A recent post showed off what one of these will look like.
EDC3100 2015 Course and grades

This would also include the heatmap type analysis such as the following diagrams.


“Rossi” data sets

Rossi et al (2013) extended the Beer et al (2010) work, drawing on the Interaction Equivalency Theorem (Miyazoe and Anderson, 2011) and hence increasing the theoretical connection with interaction/engagement.

The additions in the “Rossi” data sets follow

Proportion of clicks within LMS discussion forums (dhits)

CQU Moodle courses (n=12,870 students): 68% of clicks were non-forum clicks and 32% were forum clicks (dhits).

# of forum hits (dhit), posts, and replies

A “dhit” is a click on a forum. This includes navigation etc. The idea here is to compare dhits with posts and replies

the posts and replies made within the forums are more representative of student and teacher engagement and interaction (Rossi et al 2013, p. 48)

Moodle CQU T1, 2011 (n=12,870 students): 385,113 dhits; 17,154 posts; 29,586 replies.

learner-learner, learner-teacher and ratio

# of forum posts that are learners replying to a learner post (learner-learner) or a learner responding to a teacher or vice versa (learner-teacher).

Average for T1, 2011 CQU courses (n=336): 86 learner-learner posts, 56 learner-teacher posts, and a ratio of LT to LL of 0.65.

Comparison of learner-learner, learner-teacher and learner-content interactions

 


Networks and paths

These analytics focus on the relationships and connections between people and the paths they follow while studying. Moving beyond numbers toward understanding connections.

This 2013 post from Martin Hawksey (found via his other post mentioned below) gives an overview of a range of uses and tools (including SNAPP) for social network analysis. It’s the early SNAPP work that outlines some of what these visualisations can help identify:

  • isolated students
  • facilitator-centric network patterns where a tutor or academic is central to the network with little interaction occurring between student participants
  • group malfunction
  • users that bridge smaller clustered networks and serve as information brokers

The following is one of my first attempts at generating such a graph. It shows the connections between individual student blogs (from EDC3100 2013). The thicker the line between dots (blogs), the more links.

Romero et al (2013) offers one example.

001 - First Graph
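As a rough sketch of how this sort of analysis can be automated (networkx is used here purely for illustration; SNAPP and the tools Hawksey covers do similar things), assuming who-replied-to-whom or who-linked-to-whom pairs have already been extracted from the logs:

```python
import networkx as nx

# Hypothetical (source, target) pairs: one edge per reply or blog link.
interactions = [("s1", "s2"), ("s2", "s1"), ("s3", "s1"),
                ("s4", "teacher"), ("teacher", "s4"), ("s5", "teacher")]

G = nx.DiGraph()
G.add_edges_from(interactions)
G.add_nodes_from(["s6", "s7"])          # students who never posted or linked

print("Isolated students:", list(nx.isolates(G)))      # no interactions at all
print("Degree centrality:", nx.degree_centrality(G))   # facilitator-centric patterns
print("Betweenness:", nx.betweenness_centrality(G))    # potential information brokers
```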

A Sankey diagram is a method for representing flow in networks. It can be used to understand usage of websites. Martin Hawksey has just written this post (showing how to take LMS discussion data and send it through Google Analytics) which includes the following screen shot of “event flow” (a related idea). It shows (I believe) how a particular user has moved through a discussion forum. Looks like it provides various ways to interact with this information.

Google Analytics - Event Flow
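A minimal sketch of the same idea (Plotly’s Python library used here purely for illustration; Hawksey’s approach drives Google Analytics instead), assuming counts of moves between forum pages have been tallied from the logs:

```python
import plotly.graph_objects as go

# Hypothetical counts of how often users moved between parts of a forum.
pages = ["Forum index", "Thread 1", "Thread 2", "Reply form"]
fig = go.Figure(go.Sankey(
    node=dict(label=pages),
    link=dict(source=[0, 0, 1, 2],     # index into pages: where the user came from
              target=[1, 2, 3, 3],     # where the user went next
              value=[120, 45, 30, 8])  # number of times that move was made
))
fig.show()
```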

Hoping we might be able to leverage some of the work Danny Liu is doing.

 Sentiment, content, and broader discourse analysis

The previous cycles are focused on using clicks and links to understand what’s going on. This cycle would start to play with natural language processing to analyse what the students and teachers are actually saying.

This is a fairly new area for me. Initially, it might focus on

  • readability/complexity analysis;
    Unpublished work from CQU has identified a negative correlation between the complexity of writing in assignment specifications and course satisfaction.
  • sentiment analysis
    How positive or negative are forum posts etc? The comments and questions on this blog post about a paper using sentiment analysis on MOOC forums provide one place to start. A rough sketch of both analyses is included after this list.
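A rough sketch of both kinds of analysis, assuming the forum posts have been exported as plain text (textstat and VADER are used here only as convenient stand-ins, not necessarily what we’d end up adopting):

```python
import textstat
from vaderSentiment.vaderSentiment import SentimentIntensityAnalyzer

# Made-up example posts standing in for exported forum/blog text.
posts = [
    "I'm completely lost with this week's software install.",
    "What a wonderful resource - my class loved using it!",
]

analyzer = SentimentIntensityAnalyzer()
for post in posts:
    readability = textstat.flesch_reading_ease(post)        # higher = easier to read
    sentiment = analyzer.polarity_scores(post)["compound"]  # -1 (negative) to +1 (positive)
    print(f"{readability:6.1f}  {sentiment:+.2f}  {post}")
```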

Learning design

The plan here is to focus explicitly on the learning designs within the courses and explore what can be revealed using checkpoint and process analytics as outlined by Lockyer et al (2013).

Course specific

Nothing explicit is planned here. The idea is that the explorations and reflections from each of the above cycles will identify a range of additional course-specific questions that will be dealt with as appropriate.

References

Beer, C., Clark, K., & Jones, D. (2010). Indicators of engagement. In Curriculum, technology and transformation for an unknown future. Proceedings of ASCILITE Sydney 2010 (pp. 75–86). Sydney. Retrieved from http://ascilite.org.au/conferences/sydney10/procs/Beer-full.pdf

Gašević, D., Dawson, S., Rogers, T., & Gasevic, D. (2015). Learning analytics should not promote one size fits all: The effects of instructional conditions in predicting academic success. The Internet and Higher Education, 28, 68–84. doi:10.1016/j.iheduc.2015.10.002

Lockyer, L., Heathcote, E., & Dawson, S. (2013). Informing Pedagogical Action: Aligning Learning Analytics With Learning Design. American Behavioral Scientist. doi:10.1177/0002764213479367

Rossi, D., Rensburg, H. Van, Beer, C., Clark, D., Danaher, P., & Harreveld, B. (2013). Learning interactions: A cross-institutional multi-disciplinary analysis of learner-learner and learner-teacher and learner-content interactions in online learning contexts. Retrieved from http://www.dehub.edu.au/wp-content/uploads/2013/07/CQU_Report.pdf

 

Playing with D3

I’m part of a group that’s trying to take a deep dive into our courses using “learning analytics”. My contribution is largely the technology side of it, and it’s time to generate some pretty pictures. The following is a summary of some playing around with D3.js, which ended up with some success with Plot.ly.

Example – box plots

Toe in the water time: can I get an example working? Starting with box plots, as I’ve got some data that could fit with that.

Rough process should be something like:

  1. Set up some space on the local web server.
  2. Install the library.
  3. Get the example working.
  4. Modify the example for my data

Step #4 is a bit harder given that I don’t yet grok the d3 model.

Work through some tutorials

Starting with bar charts.

So some jQuery-like basics: selections, selectors, method chaining.

Appears joins might be the most challenging of the concepts.

data space (domain) -> display space (range)

Mmm, d3.js is too low level for current needs.

Plot.ly

Plot.ly is an online service driven by a Javascript library that is built upon d3.js and other services. Appears to be at the level I’d like.  Example works nicely.

Does appear to be a big project.

Ohh nice.  The example looks very appropriate.

Bit of data wrangling and the following is produced.

EDC3100 2015 Course and grades
Even that simple little test reveals some interesting little tidbits. Exploring a bit further should be even more interesting.
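The post doesn’t record which of Plot.ly’s interfaces was driven, but the wrangling plus plotting amounts to something like the following sketch with Plotly’s Python library (the file and column names are made up for illustration):

```python
import pandas as pd
import plotly.graph_objects as go

# Hypothetical export: one row per student, with a click count and a final grade.
students = pd.read_csv("edc3100_2015_clicks_grades.csv")

# One box per grade, showing the distribution of clicks for students with that grade.
fig = go.Figure(go.Box(x=students["grade"], y=students["clicks"]))
fig.update_layout(title="EDC3100 2015 - clicks by final grade",
                  yaxis_title="Clicks on course site")
fig.show()
```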

Learning analytics should not promote one size fits all: The effects of instructional conditions in predicting academic success

What follows is a summary of

Gašević, D., Dawson, S., Rogers, T., & Gasevic, D. (2015). Learning analytics should not promote one size fits all: The effects of instructional conditions in predicting academic success. The Internet and Higher Education, 28, 68–84. doi:10.1016/j.iheduc.2015.10.002

I’ve skimmed it before, but renewed interest is being driven by a local project to explore what analytics might reveal about 9 teacher education courses, especially in light of the QILT process and data.

Reactions

Good paper.

Connections to the work we’re doing: a similar number of courses (9) and a focus on looking into the diversity hidden by aggregated and homogenised data analysis. The differences are:

  • we’re looking at the question of engagement, not prediction (necessarily);
  • we’re looking for differences within a single discipline/program and aiming to explore diversity within/across a program;
  • in particular, what it might reveal about our assumptions and practices; and
  • some of our offerings are online only.

Summary

Gašević et al (2015) looks at the influence of specific instructional conditions in 9 blended courses on success prediction using learning analytics and log data.

A lack of attention to instructional conditions can lead to an over or under estimation of the effects of LMS features on students’ academic success

Learning analytics

Interest in, but questions around the portability of learning analytics.

the paper aims to empirically demonstrate the importance for understanding the course and disciplinary context as an essential step when developing and interpreting predictive models of academic success and attrition (Lockyer, Heathcote, & Dawson, 2013)

Some work aims to decontextualise – i.e. to identify predictive models that can

inform a generalized model of predictive risk that acts independently of contextual factors such as institution, discipline, or learning design. These omissions of contextual variables are also occasionally expressed as an overt objective.

While there are some large scale projects, most are small scale and (emphasis added)

small sample sizes and disciplinary homogeneity adds further complexity in interpreting the research findings, leaving open the possibility that disciplinary context and course specific effects may be contributing factors

 Absence of theory in learning analytics – at least until recently.  Theory that points to the influence of diversity in context, subject, teacher, and learner.

Most post-behaviorist learning theories would suggest the importance of elements of the specific learning situation and student and teacher intentions

Impact of context – mentions Finnegan, Morris and Lee (2009) as a study that looked at the role of contextual variables, finding disciplinary differences and “no single significant predictor shared across all three disciplines”.

Role of theoretical frameworks – argument for benefits of integrating theory

  • connect with prior research;
  • make clear the aim of research designs and thus what outcomes mean.

Theoretical grounding for study

Winne and Hadwin’s “constructivist, meta-cognitive approach to self-regulated learning”:

  1. learners construct their knowledge by using tools (cognitive, physical, and digital);
  2. to operate on raw information (stuff given by courses);
  3. to construct products of their learning;
  4. learning products are evaluated via internal and external standards;
  5. learners make decisions about the tactics and standards used; and
  6. decisions are influenced by internal and external conditions.

Leading to the proposition

that learning analytics must account for conditions in order to make any meaningful interpretation of learning success prediction

The focus here is on instructional conditions.

Predictions from this

  1. Students will tend to interact more with recommended tools
  2. There will be a positive relationship between students’ level of interaction and the instructional conditions of the course (tools used with high frequency will have a larger impact on success)
  3. The central tendency will prevail, so that models that aggregate variables about student interaction may lead to over/under estimation

Method

Correlational (non-experimental) design. 9 first-year courses that were part of an institutional project on retention. Participation in that project was based on a discipline-specific low level of retention – a quite low 20% (at least to me). 4134 students from 9 courses over 5 years – not big numbers.

Outcome variables – percent mark and academic status – pass, fail, or withdrawn (n=88).

Data based on other studies and availability

  • Student characteristics: age, gender, international student, language at home, home remoteness, term access, previous enrolment, course start.
  • LMS trace data: usage of various tools, some as continuous variables, some lesser-used ones as dichotomous and then categorical variables (reasons given)

Various statistics tests and models used.

Discussion

Usage across courses was variable, hence the advice (p. 79):

  1. there is a need to create models for academic success prediction for individual courses, incorporating instructional conditions into the analysis model.
  2. there must be careful consideration in any interpretation of any predictive model of academic success, if these models do not incorporate instructional conditions
  3. particular courses, which may have similar technology use, may warrant separate models for academic success prediction due to the individual differences in the enrolled student cohort.

And

we draw two important conclusions: a) generalized models of academic success prediction can overestimate or underestimate effects of individual predictors derived from trace data; and b) use of a specific LMS feature by the students within a course does not necessarily mean that the feature would have a significant effect on the students’ academic success; rather, instructional conditions need to be considered in order to understand if, and why, some variables were significant in order to inform the research and practice of learning and teaching (pp. 79, 81)

Closes out with some good comments on moving students/teachers beyond being passive consumers of these models, and on the danger of existing institutional practice around analytics having decisions made too far removed from the teaching context.

 

Sentiment analysis of student blog posts

In June last year I started an exploration into the value of sentiment analysis of student blog posts. This morning I’ve actually gotten it to work. There may be some value, but further exploration is required. Here’s the visible representation of what I’ve done.

The following is a screen shot of the modified “know thy student” kludge I’ve implemented for my course. The window shows some details for an individual student from second semester last year (I’ve blurred out identifying elements). The current focus is on the blog posts the student has written.
Sentiment analysis of blog posts

Each row in the above corresponds to an individual blog post. It already showed how long ago the post was written and the post’s title, and provided a link to the blog post. The modified version changes the background colour of the cell to represent the sentiment of the post’s content. A red background indicates a negative post, a green background indicates a positive post, and a yellow background indicates somewhere in the middle.

The number between 0 and 1 shown next to the post title is the result provided by the Indico sentiment analysis function, the method used to perform the sentiment analysis.
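The representation side is then just a matter of thresholding that 0–1 score into a background colour; something like the following sketch (the cut-off values are mine for illustration, not necessarily those used in the kludge):

```python
def sentiment_colour(score, negative_cutoff=0.35, positive_cutoff=0.65):
    """Map a 0-1 sentiment score to a cell background colour.

    The cut-offs are illustrative guesses, not the values used in the
    actual "know thy student" kludge.
    """
    if score < negative_cutoff:
        return "red"       # likely negative post - maybe worth a closer look
    if score > positive_cutoff:
        return "green"     # likely positive post
    return "yellow"        # somewhere in the middle

print(sentiment_colour(0.12))   # red
print(sentiment_colour(0.97))   # green
```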

Does this help?

Does this provide any help? Can it be useful?

An initial quick skim of posts from different students seemed to indicate mostly all green. Was the sentiment analysis revealing anything useful? Was it working?

In the following I examine what is revealed by the sentiment analysis by paying close attention to an individual student, the one shown in the image above.

Red blog post – reveal target for intervention?

The “red” blog post from the image above included words like “epic fail”. It tells the story of how the student had problems getting the new software for the course working. It was the third post the student made in the semester. The start of this course can be frustrating for students due to technical problems. This particular student didn’t report any of these problems on the course discussion forums.

Given that the course is totally online and there are ~100 students in this offering, there’s little chance for me to have known about these problems otherwise. Had the sentiment analysis been in place during the offering and if it was represented effectively, I might have been able to respond and that response might have been helpful.

Yellow blog post – a problem to address?

The yellow post above is a reflection on the student’s experience on Professional Experience, in a school, in front of a classroom, actually teaching. It is a reflection on how the student went through an emotional roller coaster on prac (not unusual), how her mentor really helped (also not unusual, but a little less so), but also how the various exemptions she received contributed to her problems.

Very positive blog posts – loved resources?

A number of the posts from this student are as positive as they can get – 1.0. Interestingly, almost all of them are descriptions of useful resources and include phrases like

what a wonderful learning tool …lovely resource…wonderful resource for teachers

What’s next?

Appears that the following are required/might be useful

  1. Explore different representations and analysis
    So far I’ve only looked at the student-by-student representation. Other forms of analysis/representation would seem potentially useful. Are there differences/patterns across the semester, between students that are the same/different on certain characteristics, between different offerings of the course, etc.? How can and should this representation be made visible to the students?
  2. Set this in place for Semester 1.
    In a couple of weeks the 300+ student version of this course runs. Having the sentiment analysis working live during that semester could be useful.
  3. Explore useful affordances.
    One of the points of the PIRAC framework is that this form of learning analytics is only as useful as the affordances for action that it supports. What functionality can be added to this to help me and the students take action in response?

Reflection

I’ve been thinking about doing this for quite some time. But the business of academic life has contributed to a delay.  Getting this to work actually only required three hours of free time. But perhaps more importantly, it required the breathing space to get it done. That said, I still did the work on a Sunday morning and probably would not have had the time to do it within traditional work time.

 

Dashboards suck: learning analytics' broken metaphor

I started playing around with what became learning analytics in 2007 or so. Since then, every/any time “learning analytics” is mentioned in a university there’s almost an automatic mention of dashboards. So much so I was led to tweet.

I’ve always thought dashboards suck. This morning when preparing the slides for this talk on learning analytics I came across an explanation which I think captures my discomfort around dashboards (I do wonder whether I’d heard it somewhere else previously).

What is a dashboard

In the context of an Australian university discussion about learning analytics the phrase “dashboard” is typically mentioned by the folk from the business intelligence unit. The folk responsible for the organisational data warehouse. It might also get a mention from the web guru who’s keen on Google Analytics. In this context a dashboard is typically a collection of colourful charts, often even doing a good job of representing important information.

So what’s not to like?

The broken metaphor

Obviously “analytics” dashboards are a metaphor referencing the type of dashboard we’re familiar with in cars. The problem is that many (most?) of the learning analytics dashboards are conceptualised and designed like the following dashboard.

The problem is that this conceptualisation of dashboards misses the bigger picture. Rather than being thought of like the above dashboard, learning analytics dashboards need to be thought of as like the following dashboard.

Do you see the difference? (and it’s not the ugly, primitive nature of the graphical representation in the second dashboard).

Representation without Affordances and removed from the action

The second dashboard image includes: the accelerator, brake, and clutch pedals; the steering wheel; the indicators; the radio; air conditioning; and all of the other interface elements a driver requires to do something with the information presented in the dashboard. All of the affordances a driver requires to drive a car.

The first dashboard image – like many learning analytics dashboards – provides no affordances for action. The first vision of a dashboard doesn’t actually help you do anything.

What’s worse, the dashboards provided by most data warehouses aren’t even located within the learning environment. You have to enter into another system entirely, find the dashboard, interpret the information presented, translate that into some potential actions, exit the data warehouse, return to the learning environment, translate those potential actions into the affordances of the learning environment.

Picking up on the argument of Don Norman (see quote in image below), the difficulty of this process would seem likely to reduce the chances of any of those potential actions being taken. Especially if we’re talking about (casual) teaching staff working within a large course with limited training, support and tools.

Norman on affordances

Affordances improve learning analytics

Hence, my argument is that the dashboard (Representation) isn’t sufficient. In designing your learning analytics application you need to include the pedals, steering wheel etc (Affordances) if you want to increase the likelihood of that application actually helping improve the quality of learning and teaching. Which tends to suggest that your learning analytics application should be integrated into the learning environment.

The four paths for implementing learning analytics and enhancing the quality of learning and teaching

The following is a placeholder for two presentations that are related. They are:

  1. “Four paths for learning analytics: Moving beyond a management fashion”; and,

    An extension of Beer et al (2014) (e.g. there are four paths now, rather than three) that’s been accepted to Moodlemoot’AU 2015.

  2. “The four paths for implementing learning analytics and enhancing the quality of learning and teaching”;

    A USQ research seminar that is part a warm up of the Moot presentation, but also an early attempt to extend the 4 paths idea beyond learning analytics and into broader institutional attempts to improve learning and teaching.

Eventually the slides and other resources from the presentations will show up here. What follows is the abstract for the second talk.

Slides for the MootAU15 presentation

Only 15 minutes for this talk. Tried to distill the key messages. Thanks to @catspyjamasnz the talk was captured on Periscope

Slides for the USQ talk

Had the luxury of an hour for this talk. Perhaps too verbose.

Abstract

Baskerville and Myers (2009) define a management fashion as “a relatively transitory belief that a certain management technique leads rational management progress” (p. 647). Maddux and Cummings (2004) observe that “education has always been particularly susceptible to short-lived, fashionable movements that come suddenly into vogue, generate brief but intense enthusiasm and optimism, and fall quickly into disrepute and abandonment” (p. 511). Over recent years learning analytics has been looming as one of the more prominent fashionable movements in educational technology. Illustrated by the apparent engagement of every institution and vendor in some project badged with the label learning analytics. If these organisations hope to successfully harness learning analytics to address the challenges facing higher education, then it is important to move beyond the slavish adoption of the latest fashion and aim for more mindful innovation.

Building on an earlier paper (Beer, Tickner, & Jones, 2014) this session will provide a conceptual framework to aid in moving learning analytics projects beyond mere fashion. The session will identify, characterize, and explain the importance of four possible paths for learning analytics: “do it to” teachers; “do it for” teachers; “do it with” teachers; and, teachers “DIY”. Each path will be illustrated with concrete examples of learning analytics projects from a number of universities. Each of these example projects will be analysed using the IRAC framework (Jones, Beer, & Clark, 2013) and other lenses. That analysis will be used to identify the relative strengths, weaknesses, and requirements of each of the four paths. The analysis will also be used to derive implications for the decision-makers, developers, instructional designers, teachers, and other stakeholders involved in both learning analytics, and learning and teaching.

It will be argued that learning analytics projects that follow only one of the four paths are those most likely to be doomed to mere fashion. It will argue that moving a learning analytics project beyond mere fashion will require a much greater focus on the “do it with” and “DIY” paths. An observation that is particularly troubling when almost all organizational learning analytics projects appear focused primarily on either the “do it to” or “do it for” paths.

Lastly, the possibility of connections between this argument and the broader problem of enhancing the quality of learning and teaching will be explored. Which paths are used by institutional attempts to improve learning and teaching? Do the paths used by institutions inherently limit the amount and types of improvements that are possible? What implications might this have for both research and practice?

References

Baskerville, R. L., & Myers, M. D. (2009). Fashion waves in information systems research and practice. MIS Quarterly, 33(4), 647–662.

Beer, C., Tickner, R., & Jones, D. (2014). Three paths for learning analytics and beyond : moving from rhetoric to reality. In B. Hegarty, J. McDonald, & S. Loke (Eds.), Rhetoric and Reality: Critical perspectives on educational technology. Proceedings ascilite Dunedin 2014 (pp. 242–250).

Jones, D., Beer, C., & Clark, D. (2013). The IRAC framework: Locating the performance zone for learning analytics. In H. Carter, M. Gosper, & J. Hedberg (Eds.), Electric Dreams. Proceedings ascilite 2013 (pp. 446–450). Sydney, Australia.

Maddux, C., & Cummings, R. (2004). Fad, fashion, and the weak role of theory and research in information technology in education. Journal of Technology and Teacher Education, 12(4), 511–533.

Using the PIRAC – Thinking about an "integrated dashboard"

On Monday I’m off to a rather large meeting to talk about what data might be usefully syndicated into an integrated dashboard. The following is an attempt to think out loud about the (P)IRAC framework (Jones, Beer and Clark, 2013) in the context of this local project. To help prepare me for the meeting, but also to ponder some recent thoughts about the framework.

This is still a work in progress.

Get the negativity out of the way first

Dashboards sux!!

I have a long-term negative view of the value of dashboards and traditional data warehouse/business intelligence type systems. A view that has arisen out of both experience and research. For example, the following is a slide from this invited presentation. There’s also a paper (Beer, Jones, & Tickner, 2014) that evolved from that presentation.

Slide19

I don’t have a problem with the technology. Data warehouse tools do have a range of functionality that is useful. However, in terms of providing something useful to the everyday life of teachers in a way that enhances learning and teaching, they leave a lot to be desired.

The first problem is the Law of Instrument.

Hammer ... Nail ... by Theen ..., on Flickr (CC BY-NC-SA 2.0)

The only “analytics” tool the institution has is the data warehouse, so that’s what it has to use. The problem is that the data warehouse cannot be easily and effectively integrated into the daily act of learning and teaching in a way that provides significant additional affordances (more on affordances below).

Hence it doesn’t get used.

Now, leaving that aside.

(P)IRAC

After a few years of doing learning analytics stuff, we put together the IRAC framework as an attempt to guide learning analytics projects: to broaden the outlook and what needed to be considered, especially what needed to be considered to ensure that the project outcome was widely and effectively used. The idea is that the four elements of the framework could help ponder what was available and what might be required. The four original components of IRAC are summarised in the following table.

IRAC Framework (adapted from Jones et al 2013)
Component Description
Information
  • the information we collect is usually about “those things that are easiest to identify and count or measure” but which may have “little or no connection with those factors of greatest importance” (Norman, 1993, p. 13).
  • Verhulst’s observation (cited in Bollier & Firestone, 2010) that “big data is driven more by storage capabilities than by superior ways to ascertain useful knowledge” (p. 14).
  • Is the information required technically and ethically available for use?
  • How is the information to be cleaned, analysed and manipulated?
  • Is the information sufficient to fulfill the needs of the task?
  • In particular, does the information captured provide a reasonable basis upon which to “contribute to the understanding of student learning in a complex social context such as higher education” (Lodge & Lewis, 2012, p. 563)?
Representation
  • A bad representation will turn a problem into a reflective challenge, while an appropriate representation can transform the same problem into a simple, straightforward task (Norman, 1993).
  • To maintain performance, it is necessary for people to be “able to learn, use, and reference necessary information within a single context and without breaks in the natural flow of performing their jobs.” (Villachica et al., 2006, p. 540).
  • Olmos and Corrin (2012) suggest that there is a need to better understand how visualisations of complex information can be used to aid analysis.
  • Considerations here focus on how easy it is to understand the implications and limitations of the findings provided by learning analytics (and much, much more).
Affordances
  • A poorly designed or constructed artefact can greatly hinder its use (Norman, 1993).
  • To have a positive impact on individual performance an IT tool must be utilised and be a good fit for the task it supports (Goodhue & Thompson, 1995).
  • Human beings tend to use objects in “ways suggested by the most salient perceived affordances, not in ways that are difficult to discover” (Norman, 1993, p. 106).
  • The nature of such affordances are not inherent to the artefact, but are instead co-determined by the properties of the artefact in relation to the properties of the individual, including the goals of that individual (Young, Barab, & Garrett, 2000).
  • Glassey (1998) observes that through the provision of “the wrong end-user tools and failing to engage and enable end users” even the best implemented data warehouses “sit abandoned” (p. 62).
  • The consideration for affordances is whether or not the tool and the surrounding environment provide support for action that is appropriate to the context, the individuals and the task.
Change
  • Evolutionary development has been central to the theory of decision support systems (DSS) since its inception in the early 1970s (Arnott & Pervan, 2005).
  • Rather than being implemented in linear or parallel, development occurs through continuous action cycles involving significant user participation (Arnott & Pervan, 2005).
  • Buckingham-Shum (2012) identifies the risk that research and development based on data already being gathered will tend to perpetuate the existing dominant approaches from which the data was generated.
  • Bollier and Firestone (2010) observe that once “people know there is an automated system in place, they may deliberately try to game it” (p. 6).
  • Universities are complex systems (Beer, Jones, & Clark, 2012) requiring reflective and adaptive approaches that seek to identify and respond to emergent behaviour in order to stimulate increased interaction and communication (Boustani et al., 2010).
  • Potential considerations here include, who is able to implement change? Which, if any, of the three prior questions can be changed? How radical can those changes be? Is a diversity of change possible?

Adding purpose

Whilst on holiday enjoying the Queenstown view below and various refreshments, @beerc and I discussed a range of issues, including the IRAC framework and what might be missing. Both @beerc and @damoclarky have identified potential elements to be added, but I’ve always been reluctant. However, one of the common themes underpinning much of the discussion of learning analytics at ASCILITE’2014 was: for whom is learning analytics being done? We raised this question somewhat in our paper when we suggested that much of learning analytics (and educational technology) is mostly done to academics (and students), typically in the service of some purpose serving the needs of senior management or central services. But the issue was also raised by many others.

Which got us thinking about Purpose.

Queenstown View

As originally framed (Jones et al, 2013)

The IRAC framework is intended to be applied with a particular context and a particular task in mind… Olmos & Corrin (2012), amongst others, reinforce the importance for learning analytics to start with “a clear understanding of the questions to be answered” (p. 47) or the task to be achieved.

If you start the design of a learning analytics tool/intervention without a clear idea of the task (and its context) in mind, then it’s going to be difficult to implement.

In our discussions in NZ, I’d actually forgotten about this focus in the original paper. This perhaps reinforces the need for IRAC to become PIRAC. To explicitly make purpose the initial consideration.

Beyond increasing focus on the task, purpose also brings in the broader organisational, personal, and political considerations that are inherent in this type of work.

So perhaps purpose encapsulates

  1. Why are we doing this? What’s the purpose?
    Reading between the lines, this particular project seems to be driven more by the availability of the tool and a person with the expertise to do stuff with the tool. The creation of a dashboard seems the strongest reason given.
    Tied in with this seems to be the point that the institution needs to be seen to be responding to the “learning analytics” fad (the FOMO problem). Related to this will, no doubt, be some idea that by doing something in this area, learning and teaching will improve.
  2. What’s the actual task we’re trying to support?
    In terms of a specific L&T task, nothing is mentioned.
  3. Who is involved? Who are they? etc.
    The apparent assumption is that it is teaching staff. The integrated dashboard will be used by staff to improve teaching?

Personally, I’ve found thinking about these different perspectives useful. Wonder if anyone else will?

(P)IRAC analysis for the integrated dashboard project

What follows is a more concerted effort to use PIRAC to think about the project. Mainly to see if I can come up with some useful questions/contributions for Monday.

Purpose

  • Purpose
    As above the purpose appears to be to use the data warehouse.

    Questions:

    • What’s the actual BI/data warehouse application(s)?
    • What’s the usage of the BI/data warehouse at the moment?
    • What’s it used for?
    • What is the difference in purpose in using the BI/data warehouse tool versus Moodle analytics plugins or standard Moodle reports?
  • Task
    Without knowing what the tool can do, I’m left pondering which information-related tasks are currently frustrating or limited. A list might include

    1. Knowing who my students are, where they are, what they are studying, what they’ve studied and when they add/drop the course (in a way that I can leverage).
      Which is part of what I’m doing here.
    2. Having access to the results of course evaluation surveys in a form that I can analyse (e.g. with NVivo).
    3. How do I identify students who are not engaging, struggling, not learning, doing fantastic and intervene?

    Questions:

    • Can the “dashboards” help with the tasks above?
    • What are the tasks that a dashboard can help with that aren’t available in the Moodle reports?
  • Who
  • Context

What might be some potential sources for a task?

  1. Existing practice
    e.g. what are staff currently using in terms of Moodle reports and is that good/bad/indifferent?

  2. Widespread problems?
    What are the problems faced by teaching staff?
  3. Specific pedagogical goals?
  4. Espoused institutional priorities?
    Personalised learning appears to be one. What are others?

Questions:

  • How are staff using existing Moodle reports and analytics plugins?
  • How are they using the BI tools?
  • What are widespread problems facing teaching staff?
  • What is important to the institution?

Information

The simple questions

  • What information is technically available?
    It appears that the data warehouse includes data on

    • enrolment load
      Apparently aimed more at trends, but can do semester numbers.
    • Completion of courses and programs.
    • Recruitment and admission
      The description of what’s included in this isn’t clear.
    • Student evaluation and surveys
      Appears to include institutional and external evaluation results. Could be useful.

    As I view the dashboards, I do find myself asking questions (fairly unimportant ones) related to the data that is available, rather than the data that is important.

    Questions

    • Does the data warehouse/BI system know who’s teaching what when?
    • When/what information is accessible from Moodle, Mahara and other teaching systems?
    • Can the BI system enrolment load information drill down to course and cohort levels?
    • What type of information is included in the recruitment and admission data that might be useful to teaching staff?
    • Can we get access to course evaluation surveys for courses in a flexible format?
  • What information is ethically available?

Given the absence of a specific task, it would appear

Representation

  • What types of representation are available?
    It would appear that the dashboards etc. are being implemented with PerformancePoint, hence its integration with SharePoint (off to a strong start there). I assume this relies on its “dashboards” feature, hence meaning it can do this. So there would appear to be a requirement for Silverlight to see some of the representations.

    Questions

    • Can the data warehouse provide flexible/primitive access to data?
      i.e. CSV, text or direct database connections?
  • What knowledge is required to view those representations?
    There doesn’t appear to be much in the way of contextual help with the existing dashboards. You have to know what the labels/terminology mean. Which may not be a problem for the people for whom the existing dashboards are intended.
  • What is the process for viewing these representations?

Affordances

Based on the information above about the tool, it would appear that there are no real affordances that the dashboard system can provide. It will tend to be limited to representing information.

  • What functionality does the tool allow people to do?
  • What knowledge and other resources are required to effectively use that functionality?

Change

  • Who, how, how regularly and with what cost can the
    1. Purpose;
      Will need to be approved via whatever governance process exists.
    2. Information;
      This would be fairly constrained. I can’t see much of the above information changing. At least not in terms of getting access to more or different data. The question about ethics could potentially mean that there would be less information available.
    3. Representation; and,
      Essentially, it would appear that all the dashboards can change. Any change will be limited by the specifics of the tool.
    4. Affordances.
      You can’t change what you don’t have.

    be changed?
