Oh Academia

It’s been one of those weeks in academia.

Earlier in the week the “I quit academia” meme went through my Twitter stream. Perhaps the closest this meme came to me was @marksmithers’ “On leaving academia” post.

That was about the day when I had to pull the pin on a grant application. Great idea, something we could do and would probably make a difference, but I didn’t have the skills (or the time) to get it over the line.

As it happened, I was reading Asimov’s “Caves of Steel” this week and came across the following quote about the “Medievalists”, a disaffected part of society:

people sometimes mistake their own shortcomings for those of society and want to fix the Cities because they don’t know how to fix themselves

On Tuesday night I wondered whether you could replace “Cities” with “Universities” and capture some of the drivers behind the “I quit academia” meme.

And then I attended a presentation today titled “Playing the research game well”. All the standard pragmatic tropes – know your H-Index (mine’s only 16), know the impact factor for journals, only publish in journals with an impact factor greater than 3, meta-analyses get cited more, etc.

It is this sort of push for KPIs and objective measures that is being created by the corporatisation of the Australian University sector. The sort of push which makes me skeptical of Mark’s belief

that higher education institutions can and will find their way back to being genuinely positive friendly and enjoyable places to work and study.

If anything these moves are likely to increase the types of experiences Mark reports.

So, I certainly don’t think that the Asimov quote applies. That’s not to say that academics don’t have shortcomings. I have many – the grant application non-submission is indicative of some – but by far the larger looming problem (IMHO) is the changing nature of universities.

That said, it hasn’t been all that bad this week. I did get a phone call from a student in my course. A happy student. Telling stories about how he has been encouraged to experiment with the use of ICTs in his teaching and how he’s found a small group at his work who are collaborating.

Which raises the question: if you’re not going to quit academia (like Leigh, who commented on Mark’s post, I too am “trapped in wage slavery and servitude”), do you play the game or seek to change it?

Or should we all just take a spoonful?

Processing and Visualizing Data in Complex Learning Environments

The following is a summary and some thinking around

Thompson, K., Ashe, D., Carvalho, L., Goodyear, P., Kelly, N., & Parisio, M. (2013). Processing and Visualizing Data in Complex Learning Environments. American Behavioral Scientist, 57(10), 1401–1420. doi:10.1177/0002764213479368


The ability to capture large amounts of data that describe the interactions of learners becomes useful when one has a framework in which to make sense of the processes of learning in complex learning environments. Through the analysis of such data, one is able to understand what is happening in these networks; however, deciding which elements will be of most interest in a specific learning context and how to process, visualize, and analyze large amounts of data requires the use of analytical tools that adequately support the phases of the research process. In this article, we discuss the selection, processing, visualization, and analysis of multiple elements of learning and learning environments and the links between them. We discuss, using the cases of two learning environments, how structure affects the behavior of learners and, in turn, how that behavior has the potential to affect learning. This approach will allow us to suggest possible ways of improving future designs of learning environments.


Some interesting ideas and frameworks/scaffolding for thinking about learning analytics and understanding educational design. Interesting perspective about “big data” techniques not being limited to small amounts of data about lots of people, but also being useful for large amounts of data about small groups of people. Which is linked to the idea of moving analytics beyond use at the macro level into the “micro”.

I remain unsure that some of the work labeled “learning analytics” is what is commonly called learning analytics.

I wonder whether any thought has been given to the application of analytics techniques going beyond the representation/communication stage and extending into action through integration into the educational design? As yet another tool contributing to co-creation and co-configuration.


Very brief intro to learning analytics “focused on making sense of ‘big data’, data usually collected from learning management systems” used at course and student levels…often to intervene with low/high achieving students…a type of analytics that doesn’t help designers of learning.

educational design defined as “constructing representations of how people should be helped to learn in specific circumstances (Goodyear & Retalis, 2010, p. 10)” and to include the “design of tools, tasks and interactions associated with learning”.

analytics has “mainly focused on the design of courses and analysis on the macro level” and not on “identifying complex patterns of behaviour”. The argument here is to expand “the principles and applications of learning analytics”, based on “processing and visualizing data in two complex learning environments”.


Networked learning

“Networked learning involves people collaborating with the help of technologies in a shared enterprise of knowledge creation” which raises the question for me of “how much ‘help’ do the technologies provide?”.

Focus is the exploration of the physical, digital and human elements within learning environments. Objects have properties and intentions, and bring values from choices made during design. Thus objects have “effects on human perception and action”.

A distinction of digital technologies is the capacity to change. i.e. it’s protean.

Aside: interesting that the example they give is of customisation of display and not something deeper.

The objects, perception of them and action with them involve various levels of mental/cognitive effort. Additional complication arises from the combination of objects: “Thus, only by analyzing the architecture of networks of objects (the pattern of their relations) can one see how design intentions affect what people do, including what they learn. Research of this nature has implications for design work in education” (p. 1403)

Analytical framework

Four analytics dimensions

  1. set design – the physical stage on which learning activity is situated – tools, artifacts etc.
  2. epistemic design – tasks proposed, knowledge implicated.
  3. social design – roles, divisions of labor.
  4. co-creation and co-configuration – since participants’ activities lead to rearrangement of the learning environment.

Aside: it’s reassuring to see explicit mention of change/modification in the last element.

The analytic framework helps identify and represent key elements of complex learning environments – but there’s a need to “develop methods of analysis that incorporate multiple streams of data to describe multiple tool use across multiple tasks” (p. 1403-1404).

Learner behaviour

The aim is to “reveal process that can inform educational design and student learning processes”. Visualisations of both order and time allow for “identification of typologies for form”, which can help theorize about what works.

Learning analytics

After some common quotes about learning analytics, makes the point that rather than “big data” meaning data from lots of people, it can also be “lots of data” about not many people, e.g. “short episodes of collaborative work can rapidly create hundreds of gigabytes of data” (p. 1405). This raises difficulties. Expands on the tools they developed and the related considerations.

“We consider learners’ use of the space as important as what they say and the artifacts they create”.

The following figure offers a summary of how patterns are discovered in data. Most definitions of analytics are based on the use of computational methods, so the use of human analysis probably doesn’t fit. But it does capture, I think, what actually happens, and is certainly part of the data mining activity.

Discovery of patterns within data by David T Jones, on Flickr

“Finally, the patterns themselves are represented in some way for communication (Figure 1).” (p. 1405)

I have a small problem with the word “finally” in this quote. Not in the context of this paper, but more broadly in learning analytics. At some level representation is enough, but if you do want to make an improvement to educational design, then action is needed. This is the argument we make in the IRAC framework – the R is “Representation”, i.e. communication; the A is affordance for action.

Now moving onto more specific descriptions of what they’ve done

  • focus “on the demonstration of expertise in individual learners as an indicator of successful collaboration”
  • Case #1 is an informal networked learning environment (iSpot)
    The website (through screenshots) is analysed using semiotics and design to examine the elements of the analytical framework.
  • Case #2 – four master’s students working on a collaborative task.
    video data and transcripts analysed to identify indicators of expertise.

Case #1

Describes iSpot. Mentions earlier analysis (Clow & Makriyannis, 2011).

The focus here is “on the design adopted to make visible a member’s overall level of expertise” – linked somewhat to the outcome of the earlier analysis.

After an introduction to semiotics there is a description of how iSpot represents and calculates the expertise of an individual to illustrate how a design feature is not only a “design element placed in the stage (set design) but in fact encodes a number of underlying meanings, which ultimately reflect a particular way of structuring knowledge (epistemic design) and roles (social design) within the learning network.” (p. 1409).

While this is claimed as drawing on “notions from learning analytics” I’m not sure I see this from the description of what was done. Screenshots of a website followed by semiotic analysis doesn’t quite align with the common definitions of learning analytics I’m familiar with.

Case #2

Four master’s students completing a 5-week task. F-t-f meetings captured and analysed. The analytic framework was used to guide the investigation. Automated discourse analysis was used to examine how learners used the tools, interpreted the task and designed their roles. This group achieved the highest grade in the collaborative component – so the search is for identifiable design elements. Description of how this was done.

Through this demonstration, the argument is that the framework has provided added depth to understanding of the co-creation and co-configuration activities in this successful collaboration.


Expands on the options available for further application of learning-analytics techniques, including through the use of a table that draws on components of the figure above as a scaffold.


Learners are influenced by the structure of an environment. The framework here helps identify and theorize about this. Which leads to research/analytics work – at a finer grain. “In so doing, the impacts of design decisions on the behavior of learners can be assessed, and informed redesign work can take place”

I wonder whether the authors are thinking about how the environment/tools can be impacted by this “informed redesign”. What if the digital tools’ capability for modification was informed or even activated by learning analytics? They seem to lean towards this as they finish with

Understanding the relationship between the design of a learning environment and the behavior and learning that occur may enable the design of more effective learning environments.

Creative Commons, Flickr and presentations: A bit of tinkering

The following is a summary of some tinkering to develop a script that will help me appropriately attribute use of Creative Commons licensed images in presentations. Beyond addressing a long-standing problem of mine, this bit of tinkering is an attempt to feel a bit productive.

The problem

When I give presentations I use Powerpoint (not inherently the problem). I use it in a particular way. Lots of slides, little if any text, and each slide with an interesting photo related to the point I’m trying to make. What follows is an example. (Move beyond the first slide for a feel).

The images are all licensed with a Creative Commons licence and I source them from Flickr via the Creative Commons search. According to this source

All Creative Commons licences require that users of the work attribute the creator. This is also a requirement under Australian copyright law. This means you always have to acknowledge the creator of the CC work you are using, as well as provide any relevant copyright information.

The document continues with “For many users of CC material, attribution is one of the hardest parts of the process”. My current practice is to include the URL of the original image on Flickr on each slide. This has three problems

  1. It adds text to each slide, taking away some of the impact of the image.
  2. It doesn’t fulfil the requirements of the CC licence.
  3. With this style of presentation, most 20-30 minute presentations get close to 100 slides, often with the same number of images to attribute.

The requirements are

you should:

  • Credit the creator;
  • Provide the title of the work;
  • Provide the URL where the work is hosted;
  • Indicate the type of licence it is available under and provide a link to the licence (so others can find out the licence terms); and
  • Keep intact any copyright notice associated with the work.

There are a range of online services that help with attribution. ImageCodr generates HTML, which I use often. flickr storm does a similar task somewhat differently. The Flickr CC helper will generate HTML or text.

To fit with the workflow I use when creating presentations, I’m after something that will

  1. Parse a text file of the format

    http://my.flickr.com/photo
    http://my.flickr.com/photo2

  2. Use the Flickr API to extract the information necessary for an appropriate CC attribution.
  3. Add that to a text/HTML file that will form a “credits” slide at the end of a presentation.

    As per the advice from this source

    Alternatively, you can include a ‘credits’ slide at the end of the show, that lists all the materials used and their attribution details. Again, you should indicate the slide or order so people can find the attribution for a specific work.

  4. Optionally, add a message to the photo on Flickr summarising how/where the photo has been used.

Tinkering process

What follows is the planned/actual tinkering process toward implementation of a solution as a Perl script. The script will use the Flickr API to extract the licence information and hopefully add a comment.

Flickr API working – extracting information

Perl has a range of Flickr related modules. Flickr::API2 seems to be the current standard.

The flickr.photos.licenses.getInfo method gives a list of all the licences. When you get a photo by id (part of the URL), Flickr returns a licence id with which you can find the URL and name of the licence for the photo.

Some limitations of the information

  • Flickr doesn’t provide the abbreviation for the CC licences.
    These are hard-coded into the script.
  • The url_l method for Flickr::API2 doesn’t seem to be working.
    That’s because it’s not a method – page_url works.
  • The owner_name method for Flickr::API2 doesn’t always reliably return the owner’s name.
    Use the username as a supplement.

Generating credits page

Initially, I was going to copy the format used by the flickr cc attribution helper i.e.

cc licensed ( *ABBR* ) flickr photo by *username*:

But this suggests that the title of the work and a link to the licence are also required (though it does mention flexibility). The format they’re using is

*title* by *name* available at *url*
under a *licence name*
*licence url*

Will do this as simple text, a single reference per line. Will also add in the slide number.

After a bit of experimentation the following is what the script is currently generating

Slide 2, 3: “My downhill run!” by Mike Mueller available at http://flickr.com/photos/mike912mueller/6407874723 under Attribution-NonCommercial-ShareAlike License http://creativecommons.org/licenses/by-nc-sa/2.0/

Slide 4: “Question Mark Graffiti” by zeevveez available at http://flickr.com/photos/zeevveez/7095563439 under Attribution License http://creativecommons.org/licenses/by/2.0/

Slide 1: “Greyhound Half Way Station” by Joseph available at http://flickr.com/photos/josepha/4876231714 under Attribution-NonCommercial-ShareAlike License http://creativecommons.org/licenses/by-nc-sa/2.0/

Modified to recognise that I sometimes use an image on multiple slides. I should perhaps add a bit of smarts into the code to order the slides correctly, but time is short.
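The slide-ordering “smarts” could look something like the following. This is a sketch in Python rather than the script’s Perl, and the field names (`slides`, `title`, `name`, `url`, `licence_name`, `licence_url`) are hypothetical rather than the script’s actual data structures:

```python
def format_credits(entries):
    """Format one credit line per image, ordered by the first slide
    on which each image appears.

    entries: list of dicts with keys 'slides' (list of ints), 'title',
    'name', 'url', 'licence_name' and 'licence_url' (hypothetical names).
    """
    # order images by the earliest slide they appear on
    ordered = sorted(entries, key=lambda e: min(e["slides"]))
    lines = []
    for e in ordered:
        slide = ", ".join(str(s) for s in sorted(e["slides"]))
        lines.append(
            'Slide {0}: "{1}" by {2} available at {3} under {4} {5}'.format(
                slide, e["title"], e["name"], e["url"],
                e["licence_name"], e["licence_url"]))
    return "\n".join(lines)
```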

Adding comment on Flickr

The flickr.photos.comments.addComment method seems to offer what I need. Of course it’s not that simple. To make a comment the script needs to be authenticated with flickr. i.e. as me.

The documentation for Flickr::API2 is not 100% clear on this, and the evolution of Flickr’s authentication means things keep moving on, but the following process seems to work

  • Get a “frob”
    [sourcecode lang="perl"]
    use Flickr::API2;

    my $api = Flickr::API2->new({
        'key'    => 'mykey',
        'secret' => 'mysecret' });
    my $result = $api->execute_method( 'flickr.auth.getFrob' );
    my $frob = $result->{frob}->{_content};
    [/sourcecode]
  • Get a special URL to tell Flickr to authorise the script
    [sourcecode lang="perl"]
    use Data::Dumper;

    my $url = $api->raw->request_auth_url( 'write', $frob );
    print Dumper( $url );
    # wait until I visit the URL and hit enter
    [/sourcecode]
  • Get the token
    [sourcecode lang="perl"]
    my $res = $api->execute_method( 'flickr.auth.getToken', { 'frob' => $frob } );
    print Dumper( $res );
    [/sourcecode]
  • Copy the token that’s displayed and hard code that into subsequent scripts, including adding a comment using my flickr account.
    [sourcecode lang="perl"]
    my $comment = <<"EOF";
    G'day, This is a test comment.
    EOF
    my $response = $api->execute_method( "flickr.photos.comments.addComment",
        { photo_id => 3673725336, comment_text => $comment,
          auth_token => 'the token I got' } );
    [/sourcecode]

Put it all together

I’m going to use a small presentation I use in my teaching as a test case. I’ll hardcode the link between image and slide number into the initial script. Longer term the script will rely on there being a text file of the format

1,flickr photo url
2,flickr photo url

(see below for some ideas of how I’ll do this)
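A minimal sketch of parsing that mapping file into the URL-to-slides structure the script wants (in Python rather than the script’s Perl; the function name is hypothetical):

```python
def parse_photo_slides(text):
    """Parse lines of the form 'slide,flickr photo url' into a dict
    mapping each photo URL to the list of slides it appears on
    (mirroring the photo-URL-keyed hash used by the Perl script)."""
    photo_slides = {}
    for line in text.splitlines():
        line = line.strip()
        if not line:
            continue  # skip blank lines
        slide, url = line.split(",", 1)
        photo_slides.setdefault(url.strip(), []).append(int(slide))
    return photo_slides
```

An image used on several slides simply accumulates multiple slide numbers against the one URL.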

It all works. Up above you can see the credit text produced based on a small presentation I use in my teaching. The following is one of the images used in that presentation. If you click on the image you can see the comment that was added by the script.

Greyhound Half Way Station by joseph a, on Flickr
Creative Commons Attribution-Noncommercial-Share Alike 2.0 Generic License by joseph a

What follows are various bits of the script, happy to share the file, but I don’t imagine that there’s a lot of folk with Perl installed and configured that would want to use it. There needs to be some more work tidying up and adding in error checking. But it works well enough for now.

The main logic of the script is

[sourcecode lang="perl"]
use strict;
use Flickr::API2;

# hard-code abbreviations for CC licences based on Flickr id
my %CC = ( 1 => "BY-NC-SA", 2 => "BY-NC", 3 => "BY-NC-ND",
           4 => "BY",       5 => "BY-SA", 6 => "BY-ND" );

my $TOKEN = "my token";
my $auth = {
    'key'    => 'my key',
    'secret' => 'my secret'
};

# which flickr URLs appear on which slides
# flickr photo URL is the key, value is array of slides on which the image appears
my $PHOTO_SLIDES = {
    'http://www.flickr.com/photos/7150652@N02/4876231714/'  => [ 1 ],
    'http://www.flickr.com/photos/27933068@N03/6407874723/' => [ 2, 3 ],
    'http://www.flickr.com/photos/zeevveez/7095563439/'     => [ 4 ]
};

my $COMMENT = <<"EOF";
whatever comment I want to add
EOF

my $API = Flickr::API2->new( $auth );
my $credits = generate_credits( $PHOTO_SLIDES, $API );
add_comment( $PHOTO_SLIDES, $COMMENT, $API );
print $credits;
[/sourcecode]

To add the comments (I’m guessing the extraction of the Flickr ID will break eventually)

[sourcecode lang="perl"]
sub add_comment($$$) {
    my $photo_slides = shift;
    my $comment = shift;
    my $api = shift;

    foreach my $photo_url ( keys %$photo_slides ) {
        if ( $photo_url =~ m#http://www.flickr.com/photos/.*/([0-9]*)/# ) {
            my $id = $1;
            my $response = $api->execute_method(
                "flickr.photos.comments.addComment",
                { photo_id => $id, comment_text => $comment,
                  auth_token => $TOKEN } );
        }
    }
}
[/sourcecode]

And finally generating the attribution information

[sourcecode lang="perl"]
sub generate_credits( $$ ) {
    my $photo_slides = shift;
    my $api = shift;

    ## Get the licence options
    my $response = $api->execute_method( "flickr.photos.licenses.getInfo" );
    my $licences = $response->{licenses}->{license};

    my $content = "";

    foreach my $photo_url ( keys %$photo_slides ) {
        # extract the id
        if ( $photo_url =~ m#http://www.flickr.com/photos/.*/([0-9]*)/# ) {
            my $id = $1;
            my $photo = $api->photos->by_id( $id );

            # get the licence
            my $info = $photo->info();
            my $licence = getLicence( $info->{photo}->{license}, $licences );
            die "No CC licence found for $photo_url\n"
                if ( ! defined $licence );
            $content .= displayInfo( $licence, $photo, $info,
                                     $photo_slides->{$photo_url} );
        }
    }
    return $content;
}
[/sourcecode]

[sourcecode lang="perl"]
sub displayInfo( $$$$ ) {
    my $licence = shift;
    my $photo = shift;
    my $info = shift;
    my $slides = shift;    # array of slide numbers

    my $slide = join ", ", @$slides;

    my $url = $photo->page_url;
    $url =~ s/ //g;
    my $name = $photo->owner_name;
    $name = $info->{photo}->{owner}->{username} if ( $name eq "" );

    return <<"EOF";
Slide $slide: "$photo->{title}" by $name available at $url under $licence->{name} $licence->{url}
EOF
}
[/sourcecode]

[sourcecode lang="perl"]
sub getLicence( $$ ) {
    my $id = shift;
    my $licenses = shift;

    foreach my $licence ( @{$licenses} ) {
        return $licence if ( $id == $licence->{id} );
    }
    return undef;
}
[/sourcecode]

Getting the URLs of images

The final script assumes I have a text file of the format shown above: slide number, a comma, then the flickr photo URL.

The question of how to generate this text file remains open. I can see three possible options

  1. Construct the file manually.

    This would be painful and have to wait until after the presentation file is complete. Manual is to be avoided.

  2. Extract it from the Slideshare transcript.

As well as producing an online version of a presentation, Slideshare also produces a transcript of all the text. This includes the flickr photo URLs. This currently works because of my practice of including the URLs on each slide, something I’d like to avoid. As a kludge, I could probably include the URL on each slide but place it behind the image, i.e. make it invisible to the eye, but still visible to Slideshare?

  3. Extract it from the pptx file.

    Powerpoint files are now just zip file collections of xml files. I could draw on perl code like this to extract the URLs. Perhaps the best way is to insert the Flickr URL of the photos used in the notes section (as they too are XML files).

#3 is the long term option. Will use #2 as my first test.

Supporting Action Research with Learning Analytics

The following is a summary and some thoughts on

Dyckhoff, A. L., Lukarov, V., Muslim, A., Chatti, M. A., & Schroeder, U. (2013). Supporting action research with learning analytics. In Proceedings of the Third International Conference on Learning Analytics and Knowledge – LAK ’13 (pp. 220–229). New York, NY, USA: ACM Press. doi:10.1145/2460296.2460340


Bringing in reflection, action research and the idea of learning analytics enabling these reinforces one of my interests. So I’m biased toward this sort of work.

Some good quotes supporting some ideas we’re working on.

Find it interesting that the LA research work tends to talk simply about indicators, i.e. the patterns/correlations that are generated from analysis, rather than on helping users (teachers/learners) actually do something.


My emphasis added.

Learning analytics tools should be useful, i.e., they should be usable and provide the functionality for reaching the goals attributed to learning analytics. This paper seeks to unite learning analytics and action research. Based on this, we investigate how the multitude of questions that arise during technology-enhanced teaching and learning systematically can be mapped to sets of indicators. We examine, which questions are not yet supported and propose concepts of indicators that have a high potential of positively influencing teachers’ didactical considerations. Our investigation shows that many questions of teachers cannot be answered with currently available research tools. Furthermore, few learning analytics studies report about measuring impact. We describe which effects learning analytics should have on teaching and discuss how this could be evaluated.


Starts with the proposition that “teaching is a dynamic activity” where teachers should “constantly analyse, self-reflect, regulate and update their didactical methods and the learning resources they provide to their students”

Of course, learning is also a dynamic activity. Raising the possibility that the same sort of analysis being done here might be done for learners.

Moves onto reflection, its definition and how it can foster learning if “embedded in a cyclical process of active experimentation, where concrete experience forms a basis for observation and reflection”. Action research is positioned as “a method for reflective teaching practice” ending up with learning analytics being able to “initiate and support action research” (AR).

Noting multiple definitions of learning analytics (LA) before offering what they use

learning analytics as the development and exploration of methods and tools for visual analysis and pattern recognition in educational data to permit institutions, teachers, and students to iteratively reflect on learning processes and, thus, call for the optimization of learning designs [39, 40] on the on (sic) hand and aid the improvement of learning on the other [14, 15].

Relationship between LA and AR

  • LA arises from observations made with already collected data.
  • AR starts with a research question arising from teaching practice.
  • AR often uses qualitative methods for a more holistic view; LA is mostly quantitative.

Important point – the creation of indicators from the LA work has “been controlled by and based on the data available in learning environments”. Leading to a focus on indicators arising from what’s available. AR starts with the questions first, before deciding about the methods and sources. Proposes that asking questions without thought to the available data could “improve the design of future LA tools and learning environments”.

Three questions/assumptions

  1. Indicator-question-mapping
    Which teacher questions cannot be mapped to existing indicators? Which indicators could deliver what kind of enlightenment?
  2. Teacher-data-indicators
    Current indicators don’t “explicitly relate teaching and teaching activities to student learning” (p. 220). Are there tools that do this? How should it be done?
  3. Missing impact analysis
    Current LA research “fails to prove the impact of LA tools on stakeholders’ behaviors” (p. 221). How can LA impact teaching? How could it be evaluated?

Paper structure

  • Methods – research procedure and materials
  • Categorisation of indicators
  • Analysis and discussion
  • Conclusion


  1. Results of qualitative meta-analysis investigating what kind of questions teachers ask while performing AR in TEL. – see table below
  2. Collected publications on LA tools and available indicators.
  3. 2 of the researchers developed a categorisation scheme for the 198 indicators.
  4. Further analysis of LA tools and indicators.
  5. 2 researchers mapped teachers’ questions to sets of available indicators

Teachers’ questions

The questions asked by teachers – summarised in the following table – are taken from

Dyckhoff, A.L. 2011. Implications for Learning Analytics Tools: A Meta-Analysis of Applied Research Questions. IJCISIM. 3, (2011), 594–601.

Must read this to learn more about how these questions came about. They strike me as fairly specific and not necessarily exhaustive. The authors note that some questions fit into more than one category.

(a) Qualitative evaluation

  • How do students like/rate/value specific learning offerings?
  • How difficult/easy is it to use the learning offering?
  • Why do students appreciate the learning offering?

(b) Quantitative measures of use/attendance

  • When and how long are students accessing specific learning offerings (during a day)?
  • How often do students use a learning environment (per week)?
  • Are there specific learning offerings that are NOT used at all?

(c) Differentiation between groups of students

  • By which properties can students be grouped?
  • Do native speakers have fewer problems with learning offerings than non-native speakers?
  • How is the acceptance of specific learning offerings differing according to user properties (e.g. previous knowledge)?

(d) Differentiation between learning offerings

  • Are students using specific learning materials (e.g. lecture recordings) in addition or alternatively to attendance?
  • Will the access of specific learning offerings increase if lectures and exercises on the same topic are scheduled during the same week?
  • How many (percent of the) learning modules are students viewing?

(e) Data consolidation/correlation

  • Which didactical activities facilitate continuous learning?
  • How do learning offerings have to be provided and combined with support to increase usage?

(f) Effects on performance

  • How do those low achieving students profit by continuous learning with e-tests compared to those who have not yet used the e-tests?
  • Is the performance in e-tests somehow related …


Provides a list of tools chosen for analysis, chosen given their presentation in the literature as “state-of-the-art LA-tools, which can already be used by their intended target users”.

Categorisation of indicators

Categorisation scheme includes

  • Five perspectives categories – “point of view a user might have on the same data”
    1. individual student

      “inspire an individual student’s self-reflection” on their learning. Also support teachers in monitoring. Sophisticated systems recommend learning activities. Includes a long list of example indicators in this category with references.

    2. group

      As the name suggests, the group.

    3. course
    4. content
    5. teacher

      Only a few found in this category – including sociogram of interaction between teacher and participant.

  • Six data sources categories
    1. student generated data

      Students’ presence online. Clickstreams, but also forum posts etc.

    2. context/local data
    3. academic profile

      Includes grades and demographic data.

    4. evaluation

      student responses to surveys, ratings, course evaluations.

    5. performance

      Grades etc from the course. # of assignments submitted.

    6. course meta-data

      course goals, events etc.

Analysis and discussion


Mapped indicators from the chosen tools to the questions asked by teachers. Missing documentation meant the mapping was at times subjective.

“Our analysis showed that current LA implementations still fail to answer several important questions of teachers” (p. 223).

Using categories from the above table

  • Category A – almost all “cannot yet be answered sufficient”. These deal with questions of student satisfaction and preferences.
  • Category B – most questions can be answered. A few cannot (e.g. use of the service via mobile or at home). Aside: while a question teachers might ask, I’m not sure it’s strongly connected to learning.
  • Category E – generally no. Most systems don’t allow the combination of data that this would require, I would expect in large part because of the research nature of these tools – focused on a particular set of concerns. The paper raises the question of learner privacy issues.
  • Category F – can be difficult depending on access to this information.


“We did not find tools or indicators that explicitly collect and present teacher data” (p. 224). The closest are indicators related to course phases and interactions between teachers and students.

Activity logs contain some teacher data, but other data, such as information on lectures, is missing.

If teachers had indicators about their activities and online presence, they might be inspired and motivated to be more active in the online learning environment. Hence, their presence in discussions might stimulate students likewise to participate more actively and motivate them to share knowledge and ideas.

Authors brainstormed some potential indicators

  • Teacher forum participation indicator.
  • Teacher correspondence indicator

    Tracking personal correspondence, and tracking interventions and impact on student behaviour.

  • Average assignments grading time.

    Would be interesting to see the reaction to enabling this.

    The authors mention privacy issues and suggest only showing the data to the individual teacher.

Missing impact analysis

Provides a table comparing AR and LA.

“very few publications reporting about findings related to the behavioural reactions of teachers and students, i.e. few studies measure the impact of using learning analytics tools” (p. 225). Instead LA research tends to focus on functionality, usability issues and perceived usefulness of specific indicators. … “several projects have not yet published data about conducting reliable case studies or evaluation results at all”.

Proceed to offer one approach to measuring impact of LA tools – an approach that could “be described as design-based research with a focus on uncovering action research activities”.

The steps

  1. Make the tools available to users.

    A representative group of non-expert teachers and students. Researchers need to know about the course and how it operates without LA, gathering a great deal of information to use as a reference point for later comparison, including interviews/online surveys with staff and students.

  2. Identify which activities are likely to be improved by LA.

    Hypothesise about the usage and impact of LA.

  3. Interview after use.

Limitations of this approach

  • Long time required.
  • Significant effort from researchers and participants.
  • Analysis of qualitative data prone to personal interpretation.
  • Clear conclusions may not be possible.

Limitations of this study

  • the meta-analysis from which the questions were drawn was limited to case studies described in the conference proceedings of a German e-learning conference.
  • identification of indicators was limited to 27 tools; there is other research, especially from EDM. “The challenge is, how to make them usable”.
  • subjectivity of the questions and the indicators – addressed somewhat by two researchers – but not an easy process.


Learning Analytics tools should be an integral part of TEL. The tools aim at having an impact on teachers and students. But the impact has not been evaluated. The concern we are raising is that LA tools should not only be usable, but also useful in the context of the goals we want to achieve. (p. 227)

  • present indicators focused on answering questions around usage analysis
  • “currently available research tools do not yet answer many questions of teacher”
  • questions requiring qualitative analysis or correlation between multiple data sources can’t yet be answered.
  • “causes for these shortcomings are insufficient involvement of teachers in the design and development of indicators, absence of rating data/features, non-used student academic profile data, and absence of specific student generated data (mobile, data usage from different devices), as well as missing data correlation and combination from different data sources”.
  • teacher data is not easily visible.
  • future tools will probably have rating features and that data should be used by LA tools.
  • “researchers should actively involve teachers in the design and implementation of indicators”.
  • researchers need to provide guidelines on how indicators can be used and limitations.
  • Need to create evaluation tools to measure impact of LA.


Dyckhoff, A. L., Lukarov, V., Muslim, A., Chatti, M. A., & Schroeder, U. (2013). Supporting action research with learning analytics. In Proceedings of the Third International Conference on Learning Analytics and Knowledge – LAK ’13 (pp. 220–229). New York, NY, USA: ACM Press. doi:10.1145/2460296.2460340

Strategies for curriculum mapping and data collection for assuring learning

The following is a summary of and some reaction to the final report of an OLT funded project titled “Hunters and gatherers: strategies for curriculum mapping and data collection for assuring learning”. This appears to be the project website.

My interest arises from the “Outcomes and Analytics Project” that’s on my list of tasks. That project is taking a look at Moodle’s new outcomes support (and perhaps other tools) that can be leveraged by a Bachelor of Education and try to figure out what might be required to gain some benefit from those tools (and whether it’s worth it).


The recommended strategies (holistic, integrated, collaborative, maintainable) could form a good set of principles for some of what I’m thinking.

In terms of gathering data on student performance, assessment and rubrics appear to be the main method. Wonder if analytics and other approaches can supplement this?

It would appear that no-one is doing this stuff very well. The best curriculum mapping tool is a spreadsheet!!! And the data gathering tools are essentially assignment marking tools. Neither set of tools was rated well in terms of ease of use.

Lots of good principles and guidelines for implementation, but crappy tools.

Student centered much?

AoL is defined as determining program learning outcomes and standards and then gathering evidence to measure student performance. It is used for curriculum development, continuous improvement and accreditation – but no mention is made of helping students develop eportfolios for employers. Especially in professional programs with national standards, this would seem an obvious overlap. Student centered much?

Interesting given that AoL is meant to be based on student centered learning.

Staff engagement

Is positioned as a difficulty.


An earlier project (on which this one builds) was titled “Facilitating staff and student engagement with graduate attribute development”. Apparently limited to helping staff design criteria to assess GAs and to helping students self-evaluate against those criteria. Seems to be only a small part of the facilitation process.

GAs versus professional standards

I have to admit to being deeply skeptical about the notion of institutional graduate attributes. They always struck me as a myth created by high-priced senior management to justify what was unique about the vision they were creating, and hence with little connection to the reality of teaching at a university. In particular, the standards/attributes set by the professional bodies associated with certain disciplines would always count for more.

Of course institutional GAs have been apparently required by the government since 1992. I wonder if this is like some of those other legal requirements that have been discovered to have never existed, ceased to exist or which were misinterpreted?

Executive summary

Assurance of Learning (AoL) “evaluates how well an institution accomplishes the educational aims at the core of its activities”… AoL provides “qualitative and quantitative indicators for the assessment of the quality of award courses”. Thus it can be used for

  • institutional/management ends: strategic directions, priorities, quality assurance, enhancement processes.
  • individual curriculum development.
  • valid evidence to external constituents.

Focus of this project on

  1. Mapping program learning outcomes.
  2. Collecting data on student performance against each learning objective.

Aside: interesting that they’ve already used both outcome and objective. Does this mean these are different concepts, or just different labels for the same concept?

Investigation was done via

  • Exploratory interviews with 25 of 39 associate deans L&T from Oz Unis.
  • 8 focus groups with 4 good practice institutions, 2 at each institution – one with a senior leader, the other with teaching staff.
  • Delphi method – but who with?
  • Interviews with experts.
  • online survey.

Recommended good practice strategies

  • holistic – whole of program, to ensure students’ progression and the introduction of GAs prior to their demonstration.
  • integrated – GAs must be embedded in the curriculum and linked to assessment.
  • collaborative – developed with teaching staff in an inclusive, not top down, approach to engage staff.
  • maintainable – must not be reliant on particular individuals or extraordinary resources.

They then proceed to mention typical cultural change strategies.

Also did an independent review of existing tools. Interestingly the Blackboard 9.1 goals/standards service gets a mention.

Chapter 1 – Project overview

Identifies 7 key stages in assuring learning from an AACSB White Paper

  1. establishing graduate attributes and measurable learning outcomes for the program;
  2. mapping learning outcomes to suitable units of study in the program (where possible allowing for introduction, further development and then assurance of the outcomes);
  3. aligning relevant assessment tasks to assure learning outcomes;
  4. communicating learning outcomes to students;
  5. collecting data to show student performance for each learning objective;
  6. reporting student performance in the learning outcomes;
  7. reviewing reports to identify areas for program development (‘Closing the Loop’).

Explains the growing requirement for this, lots of acronyms and literature.

Mentions their prior project, which developed the ReView online assessment system to help staff develop criteria that assessed GAs within set assignments. Students can self-evaluate against those criteria.

Project aims to inform strategy to identify efficient and manageable assurance mechanisms (effective not important?).

Chapter 2 – methodology

Key guiding questions

  1. How is mapping of GAs being done?
  2. How is the collection of GA data being done?
  3. What are the main challenges in mapping and collecting?
  4. Are there identifiable good practice principles?
  5. What are the tools currently being used?

Chapter 3 – Literature review

here come the standards

Standards are defined as “the explicit levels of attainment required of and achieved by students and graduates, individually and collectively, in defined areas of knowledge and skills” (TEQSA, 2011, p. 3)… Academic standards are learning outcomes described in terms of core discipline knowledge and core discipline-specific skills, and expressed as the minimum learning outcomes that a graduate of any given discipline (or program) must have achieved (Ewan, 2010).

TEQSA is apparently requiring academic standards “be expressed as measurable or assessable learning outcomes”.

Determining standards and then collecting data against those is complex. Coates (2010) acknowledges the complexity and suggests a need for cultural change. And there is apparently an urgent need for “new, efficient and effective ways of judging and warranting” (Oliver, 2011, p. 3).

Extant literature

AoL finds its pedagogical basis in student-centered learning.

Curriculum mapping in AoL is the process of embedding learning outcomes related to GAs into units of study where these are introduced, developed and then assured.

AUQA required curriculum mapping, as do most professional accrediting bodies – hence the observation from Barrie et al. (2009) that most Australian universities have some sort of strategic project underway.

The higher ed mapping literature is scant but suggests it’s useful for (all backed up with citations)

  • identifying gaps in a program
  • monitoring course diversity and overlap
  • providing opportunity for reflection and discourse
  • reducing confusion and overlap and increasing coherence

There are more, but there does seem to be some overlap.

Mentions the problem of a compliance culture; other problems include

  • difference between the intended and the enacted curriculum from the students’ perspective
  • how to contextualise GAs into a discipline.
  • mapping seen as threatening, as a course cutting exercise, criticisms of teaching material etc.
  • labour intensive exercise.

Staff engagement is seen as the key and current suggestions for improvement include

  • develop a conceptual framework for developing GAs, including 3 elements
    1. clear statement of purpose for curriculum mapping.
    2. a tool that allows an aggregate view of a course.
    3. a process for use of the tool
  • map GAs using extensive audits of each course.
  • a cyclical process including visual representations to enable a fluid/adaptable curriculum
  • availability of sufficient resources.
  • use of alignment templates (isn’t this a tool?)
  • professional development to integrate and contextualise GAs.
  • having specialists who can teach a particular attribute.
  • whole of program approach, focus on team co-operation and more time spent on design.
  • staff support where workloads increase.
  • linkages between GA development and professional development.

Embedding versus standardised testing

Mentions various standardised tests used at the end of study. Talks about pluses and minuses.

Data collection for AoL

Focused on entering student performance outcomes against each learning objective. Need a “systematic method to collect data and explore the achievement levels of students in each of the selected attributes” to inform on-going development.

There are challenges in collecting and providing evidence – highlighting the need for efficiency and streamlining the process.

Assessment rubrics (formative and summative) are key, but there are challenges. Don’t want a “tick list”. Some skills are ill-defined, overlapping and difficult to measure. And there is the question of standardisation – homogenisation or the pursuit of common goals? Multiple interpretations of criteria.

Rubrics can come to be used for comparison between institutions and for assurance of content/process/outcomes across courses.

Continuous improvement/closing the loop

Apparently the “raison d’etre for assessing student learning”, and also something that institutions are “most confused about how to go about closing the loop” (Martell, 2007)… “integration of the assessment of learning outcomes into developmental approaches in the classroom has been somewhat intangible” (Taylor et al, 2009).

Curriculum mapping

Important features for selecting a CM system to support AoL include

  • support an inclusive and participatory process;
  • foster a program-wide approach to produce a mapped overview;
  • map by assessment task;
  • develop student awareness of attributes and their distribution within the program;

The standout tool was a spreadsheet!
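A curriculum map in spreadsheet form is essentially a units × attributes matrix, and even the gap-finding the literature mentions is a trivial computation over it. A minimal sketch, where the unit codes, attribute codes and the I/D/A (introduced/developed/assured) cell coding are all hypothetical illustrations, not taken from the report:

```python
# Curriculum map: rows are units of study, columns are graduate
# attributes. Cells record whether the attribute is Introduced,
# Developed or Assured in that unit ("" = not addressed).
curriculum_map = {
    "EDU1001": {"GA1": "I", "GA2": "I", "GA3": ""},
    "EDU2002": {"GA1": "D", "GA2": "",  "GA3": "I"},
    "EDU3003": {"GA1": "A", "GA2": "",  "GA3": "D"},
}

def find_gaps(cmap):
    """Return the attributes that are never assured anywhere in the program."""
    attributes = next(iter(cmap.values())).keys()
    assured = {ga for row in cmap.values()
               for ga, level in row.items() if level == "A"}
    return sorted(set(attributes) - assured)

print(find_gaps(curriculum_map))  # GA2 and GA3 are never assured
```

Which perhaps illustrates why a spreadsheet wins: the hard part is the collaborative process of filling the cells in, not the computation.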

Data collection

Important features for a data collection system included

  • implement a consistent criteria for attributes across programs;
  • extract outcome-specific data;
  • embed measurement in the curriculum
  • produce built-in reports;
  • conduct analysis for closing the loop;
  • implement multiple measures of AoL for program wide view.

ReView was seen as the standout. But then, it arose from the earlier OLT project, which makes this comment interesting

ReView does not rate that well on ‘ease of use without the need for much supplementary professional development’.

Technology-enhanced learning – workloads and costs

The following is a summary and some thoughts on the final report of an OLT funded project titled e-Teaching leadership: planning and implementing a benefits-oriented costs model for technology enhanced learning.

The final report adds “Out of hours” to the title and captures my interest in this area. In particular, I think that the workload for academic staff (and hence the quality of learning and teaching) is being directly impacted by the poor quality of both the institutional tools and how they are being provided. Improving these is where my research interests sit, so I’m hoping this report will provide some insights/quotes to build upon. I also think that the next couple of years will hold “interesting” conversations about workloads and workload models.


The work identifies that

  1. No Australian university really has an idea about workload allocation when it comes to online/blended learning.
  2. Academics are reporting significant increases in workload due to the rise of online/blended learning.

Some of the key recommendations appear to be

  1. “DEEWR in tandem with Universities Australia and other agencies should initiate a multi-level audit of teaching time and WAMs”.
  2. “Define clearly what it means in each program to teach online for staff, learn online for students and manage staff allocation within higher education institutions so that all stakeholders as well as Finance Officers can participate in workload model development.”

Both appear to assume that what currently passes for teaching online is as good as it gets. I’m thinking we still haven’t figured out how to do this well enough. We’re still in the process of recasting what it means to teach online. So, I wonder if putting a lot of effort into workload allocation, prior to figuring out how to do the work, is putting the cart before the horse?

Of course, formulating a workload formula is much easier than recasting the nature of teaching, learning and the institution in which those take place.

Executive summary

Project aims changed due to “lack of consistent sector information on real teaching costs in universities” (p. 2). Also no rigorous cost-accounting protocol is applied to e-teaching. “Unsurprisingly, the study found overload due to e-teaching was a significant factor in staff dissatisfaction” (p.2).

Of course, the conclusion from this is that “Workload models needed to change to accommodate the additional tasks of e-teaching” but you wonder why other factors weren’t considered. e.g. are the tools provided crap? are teaching staff not changing practice based on the changing nature of the task? etc.

Hoping that others will build on this with a sector-wide survey

Four outcomes

  1. Analysis of literature on costs/benefits of online teaching.
  2. Data of workload implications to help in developing workload models.
  3. Four case studies of staff perceptions of workload with TEL.
  4. Recommendations


  • Literature review revealed a lack of reporting and no documentation of the impact on workload when teaching online or in blended modes.
  • 88 interviews across four institutions showed poorly defined policy frameworks for workload allocations and staff didn’t understand those models.
  • The new technologies with new teaching methods have “increased both the number and type of teaching tasks undertaken by staff, with a consequent increase in their work hours” (p. 3)

Part 1 – Project outline and processes

As a result, institutional policies are often guided more by untested assumptions about reduction of costs per student unit, rather than being evidence-based and teacher-focused, with the result that implementation of new technologies for online teaching intended to reduce costs per student ‘unit’ results in a ‘black hole’ of additional expense (p. 4).

Part 2 – Literature review

Despite predictions otherwise “evidence of productivity gains and cost reduction due to e-teaching/learning is scant”.

Suggests that this project’s focus is on the everyday experience with ICTs, specifically, the workload factors. But I wonder if it will touch on the other factors identified in quotes in the literature review.

Expands on four broad influences: globalisation; technological innovation; macro/micro economic settings; and, renewed cultural emphasis on individualism.

One striking feature of the Gosper et al. study is that 75 per cent of staff had not altered the structure of their unit to incorporate new technologies, despite the clear evidence of Laurillard (2002), Bates (1995) and Twigg (2003) that re-design is crucial in utilising the web.

Part 3 – Project approach

Describes the interviews of 88 academic staff across 4 institutions and the analysis approach.

Part 3 – Aggregated results of interview analysis

Lists each of the questions and summarises results

76 out of 88 did not think workload matched actual work.

Types of online learning

  • 73 / 88 – discussions.
  • 63 – traditional learning resources.
  • 51 – podcasts
  • 42 – Assessment (what does this mean when..)
  • 23 – Assessment quizzes
  • 11 – Assessment submission and marking

Dissenting views of institutional e-learning

The following two quotes are talking about the e-learning context at the same institution at about the same time (2009 through about 2011).

The great

No [name of institution removed] interviewees commented on the impact of technology. It is probable that, since the institution had undergone a large review and renewal of technology in the learning management system, where processes to support academics were put in place and where academics were included in decision making and empowered to change and upskill, negative attitudes towards the general impact of technology were not an issue for staff. One can hypothesise that these issues were principally resolved.

The not so great

During training sessions … several people made suggestions and raised issues with the structure and use of Moodle. As these suggestions and issues were not recorded and the trainers did not feed them back to the programmers … This resulted in frustration for academic staff when teaching with Moodle for the first time as the problems were not fixed before teaching started… [t]he longer the communication chain, the less likely it was that academic users’ concerns would be communicated correctly to the people who could fix the problems

Seems to be a problem of communication somewhere in there.

I wonder which view was closer to the truth (whatever that is)? Given that the first quote is from a nationally funded research project (the second from a peer-reviewed journal publication), I wonder what implications this has for the practice of institutional e-learning? Or, what it is that institutions say about their practice of e-learning?

Planning an analysis of the learning analytics literature

With the vague idea of the IRAC framework done, it’s time to take the next step, i.e. “to use the framework to analyse the extant learning analytics literature”. The following is some initial thinking behind why, what and how we’re thinking of doing.

The main question we have here is “how?”. Trying to figure out the best tools and process to help do this analysis. I’ve come up with two possibilities below, any better ones?

Wondering if Evernote and similar applications might offer a possibility? Chances are that NVivo might be an option, or some other form of qualitative analysis software might be the go.


We have a feeling that the learning analytics literature is over focusing on certain areas and ignoring some others. We want to find out if this is the case. This is important because we argue that this possible problem is going to make it harder for learning analytics to be integrated into the learning and teaching process and actually improve learning and teaching.


The plan is to analyse the learning analytics literature – initially we’ll probably focus on the proceedings from the 2013 LAK conference – with two main aims

  1. Identify the relative frequency with which the literature talks about the four components of the IRAC framework, i.e. Information, Representation, Affordance and Change.
  2. Identify what aspects of each of the four components are covered in the literature.

We think this is sensible because of the theory/principles we’ve built the IRAC framework on and some suggestion from the learning analytics literature that the IRAC components apply to learning analytics.

The foundation of the following image is Siemens’ (2013) analytics model. Over the top of that model we’ve applied the IRAC framework components. Not only does this suggest that the IRAC components aren’t entirely divorced from learning analytics, it also suggests the potential over-emphasis that we’re worried about. The image shows how the Information component – gathering the information and analysing it – takes up more than three-quarters of the model. While this is an essential component of learning analytics, our argument is that the Affordances and Change components need significantly greater consideration if learning analytics is to be integrated into learning and teaching processes.



The basic process is likely to be that each of the co-authors will read each article in the list of papers we select and highlight and annotate sections of the papers that discuss one or more of the four components of the IRAC framework.

Maintain some sort of database that tracks, for each paper and each co-author

  • The number of times each framework component is mentioned.
  • The text associated with each component.
  • Some potential labels for the attribute of each component that the text might represent.
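That “database” could be as simple as a list of annotation records plus a couple of tallies. A minimal sketch of the idea; the field names, paper identifiers and labels below are my own hypothetical illustrations, not part of the plan:

```python
from collections import Counter

# One record per highlighted/annotated passage: which paper, which
# co-author made the annotation, which IRAC component it discusses,
# the quoted text, and a tentative attribute label for that component.
annotations = [
    {"paper": "lak13-paper-01", "coder": "coder-a", "component": "Information",
     "quote": "clickstream data was gathered...", "label": "data capture"},
    {"paper": "lak13-paper-01", "coder": "coder-a", "component": "Representation",
     "quote": "a dashboard visualises...", "label": "dashboard"},
    {"paper": "lak13-paper-02", "coder": "coder-b", "component": "Information",
     "quote": "LMS logs were analysed...", "label": "data capture"},
]

# Aim 1: relative frequency of each IRAC component across the corpus.
component_counts = Counter(a["component"] for a in annotations)
print(component_counts)

# Aim 2: the attribute labels observed for a given component.
info_labels = sorted({a["label"] for a in annotations
                      if a["component"] == "Information"})
print(info_labels)
```

Keeping the coder in each record also makes it easy to compare how the co-authors coded the same paper.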

Some possible solutions follow.


  1. Each of us imports the list of papers into Mendeley.
  2. Read the papers in Mendeley’s reader.
  3. Use the highlight and annotate tools to indicate appropriate quotes.

    The annotation could include the name of the IRAC component.

  4. When finished with a paper, export the annotations as a PDF.
  5. Use the PDFs as input to the database.

PDFs are not great for this type of manipulation. Can see some manual work arising there.


  1. Convert the list of papers to separate HTML files.

    Diigo won’t let you annotate PDF files.

  2. Upload them to a secure web server.

    So that only the authors can see them, protecting copyright etc.

  3. Create a Diigo group for the authors.
  4. Use Diigo to tag and annotate the HTML versions of the papers.
  5. Extract the information via the Diigo RSS feeds.

Almost a nice solution, however, Diigo doesn’t include the annotations in the RSS feed.
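For step 5, the extraction itself would be simple with the standard library; the problem is what the feed carries. A sketch against a hypothetical, simplified RSS 2.0 sample (I'm assuming a Diigo-style feed exposes item title, link, description and tags, but, as noted, not the annotation text):

```python
import xml.etree.ElementTree as ET

# Hypothetical, simplified Diigo-style RSS sample. Note there is no
# element carrying the annotation text -- the limitation noted above.
SAMPLE_RSS = """<rss version="2.0"><channel>
  <item>
    <title>lak13-paper-01.html</title>
    <link>https://example.com/papers/lak13-paper-01.html</link>
    <description>tagged: IRAC-Information</description>
  </item>
</channel></rss>"""

def parse_feed(xml_text):
    """Extract (title, link, description) triples from an RSS string."""
    root = ET.fromstring(xml_text)
    return [(item.findtext("title"), item.findtext("link"),
             item.findtext("description"))
            for item in root.iter("item")]

for title, _link, desc in parse_feed(SAMPLE_RSS):
    print(title, desc)
```

So the tags come through and could drive the per-component counts, but the annotated quotes would still need to be recovered some other way.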


Siemens, G. (2013). Learning Analytics: The Emergence of a Discipline. American Behavioral Scientist, 57(10), 1371–1379. doi:10.1177/0002764213498851

The IRAC framework: Locating the performance zone for learning analytics

The following is the final version of a short paper that’s been accepted for ASCILITE’2013. It’s our first attempt to formulate and present the IRAC framework for analysing and designing learning analytics applications. This presentation from last week expands on the IRAC framework a little and touches on some of the future work.

David Jones
University of Southern Queensland
Colin Beer, Damien Clark
Office of Learning and Teaching


It is an unusual Australian University that is not currently expending time and resources in an attempt to harness learning analytics. This rush, like prior management fads, is likely to face significant challenges when it comes to adoption, let alone the more difficult challenge of translating possible insights from learning analytics into action that improves learning and teaching. This paper draws on a range of prior research to develop four questions – the IRAC framework – that can be used to improve the analysis and design of learning analytics tools and interventions. Use of the IRAC framework is illustrated through the analysis of three learning analytics tools currently under development. This analysis highlights how learning analytics projects tend to focus on limited understandings of only some aspects of the IRAC framework and suggests that this will limit its potential impact.

Keywords: learning analytics; IRAC; e-learning; EPSS; educational data mining; complex adaptive systems


The adoption of learning analytics within Australian universities is trending towards a management fashion or fad. Given the wide array of challenges facing Australian higher education, the lure of evidence-based decision making has made the quest to implement some form of learning analytics “stunningly obvious” (Siemens & Long, 2011, p. 31). After all, learning analytics is increasingly being seen as “essential for penetrating the fog that has settled over much of higher education” (Siemens & Long, 2011, p. 40). The rush toward Learning Analytics is illustrated by its transition from not even a glimmer on the Australian and New Zealand Higher Education technology horizon in 2010 (Johnson, Smith, Levine, & Haywood, 2010) to predictions of its adoption in one year or less in 2012 (Johnson, Adams, & Cummins, 2012) and again in 2013 (Johnson et al., 2013). It is in situations like this – where an innovation has achieved a sufficiently high public profile – that the rush to join the bandwagon can swamp deliberative, mindful behaviour (Swanson & Ramiller, 2004). If institutions are going to successfully harness learning analytics to address the challenges facing the higher education sector, then it is important to move beyond slavish adoption of the latest fashion and aim for mindful innovation.

This paper describes the formulation and use of the IRAC framework as a tool to aid the mindful implementation of learning analytics. The IRAC framework consists of four broad categories of questions – Information, Representation, Affordances and Change – that can be used to scaffold analysis of the complex array of, often competing, considerations associated with the institutional implementation of learning analytics. The design of the IRAC framework draws upon bodies of literature including Electronic Performance Support Systems (EPSS) (Gery, 1991), the design of cognitive artefacts (Norman, 1993), and Decision Support Systems (Arnott & Pervan, 2005). In turn, considerations within each of the four questions are further informed by a broad array of research from fields including learning analytics, educational data mining, complex adaptive systems, ethics and many more. It is suggested that the considered use of the IRAC framework to analyse learning analytics implementations in a particular context, for specific tasks, will result in designs that are more likely to be integrated into and improve learning and teaching practices.

Learning from the past

The IRAC framework is based on the assumption that the real value and impact of learning analytics arises from its integration into the “tools and processes of teaching and learning” (Elias, 2011, p. 5). It is from this perspective that the notion of Electronic Performance Support Systems (EPSS) is seen as providing useful insights as EPSS embody a “perspective on designing systems that support learning and/or performing” (Hannafin, McCarthy, Hannafin, & Radtke, 2001, p. 658). EPSS are computer-based systems intended to “provide workers with the help they need to perform certain job tasks, at the time they need that help, and in a form that will be most helpful” (Reiser, 2001, p. 63). This captures the notion of the performance zone defined by Gery (1991) as the metaphorical area where all of the necessary information, skills, and dispositions come together to ensure successful task completion. For Villachica, Stone & Endicott (2006) the performance zone “emerges with the intersection of representations appropriate to the task, appropriate to the person, and containing critical features of the real world” (p. 540). This definition of the performance zone is a restatement of Dickelman’s (1995) three design principles for cognitive artefacts drawn from Norman’s (1993) book “Things That Make Us Smart”. In this book, Norman (1993) argues “that technology can make us smart” (p. 3) through our ability to create artefacts that expand our capabilities. At the same time, however, Norman (1993) argues that the “machine-centered view of the design of machines and, for that matter, the understanding of people” (p. 9) results in artefacts that “more often interferes and confuses than aids and clarifies” (p. 9). A danger faced in the current rush toward learning analytics.

The notions of EPSS, the Performance Zone and Norman’s (1993) insights into the design of cognitive artefacts – along with insights from other literature – provide the four questions that form the IRAC framework. The IRAC framework is intended to be applied with a particular context and a particular task in mind. A nuanced appreciation of context is at the heart of mindful innovation with Information Technology (Swanson & Ramiller, 2004). Olmos and Corrin (2012), amongst others, reinforce the importance of learning analytics starting with “a clear understanding of the questions to be answered” (p. 47) or the task to be achieved. When used this way, it is suggested that the IRAC framework will help focus attention on factors that will improve the implementation and impact of learning analytics. The four questions at the core of the IRAC framework, along with some of the associated factors, are:

  1. Is all the relevant Information and only the relevant information available?

    While there is an “information explosion”, the information we collect is usually about “those things that are easiest to identify and count or measure” but which may have “little or no connection with those factors of greatest importance” (Norman, 1993, p. 13). This leads to Verhulst’s observation (cited in Bollier & Firestone, 2010) that “big data is driven more by storage capabilities than by superior ways to ascertain useful knowledge” (p. 14). There are various other aspects of information to consider. For instance, is the information required technically and ethically available for use? How is the information to be cleaned, analysed and manipulated? Is the information sufficient to fulfill the needs of the task? In particular, does the information captured provide a reasonable basis upon which to “contribute to the understanding of student learning in a complex social context such as higher education” (Lodge & Lewis, 2012, p. 563)?

  2. Does the Representation of the information aid the task being undertaken?

    A bad representation will turn a problem into a reflective challenge, while an appropriate representation can transform the same problem into a simple, straightforward task (Norman, 1993). Representation has a profound impact on design work (Hevner, March, Park, & Ram, 2004), particularly on the way in which tasks and problems are conceived (Boland, 2002). In order to maintain performance, it is necessary for people to be “able to learn, use, and reference necessary information within a single context and without breaks in the natural flow of performing their jobs” (Villachica et al., 2006, p. 540). Olmos and Corrin (2012) suggest that there is a need to better understand how visualisations of complex information can be used to aid analysis. Considerations here focus on how easy it is to understand the implications and limitations of the findings provided by learning analytics.

  3. Are there appropriate Affordances for action?

    A poorly designed or constructed artefact can greatly hinder its use (Norman, 1993). For an application of information technology to have a positive impact on individual performance it must be utilised and be a good fit for the task it supports (Goodhue & Thompson, 1995). Human beings tend to use objects in “ways suggested by the most salient perceived affordances, not in ways that are difficult to discover” (Norman, 1993, p. 106). The nature of such affordances is not inherent to the artefact; affordances are instead co-determined by the properties of the artefact in relation to the properties of the individual, including the goals of that individual (Young, Barab, & Garrett, 2000). Glassey (1998) observes that by providing “the wrong end-user tools and failing to engage and enable end users”, even the best implemented data warehouses “sit abandoned” (p. 62). Tutty, Sheard and Avram (2008) suggest there is evidence that institutional quality measures not only inhibit change, “they may actually encourage inferior teaching approaches” (p. 182). The consideration for affordances is whether or not the tool and the surrounding environment provide support for action that is appropriate to the context, the individuals and the task.

  4. How will the information, representation and the affordances be Changed?

    The idea of evolutionary development has been central to the theory of decision support systems (DSS) since its inception in the early 1970s (Arnott & Pervan, 2005). Rather than being implemented in a linear or parallel fashion, development occurs through continuous action cycles involving significant user participation (Arnott & Pervan, 2005). Beyond the systems, there is a need for the information being captured to change. Buckingham Shum (2012) identifies the risk that research and development based on data already being gathered will tend to perpetuate the existing dominant approaches from which the data was generated. Bollier and Firestone (2010) observe that once “people know there is an automated system in place, they may deliberately try to game it” (p. 6). Universities are complex systems (Beer, Jones, & Clark, 2012) requiring reflective and adaptive approaches that seek to identify and respond to emergent behaviour in order to stimulate increased interaction and communication (Boustani et al., 2010). Potential considerations here include: who is able to implement change? Which, if any, of the three prior questions can be changed? How radical can those changes be? Is a diversity of change possible?
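To make the framework concrete, the four questions above could be captured as a simple checklist structure that an implementation team works through for a given context and task. The following is a minimal, hypothetical sketch; the class, category names and example notes are illustrative assumptions, not part of the framework itself.

```python
from dataclasses import dataclass, field

@dataclass
class IRACAnalysis:
    """Hypothetical scaffold for an IRAC analysis of one context/task pair."""
    context: str
    task: str
    notes: dict = field(default_factory=dict)

    # The four IRAC questions, keyed by category (plain class attribute,
    # shared by all analyses).
    QUESTIONS = {
        "Information": "Is all the relevant information, and only the relevant information, available?",
        "Representation": "Does the representation of the information aid the task?",
        "Affordances": "Are there appropriate affordances for action?",
        "Change": "How will the information, representation and affordances be changed?",
    }

    def record(self, category: str, note: str) -> None:
        """Attach a consideration to one of the four categories."""
        if category not in self.QUESTIONS:
            raise ValueError(f"Unknown IRAC category: {category}")
        self.notes.setdefault(category, []).append(note)

    def unanswered(self) -> list:
        """Categories not yet considered for this implementation."""
        return [q for q in self.QUESTIONS if q not in self.notes]

analysis = IRACAnalysis(context="institutional LMS", task="improve retention")
analysis.record("Information", "LMS clickstream plus student records")
print(analysis.unanswered())  # → ['Representation', 'Affordances', 'Change']
```

The `unanswered` list makes visible which questions an implementation has yet to consider, echoing the paper's concern that attention typically stops at the first two.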

It is proposed that the lens provided by the IRAC framework can help increase the mindfulness of innovation arising from learning analytics. In particular, it can move consideration beyond the existing over-emphasis on the first two questions and raise awareness of the last two questions. This shift in emphasis appears necessary to increase the use and effectiveness of learning analytics. The IRAC framework can also provide suggestions for future directions. In the final section, the paper seeks to illustrate the value of the IRAC framework by using it to compare and contrast three nascent learning analytics tools against each other and contemporary practice.

Looking to the future

The Student Support Indexing system (SSI) mirrors many other contemporary learning analytics tools with a focus on the task of improving retention through intervention. Like similar systems, it draws upon LMS clickstream information in combination with data from other context-specific student information systems and continuously indexes potential student risk. Only a very few such systems, such as S3 (Essa & Ayad, 2012), provide the ability to change a formula in response to a particular context. SSI also represents the information in tabular form, separate from the learning context. SSI does provide common affordances for intervention and tracking, which appear to assist in the development of a shared understanding of student support needs across teaching and student support staff. Initial findings are positive, with teaching staff appreciating the aggregation of information from various institutional systems in conjunction with basic affordances for intervention facilitation and tracking. In its current pilot form, the SSI provides little in terms of change, and it is hoped that in future iterations the underlying processes for indexing student risk and for tracking and monitoring student interventions can be represented in more contextually appropriate ways.
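As an illustration of the kind of configurable indexing a system like SSI performs, the sketch below combines normalised per-student indicators into a weighted risk score. The indicator names, weights and formula are illustrative assumptions, not SSI's actual method; the point is that the weights can be swapped per context, reflecting the ability to "change a formula in response to a particular context".

```python
# Illustrative default weighting; a particular course or institution
# could substitute its own.
DEFAULT_WEIGHTS = {
    "days_since_last_login": 0.4,
    "missed_assessments": 0.4,
    "forum_posts_deficit": 0.2,
}

def risk_index(indicators: dict, weights: dict = DEFAULT_WEIGHTS) -> float:
    """Combine normalised indicators (each in 0..1) into a 0..1 risk score."""
    total = sum(weights.values())
    return sum(weights[k] * indicators.get(k, 0.0) for k in weights) / total

# A hypothetical student whose indicators have already been normalised.
student = {"days_since_last_login": 0.9,
           "missed_assessments": 0.5,
           "forum_posts_deficit": 0.2}
print(round(risk_index(student), 2))  # → 0.6
```

A continuously running indexer would recompute such a score as new clickstream and student-record data arrive, then surface the results to staff.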

The Moodle Activity Viewer (MAV) currently serves a similar task to traditional LMS reporting functionality and draws on much the same LMS clickstream information to represent student usage of course website activities and resources. MAV’s representational distinction is that it visualises student activity as a heat map that is overlaid directly onto the course website. MAV, like many contemporary learning analytics applications, offers little in the way of affordances. Perhaps the key distinction with MAV is that it is implemented as a browser-based add-on that depends on an LMS-independent server. This architectural design offers greater ability for change because it avoids the administrative and technical complexity of LMS module development (Leony, Pardo, Valentín, Quinones, & Kloos, 2012) and the associated governance constraints. It is this capability for change that is seen as the great strength of MAV, offering the potential to overcome its limited affordances, and a foundation for future research.
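The heat-map idea can be sketched as a mapping from usage counts to overlay colours. The colour scale, link names and counts below are illustrative assumptions; MAV's actual implementation is a browser add-on talking to an LMS-independent server, not this function.

```python
def heat_colour(clicks: int, max_clicks: int) -> str:
    """Map a usage count onto a red intensity as a CSS rgba() string."""
    if max_clicks <= 0:
        return "rgba(255, 0, 0, 0.00)"
    intensity = min(clicks / max_clicks, 1.0)
    return f"rgba(255, 0, 0, {intensity:.2f})"

# Hypothetical per-link click counts harvested from LMS clickstream data.
usage = {"Week 1 notes": 480, "Assignment 1": 320, "Week 9 notes": 40}
peak = max(usage.values())

# One background colour per link, ready to overlay on the course page.
overlay = {link: heat_colour(n, peak) for link, n in usage.items()}
print(overlay["Week 9 notes"])  # → rgba(255, 0, 0, 0.08)
```

Rendering the scores directly onto the course website, rather than in a separate report, is what keeps the representation inside the context where teaching staff already work.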

BIM is a Moodle plugin that manages the use of student-selected, externally hosted blogs as reflective journals. The information used by BIM is the posts written by students, moving beyond the limitations (see Lodge & Lewis, 2012) associated with an over-reliance on clickstream information. Since BIM aims to support a particular learning design – reflective journals – it enables exploration of process analytics (Lockyer, Heathcote, & Dawson, 2013). In particular, how process analytics can be leveraged to support the implementation of affordances for automated assessment, scaffolding of student reflective writing, and encouraging connections between students and staff. Like MAV, the work on BIM is also exploring approaches to avoid the constraints on change placed by existing LMS and organisational approaches.
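The information source BIM relies on, posts from externally hosted student blogs, can be sketched as simple feed parsing. The feed content and function below are illustrative stand-ins, not BIM's code; BIM itself is a Moodle plugin written in PHP.

```python
import xml.etree.ElementTree as ET

# A stand-in RSS 2.0 feed of the kind an externally hosted student
# blog would publish; titles and dates are invented for illustration.
RSS = """<rss version="2.0"><channel>
  <item><title>Week 3 reflection</title><pubDate>Mon, 05 Aug 2013 10:00:00 GMT</pubDate></item>
  <item><title>Trying ICTs in class</title><pubDate>Mon, 12 Aug 2013 09:30:00 GMT</pubDate></item>
</channel></rss>"""

def student_posts(feed_xml: str) -> list:
    """Return (title, pubDate) pairs for each post in an RSS 2.0 feed."""
    channel = ET.fromstring(feed_xml).find("channel")
    return [(item.findtext("title"), item.findtext("pubDate"))
            for item in channel.iter("item")]

posts = student_posts(RSS)
print(len(posts))  # → 2
```

Working from the posts themselves, rather than from clicks on an LMS page, is what lets process analytics ask questions about the reflective writing a learning design intends to produce.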

The IRAC framework arose from a concern that most existing learning analytics applications were falling outside the performance zone and were thus unlikely to successfully and sustainably improve learning and teaching. Existing initiatives focus heavily on information, its analysis and its representation, and not enough on technological affordances for action and the agility to change and adapt. Drawing on the EPSS and other literature, we have proposed the IRAC framework as a guide to help locate the performance zone for learning analytics. The next step with the IRAC framework is a more detailed identification and description of its four components. Following this, we intend to use the framework to analyse the extant learning analytics literature and to guide the development and evaluation of learning analytics applications such as SSI, MAV and BIM.


Arnott, D., & Pervan, G. (2005). A critical analysis of decision support systems research. Journal of Information Technology, 20(2), 67–87.

Beer, C., Jones, D., & Clark, D. (2012). Analytics and complexity : Learning and leading for the future. In M. Brown, M. Hartnett, & T. Stewart (Eds.), Future Challenges, Sustainable Futures. Proceedings of ascilite Wellington 2012 (pp. 78–87). Wellington, NZ.

Boland, R. J. (2002). Design in the punctuation of management action. In R. Boland & F. Collopy (Eds.), Managing as designing (pp. 106–112). Stanford, CA: Stanford University Press.

Bollier, D., & Firestone, C. (2010). The promise and peril of big data. Washington DC: The Aspen Institute.

Boustani, M. A., Munger, S., Gulati, R., Vogel, M., Beck, R. A., & Callahan, C. M. (2010). Selecting a change and evaluating its impact on the performance of a complex adaptive health care delivery system. Clinical Interventions in Aging, 5, 141–148.

Buckingham Shum, S. (2012). Learning Analytics. UNESCO IITE Policy Brief. Moscow: UNESCO Institute for Information Technologies in Education.

Dickelman, G. (1995). Things that help us perform: Commentary on ideas from Donald A. Norman. Performance Improvement Quarterly, 8(1), 23–30.

Elias, T. (2011). Learning Analytics: Definitions, Processes and Potential.

Essa, A., & Ayad, H. (2012). Student success system: risk analytics and data visualization using ensembles of predictive models. In Proceedings of the 2nd International Conference on Learning Analytics and Knowledge – LAK ’12 (pp. 2–5). Vancouver: ACM Press.

Gery, G. J. (1991). Electronic Performance Support Systems: How and why to remake the workplace through the strategic adoption of technology. Tolland, MA: Gery Performance Press.

Glassey, K. (1998). Seducing the End User. Communications of the ACM, 41(9), 62–69.

Goodhue, D., & Thompson, R. (1995). Task-technology fit and individual performance. MIS quarterly, 19(2), 213.

Hannafin, M., McCarthy, J., Hannafin, K., & Radtke, P. (2001). Scaffolding performance in EPSSs: Bridging theory and practice. In World Conference on Educational Multimedia, Hypermedia and Telecommunications (pp. 658–663).

Hevner, A., March, S., Park, J., & Ram, S. (2004). Design science in information systems research. MIS Quarterly, 28(1), 75–105.

Johnson, L., Adams Becker, S., Cummins, M., Freeman, A., Ifenthaler, D., & Vardaxis, N. (2013). Technology Outlook for Australian Tertiary Education 2013-2018: An NMC Horizon Project Regional Analysis. Austin, Texas.

Johnson, L., Adams, S., & Cummins, M. (2012). Technology Outlook for Australian Tertiary Education 2012-2017: An NMC Horizon Report Regional Analysis. New Media Consortium. Austin, Texas.

Johnson, L., Smith, R., Levine, A., & Haywood, K. (2010). The horizon report: 2010 Australia-New Zealand Edition. Austin, Texas.

Leony, D., Pardo, A., Valentín, L. de la F., Quinones, I., & Kloos, C. D. (2012). Learning analytics in the LMS: Using browser extensions to embed visualizations into a Learning Management System. In R. Vatrapu, W. Halb, & S. Bull (Eds.), TaPTA. Saarbrucken: CEUR-WS.org.

Lockyer, L., Heathcote, E., & Dawson, S. (2013). Informing pedagogical action: Aligning learning analytics with learning design. American Behavioral Scientist, 57(10), 1439–1459.

Lodge, J., & Lewis, M. (2012). Pigeon pecks and mouse clicks : Putting the learning back into learning analytics. In M. Brown, M. Hartnett, & T. Stewart (Eds.), Future challenges, sustainable futures. Proceedings ascilite Wellington 2012 (pp. 560–564). Wellington, NZ.

Norman, D. A. (1993). Things that make us smart: Defending human attributes in the age of the machine. Reading, MA: Addison Wesley.

Olmos, M., & Corrin, L. (2012). Learning analytics: a case study of the process of design of visualizations. Journal of Asynchronous Learning Networks, 16(3), 39–49.

Reiser, R. (2001). A history of instructional design and technology: Part II: A history of instructional design. Educational Technology Research and Development, 49(2), 57–67.

Siemens, G., & Long, P. (2011). Penetrating the Fog: Analytics in Learning and Education. EDUCAUSE Review, 46(5).

Swanson, E. B., & Ramiller, N. C. (2004). Innovating mindfully with information technology. MIS Quarterly, 28(4), 553–583.

Tutty, J., Sheard, J., & Avram, C. (2008). Teaching in the current higher education environment: perceptions of IT academics. Computer Science Education, 18(3), 171–185.

Villachica, S., Stone, D., & Endicott, J. (2006). Performance support systems. In J. Pershing (Ed.), Handbook of Human Performance Technology (3rd ed., pp. 539–566). San Francisco, CA: John Wiley & Sons.

Young, M. F., Barab, S. A., & Garrett, S. (2000). Agent as detector: An ecological psychology perspective on learning by perceiving-acting systems. In D. Jonassen & S. Land (Eds.), Theoretical foundations of learning environments (pp. 143–173). Mahwah, New Jersey: Lawrence Erlbaum Associates.