Assembling the heterogeneous elements for (digital) learning

Month: May 2016

Digital technology ignorance and its implications for learning and teaching

Slides and abstract for the presentation can be found below.

A video recording of the presentation is also available.



Digital technology is increasingly a pervasive presence in contemporary society. The knowledge and skills required to utilise digital technologies are increasingly seen as necessary for both individuals and organisations, if they wish to become successful participants in and contributors to society. For the individuals and organisations involved in education there is a growing expectation that they are not only required to help learners develop the necessary digital technology knowledge and skills, but that they have the knowledge and skills to effectively use digital technology to fulfill that requirement. Recent history suggests that many individuals and institutions involved in education are struggling to fulfill this expectation (Bigum, 2012; Johnson, Adams Becker, Estrada, & Freeman, 2015; Masters, 2016; Mcleod & Carabott, 2016; OECD, 2015; Willingham, 2016).

There are numerous factors that contribute toward these on-going struggles. However, this talk will propose that ignorance of, and the subsequent failure to harness, the true nature of digital technology is a significant and under-examined factor, one that is in some cases deemed unimportant (Kirschner, 2015). Drawing on a range of literature (Kay, 1984; Mishra & Koehler, 2006; Papert, 1993; Yoo, Boland, Lyytinen, & Majchrzak, 2012; Yoo, Henfridsson, & Lyytinen, 2010) this talk will develop a model for understanding the fundamental properties and unique affordances of digital technology. The talk will illustrate how this model can be used to identify and understand significant shortcomings with existing practice and research at all levels of education. Lastly, the talk will use the model to map out potentially fruitful areas of future research around questions such as:

  • Why will growing up using digital technology everyday never be sufficient to make you a digital native?
  • Why might 88.5% of teachers and 74% of students in Auburn, Maine prefer laptops over iPads, and what might that say about the value of tablets as computing devices?
  • Why is the Moodle assignment activity so hard to use in my course and why does the provided documentation not help?
  • What’s next after the Learning Management System?
  • Why is the current push to embed the teaching of coding in primary schools likely to fail and what might be done about it?
  • How might an educational institution leverage the fundamental properties and unique affordances of digital technology to be “a leader in physical and digital higher education learning experiences geared to a diverse student constituency“?


Bigum, C. (2012). Schools and computers: Tales of a digital romance. In L. Rowan & C. Bigum (Eds.), Transformative Approaches to New Technologies and student diversity in futures oriented classrooms: Future Proofing Education (pp. 15–28). London: Springer.

Johnson, L., Adams Becker, S., Estrada, V., & Freeman, A. (2015). NMC Horizon Report: 2015 K-12 Edition. Austin, Texas.

Kay, A. (1984). Computer Software. Scientific American, 251(3), 53–59.

Kirschner, P. A. (2015). Do we need teachers as designers of technology enhanced learning? Instructional Science, 43(2), 309–322.

Masters, G. (2016). Five challenges in Australian School Education.

Mcleod, A., & Carabott, K. (2016). Students struggle with digital skills because their teachers lack confidence. The Conversation. Retrieved May 30, 2016, from

Mishra, P., & Koehler, M. (2006). Technological pedagogical content knowledge: A framework for teacher knowledge. Teachers College Record, 108(6), 1017–1054.

OECD. (2015). Students, Computers and Learning: Making the Connection. Paris.

Papert, S. (1993). Mindstorms: Children, Computers and Powerful Ideas (2nd ed.). New York, New York: Basic Books.

Willingham, D. (2016, May 15). The false promise of tech in schools: Let’s make chagrined admission 2.0. New York Daily News. Retrieved from

Yoo, Y., Boland, R. J., Lyytinen, K., & Majchrzak, A. (2012). Organizing for Innovation in the Digitized World. Organization Science, 23(5), 1398–1408.

Yoo, Y., Henfridsson, O., & Lyytinen, K. (2010). The new organizing logic of digital innovation: An agenda for information systems research. Information Systems Research, 21(4), 724–735.

Early thoughts on S1, 2016 offering of EDC3100

First semester for 2016 is just about over. Time to reflect back on what’s happened with EDC3100, ICT and Pedagogy for this semester.

Overall, I feel the course is in a better place than it was last year. But there remains significant room for improvement.

It will be interesting to see what the students think. It will be a couple of months until I see their feedback.

Changes made this semester

A range of different changes were made this semester.

New module 1 and assignment 1

Historically, EDC3100 starts with a bang and a lot of work – too much work – for students. The old assignment 1 required students to expend a fair bit of time getting to know a new technology. The return on that investment wasn’t as much as it might have been, hence students disliked it. This semester Assignment 1 and the supporting Module 1 were completely re-designed with an intent to reduce student workload and focus on a few particular outcomes.

In short, that appears to have worked okay: workload was reduced. The content and activities for Module 1 could use some enhancement to make their purpose clearer and more engaging. The weekly release of Module 1 wasn’t great.

Assignment 1 as an Excel spreadsheet

Assignment 1 was designed around students using an Excel spreadsheet for at least three reasons:

  1. Provide students with more experience using an application type they appear not to have used a great deal.
  2. Ensure that the insights generated by both students and markers could be analysed via computer programs.
  3. Reduce the workload for markers.

The spreadsheet worked reasonably well. The checklist within the spreadsheet requires some refinement. As do some aspects of the rubric. The duplication of a Word-based coversheet needs to be removed.

Analysing the submitted spreadsheets via software has commenced, but hasn’t been completed.  In part this is due to Moodle not providing a good way of extracting all marked files. This has been worked around – thanks to the good folk at LSS – but time is an issue. More work needs to be done on the analysis and sharing of insights gained from it.

The return of Toowoomba lectures

In 2015 there were no Toowoomba lectures and as a result the SET results for 2015 suffered. Students missed the lectures. In 2016 they were back and were streamed live using Zoom. The lectures and way stations were also more effectively integrated into the Study Desk structure.

The lectures were okay and there were folk attending via Zoom. Overall, however, the lectures need work. The exact relationship between the lectures and the learning paths needs to be thought about. Should the lectures duplicate the learning paths, or complement them?

Recordings of the lectures (hosted on Vimeo) show a greater amount of usage than I expected to see.
Video stats end May 2016

Refinements to later modules

A range of minor to more major refinements were made, especially for Module 3. These are generally an improvement over what went before.

The not so good

The not-so-good experiences this semester include:

  • The weekly release of Module 1.
  • My absence in Week 4 due to a conference.
  • The difficulty in finding material within the learning paths.
    The availability of the Moodle book search block from next semester will help massively with this problem.
  • Assignment 2 has the students thinking more about unit design than ICT and pedagogy.
  • A couple of major “marking outages” leading to late return of marked assignments.
  • The quality and quantity of feedback provided to students.
    This is a two-edged sword. Feedback on assignments was variable. Feedback via some of the study desk activities and discussion forums was quite good.
  • The on-going confusion amongst quite a few students around the learning journal and the marking of it.
  • Marking of assignments still requires too much work from the markers.

To do

All of this (including the following) will need to be revisited once the student feedback has been given, released to me, and considered.

Beyond some of what is mentioned above and based on what I currently know, the following need to be done:

  1. Use analytics to explore how students are engaging with the learning paths, including the ability to produce an ePub version and/or print them.
  2. Analyse the Assignment 1 data and identify what new activities/resources this might be useful for.
  3. Better integrate the recorded lectures and other components with the learning paths.
  4. Think about a re-design of Module 2 and Assignment 2.

Automating a SET leaderboard

End of semester 1 is fast approaching. One of the end of semester tasks is encouraging students in courses to complete the institutional Student Evaluation of Teaching (SET) surveys. Last year I experimented with a “SET leaderboard” (see the following image). It’s a table that lists the response rates on the SET surveys for the current and previous offerings of the course ranked according to the percentage response rate.

SET leaderboard

At my institution the SET surveys open quite a few weeks before end of semester and remain open until just before the release of final results. While the surveys are open teaching staff cannot see any of the responses. However, we can see the number of responses (Update: at least until this year when they introduced a new system that removed this functionality. Update on the update: Nope, looks like the new system does support it, PEBKAC). This was how I was able to update the leaderboard for the current offering every couple of days. The leaderboard was visible to students whenever they visited the course website, making the current response rate visible.

As the above image shows, it seems to have been a fairly successful approach.

Automating the process

That success has generated some interest from others in replicating this approach. The problem is that doing so requires some familiarity with HTML and tables. The standard GUI HTML editors don’t do a great job of supporting the re-ordering of table rows. To help others adopt this practice, and also to reduce my load a bit, the following explores if and how the process can be at least partially automated.

The plan is to write some Javascript that can be included in a Web page that will automatically generate a table like the above (with the correctly ordered rows) based on data from a Google spreadsheet.

The implication is that all I (or some other academic) need to do is keep the Google spreadsheet updated and the script will take care of the rest. It also means that I can create multiple copies of the leaderboard in different locations, but only need to modify the data in one source.

In a perfect world, the institutional SET survey system would have an API that could be used to extract the data, thereby removing the need for the academic to manually copy the response rate to a Google spreadsheet.

Of course, after implementing all of the work below, it appears that the new institutional system for administering the SET surveys has removed the functionality that allowed course examiners to see how many students had responded so far.

Reading data from a Google spreadsheet

There are a variety of ways this can be done; the Sheetrock library looks like a useful approach.

Got the sample working locally. Connected it to my spreadsheet. The spreadsheet needs to be made public, and others need to use the same URL I use (not the “shareable link”) to view it.

The neat thing is that the Google query language allows the data to be pre-ordered. So the table is automatically in the right order.

Make it look pretty

Next step is to style it, in particular to highlight the current offering.

Applying the existing styles is a first step. Highlighting the current year. All good.

Make it real

Now update it with the data that I’ll be using this term, and use the Google spreadsheet to auto-calculate the percentage. Simple: the hardest part was the manual process of gathering the data to put into the spreadsheet.
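The auto-calculation itself is nothing fancy. As a sketch (the column letters are assumptions; adjust to your sheet layout), with Responses in column B and an extra Total Enrolment column in D, the Percentage cell for each row can be a formula like:

```
=ROUND(100 * B2 / D2, 1)
```

That way only the Responses column needs manual updating each time, and Percentage is derived automatically.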

Time to test it in Moodle. All seems to work. Here’s a version within a Book chapter. This should work on any web page. Each time the page below is reloaded the Javascript will update the table based on the latest data in the Google spreadsheet.  The current semester is always highlighted, but it will move up and down the ranking based on its response rate.


Automated leaderboard

What’s required

To get this to work, you need to have

  1. A Google spreadsheet that has been made public.
    This allows anyone (including the script) to read the contents, but they can’t change anything.  The spreadsheet should have a row for each offering of the course with the following columns:

    1. Year – of offering
    2. Responses – number of responses
    3. Percentage – the % of total enrolment that has responded
      I’ve implemented this with a spreadsheet formula that uses an extra column, Total Enrolment.
    4. Current – a yes should go in the row that matches the current year.
  2. A link to a modified version of the sheetrock library.
    The modification is a function that generates the table.
  3. A table element that has the id SETleaderboard and has four columns: Rank, Year, Responses, Percentage
  4. The following javascript
    (Which is still a little rough)


[code lang="javascript"]
// Assumes jQuery and the (modified) Sheetrock library have been loaded.
var mySpreadsheet = 'some_url_here';

$('#SETleaderboard').sheetrock({
  url: mySpreadsheet,
  query: "select A,B,C,D order by C desc",
  callback: myCallback
});
[/code]
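As a sketch of what the table-generating callback might do (the function and class names here are my assumptions, and the exact shape of the data Sheetrock passes to its callback should be checked against the Sheetrock documentation), the core row-building logic could be:

```javascript
// Build the leaderboard rows. Each input row is [Year, Responses,
// Percentage, Current], already sorted by the Google query
// ("order by C desc"), so the array index gives the rank.
function buildLeaderboardRows(rows) {
  return rows.map(function (row, i) {
    var isCurrent = String(row[3]).toLowerCase() === 'yes';
    // Highlight the current offering so students can see where it ranks.
    var cls = isCurrent ? ' class="currentOffering"' : '';
    return '<tr' + cls + '><td>' + (i + 1) + '</td><td>' + row[0] +
           '</td><td>' + row[1] + '</td><td>' + row[2] + '</td></tr>';
  }).join('\n');
}

// In the page, the callback drops these rows into the #SETleaderboard
// table (guarded so the sketch also runs outside a browser).
if (typeof document !== 'undefined') {
  var tbody = document.querySelector('#SETleaderboard tbody');
  if (tbody) {
    tbody.innerHTML = buildLeaderboardRows([
      ['2016', '55', '38.5', 'yes'],
      ['2015', '40', '30.1', 'no']
    ]);
  }
}
```

Because the query pre-orders the data, the script never needs to sort; it just numbers the rows as they arrive.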

Building a CASA for student evaluation of teaching results

I have a problem with my Student Evaluation of Teaching (SET) data!

No. It’s not that the results are terrible. Some are good, some not so much. (see the two images in this post)

Student comments - EDC3100

The problem is that I (and every other academic at my institution) am unable to get access to the data in a form that we can analyse. For example, back in early 2014 I manually extracted the free text comments from the SET data and analysed them using NVIVO to produce the graph to the right. Click on it to see a larger version. Yea, manually.

The following documents the development of what might be called a kludge or a work around to this problem. Though being an academic I prefer to define and use my own term of Context Appropriate Scaffolding Assemblage (CASA). Expect to hear a bit more about that.

The aim is to produce a bit of technology that I can slot into my context to scaffold my ability to perform a required task in an appropriate way. Rather than the current situation, where performing the task requires jumping through stupid, unnecessary, manual hoops. Not to mention an organisational structure that over many years has been unable to see the need, let alone do something about it.

The following outlines (briefly) the process used to create a Greasemonkey script that, when I visit a web page containing SET data for my courses, automatically converts that data into a CSV file that I can download. From there I can import the data into whichever analysis tool I deem appropriate.

Given all this data has to be stored in a database, it would appear incredibly straightforward for the institution to have already done this. Especially given the emphasis being placed on teaching staff being seen to do something with student feedback. But apparently it’s not that simple.

Perhaps this is where I get into trouble for breaking some policy, protocol, or expectation.

Current situation

The institutional SET system produces a collection of web pages for each offering of a course. Different student cohorts get different pages.

The institutional survey consists of a combination of Likert-type scale questions and free-text questions. In addition, each of the Likert-type scale questions include the option for students to add free-text comments.

The Likert-type scale questions are displayed either in tables or bar graphs, including a comparison against school and university averages. The free text questions are grouped by question and simply listed. Comments added to the Likert-type scale questions are displayed along with the student’s response to that question.

The problems that arise include:

  • Combining, comparing and analysing data between cohorts is difficult.
  • Analysing relationships between the responses to different questions is impossible.
  • Passing any of the data – especially the free-text comments – into other systems (e.g. Leximancer, NVIVO etc) for further analysis is next to impossible.

Ideas for CASA

  1. Greasemonkey script to parse the web page
  2. Publish to a Google spreadsheet using ideas such as (this or this)
    Could use the name of the course in the web page to add to a different sheet. The spreadsheet could become a single place with all the data.
  3. Perl scripts etc could pull the data from there

A potential idea here for Google spreadsheets to become a broad

Structure of the data

The system provides a number of different views. I’m going to focus on the “print view” which produces a web page that contains all of the information in one page.

The data on that page includes

  • Comparative means;
    A table with various stats for the Likert-style questions (number of answers, response rate, standard deviation, % positive) and the average for each question for the class, course, school, faculty, campus, and USQ.
  • Frequency of responses;
    For each of the 5 possible responses to a Likert-style question, the count and percentage of times that response was chosen.
  • Free text responses
    For each question where the student could provide a free text response (the question text is a heading), a list of all the free text responses, including the comment and, if the comment is associated with a Likert-style question, the response the student chose.

Time to convert that into the HTML elements used.

Comparative means

The table doesn’t have an id.  It’s the first table with the class reportDataTables. The table consists of rows alternately of class reportRow or reportRowAlt. Each row has the following cells

  1. Question id (in a span) and question text
  2. Number of answers
  3. Response rate
  4. Class average
  5. Course average
  6. School average
  7. Faculty average
  8. Campus average
  9. USQ average
  10. Std Dev
  11. % positive
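To illustrate, here is a hedged sketch of how the Greasemonkey script might turn each of those rows into something usable (the field names are mine, and the DOM access assumes the class names described above):

```javascript
// Map the 11 cells of a "Comparative means" row (in the order listed
// above) onto a record with named fields.
function parseComparativeMeansRow(cells) {
  return {
    question: cells[0],
    answers: Number(cells[1]),
    responseRate: cells[2],
    classAvg: Number(cells[3]),
    courseAvg: Number(cells[4]),
    schoolAvg: Number(cells[5]),
    facultyAvg: Number(cells[6]),
    campusAvg: Number(cells[7]),
    usqAvg: Number(cells[8]),
    stdDev: Number(cells[9]),
    pctPositive: cells[10]
  };
}

// In the Greasemonkey script the cells come from the page itself
// (guarded so the sketch also runs outside a browser).
if (typeof document !== 'undefined') {
  var table = document.getElementsByClassName('reportDataTables')[0];
  var rows = table ? table.querySelectorAll('.reportRow, .reportRowAlt') : [];
  var records = Array.prototype.map.call(rows, function (tr) {
    var cells = Array.prototype.map.call(tr.cells, function (td) {
      return td.textContent.trim();
    });
    return parseComparativeMeansRow(cells);
  });
}
```

The same pattern, with a different cell-to-field mapping, covers the frequency-of-responses table below.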

Frequency of responses

The second table with class reportDataTables. Same basic structure. The cells on each row are

  1. Question id and text
  2. Number of answers
  3. Response rate
  4. Number of “1” responses
  5. Percentage of “1” responses
  6. Number of “2” responses
  7. Percentage of “2” responses
  8. Number of “3” responses
  9. Percentage of “3” responses
  10. Number of “4” responses
  11. Percentage of “4” responses
  12. Number of “5” responses
  13. Percentage of “5” responses

Free text responses

These are contained within a div with id commentCont, which contains a sequence of divs:

  1. class reportCommentsQuestionTitle contains the question title
  2. a follow-on div with no class, just a style setting padding-right to 10px, that contains an unordered list where each element has
    1. The text of the student comment (including their response if associated with likert style question)
    2. A bit of javascript that allows the display of all of the students other responses.
      In theory, this could be used to generate each individual student’s complete survey response.

When the user clicks on the “bit of javascript” some additional content gets added.

Actually, it appears that there is a collection of (hidden) divs with ids of the format singleStudentComments7, where 7 seems to be a unique id. This gives access to all the comments from that student.

Of course, it’s not unique.  With 47 responses there are actually 80+ singleStudentComments# divs. Going to need to filter.
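One possible filtering approach (this is a guess at why there are more divs than respondents – duplicates and/or empty placeholders – so it needs verifying against the actual page): keep only the first non-empty div per id.

```javascript
// Filter a list of {id, text} objects down to the first non-empty
// entry per singleStudentComments id.
function uniqueStudentComments(divs) {
  var seen = {};
  return divs.filter(function (d) {
    if (seen[d.id] || !d.text) {
      return false;
    }
    seen[d.id] = true;
    return true;
  });
}
```

If the filtered count matches the number of respondents (47 here), the guess is probably right.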

Extract the data and share

At this stage, I could quite easily write a Perl script that would extract the data. The problem is that I couldn’t share that particular CASA (kludge). The aim here is to put in a bit of extra work and develop a CASA that others could (fairly easily) adopt. So Greasemonkey it is.

Using Greasemonkey to extract that data is fairly simple (once I refresh my memory), but doing something with the data is a little more difficult. However, there appear to be solutions such as this one that allow a Greasemonkey script to generate a text file to download.

Use the data

A text file is being produced that contains three sets of data in CSV format. The intent is that this is a simple default format that people can re-purpose into other systems for further analysis.
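For the CSV generation itself, a minimal sketch (the function name is mine; quoting follows the usual CSV conventions of wrapping awkward fields in double quotes and doubling embedded quotes):

```javascript
// Serialise an array of rows (arrays of fields) as CSV. Fields containing
// commas, quotes, or newlines are wrapped in double quotes, with any
// embedded quotes doubled.
function toCsv(rows) {
  return rows.map(function (row) {
    return row.map(function (field) {
      var s = String(field);
      if (/[",\n]/.test(s)) {
        s = '"' + s.replace(/"/g, '""') + '"';
      }
      return s;
    }).join(',');
  }).join('\n');
}
```

Each of the three sets of data can be run through this and concatenated, with the download then triggered via the Blob-based approach linked above. The quoting matters here because free text comments will almost certainly contain commas and quotes.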

Time to test it by importing into Excel.  Fix up the delimiting characters and replace some others.

Frequency of responses

And hey presto, it works. The graph to the right is the simplest example of finally being able to analyse this data directly. Of course, for the Likert-style questions I still don’t have access to the raw data. But at the very least I can start comparing summary data from different modes and offerings of the same course. More interestingly, I can now finally easily get access to the student responses to the free text questions.

But that’s a task for another day. (FYI: SEC05 is the question “I found the assessment in this course reasonable”)

Organizing for Innovation in the Digitized World

The following is a summary and some reflection upon

Yoo, Y., Boland, R. J., Lyytinen, K., & Majchrzak, A. (2012). Organizing for Innovation in the Digitized World. Organization Science, 23(5), 1398–1408.

The abstract for which is (p. 1398)

Our era is one of increasingly pervasive digital technologies, which penetrate deeply into the very core of the products, services, and operations of many organizations and radically change the nature of product and service innovations.
The fundamental properties of digital technology are reprogrammability and data homogenization. Together, they provide an environment of open and flexible affordances that are used in creating innovations characterized by convergence and generativity. An analysis of convergence and generativity observed in innovations with pervasive digital technologies reveals three traits: (1) the importance of digital technology platforms, (2) the emergence of distributed innovations, and (3) the prevalence of combinatorial innovation. Each of the six articles in this special issue relates to one or more of these three traits. In this essay, we explore the organizational research implications of these three digital innovation traits and identify research opportunities for organization science scholars. Examples from the articles in this special issue on organizing for innovation in the digitized world are used to demonstrate the kind of organizational scholarship that can faithfully reflect and inform innovation in a world of pervasive digital technologies.


There’s an awful lot in this paper (and the rest of the papers in the special issue this introduces). Some of it is not that new, but the framing and linkages to other research are valuable and certainly prompted some thinking on my part. There are insights here that could be usefully employed to study and design digital learning by organisations.

Some random personal thoughts follow, before a summary of the paper below.

Pervasive digital technology, simulation, generativity and divergence

The first important point, which I think many people (and organisations) still haven’t grasped, is the pervasiveness of digital technology. Organisations in particular have a problem with this when they have policies that say you must use technology X to perform task Y. They don’t realise that with pervasive digital technology, if performing task Y with their chosen technology X is way too onerous, then I’ll find technology Z that works much better for me. Even if I have to figure out how to simulate (see corruption and simulation) the use of technology X.

I wonder if this is a form of generativity that the authors haven’t considered? Is it an example of how digital technology actually enables divergence? Where organisational actors no longer have to converge into the one organisationally mandated system.

University digital learning: organisation versus industry and platforms

The authors tend to focus at the industry level. For example, they illustrate the importance of “platform” through the use of Apple and iOS: a platform that other organisations can leverage to create innovation. Hence the need to balance control and generativity, e.g. the initial absence and subsequent development of the Apple App Store.

In my experience, it would appear that University digital learning is focused overly on control, rather than generativity. Perhaps it’s more productive for Universities to see the LMS and the rest of their digital learning technologies as a platform, let go of the control, and focus on enabling generativity a bit more.

Of course, that would need to be informed by

more work is needed in this area to consider the role of power, knowledge, culture, and institutional norms in creating and  managing platform generativity in multisided markets (p. 1401)

What is the platform?

Which raises the question of what could be counted as the “digital learning platform” within an organisation? Not to mention why it needs to be the organisation’s platform. Why not the individual’s platform? e.g. a personal API?

Platforms focused on diversity not economy of scale

Boudreau (2012) – an article in the special issue – apparently finds that the aim of a platform should not be economy of scale, but instead should be focused on increasing the heterogeneity of those developing for the platform.

For quite some time now LMS vendors have been talking about their products being a platform. Actually, they’ve probably moved beyond that now into talking about being part of an ecosystem. Either way, I wonder what might be said about the heterogeneity of the developers involved with those platforms?

I wonder what implications this argument might have for the ‘scale’ fetish in certain areas of education?

Challenging conventional norms of ownership, roles and rules

Which is an example of how digital technologies are challenging conventional norms of ownership, roles and rules. Something that is cropping up all the time in my experience within universities.  As illustrated by a couple of tweets from my stream.

On the news that RMIT are starting on the Personal API path.

Slight tweet of frustration arising from a couple of weeks struggling to access data about my courses.

I wonder how likely it is to get University senior management to engage with (p. 1401)

Such reconfigurations of roles, rules, and norms suggest the value of examining organizational design as a dynamic emergent process enabled by the digital platform (Yoo et al. 2006)

Courses as not one product, but different products

Using business-focused terms like customer, client or product is destined to annoy many in academia, and there are good questions to be asked about whether or not they are appropriate. However, the functioning of contemporary universities is increasingly underpinned by these very terms and concepts. The following is an attempt to think about how universities are using those terms/concepts incorrectly.

The rampant application of institution-wide standardisation to various aspects of University courses (e.g. standard design for all course websites) appears to suggest that Universities see “courses” as their product. Or perhaps the course is the standard building block of the actual product, the program. Either way the attempts at standardisation appear to reinforce the perspective that these are the same product.

But I wonder whether a first year course in Marketing is the same product as a Masters level course on Networked and Global Learning? Is there value in allowing these products to be differentiated? To perhaps be different types or groups of product?

Breaking boundaries

Whatever the conception, a problem is that what goes on in one course, is rarely seen or re-used in other courses. Suggesting the need for “generative platforms of knowledge, skills, learning processes, structures and strategies” to muddy boundaries and “allow for continuous scanning to identify the signals that indicate when boundaries should be crossed and reconfigured and when they should not” (p. 1401)

What type of “platform” is needed to allow an innovation that works in my course, to be spread into other courses and vice versa?

What types of knowledge are integrated into the design of digital learning tools?

The authors argue that the convergence of pervasive digital technologies hugely increases the heterogeneity and quantity of knowledge that need to be integrated into innovative digital tools and products.

I wonder what an evaluation of the knowledge embedded in the standard digital learning technologies would reveal in terms of the quantity and heterogeneity of knowledge drawn upon to design those tools?

How well do university digital learning tools “enable others”?

The authors argue that the nature of pervasive digital technologies means that

innovation increasingly requires that others be enabled to innovate as well.

It’s not just about providing an LMS, it’s about providing a platform that enables the people using the LMS to innovate.

Moodle has an API, has your institution enabled access to that API by people outside of the institutional IT unit?

Giving up on “up-front design”

The combinatorial innovations that arise from “enabling others” amongst other facets of pervasive digital technologies challenge another core assumption/practice of traditional organisational practice – “up-front design”.

The authors explain that (p. 1402)

With traditional, “physical” modular designs, modules are created through a decomposition of complex products. That is, a product is designed first, then parts and subsystems are designed with standardized physical interfaces.

But that combinatorial innovations arising from pervasive digital technologies are (p. 1402)

most often designed without fully knowing the ‘whole’ design of how each module will be integrated with another (Gawer, 2009)

This echoes the on-going challenge my institution is currently having in figuring out what to do with learning analytics. They are starting by wanting to know the final requirements. What questions do people want answered? What should be the final form of the system to provide those answers? How can we use decomposition to implement it?


Properties of Digital technology – reprogrammability and data homogenization

These combine to create an environment of “open and flexible affordances”, leading to innovations characterised by convergence and generativity.

The three types of convergence identified appear – to me – to be a little forced and not necessarily the result of just the nature of the digital technologies. For example, the authors use Skype competing with telecommunication companies as an example of the convergence of industries. Is this solely down to the digital technology? Or does the inability of companies that have invested so much in their current methods of operation to handle something completely different mean that companies that can handle it will compete with them? Need to think more about this.

Those innovations have three traits

  1. Importance of digital platforms
  2. Emergence of distributed innovations
  3. Prevalence of combinatorial innovation

Have the authors captured the full implications? They seem to suggest not. How does the work of Mishra et al – opaque, protean and ?? – overlap with or complement what is here? How does the opaque nature of digital technology overlap with the constraints/affordances from TACT?


  • Suggesting that if you have failed to pay attention to these, you aren’t achieving similar innovations?
  • Can analysis of what passes for innovations in digital learning reveal an absence of these three traits, and thus suggest flaws in how digital technologies are being conceptualised and harnessed?
  • Can the ideas here be harnessed to inform the design of better digital learning technologies?
  • If pervasive digital tech heralds new area of organisational research looking at new product and service designs; business models; and, organisational forms, what’s the equivalent in learning and teaching? In particular, what is the equivalent that will escape the “infection” of business/organisation thinking?

Paper summary

Starts with examples of the increasing pervasiveness of digital technology: organisational down to very personal.

Defines pervasive digital technology as (p. 1398)

the incorporation of digital capabilities into objects that previously had a purely physical materiality

Suggests physical materiality (p. 1398) (emphasis added)

refers to artifacts that can be seen and touched, that are generally hard to change, and that connote a sense of place and time. For example, shoes have physical materiality because they can be worn, are hard to convert into a screwdriver, and carry social meanings of appropriate uses and settings for wearing them.

Digital materiality (p. 1398)

refers to what the software incorporated into an artifact can do by manipulating digital representations.

Cites Kallinikos, Aaltonen, & Marton (2010), Yoo, Henfridsson, & Lyytinen (2010) and Zammuto et al (2007) on the “powerful affordances of digital technologies” that “allow designers to expand existing physical materiality by ‘entangling’ it with software-based digital capabilities” (p. 1398)

“The fundamental, unique properties of digital technology” (p. 1399)  — coming from Yoo et al (2010)

  • reprogrammable functionality; and,
    A program (way of doing things) can be changed. Digital technology is protean.
  • data homogenisation.
    All the data being “homogenised” into bits (0s and 1s)

With digital technologies becoming pervasive, you get environments (p. 1399)

of open and flexible affordances that result in two unique characteristics of organisational innovation with digital technologies: convergence and generativity

This enables/heralds/requires “a new area of organisational science” to explore what this means for new: product and service designs; business models; and, organisational forms.

Socio-technical research is substantial in this area, but apparently (p. 1399)

there has been less research on the digital materiality created by pervasive digital technology (Law and Urry 2004, Orlikowski and Scott 2008, Robey et al. 2003). The desire to promote increased scholarship in these emerging areas is the primary motivation for developing this special issue.

The two unique characteristics shape/lead to three traits of innovations with pervasive digital technology that are believed to be (p. 1399)

crucial for understanding the potential impact of pervasive digital technology on innovation processes and organization science.

Convergent and Generative characteristics

Draws on a definition of technology affordance from Majchrzak and Markus (2012)

an action potential, that is, to what an individual or organization with a particular purpose can do with a technology or information system

Though Yoo et al (2012) don’t mention the following from Majchrzak and Markus (2012)

Affordances and constraints are understood as relational concepts, that is, as potential interactions between people and technology, rather than as properties of either people or technology. (p. 832)

As outlined above, it’s argued that the affordances of pervasive digital technologies create innovations that are characterised by

  1. convergence; and
    Arises in a number of different ways

    • Convergence of media: Separate user experiences can be brought together
    • Convergence of products: Embedding digital within physical artifacts, creating “smart” products
    • Convergence of industries: e.g. software development firm Skype, competing with telecommunication companies
  2. generativity
    Defined as “that digital technologies become inherently dynamic and malleable” (p. 1399), citing Zittrain (2006, p. 1980): “a technology’s overall capacity to produce unprompted change driven by large, varied, and uncoordinated audiences”. This occurs via different means:

    • procrastinated binding (Zittrain 2006) of form and function – the ability to add new capabilities after the product has been designed and produced
    • wakes of innovation (Boland et al, 2007) where the introduction of a digital technology (e.g. 3D visualisation tools in the construction industry) changes the role and scope of the roles involved and in turn requires new approaches to management, contracts etc.
    • derivative innovations where the use of digital technologies generates additional digital traces. Traces that can be then used to add new layers of affordances. It’s argued that the bulk of innovation around social and mobile media are derivative innovations derived from the generative use of the digital traces generated by those media.

I particularly like this quote (p. 1399)

Organizational theories that may have assumed (either explicitly or by oversight) that technology is fixed and immutable now must consider the possibility that the technology providing the basis for organizational functioning is dynamically changing, triggering consequent changes in organizational functioning.

Organisational innovation with pervasive digital technology

Time to offer a “tentative, initial list of key traits of innovation processes and outcomes in the age of pervasive digital technology” (p. 1400). There is a difference because the “open, flexible affordances of pervasive digital technology are fundamentally shifting the nature of innovation processes and outcomes in several ways” (p. 1400).

The three traits in this list are

  1. Importance of digital technology platforms;
  2. Emergence of distributed innovations; and,
  3. the prevalence of combinatorial innovation.

Digital technology platforms

A platform is “the central focus of the innovation” and “acts as a foundation upon which other firms can develop” other products. The analysis here is at the level of industry.

Two perspectives on the role of the platform

  1. to harness the convergence and generativity of digital technology;
    The platform creates an ecosystem to integrate and orchestrate heterogeneous actors. The question becomes “how to design, build, and sustain a vibrant platform”.
  2. to allow the building of a platform of both products and digital capabilities.
    e.g. leveraging an ERP through the addition of other tools that utilise shared data resources. Or, leveraging single systems to design/control multiple products.

Important implications arising from “platform”

  • “organisations must be designed to manage the delicate balance of generativity and control in the platform” (p. 1400)
    Important: the authors talk about this at the industry level (e.g. Apple controlling iOS), but it potentially has something useful to say about universities and the LMS.  In particular, the call for “more work is needed in this area to consider the role of power, knowledge, culture, and institutional norms in creating and  managing platform generativity in multisided markets” (p. 1401)
  • conventional norms of ownership, roles and rules are challenged;
    As organisations use more “standardized tools to design, produce, and support products and services throughout the organization and its value chain”, they “share more data and processes across organizational boundaries” (p. 1401)
  • “innovation activities increasingly become horizontal as efficiencies are gained by applying the same innovation activities and knowledge across multiple products or platforms”
    e.g. the development of the same app on different platforms, or the same software module being used in different products. Implying the need for organisations to “create generative platforms of knowledge, skills, learning processes, structures and strategies” that enable the crossing of boundaries to enable this sharing. Boundaries limit innovation and growth.

Distributed innovations

Digital technology has reduced cost of communication and coordination leading to the dispersion and democratisation of innovation.

As a result, the locus of innovation activities is increasingly moving toward the periphery of organizations (p. 1401)

It also increases “the heterogeneity of knowledge resources needed in order to innovate” (p. 1401). While all innovation requires this, the convergence of pervasive digital technology intensifies the need.

Organisational implications include:

  • The need for “knowledge resources” that dynamically change and are heterogeneous brings into question much of what happens within an organisation;
  • Distributed innovation requires that “others be enabled to innovate as well” (p. 1402)
    e.g. through APIs, open source/different licenses etc., but these clash with existing social norms, organising principles, and role separations.
  • Emergence of new industrial structures;
    e.g. niche players versus dominant players.
  • the introduction of new forms of risk;

Combinatorial innovation

The ability to create “new products or services by combining existing modules with embedded digital capabilities” (p. 1402) linking this to Arthur (2009).

Organisational implications (these have morphed into my messages; in this section the authors’ point isn’t always clear)

  • Give up on up-front design and fixed/complete products;
    The traditional form of “physical” modular design through decomposition proceeded from the design of the final product first. Combinatorial innovations of pervasive digital technologies are “most often designed without fully knowing the ‘whole’ design of how each module will be integrated with another (Gawer, 2009)” (p. 1402). An approach that assumes a known product boundary and a fixed life cycle is no longer suitable. Combinatorial innovation means that a product/service is always incomplete. New, more dynamic and permeable product boundaries are the norm.
  • New forms of creativity, especially constrained serendipity (Faraj et al 2011).
    “fostering serendipity online may become a critical dynamic capability for firms” – what affordances of digital tools can support this?
  • replacement of traditional s-curve diffusion with contagion models of diffusion;
    Including the idea that innovations will mutate due to combinatorial innovations.
  • heightened complexity of the innovation process.
    Creating a brand new collection of risks as “more heterogeneous modules” are produced by diverse actors and then combined to create new innovation.

Articles in the issue

The above was the editors’ introduction to a special issue of a journal. Building on the above they provide links to and introduce the articles. Bits that resonated with me are worked in here. (I’d love to dig into these particular papers some more, but sadly I don’t have access, at least not digitally.)

Boudreau KJ (2012) Let a thousand flowers bloom? An early look at large numbers of software app developers and patterns of innovation. Organ. Sci. 23(5):1409–1427.

Links to the platform idea. Based on analysis of app sales data it is argued/suggested that

  • traditional mix-and-match innovation strategies of modular products are not a match for platform-based innovation;
  • increasing the developers for a platform increases the diversity of applications;
  • increase in diversity stimulates innovation within the platform;
  • but adding more similar products has the opposite effect;
  • Hence the aim of building a digital platform is not economy of scale, but to increase heterogeneity

The authors have this to say (p. 1404)

His article clearly shows how the generative nature of affordances of pervasive digital technology is deeply related to both social and technical heterogeneity and how the locus of innovations and the success of platforms is moving toward the periphery

Austin RD, Devin L, Sullivan EE (2012) Accidental innovation: Supporting valuable unpredictability in the creative process. Organ. Sci. 23(5):1505–1522.

Austin et al (2012) apparently “discover 5 key themes that characterise unpredictable innovations” and from there propose “six design principles for digital technology to increase the benefits of accidental innovation while controlling for its cost”.


Kallinikos, J., Aaltonen, A., & Marton, A. (2010). A theory of digital objects. First Monday, 15(6). doi:10.1145/1409360.1409388

Majchrzak, A., & Markus, M. L. (2012). Technology Affordances and Constraints in Management Information Systems (Mis). In E. Kessler (Ed.), Encyclopedia of Management Theory.

Yoo, Y., Henfridsson, O., & Lyytinen, K. (2010). The new organizing logic of digital innovation: An agenda for information systems research. Information Systems Research, 21(4), 724–735. doi:10.1287/isre.1100.0322

Yoo, Y., Boland, R. J., Lyytinen, K., & Majchrzak, A. (2012). Organizing for Innovation in the Digitized World. Organization Science, 23(5), 1398–1408.

Zammuto, R. F., Griffith, T. L., Majchrzak, a., Dougherty, D. J., & Faraj, S. (2007). Information Technology and the Changing Fabric of Organization. Organization Science, 18(5), 749–762. doi:10.1287/orsc.1070.0307

Testing out the Heatmap #moodle block

Following is a quick report on installing and playing with the Moodle Heatmap block by Michael de Raadt. It’s inspired by @damoclarky’s Moodle Activity Viewer (MAV). Both tools modify Moodle web pages by overlaying a heatmap.

Knowing Michael I’m assuming the block will work perfectly, but I’m particularly interested to see how broadly it works within Moodle. Based on my vague understanding of Moodle, I believe it will have some limitations that MAV doesn’t. This is of particular interest because the theoretical perspective offered by the SET and BAD mindsets predicts that Heatmap will have some limitations (and strengths) that MAV doesn’t.


Heatmap is a Moodle block, so installation should be just a case of downloading the code and installing via a standard Moodle web interface.

Easy and straightforward. Three settings to configure – I’ll stick with the defaults and I’m done.

Screen Shot 2016-05-13 at 11.35.56 am.png


To actually use the block all I should have to do is add the block to a course page and away we go. I’m testing this on a local server that only I use.

Here’s the before “Heatmap is turned on” shot covering a small part of the site.

Screen Shot 2016-05-13 at 11.39.38 am.png

Turn editing on, add the block and hey presto the heatmap is displayed.

Screen Shot 2016-05-13 at 11.40.52 am.png

“Who are you” is more “popular”. It’s been clicked on 67 times by 1 user, whereas “What you will learn” has only been clicked on 6 times. The one user is me, the only user that can use this Moodle server.

The Heatmap block looks like this – providing a few extra stats

Screen Shot 2016-05-13 at 11.44.36 am.png
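This isn't MAV's or Heatmap's actual code, but the basic idea both tools embody can be sketched: normalise each link's click count against the busiest link and map the ratio to a colour bucket. The palette is illustrative, and the counts are the ones from the screenshots above.

```python
def heat_colour(views, max_views,
                palette=("#ffffcc", "#ffeda0", "#feb24c", "#f03b20")):
    """Map a view count onto a colour bucket: hotter colours for more views."""
    if max_views == 0:
        return palette[0]
    index = int(views / max_views * len(palette))
    return palette[min(index, len(palette) - 1)]

# Counts taken from the screenshots above.
counts = {"Who are you": 67, "What you will learn": 6}
max_views = max(counts.values())
colours = {name: heat_colour(v, max_views) for name, v in counts.items()}
```

With these counts, “Who are you” lands in the hottest bucket and “What you will learn” in the coolest, which is the contrast the heatmap makes visible at a glance.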

On other pages

The question I have is: will it work on other pages? Apparently blocks can be made sticky throughout a site, so I’ve turned it on across the entire Moodle install.

What I’m hoping is that when I view the “Who are you” book I’ll see the heatmap appear there. But no.

Screen Shot 2016-05-13 at 12.14.28 pm.png


But there is truth in advertising

The Heatmap block page has the following description

The Heatmap block overlays a heatmap onto a course to highlight activities with more or less activity to help teachers improve their courses.

The links in the book above are not activities. So, Heatmap is not designed to do this.

Whereas MAV works on links.

Which highlights some confusion around the naming of MAV. The A for Activity has two possible meanings. In Moodle it means a particular type of plugin. Outside of the Moodle community it could be seen to cover wherever and whatever the students click on.

Testing out the Moodle search book block

Earlier this year – as part of the Moodle Open Book project – I made some changes to the Search Book block for Moodle. The hope being that my institution might install this on its Moodle installation, which in turn would allow my students and me to search the ~70 Moodle books that make up the “learning path” for my course.

Well it is almost there. It’s in the test environment and the following reports on some testing of the Search Book block. In summary, it all appears to be working.

It will be really interesting to see how this changes the behaviour and experience of the student in my course next semester. I believe the current (and past) students would have liked to have this functionality. I know it would have made my tasks a lot easier.

Much thanks to

  • Eloy Lafuente for developing the block in the first place.
  • The Moodle devs at USQ who fixed further problems.

(and one other person who I’m sure made a contribution, but I can’t find the details)

Populate a course

First step is to back up my existing course and upload to the test environment. Mainly so that there is a collection of content to search.

Add the block

By default the “search books” block doesn’t appear in the test environment. I need to add the block.


  1. Is there a need to promote this change amongst people who use the Moodle book (and others)?
    The addition of a new block isn’t going to be obvious to most people. There’s no point in automatically adding it to all courses, as it’s only useful for those people using the Book resource.
  2. I’ll need to modify my course prep material a bit to include mention of this facility so students actually know that it’s there.
  3. I wonder whether people will get confused between the “search forum” and “search books” blocks?


Search for something certain to be there: edc3100

Screen Shot 2016-05-10 at 9.31.13 am.png

As expected, quite a few results. A quick test of the search results reveals that the pages found actually contain the search phrase.

Navigating amongst the different pages of search results appears to work.

Screen Shot 2016-05-10 at 9.33.26 am.png


  • Results are ordered by the order of the books in the course list.
    e.g. the Assessment material in my course is found near the end of the search results because it is located in the final section of the course site. This will cause some problems with searching for assignment-related information.
  • Need to rethink/experiment with structure of EDC3100 material

Search for exactly a phrase: “creative commons”

Screen Shot 2016-05-10 at 9.36.43 am.png

Significantly more results than I initially expected, and some of the search results (e.g. the second result from the above list – shown below) don’t actually include the search string in the visible text.

Screen Shot 2016-05-10 at 9.38.14 am.png

But that’s because the page includes the following HTML. The search string “creative commons” appears in the title attribute of the image.

[code lang="html"]
<img title="Creative Commons Creative Commons Attribution 2.0 Generic License" src="×15.png" alt="Creative Commons Creative Commons Attribution 2.0 Generic License">
[/code]



  • This might cause some confusion for users.
    I wonder how prevalent this might be. How much of the HTML in Moodle books contains meaningful descriptions?
  • Potential feature request for an advanced search facility – exclude/include HTML in the search
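One way such an advanced search could exclude HTML from matching is to reduce each page to its visible text before searching. A sketch using Python's standard html.parser (the page snippet below is a simplified, hypothetical version of the image HTML above):

```python
from html.parser import HTMLParser

class VisibleText(HTMLParser):
    """Collect only the text a reader would see: tags and attributes are skipped."""
    def __init__(self):
        super().__init__()
        self.chunks = []
    def handle_data(self, data):
        self.chunks.append(data)

def visible_text(html):
    parser = VisibleText()
    parser.feed(html)
    return "".join(parser.chunks)

# Simplified version of the image HTML shown above.
page = ('<p>Image licensing: <img title="Creative Commons Attribution '
        '2.0 Generic License" src="x.png" alt=""></p>')

in_raw_html = "creative commons" in page.lower()
in_visible = "creative commons" in visible_text(page).lower()
```

Searching the raw HTML finds the phrase (it is in the title attribute), while searching the visible text does not, which is exactly the confusing behaviour described above.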

Search for a phrase: creative commons

Screen Shot 2016-05-10 at 9.47.47 am.png

As expected returns a few extra results.

Search for phrase mixed up: creative copyright

Screen Shot 2016-05-10 at 9.48.53 am.png

Appears to work as expected.

Search for “must include word”: copyright +creative

Doesn’t make any difference to the search results compared to the above.

Search for content missing a word: copyright -creative

Screen Shot 2016-05-10 at 9.51.33 am.png

As expected.
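I don't know how the block implements these operators internally, but the behaviour tested above – quoted phrases, +required words and -excluded words – can be approximated with a simple matcher. Treating unquoted terms as "any of" is my assumption, based on the broader result sets the unquoted searches returned.

```python
import re

def matches(query, text):
    """Test text against a query with "quoted phrases", +required and -excluded terms."""
    text = text.lower()
    # Quoted phrases are kept whole; the rest is split into words.
    phrases = [p.lower() for p in re.findall(r'"([^"]+)"', query)]
    rest = re.sub(r'"[^"]+"', " ", query)
    required, excluded, plain = [], [], list(phrases)
    for word in rest.split():
        if word.startswith("+"):
            required.append(word[1:].lower())
        elif word.startswith("-"):
            excluded.append(word[1:].lower())
        else:
            plain.append(word.lower())
    if any(term in text for term in excluded):
        return False
    if not all(term in text for term in required):
        return False
    # Unquoted terms: any one is enough (an assumption, see above).
    return not plain or any(term in text for term in plain)
```

For example, `matches("copyright -creative", page_text)` rejects any page mentioning "creative", mirroring the search above.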

Exploratory search: “assignment 3”

Searched first for assignment 3 and got 231 results. Searching for “assignment 3” returned 102 results.


  • As expected from the above, the assignment specification for Assignment 3 was search result 100 or so. This is due to the structure of my course site and the search block’s ordering of results by the order they appear in the course.
  • Raises questions of whether it’s possible or worthwhile to integrate some form of ranking of results. At the very least, if the search phrase appears in the title of the page, should it be ranked higher?
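A sketch of the kind of ranking suggested above (not anything the block currently does): count occurrences of the query in the body and weight occurrences in the title more heavily, then sort by score rather than course order. The pages and the boost value are illustrative only.

```python
def score(query, title, body, title_boost=10):
    """Score a page: each body occurrence counts once, title occurrences more."""
    q = query.lower()
    return body.lower().count(q) + title.lower().count(q) * title_boost

# Illustrative pages: (title, body).
pages = [
    ("Week 1 overview", "mentions assignment 3 once"),
    ("Assignment 3 specification", "assignment 3 details and assignment 3 rubric"),
]
ranked = sorted(pages, key=lambda p: score("assignment 3", p[0], p[1]),
                reverse=True)
```

With title boosting, the assignment specification surfaces first instead of being buried at result 100 by course order.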

How does BIM allocate blog posts to prompts

The following is a response to a query about how BIM allocates blog posts to prompts.


BIM is a plugin for the Moodle LMS. BIM is “Designed to support the management of individual student blogs (typically external to Moodle) as personal learning/reflective journals”. Students create their individual blogs (or anything that produces an RSS/Atom feed) and register them with BIM. BIM then mirrors all posts within the Moodle course and provides functionality to support their management and marking.

A part of that functionality allows the teacher to create “prompts”. The design of the original tool (BAM) assumed that students would write posts that respond to these prompts. These posts would be marked by teaching staff.

BAM (and subsequently BIM) was designed to do very simple pattern matching to auto-allocate a student post to a particular prompt. It also provides an interface that allows teaching staff to manually change the allocations.

Defining a prompt

A prompt in BIM has the following characteristics

  • title;
    A name/title for the prompt. Usually a single line. The original design of BIM assumed that this title was somewhat related to the title of a blog post. The advice to students was to include the title of the prompt in the title of their blog post, or in the body of the blog post.
  • description; and,
    A collection of HTML intended to describe the prompt and the requirements expected of the student’s blog post.
  • minimum and maximum mark.
    Numeric indication of the mark range for the post. Used as advice only. If the marker goes outside the range, they get a reminder about the range and it’s up to them to take action.


Auto-allocation only occurs during the mirror process. This is the process where BIM checks each student’s feed to see if there are new posts.

When BIM finds a new post from a student blog it will loop through all of the un-allocated prompts. i.e. if this student already has a blog post allocated to the first prompt, it won’t try to allocate any more posts to that prompt.

BIM will allocate the new post to an unallocated prompt if it finds the prompt title in either the body of the blog post, or the title of the blog post. BIM ignores case and it tries to ignore white space in the prompt title.

For example, if this blog post is the new blog post found by BIM, then BIM will make the following decisions

  1. ALLOCATE: the post to a prompt with a title of “does BIM allocate blog posts“.
    This matches exactly the title of this blog post.
  2. ALLOCATE: the post to a prompt with a title of “DOES    BIM ALLOCATE   BLOG POSTS“.
    BIM ignores case and white space, hence this matches the title of this blog post
  3. ALLOCATE: the post to a prompt with a title of “Auto-allocation“.
    The body of this post includes the word Auto-allocation.
  4. DO NOT ALLOCATE: the post to a prompt with a title of “does BAM allocate blog posts“.
    (Assuming that the above line didn’t appear in this post) This particular phrase (see the A in BAM) would not occur in the title or the body of this post, and hence not be matched.
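The allocation rules above can be sketched as follows. This is a re-implementation of the described behaviour in Python for illustration, not BIM's actual PHP code; the allocations dictionary stands in for one student's existing allocations, and the prompts and post are the examples just given.

```python
import re

def normalise(text):
    """Lower-case and collapse runs of white space, per BIM's matching rules."""
    return re.sub(r"\s+", " ", text.lower()).strip()

def allocate(post_title, post_body, prompts, allocations):
    """Allocate a new post to the first unallocated prompt whose title
    appears in the post's title or body. Returns the matched prompt or None."""
    haystack = normalise(post_title) + " " + normalise(post_body)
    for prompt in prompts:
        if prompt in allocations:
            continue  # this student already has a post for this prompt
        if normalise(prompt) in haystack:
            allocations[prompt] = post_title
            return prompt
    return None

prompts = [
    "does BIM allocate blog posts",
    "DOES    BIM ALLOCATE   BLOG POSTS",
    "Auto-allocation",
    "does BAM allocate blog posts",
]
allocations = {}
first = allocate(
    "How does BIM allocate blog posts to prompts",
    "Auto-allocation only occurs during the mirror process.",
    prompts,
    allocations,
)
```

Because allocation stops at the first match, the post is allocated to the first prompt only, even though the second and third prompts would also have matched; the “BAM” prompt never matches at all.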

