Assembling the heterogeneous elements for (digital) learning


Exploring knowledge reuse in design for digital learning

This post continues an on-going exploration of knowledge reuse in design for digital learning. Previous posts (one and two) started the exploration in the context of developing an assemblage to help designers of web-based learning environments create a card interface (see Figure 1). Implementing such a design from scratch requires a diverse collection of knowledge that is beyond most individuals. It is hoped that packaging that knowledge into an assemblage of technologies will allow for that knowledge to be used and reused (within Blackboard 9.1) by more people and subsequently have a positive impact on the learning environment and experience.

The card interface is a simple example of this work. The requirements of the card interface are fairly contained and pre-defined. The next challenge is to explore if and how this approach can be expanded to something more difficult and open-ended.

Figure 1: Card interface example

Problem: developing and maintaining online learning content

Back in 2015 @abelardopardo wrote a blog post titled Re-visiting authoring: Reauthoring which starts

Creating learning resources is getting incredibly difficult. Gone are the days in which a bunch of PDFs or PPTs were the only resources available to students. In a matter of years, learning resources have to be engaging, interactive, render in all sorts of devices…

This thread from the Blackboard community site provides evidence of the problem elsewhere and directly echoes my own experiences with the Blackboard LMS.

I’m finding that relying primarily on the Blackboard Content Editor to post materials in the course shell as HTML is a relatively time consuming process. I am concerned that, despite training, some faculty may find maintaining these courses too technically challenging. Many faculty have been posting their entire courses as MS word docs. (Behnke, 2018)

Anecdotal observations of my local context suggest that learning modules (online content) within Blackboard generally fall into the following categories:

  • Nothing.
  • Word documents or Powerpoint files.
  • Collections of Blackboard content items (e.g. the image on the Blackboard link) – with variable design quality.
  • High-end versions designed and implemented by teams of specialists.

The distribution seems to lean heavily toward the first three categories. Contributing factors appear to include: the institutional assumption that individual teachers are largely responsible for producing learning resources; the limited availability of specialist help outside strategic projects; and the difficulty of using the Blackboard 9.1 tools to generate learning modules.

As a specialist assigned to a strategic project, my task has been to help a brand new program set up their course sites, including learning modules. Echoing Behnke’s (2018) quote I’ve found using the Blackboard tools too time consuming for outcomes of limited quality.

Hence I needed a solution to the authoring problem that would enable the quick creation and on-going maintenance of good quality online learning modules. A solution with a low floor (i.e. easy enough that an “average” teacher could use it) and a high ceiling (i.e. capable of creating advanced features and high quality). A solution that worked with the tools to hand in my current context.

Solution: the Content Interface

The last couple of weeks have seen the development of an assemblage of technologies currently labelled (unimaginatively) the Content Interface. Figure 2 is a screenshot of an example learning module produced using the Content Interface. This blog post was also produced using the first two steps of the Content Interface process. The following sections outline the three-step Content Interface process used to produce learning modules.

1. Create and edit content as a Word document

Microsoft Word, or if you’d prefer LibreOffice, is used to create and edit content that is saved as a Word document (.docx). The Word document must be structured using styles, including some styles specific to the Content Interface (e.g. Note, Reading, Activity, Embed). For example, this Word document was used to produce the learning module shown in Figure 2. That Word document and the learning module are actually an introduction to the Content Interface and illustrate the use of styles. Feel free to download the Word document for the learning module in Figure 2 and compare its contents with Figure 2. You can also download the Word document used to produce this blog post.

Figure 2: Content interface example – Blackboard

2. Convert to HTML using Mammoth

Once editing is complete, the Word document is uploaded to a Web form that converts it into clean HTML. This is done using a locally configured version of Mammoth.js (a Javascript version of Mammoth). Using a Click to copy button on the form, the HTML produced by Mammoth is copied to the clipboard and then pasted into a Blackboard content area. Or, as with this blog post, into any other web publishing service such as WordPress. It’s just HTML.
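For the curious, a rough sketch of what the Web form’s Javascript might look like. The mammoth.convertToHtml call and the styleMap option are part of Mammoth’s documented browser API, but the HTML classes mapped to the Content Interface styles below are my assumptions for illustration, not the actual implementation:

```javascript
// Map the Content Interface's custom Word paragraph styles (Note, Reading,
// Activity, Embed) to HTML elements/classes. The class names here are
// illustrative guesses, not the real Content Interface mappings.
const styleMap = [
  "p[style-name='Note'] => div.note:fresh",
  "p[style-name='Reading'] => div.reading:fresh",
  "p[style-name='Activity'] => div.activity:fresh",
  "p[style-name='Embed'] => div.embed:fresh"
];

// Convert an uploaded .docx (an ArrayBuffer from a file input) into clean
// HTML. Assumes mammoth.js has been loaded globally via a <script> tag.
function convertDocx(arrayBuffer) {
  return mammoth
    .convertToHtml({ arrayBuffer: arrayBuffer }, { styleMap: styleMap })
    .then(function (result) {
      return result.value; // the HTML string to copy into the clipboard
    });
}
```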

3. Transform the HTML

Since Mammoth produces very nice semantic HTML it’s fairly easy to transform using Javascript. Figure 2 is an example of the current transformation that is done by a combination of Javascript and CSS. Each learning module page in Blackboard has a Javascript file included to perform transformations, including:

  • Dividing the document into sections based on Heading 1 and displaying the sections via an Accordion interface.
  • Allowing any embedded HTML code (e.g. YouTube video) to be displayed.
  • Transforming a growing number of higher level semantic elements.
    For example, the Reading shown in Figure 2. If you examine the Word document from which the learning module in Figure 2 was produced, you will see that the text for the activity is displayed as normal text. No icon of someone reading a book. If you examine the style, you’ll find that the text does have the Word style Reading applied to it. When it detects text with this style, the Javascript/CSS performs the transformation shown in Figure 2.
  • Ensuring any non-Blackboard links open in a new window.
    By default, Blackboard generates an error if any attempt is made to open a non-Blackboard link in the current browser window.
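The first and last of those transformations can be sketched in plain Javascript. This is a simplified, string-based illustration only – the real Content Interface works on the live page DOM, and the function names below are mine, not the actual code:

```javascript
// Divide the Mammoth-produced HTML into sections at each Heading 1, ready
// to be rendered as accordion panels.
function splitIntoSections(html) {
  const parts = html.split(/<h1[^>]*>/i);
  const preamble = parts.shift(); // anything before the first Heading 1
  const sections = parts.map(function (part) {
    const end = part.indexOf("</h1>");
    return {
      title: part.slice(0, end).trim(),
      body: part.slice(end + "</h1>".length).trim()
    };
  });
  return { preamble: preamble, sections: sections };
}

// Make non-Blackboard links open in a new window by adding a target
// attribute to any absolute link not on the given Blackboard host.
function externaliseLinks(html, blackboardHost) {
  return html.replace(/<a\s+href="(https?:\/\/[^"]+)"/gi, function (match, url) {
    return url.indexOf(blackboardHost) === -1
      ? match + ' target="_blank"'
      : match;
  });
}
```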

Is it any good?

Eating my own dog food

For a start, it works for me. I’m eating my own dog food. I find I’m able to prepare learning modules (i.e. HTML) quickly and easily. It’s also much easier to work on learning modules provided by someone else. I’m also finding it very useful at the moment, as the time I have available to write blog posts often coincides with an absence of network connectivity. That’s not a problem when using a word processor. In addition, using Word/LibreOffice means that I can use Zotero for citation management, as well as all the other features (and foibles) of contemporary word processors.

At the very least I can see myself using this process. What about others?

Learning modules for four LMS sites

I’ve been working with three different academics helping them create learning modules for four different course sites. Most of these learning modules are being produced in the last couple of weeks leading up to the start of semester when the academics are busy. So far, discussions with those staff have generated positive comments about the improvement in the quality of the end product and about the value of working in Word, rather than the Blackboard interface.

This week we’ve discovered that sharing Word documents via OneDrive (part of the institution’s technology infrastructure) provides some promising benefits. Such documents are shared with the course teaching and development team via a link from the Blackboard learning module page. This provides a single point to go to for the learning module. OneDrive provides the ability to edit online and also provides version control. More exploration needed here.

Other specialists I work with are also talking about the promise the Content Interface offers for use in other courses.

Week 1 of trimester starts this week. Still too early to have feedback from students.

Abelardo’s conditions

In his blog post, Abelardo identifies seven conditions he was using when looking for a solution. Table 1 is a summary of his conditions and a note on how well (or not) the Content Interface approach meets each condition.



Table 1: Abelardo’s conditions and the Content Interface

  • Content focus – no need to worry about visual appearance, table of contents, responsiveness etc.
    Yes, but more work possible/required. Mammoth only translates semantic information; formatting and further transformation are done via Javascript/CSS. More work is required on the Javascript/CSS to provide a ToC.
  • Support complex textual structures – e.g. cross-referencing, sections, subsections, figures, links etc.
    Word/LibreOffice provides much of this. Javascript/CSS provides additional structures (e.g. Notes, Readings, Activities and more to come).
  • Support for interactive elements – embedding videos, MCQs etc.
    Insert any HTML embed code in a document and apply the Embed style.
  • Use HTML as the underlying format – HTML5 in particular.
    Yes, at least in terms of publishing as HTML anywhere HTML is taken.
  • Support collaborative production – version control etc.
    Early indications, yes. Experiments with OneDrive to share the Word documents appear to provide this.
  • Run on your own machine – no complex interface in an online tool, push a button to publish remotely.
    Yes, but more work to do. You author using Word. Work to be done on the one button publishing.

Problems and challenges

Perhaps the biggest limitation and source of challenge with this process is the use of Microsoft Word as the main authoring format. Even though most of the academics I work with use Word as their primary word processor, there are issues: Word’s foibles as an authoring platform (e.g. see Figure 3 and the associated explanation); the stretching of Word’s styles functionality through this process; and a tendency for many people not to really understand how to use Word as intended (e.g. Ben-Ari & Yeshno, 2006). Hence there’s a question about the mechanics of the process. However, early experience shows there may be some hope.

Figure 3: xkcd’s explanation of one of the challenges of using Word processors

There’s also the question of whether or not the “write in Word and publish in the LMS” process will be an abstraction too far. In particular, the increasing use of semantic elements in Word is a practice that challenges the typical formatting-driven use of Word. Intermingled with this is a further question: while the content interface may help reduce the cognitive load associated with the technical aspects of authoring, will this translate into an increased focus on design for learning?

On-going development

The content interface has been a working concern for less than two weeks. It is hoped that a lot more development will be done to refine this process and its output. Some current plans include:

  • One button publishing;
    Rather than manually upload the Word document and then copy and paste the HTML code, the hope is we can implement a Publish button in Blackboard that automates this process, perhaps connected with OneDrive.
  • Program/project specific designs;
    Currently all learning modules get the same, fairly limited design (i.e. Figure 2). It would be fairly easy to modify the Content Interface to use different visual designs for different programs or other purposes.
  • Alternate interface designs;
    The accordion interface shown in Figure 2 could be changed. For example, a simple page interface and hopefully more contemporary and effective designs.
  • Integration of Blackboard content items and tools; and,
    Blackboard provides a range of items/tools that can be included in a learning module (e.g. quizzes, assignments, discussion forums etc). The aim is to modify the Content Interface to allow such Blackboard tools to be integrated into content at appropriate places.
  • Higher level semantic elements.
    Current semantic elements (e.g. Reading, Activity and Note) are fairly low level. All that happens is that some additional HTML/CSS is added. A good long term goal would be to allow the use of higher level semantic elements that equate to learning designs/activities. For example, allow the use of a Debate style in a Word document which would set up an online environment that helps implement and orchestrate a debate.


Behnke, J. (2018). Content editor HTML vs. PDF? Retrieved February 24, 2019, from

Ben-Ari, M., & Yeshno, T. (2006). Conceptual Models of Software Artifacts. Interacting with Computers, 18(6), 1336–1350.

Preparing my digital "learning space"

The following documents the (hopefully) last bit of extra work I have to undertake to prepare the digital “learning space” for EDC3100, ICT and Pedagogy. It’s work that has taken most of my working day. At a time when I can’t really afford it.  But it’s time I have to spend if I want to engage effectively in one of the most fundamental activities in teaching – know thy student.

End result

The work I’ve done today allows me to easily access from within the main digital learning space for EDC3100 (the Moodle course site) three different types of additional information about individual students.

It’s also an example of how the BAD mindset is able to work around the significant constraints caused by the SET mindset and in the process create shadow systems, which in turn illustrates the presence of a gap (i.e. yawning chasm) between what is provided and what is required.

The shadow system gap. Adapted from Behrens and Sedera (2004).

What are they studying? What have they done before?

This student is studying Early Childhood education. They’ve completed 21 prior courses, but 5 of those were exemptions. I can see their GPA (blurred out below). They are studying via the online mode and are located in Queensland.

[Screenshot: student program, prior study and GPA]

How much of the course activities they’ve completed and when

This particular student is about half way through the first week’s material. They made that progress about 5 days ago. It looks like the “sharing, reflecting and connecting” resource took a while for them to complete. More so than the others – almost two hours.

[Screenshot: activity completion progress]

What they’ve written on their blog and how they are “feeling”?

This student has written two blog posts. Both are fairly positive in the sentiment they express, though the second is a little less positive in outlook.

[Screenshot: blog posts and sentiment]

Reasons for the post

There are a number of reasons for this post:

  1. Reinforce the point about the value of an API infrastructure for sharing information between systems (and one that’s open to users).
  2. Document the huge gap that exists between the digital learning spaces universities are providing and what is actually required to implement useful pedagogies – especially when it comes to what Goodyear and Dimitriadis (2013) call “design for orchestration” – providing support for the teacher’s work at learn time.
  3. Make sure I document the process to reduce the amount of work I have to do next time around.
  4. Demonstrate to the EDC3100 participants some of the possibilities with digital technologies, make them aware of some of what happens in the background of the course, and illustrate the benefits that can come from manipulating digital technologies for pedagogical purposes.
  5. Discover all the nasty little breaks in the routine caused by external changes (further illustrating the unstable nature of digital technologies).

What will I be doing

I’ll be duplicating a range of institutional data sources (student records and Moodle) so that I can implement a range of additional pedagogical supports.

Hopefully, I’ll be able to follow the process vaguely outlined from prior offerings. (Yep, that’s right. I have to repeat this process for every course offering; it would be nice to automate it.)

Create new local Moodle course

I have a version of Moodle running on my laptop. I need to create a new course on that Moodle which will be the local store for information about the students in my course.

Need to identify:

  • USQ moodle course id – 8036
  • local course id – 15
    Create the course in Moodle and get the id
  • group id – 176
    Create the group in the course
  • context id – 1635
    select * from mdl_context where instanceid=local_course_id  and contextlevel=50
  • course label – EDC3100_2016_S1
    One of the values defined when creating the course.
  • Update MoodleUsers::TRANSLATE_PARAMETERS
  • Update ActivityMapping::TRANSLATE_PARAMETERS
  • enrolid – 37
    select * from mdl_enrol where courseid=local_course_id and enrol=’manual’;

Create BIM activity in new course

Need to identify

  • bim id – 9

Enrol students in the course

Ahh, returning to Webfuse scripts, the sad, depleted remnants of my PhD.

~/webfuse/lib/BAM/3100/3100_support/participants/ is a script that will parse the Moodle participants web page, extract data about the enrolled users, and insert them appropriately into the database for my local Moodle course.
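The actual script is a Perl leftover from Webfuse and isn’t shown here, so purely as an illustration of the screen-scraping idea, a rough Javascript equivalent might look like the following. The table/row/cell structure assumed here is illustrative, not actual Moodle participants-page markup (which is exactly why the real script has to be re-tested each offering):

```javascript
// Extract (name, email) pairs from a saved Moodle participants page.
// Regex-based scraping of an assumed <tr>/<td> layout; a real parser
// would need to track the markup of the specific Moodle version.
function parseParticipants(html) {
  const participants = [];
  const rowRe = /<tr[^>]*>([\s\S]*?)<\/tr>/gi;
  const cellRe = /<td[^>]*>([\s\S]*?)<\/td>/gi;
  let row;
  while ((row = rowRe.exec(html)) !== null) {
    const cells = [];
    let cell;
    cellRe.lastIndex = 0; // reset the stateful regex for each row
    while ((cell = cellRe.exec(row[1])) !== null) {
      cells.push(cell[1].replace(/<[^>]+>/g, "").trim()); // strip inner tags
    }
    // Header rows use <th>, so they produce no cells and are skipped.
    if (cells.length >= 2 && cells[1].indexOf("@") !== -1) {
      participants.push({ name: cells[0], email: cells[1] });
    }
  }
  return participants;
}
```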

Initial test, no-one showing up as a participant. But add myself as teacher.

  1. Figure out that the “show all participants” option is hidden down the very bottom of the page.
  2. Save the page to my laptop
  3. Edit the script to update course details
  4. Test that it parses the HTML file (in case changes have been made by the institution or by the new version of Moodle) – looking good.
  5. The finding of old students appears to be working.
    Oh nice, easy way to identify repeating students.  Need to save that data.
  6. Run the script
  7. Fix the errors
    • Duplicate key inserting into groups
    • missing required parameter COURSE_ID 111
      Complaint from the MoodleUsers class – need to update TRANSLATE_PARAMETERS above
    • Participants still not appearing, something missing — have to update the script. Done.

Took a while, but that should further automate the process for next time.

Add some extras

The above step only adds in some basic information about each student (USQ Moodle ID, email address). To be useful I need to know the sector/specialisation of the student, their postal code etc.

This information comes from a spreadsheet generated from the student records, with the data added into a “special” table in the Moodle database. This year I’m using a different method to obtain the spreadsheet, meaning that the format is slightly different. The new process was going to be automated to update each night, but that doesn’t appear to be working yet. But I have a version, so I’ll start with that.

  1. Compare the new spreadsheet content
    Some new fields: transferred_units, acad_load. Missing phone number.
  2. Add columns to extras table.
  3. Update the parsing of the file

Seems to be working

Activity data

This is to identify what activities are actually on the study desk.

Another script that parses a Moodle web page to extract data. I’m currently re-writing some of the activities and wonder how that will work. Actually, I seem to have designed for it – it does a replace of the list, not an update.


  1. Add in the course id for the new course
  2. ??? maybe update the script to handle the parameterised section titles

Seems to be working

Activity completion data

Now to find out which activities each student has completed. Another script, this time parsing a CSV file produced by Moodle.


  1. Update the script with new course data
  2. Unable to find course id – update
  3. Having problems again with matching activity names
    1. EDC3100 Springfield resources
      it shouldn’t be there. Turn off activity completion and get new CSV file
    2. For “.”???
      The first field is a “.” when it should be empty. May need to watch this.
  4. Parses okay – try checkStudents
    Getting a collection of missing students.

    1. Are they in the local database at all? – no
    2. Have they withdrawn, but still in activity completion – yes.
  5. Seems to have worked

Student blog data

Yet another scraping of a Moodle web page.   ~/BIM/

  1. Update the config
  2. Check the parsing of the file
    1. Only showing a single student – the last one in the list
      For some reason, the table rows are missing a class. Only the last row has a class. Given I wrote the BIM code, this might be me. The parsing code assumes no class means it’s the header row. But it seems to work.
  3. Check the conversion process
    1. Crashed and burned at me – no Moodle id – hard code my exclusion
  4. Check insertion
  5. Do insertion
  6. Check BIM activity
  7. Check mirror for individual student – done
  8. Run them all – looks like there might be a proxy problem with the cron version.  Will have to do this at home – at least wait until it finishes.

Greasemonkey script

This is the user interface end of the equation – what transforms all of the above into something useful.


  • gmdocs/moreStudentDetails.user.js
    • Add the Moodle course id – line 331
  • phpdocs/api/getUserDetails.php
    • map the USQ and local Moodle ids
    • map USQ course id to BIM
    • add in the hard coded week data
    • Modify the module mapping (hard coded to the current course) — actually probably don’t need to do this.
  • Download the modified version of the greasemonkey client – http://localhost:8080/fred/mav/moreStudentDetails.user.js
  • Test it
    • Page is being updated with details link
    • Personal details being displayed
    • Activity completion not showing anything
      • Check server
        • Getting called – yes
        • Activity completion string is being produced
        • But the completion HTML is empty – problem in displayActivityStructure
        • That’s because the structure to display (from updateActivityStructure) is empty – which is actually from getActivityMapping
        • getActivityMapping
          • **** course id entered incorrectly
    • Blog posts showing error message
      Problem with the type of the course id
  • Can I add in the extra bits of information – load, transferred courses
    • Client
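As a small illustration of the client/server split described above, here is the kind of helper the Greasemonkey client might use to call the server side. The host, path, and parameter names below are assumptions pieced together from the file names mentioned in this post (getUserDetails.php, localhost:8080), not the actual code:

```javascript
// Build the URL of the local server endpoint that returns a student's
// details. Host, path and parameter names are illustrative assumptions.
function detailsUrl(moodleUserId, moodleCourseId) {
  return "http://localhost:8080/api/getUserDetails.php" +
    "?user=" + encodeURIComponent(moodleUserId) +
    "&course=" + encodeURIComponent(moodleCourseId);
}

// The Greasemonkey script would then fetch this URL when the teacher
// clicks a student's "details" link and inject the response into the page.
```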

Sentiment analysis

This is the new one: run the blog posts through indico sentiment analysis.


  • update the BIM id
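A sketch of what that step might involve. The endpoint URL, response shape, and labelling thresholds below are hypothetical placeholders for illustration, not indico’s actual API:

```javascript
// Label a sentiment score in [0, 1]; the thresholds are arbitrary choices.
function sentimentLabel(score) {
  if (score >= 0.6) return "positive";
  if (score <= 0.4) return "negative";
  return "neutral";
}

// Send each blog post body to a sentiment endpoint and keep the score.
// The URL, header and response shape are hypothetical, not indico's API.
async function scorePosts(posts, apiKey) {
  const results = [];
  for (const post of posts) {
    const res = await fetch("https://sentiment.example.com/v1/score", {
      method: "POST",
      headers: { "Content-Type": "application/json", "X-Api-Key": apiKey },
      body: JSON.stringify({ text: post.body })
    });
    const json = await res.json(); // assumed shape: { sentiment: 0.0 - 1.0 }
    results.push({
      postId: post.id,
      sentiment: json.sentiment,
      label: sentimentLabel(json.sentiment)
    });
  }
  return results;
}
```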




Behrens, S., & Sedera, W. (2004). Why do shadow systems exist after an ERP implementation? Lessons from a case study. In C.-P. Wei (Ed.), The 8th Pacific Asia Conference on Information Systems. Shanghai, China.





What if our digital technologies were protean? Implications for computational thinking, learning, and teaching

David Jones, Elke Schneider

To be presented at  ACCE’2016 and an extension of Albion et al (2016).


Not for the first time, the transformation of global society through digital technologies is driving an increased interest in the use of such technologies in both curriculum and pedagogy. Historically, the translation of such interest into widespread and effective change in learning experiences has been less than successful. This paper explores what might happen to the translation of this interest if the digital technologies within our educational institutions were protean. What if the digital technologies in schools were flexible and adaptable by and to specific learners, teachers, and learning experiences? To provide initial, possible answers to this question, the stories of digital technology modification by a teacher educator and a novice high school teacher are analysed. Analysis reveals that the modification of digital technologies in two very different contexts was driven by the desire to improve learning and/or teaching by: filling holes with the provided digital technologies; modelling to students effective practice with digital technologies; and, to better mirror real world digital technologies. A range of initial implications and questions for practitioners, policy makers, and researchers are drawn from these experiences. It is suggested that recognising and responding to the inherently protean nature of digital technologies may be a key enabler of attempts to harness and integrate digital technologies into both curriculum and pedagogy.


Coding or computational thinking is the new black. Reasons given for this increased interest include the need to fill the perceived shortage of ICT-skilled employees, the belief that coding will help students “to understand today’s digitalised society and foster 21st century skills like problem solving, creativity and logical thinking” (Balanskat & Engelhardt, 2015, p. 6), and that computational thinking is “a fundamental skill for everyone” (Wing, 2006, p. 33). Computational thinking is seen as “a universal competence, which should be added to every child’s analytical ability as a vital ingredient of their school learning” (Voogt, Fisser, Good, Mishra, & Yadav, 2015, p. 715). Consequently, there is growing worldwide interest in integrating coding or computational thinking into the school curriculum. One example of this is the Queensland Government’s #codingcounts discussion paper (Department of Education and Training, 2015) which commits the government “to making sure that every student will learn the new digital literacy of coding” (p. 9). It appears that students also recognise the growing importance of coding. The #codingcounts discussion paper (Department of Education and Training, 2015) cites a Microsoft Asia Pacific survey (Microsoft APAC News Centre, 2015) that suggests 75% of students (under 24) in the Asia Pacific “wish that coding could be offered as a core subject in their schools” (n.p.). While not all are convinced of the value of making coding a core part of the curriculum it appears that it is going to happen. Balanskat & Engelhardt (2015) report that 16 of the 21 Ministries of Education surveyed already had coding integrated into the curriculum, and that it was a main priority for 10 of them. Within Australia, the recently approved Technologies learning area of the Australian Curriculum includes a focus on computational thinking combined with design and systems thinking as part of the Digital Technologies subject. 
This is the subject that is the focus of the Queensland government’s #codingcounts plan and it has been argued that it may also “provide a framework upon which female participation in computing can be addressed” (Zagami, Boden, Keane, Moreton, & Schulz, 2016, p. 13). The question appears to have shifted from if coding or computational thinking should be integrated into the curriculum, toward questions of how and if it can be done effectively in a way that scales for all learners.

These types of questions are especially relevant given the observation that despite extensive efforts over the last 30+ years to eliminate known barriers, the majority of teachers do not yet use digital technologies to enhance learning (Ertmer & Ottenbreit-Leftwich, 2013). It appears that the majority of teachers still do not have the knowledge, skills, resources, and environment in which to effectively use digital technologies to enhance and transform student learning. The introduction of computational thinking – “solving problems, designing systems, and understanding human behaviour, by drawing on the concepts fundamental to computer science” (Wing, 2006, p. 33) – into the curriculum requires teachers to move beyond use of digital technologies into practices that involve the design and modification of digital technologies. In recognition of the difficulty of this move, proponents of integrating computational thinking are planning a range of strategies to aid teachers. One problem, however, is that many of these strategies seem to echo the extensive efforts undertaken to encourage the use of digital technologies for learning and teaching that have yet to prove widely successful. At this early stage, the evaluation and research into the integration of computational thinking into the curriculum remains scarce and with a limited amount of “evidence as to how far teachers really manage to integrate coding effectively into their teaching and the problems they face“ (Balanskat & Engelhardt, 2015, p. 15).

However, attempts to integrate coding or computational thinking into the curriculum are not new. Grover and Pea (2013) identify the long history of computational thinking, tracing it back to recommendations for college students in the 1960s and to Papert’s work with Logo in K12 education in the 1980s. By the mid-1990s, Maddux and Lamont Johnson (1997) write of “a steady waning of interest in student use of the Logo computer language in schools” (p. 2) and examine a range of reasons for this. In the late 1990s, the dotcom boom helped increase interest, but it did not last. By the 2000s the overall participation rate in IT education within Australia had declined, with an even greater decline in enrolments in software development subjects, and especially in female participation (Rowan & Lynch, 2011). The research literature has identified a range of factors for this decline, including the finding that “Students in every participating school joined in a chorus defining the subject as ‘boring’” (Rowan & Lynch, 2011, p. 88). More recently the rise of interest in computational thinking has led to the identification of a range of issues to be confronted, including: “defining what we mean when we speak of computational thinking, to what the core concepts/attributes are and their relationship to programming knowledge; how computational thinking can be integrated into the curriculum; and the kind of research that needs to be done to further the computational thinking agenda in education” (Voogt et al., 2015, p. 716). In this paper, we are interested in exploring the related issue of how and if widespread common perceptions of digital technologies may be hindering attempts to harness and integrate digital technologies into both curriculum and pedagogy.

What if the digital technology environments within education institutions do not mirror the environments in contemporary and future digitalised societies? What if our experience within these limited digital technology environments is negatively impacting our thinking about how to harness and integrate digital technologies into curriculum and pedagogy? What if thinking about digital technology has not effectively understood and responded to the inherent protean nature of digital technologies? What if the digital technologies provided to educators were protean? Might this have an impact on attempts to harness and integrate digital technologies into curriculum and pedagogy? It is these and related questions that this paper seeks to explore.

The paper starts by drawing on a range of literature to explore different conceptions of digital technologies. In particular, it focuses on the 40+ year old idea that digital technologies are the most protean of media. Next, the paper explains how stories of digital technology modification by a high school teacher and a teacher educator were collected and analysed to offer insights into what might happen if our digital technologies were protean. Analysis of these stories is then discussed and used to develop an initial set of implications for practice, policy, and research for attempts to harness and integrate digital technologies into curriculum and pedagogy. The paper suggests that an educational environment that is rich with protean digital technologies appears likely to have a range of positive impacts on attempts to harness and integrate digital technologies into curriculum and pedagogy. However, such an environment requires radically different mindsets than currently used within educational institutions, and is thus likely to be extremely challenging to create and maintain.

Digital technology: A protean meta-medium, or not?

The commonplace notions of digital technologies that underpin both everyday life and research have a tendency to see them “as relatively stable, discrete, independent, and fixed” (Orlikowski & Iacono, 2001, p. 121). Digital technologies are seen as hard technologies, technologies where what can be done is fixed in advance either by embedding it in the technology or “in inflexible human processes, rules and procedures needed for the technology’s operation” (Dron, 2013, p. 35). As noted by Selwyn and Bulfin (2015), “Schools are highly regulated sites of digital technology use” (p. 1) where digital technologies are often seen as a tool that is used only when and where permitted; is standardised and preconfigured; conforms to institutional rather than individual needs; and is deployed as a directed activity. Rushkoff (2010) argues that one of the problems with this established view of digital technologies is that “instead of optimizing our machines for humanity – or even the benefit of some particular group – we are optimizing humans for machinery” (p. 15). This hard view of digital technologies perhaps also contributes to the problem identified by Selwyn (2016) where, in spite of the rhetoric of efficiency and flexibility surrounding digital technologies, “few of these technologies practices serve to advantage the people who are actually doing the work” (p. 5). Digital technologies have not, however, always been perceived as hard technologies.

Seymour Papert in his book Mindstorms (Papert, 1980) describes the computer as “the Proteus of machines” (p. viii) since the essence of a computer is its “universality, its power to simulate. Because it can take on a thousand forms and can serve a thousand functions, it can appeal to a thousand tastes” (p. viii). This view is echoed by Alan Kay (1984) in his discussion of the “protean nature of the computer” (p. 59) as “the first metamedium, and as such has degrees of freedom and expression never before encountered” (p. 59). In describing the design of the first personal computer, Kay and Goldberg (1977) address the challenge of producing a computer that is useful for everyone. Given the huge diversity of potential users, they conclude that “any attempt to specifically anticipate their needs in the design of the Dynabook would end in a disastrous feature-laden hodgepodge which would not be really suitable for anyone” (Kay & Goldberg, 1977, p. 40). To address this problem they aimed to provide a foundation technology and sufficient general tools to allow “ordinary users to casually and easily describe their desires for a specific tool” (Kay & Goldberg, 1977, p. 41). The aim was to create a digital environment that opens up the ability to create computational tools to every user, including children. For Kay (1984) it is a must that people using digital technologies be able to tailor those technologies to suit their wants, since “Anything less would be as absurd as requiring essays to be formed out of paragraphs that have already been written” (p. 57). For Stallman (2014) the question is more fundamental: “To make computing democratic, the users must control the software that does their computing!” (n.p.).

This perceived 40-year-old need for individuals to use protean digital technologies to make their own tools in order to fulfil personal desires resonates strongly with the contemporary Maker movement. The movement is driven by a combination of new technologies that increase the ease of creation and a cultural shift toward do-it-yourself practices, and it is seeing people increasingly engaged in creating and customising physical and virtual artefacts. Martinez and Stager (2013) make this link explicit by labelling Seymour Papert the “Father of the Maker Movement” (n.p.). Similarly, Resnick and Rosenbaum (2013) note the resonance between the Maker movement and a tradition within the field of education that stretches from Dewey’s progressivism to Papert’s constructionism. Resnick and Rosenbaum (2013) see tinkering “as a playful style of designing and making, where you constantly experiment” (p. 165), for which digital technologies – due to their association with logic and precision – may not always appear suitable. This perception was reinforced by the evolution of digital technologies after the work of Kay and Goldberg in the 1970s.

The work of Kay, Goldberg, and others at Xerox PARC on the Dynabook directly and heavily influenced Apple and Microsoft, and shaped contemporary computing. However, Kay and Goldberg’s conception of computers as a protean medium, where tool creation was open to every user, did not play a part in that shaping (Wardrip-Fruin & Montfort, 2003). In fact, there is evidence that digital technologies are becoming less modifiable by the end-user. Writing about how our relationship with computers is changing, Turkle (1995) argues that we “have become accustomed to opaque technology” (p. 23). Where early computer systems encouraged, even required, people to understand the mechanism of the computer, the rise of the GUI hides the mechanism behind the simulation of a desktop or other metaphor, limiting users to clicking prepared icons and menus. Desktop personal computers once had an architecture that enabled enhancement and upgrading, while mobile devices are increasingly “not designed to be upgraded, serviced or even opened, just used and discarded” (Traxler, 2010, p. 5). The decision by Apple to prevent the creation of executable files on the iPad means “that you can’t make anything that may be used elsewhere. The most powerful form of computing, programming, is verboten” (Stager, 2013, n.p.). But it is not just the design of technology that hardens digital technologies.

As noted above, Dron (2013) argues that technology can be hardened by embedding it “in inflexible human processes, rules and procedures” (p. 35). Resnick and Rosenbaum (2013) make the point that designing contexts that allow for tinkerability is as important as designing technologies for tinkerability. The affordance of a digital technology to be protean is not solely a feature of the technology. It arises from the on-going relationship between digital technologies, the people using them, and the environment in which they are used. Being able to code does not always mean you are able to modify a digital technology. Selwyn and Bulfin’s (2015) positioning of schools as “highly regulated sites of digital technology use” (p. 1) suggests that they are often not contexts designed for tinkerability through the provision of protean digital technologies.

Even though the context may not provide protean digital technologies, this has not stopped educators from modifying them. Jones, Albion and Heffernan (2016) examine and map stories of digital technology modification by three teacher educators according to the traces left in the digital landscape and the levels of modification involved. Table 1 provides an overview of the levels of digital technology modification used by Jones et al. (2016). It ranges from simply using a digital technology as is, through changing its operation via configuration options (internal and external), to modifying the operation of a digital technology by combining or supplementing it with other digital technologies, and finally to coding. Table 1 suggests that digital technologies can be modified via configuration, combination, and coding.

Table 1: Levels of digital technology modification (Albion et al., 2016)

Type of change | Description | Example
Use | Tool used with no change | Add an element to a Moodle site
Internal configuration | Change the operation of a tool using the configuration options of the tool | Change the appearance of a Moodle site by changing Moodle course settings
External configuration | Change the operation of a tool using means outside of the tool | Inject CSS or Javascript into a Moodle site to change its appearance or operation
Customization | Change the tool by modifying its code | Modify the Moodle source code, or create/install a new plugin
Supplement | Use another tool to offer functionality not provided by the existing tool | Implement course-level social bookmarking through Diigo
Replacement | Use another tool to replace/enhance functionality provided by the existing tool | Require students to use external blog engines, rather than the Moodle blog engine
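To make the “External configuration” row of Table 1 more concrete, the following is a minimal, hypothetical sketch of injecting custom CSS into a web-based course site via a small piece of Javascript. The selectors (`.page-header-image`, `#region-main`) and the CSS rules are invented for illustration; they do not correspond to any actual Moodle theme.

```javascript
// Hypothetical "external configuration" sketch: hide a hard-coded banner and
// widen the main content column of a course page by injecting a <style> node.
// The selectors below are illustrative assumptions, not real Moodle markup.
const customCss = `
  .page-header-image { display: none; }
  #region-main { max-width: 100%; }
`;

// Build a <style> element containing the CSS, ready to append to the page
function buildStyleElement(doc, cssText) {
  const style = doc.createElement('style');
  style.textContent = cssText;
  return style;
}

// In a browser (e.g. pasted via an HTML block or a userscript) one would run:
// document.head.appendChild(buildStyleElement(document, customCss));
```

The point of the sketch is that no access to the tool’s source code is required: the change is layered on from outside, which is precisely what distinguishes external configuration from the “Customization” row.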



This paper uses a qualitative case study to describe and explore the potential value, impact, and issues faced by educators when they seek to treat digital technologies as protean. The aim is to offer some initial responses to the question “what if our digital technologies were protean?” As this is an attempt to understand a particular social phenomenon as it occurs in real life, it is well suited to the case study method (Aaltio & Heilmann, 2010). Data for this case study are drawn from the authors’ own experiences as educators. For Jones, this draws on his experiences as a teacher educator at the University of Southern Queensland from his commencement in 2012 through 2015. During this time his main teaching responsibility was a large – over 300 students, split evenly between on-campus and online – third-year course within the Bachelor of Education. For Schneider, this draws on her experience as a teacher at two secondary schools (neither of which is her current school) within south-east Queensland in 2014 and 2015, teaching grades 7 to 12 in IT and Business subjects.

The authors’ experiences provide a number of advantages for the purpose of exploring the potential impact of protean digital technologies. Both authors have formal tertiary education in fields related to the development of Information Technology; have undertaken professional work within Information Technology; and later trained as secondary IPT teachers. Consequently, both authors see digital technologies as more inherently protean than do those without an IT background, and have the knowledge and skills necessary to modify existing, somewhat less than protean, digital technologies. While not an activity currently broadly available to all educators, the authors’ experience and knowledge provide an indication of what might be possible if the digital technologies available to educators were more protean. At the same time, the authors have different cultural backgrounds (Australia and Canada). The case also explores the impact of protean digital technologies within two very different educational contexts: tertiary and secondary education. The tertiary education context involves a large course with hundreds of students in both on-campus and online modes. This large and diverse student cohort means that there is significant use of digital technologies, with online students learning solely via digital technologies. The secondary education context involves a greater number of smaller student cohorts, with digital adoption in a state of flux and teaching and assessment still primarily delivered through traditional, non-digital means.

The authors engaged in an iterative and cyclical process that involved gathering, sharing, discussing, and analysing stories of how, why, and what digital technologies they had modified while teaching. Both authors drew on personal records and writings in the form of tweets, blog posts, email archives, and other documents to generate a list of such stories. These stories (Jones: 16, Schneider: 10) were written up using a common format and shared via a Google document, which generated on-going discussion and led to an iterative process of analysis to identify patterns and implications. A major part of the analysis was grouping the stories of digital technology modification by: purpose (e.g. improve administration, model good practice, teaching, or learning); cause (e.g. inefficient systems, non-existent systems, missing functionality); impact (e.g. save time, improve learning); and type of change (as per Table 1). From this analysis a number of themes were identified; these are described in the next section.

Themes evident in stories of protean technologies

Upon reading each other’s stories, both authors were immediately struck by the level of commonality between them. Not surprisingly, all stories told of attempts to improve learning, teaching, or both. More notably, even though these stories took place in very different types of educational institutions, three common themes were prevalent in the stories from both authors: filling holes (14 stories); modelling effective practice (12 stories); and mirroring the real world (7 stories). There were, however, significant differences in the amount of coding required for these stories and in the levels of digital technology modification undertaken.

In terms of coding, none of Schneider’s ten stories ultimately involved coding. Two of her stories did initially involve coding (Yahoo Pipes and Java), but she subsequently implemented other modifications that did not require it. Seven of Jones’ sixteen stories involved coding using Perl, PHP, or jQuery/Javascript. This suggests that digital technologies can be modified without necessarily being able to code. However, it does raise questions about the reasons for the greater prevalence of coding in Jones’ stories. Is it due to the greater reliance on digital technologies within his specific context? Is it his longer work history within higher education? Was Jones less fearful of getting in trouble for wandering away from officially mandated practices? Is it his longer engagement with modifying digital technologies for learning and teaching? Or are there other factors at play?

Figure 1 shows the level of digital technology modification (as per Table 1) evident in the stories from each author (some stories involved more than one level of modification). All but one of Schneider’s stories involved supplementing or replacing digital technologies provided by the school, suggesting some significant perceived limitations with the school’s digital technology environment. Jones’ stories were almost evenly balanced between configuring provided digital technologies and supplementing/replacing them with different digital technologies.

Figure 1: Number of stories per author for each level of digital technology modification

Four of Schneider’s stories and ten of Jones’ stories of digital technology modification were designed to fill holes in the functionality provided by institutional technologies. In her very first story (Digital grading using Excel), Schneider outlines her use of Excel spreadsheets to supplement the school’s requirement that teachers update paper-based student profiles located within a dedicated physical folder kept in the head-of-department’s office. Her use of Excel spreadsheets provided necessary support for teacher tasks such as maintaining student progress records and discussing progress with individual students – practices that the mandated school process did not support, and hence the hole to be filled. In the story “Web scraping to contact not submits”, Jones describes a similar hole in an institutionally provided technology. In this story, the University’s online assignment management system provides no mechanism by which students who have not submitted an assignment and have not received an extension can be identified and contacted. Instead, Jones had to use a combination of Perl scripts, regular expressions, manual copying and pasting, and an email client to fill the hole. The value and difficulty of making this particular modification is illustrated by the following quote from a third-year student who was contacted via this modification.

Thank you for contacting me in regards to the submission. You’re the first staff member to ever do that so I appreciate this a lot.
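The scripts behind the “Web scraping to contact not submits” story are not documented in detail, but the core of such a hole-filling workaround can be sketched. The following Javascript (standing in for the Perl of the original story) uses a regular expression to pull student identifiers out of scraped HTML and compares them against a class list; the markup, class names, and student numbers are invented for illustration and do not reflect the actual system.

```javascript
// Hedged sketch of the regular-expression step in a "contact not submits"
// workaround. The HTML fragment and identifiers below are hypothetical.
const submissionsPage = `
  <tr><td class="student">s1001</td><td>Submitted</td></tr>
  <tr><td class="student">s1003</td><td>Submitted</td></tr>
`;

const classList = ['s1001', 's1002', 's1003', 's1004'];

// Extract the student identifiers that appear in the scraped page
function extractSubmitted(html) {
  const pattern = /<td class="student">(s\d+)<\/td>/g;
  return [...html.matchAll(pattern)].map((m) => m[1]);
}

// The "hole" being filled: students with no recorded submission
function findNonSubmitters(allStudents, html) {
  const submitted = new Set(extractSubmitted(html));
  return allStudents.filter((s) => !submitted.has(s));
}

// findNonSubmitters(classList, submissionsPage) → ['s1002', 's1004']
```

In the original story this kind of output then fed a manual copy-and-paste step into an email client; the point is that a small amount of glue code turns data the institutional system already holds into an actionable list it never provided.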

Six of Schneider’s stories and six of Jones’ stories of digital technology modification were intended to improve student learning. These were all driven by a combination of modelling the effective use of digital technologies and/or adopting enhanced pedagogical practices. In “Moviemaker to introduce teacher and topics”, Schneider describes how her production of a movie trailer for her subject was intended both to model the use of digital technologies to visually present information and to engage students. In “Course barometers via Google forms”, Jones describes how functionality provided by the University LMS was replaced with Google forms as a way to more effectively gather student feedback, but also to model a technology that students may use in their own practice. That both authors primarily teach subjects related to the use of digital technologies suggests that the prevalence of the modelling theme may be lower for teachers of other areas.

Four of Schneider’s stories and three of Jones’ stories suggest that institutionally provided digital technologies do not always appropriately mirror the capabilities of real-world technologies, and subsequently negatively impact learning and teaching. Both authors share stories about how the visual and content capabilities of institutional learning management systems fail to mirror the diversity, quality, and capabilities of available online technologies, including social networking software. Consequently, both authors tell stories of creating teaching-related websites on external blog engines. In “Creating a teaching website with Edublogs”, Schneider outlines the visual and functional limitations of the official Learning Management System (LMS) and how the use of Edublogs saved teacher time, was more visually appealing, and provided students with a more authentic experience of services they are likely to encounter in the real world. Schneider also tells stories in which computer hardware and network bandwidth provided by the school to students were supplemented through the use of personal resources from both students and herself. The story “Encourage student use of phone hot-spots” tells of how the widespread inability of school Internet connections to fulfil learning needs was addressed by encouraging those students with access to use their mobile phone hot-spots.

In general, the modification of institutional digital technologies does not come without problems, risks, or costs. Both authors mention the additional workload required to implement the changes described, especially when such changes are not directly supported or encouraged by the institution. Such costs can be offset through on-going use of the changes and the benefits they generate. However, these types of changes can challenge institutional policies and be frowned upon by management. In “Hacking look and feel”, Jones describes how an institutionally mandated, default look and feel for course websites was modified to avoid a decrease in functionality. The story also describes how the author had to respond to a “please explain” message from the institutional hierarchy and was for a time seen as “hacking” the institution’s online presence. Similarly, in “Encouraged students to hot-spot with their phones to connect to the web”, Schneider describes a digital technology modification that broke institutional policy but also enhanced student learning. It is not hard to foresee situations where the outcomes of these stories may well have been considerably more negative for those involved.

What if? Discussion, implications and questions

The perception of digital technologies as protean does not appear widespread within educational institutions. What if our digital technologies were protean? Since designing the context for tinkerability is important (Resnick & Rosenbaum, 2013), what if the context within educational institutions were designed to enable, encourage, and support all teachers and learners in the modification of digital technologies to create the tools they see as necessary to best support their learning and teaching? Understanding and correctly predicting the potential implications and outcomes of such a radical transformation of the complex environment of an education institution is difficult. Hence the following are presented as a tentative exploration of some possible future states and are seen more as questions for exploration and confirmation than firm predictions. The assumption underpinning the following implications and questions is that the experience of the authors described above can be used to generate some indications of what might happen if our digital technologies were protean.

Filling holes – bricolage

One of the reviewers of this paper made the following observation:

Some of the tinkerability/evidence of protean behaviour sound rather like the old idea of a kludge – a ‘quick and dirty’ workaround for some computer processes

As noted earlier in the paper, almost 40 years ago Kay and Goldberg (1977) recognised that any digital technology that attempted to anticipate the needs of a diverse user population would end up as “a disastrous feature-laden hodgepodge which would not be really suitable for anyone” (p. 40). In recent years the digital technologies used within educational institutions have increasingly been enterprise information systems – systems, such as Learning Management Systems, intended to fulfil the needs of the entire institution, and thus perhaps more likely to fulfil the prediction of Kay and Goldberg. Jones, Heffernan and Albion (2015) offer a range of additional examples of how institutionally mandated digital technologies are often not suited to specific educational aims and contexts and thus generate the need for ‘digital renovation’. This is an example of Koopman and Hoffman’s (2003) description of how some “work-arounds are necessary because the computer or software as originally designed simply doesn’t address the problem or task at hand” (p. 72). Koopman and Hoffman (2003) argue that workarounds should not be seen as users departing from officially condoned uses of technology (illustrated above by the increased chance of organisational censure the authors’ digital renovation risked), but rather as the legitimate practice of adaptive design, where users help finish the design of the digital technologies.

This perspective is mirrored by Turvey (2012), who argues that the construction of pedagogical tools does not end with production; instead, such tools continue to be refined through “use within a complex ecology of mediating influences, as teachers exercise agency over the development of their professional practice” (p. 114). It is further echoed by the argument of Mishra and Koehler (2006) that “there is no single technological solution that applies for every teacher, every course, or every view of teaching” (p. 1029) and that instead quality teaching “requires developing a nuanced understanding of the complex relationships between technology, content, and pedagogy, and using this understanding to develop appropriate context-specific strategies and representations” (p. 1029). Jones, Heffernan and Albion (2015) describe how the protean possibilities of existing digital technologies can be used to engage in ‘digital renovation’ and thus create educational possibilities specific to particular teaching contexts.

Would protean digital technologies better support teachers engaging in digital renovation activities that “fill the holes” between those digital technologies and the context-specific requirements of learners and teachers? Would teacher engagement in context-appropriate digital renovation activities lead to improvements in the quality of teaching and learning? If existing digital technologies are largely not protean, what is the nature of the “holes” currently experienced by learners and teachers? What impact does an inability to “fill these holes” have on teachers – on their workload, sense of agency, perception of digital technologies, their learners, and so on?

Modelling the effective use of digital technologies

The digital technologies subject from the technologies area of the Australian Curriculum defines computational thinking as “A problem solving method that involves various techniques and strategies in order to solve problems that can be implemented by digital systems” (ACARA, 2014). Workarounds, kludges, and digital renovation are examples of the application of computational thinking by users to solve problems that they face. Engaging in digital renovation allowed Schneider to model the application of computational thinking for her secondary computing students. With the incorporation of the Australian Curriculum’s digital technologies subject into the compulsory curriculum, the advantages of being able to do this now extend to a majority of teachers. However, as noted above, there is the question of whether this broader group of teachers has the experience, knowledge and skills to take advantage of the opportunity. To address this problem, a range of professional development opportunities is being made available to teachers.

In the context of ‘technologising literacy education’, Lankshear and Bigum (1999) develop and describe four principles for “guiding further developments in technologizing classrooms” (p. 445) and then show how those principles are seen differently by an ‘insider’ mindset and an ‘outsider-newcomer’ mindset. The first of these principles is ‘Teachers first’. This principle recommends that teachers must first be aided in “making use of new technologies to enhance their personal work before learning to use them in their teaching” (p. 453). The argument is that in order for teachers to be able to make appropriate pedagogical decisions around new technologies “they must first know how to use those technologies (and any benefits of doing so) for their own purposes” (p. 453). Lankshear and Bigum (1999) argue that the intent of this principle is “easy to subvert” (p. 460) by practices “designed to put teachers into classrooms with improved technological skills and understandings, but within the confines of the newcomer-outsider world view” (p. 460). An insider world view, on the other hand, focuses not only on the importance of addressing teachers’ on-going needs, but also on developing new alliances and articulations around learning, teaching, and the new technologies. Professional development alone is not likely to be sufficient to allow teachers to model computational thinking. Protean digital technologies would seem to be at least a catalyst, if not a pre-requisite, for teachers and others to be able to begin modelling computational thinking in the context of the requirements of the digital technologies subject.

Would the widespread availability of protean digital technologies better enable teachers to develop and model computational thinking? What impact would this have on student learning? Will the absence of protean digital technologies hinder teachers’ ability to develop and refine their computational thinking abilities? Can protean digital technologies help support the creation of new alliances and articulations around learning, teaching, and digital technologies within schools? What other types of support and changes would be required to develop such alliances and articulations? What new alliances and articulations would or should be developed?

Mirroring the real world

The introduction of the digital technologies subject into core curricula is intended to ensure that students leave school with the skills necessary to engage in a digital world. It has been suggested that within Australia the introduction of the “compulsory Digital Technologies curriculum may provide a framework upon which female participation in computing can be addressed” (Zagami, Boden, Keane, Moreton, & Schulz, 2016, p. 13). On the other hand, in critiquing school mathematics Lockhart (2009) suggests that “there is surely no more reliable way to kill enthusiasm and interest in a subject than to make it a mandatory part of the school curriculum” (p. 36). A major part of Lockhart’s (2009) critique of school mathematics is a complaint about “the lack of mathematics in our mathematics classes” (p. 29). This problem arises from a complex set of factors, including “that nobody has the faintest idea what it is that mathematicians do” (p. 22), which leads to “forced and contrived” (p. 38) attempts to explain what happens in mathematics classes as relevant to daily life. Margolis et al. (2008) found that classroom practices associated with the teaching of computer science in American schools “can be disconnected from students’ lives, seemingly devoid of real-life relevance” (p. 102). Echoes of this limited relevance problem were found by Rowan and Lynch (2011) in post-compulsory information technology secondary courses in Australia. Margolis et al. (2008) argue that it is important that teachers be able to demonstrate to students the relevance and significance of computer science to students’ lived experience, but identify that typically teachers have not received any support in developing approaches that meet this need.

The renewed interest in computational thinking and digital technologies arises from visions of the future, such as that of the Queensland Government, where digital technologies are “fundamentally transforming the world of work and generating new ways of doing business on a global scale” (Department of Education and Training, 2015, p. 11). This is a vision of a future real world that is very different from the experience learners and teachers have of digital technologies within schools – an experience, as identified by Selwyn and Bulfin (2015), heavy on regulation, standardisation, pre-configuration, directed activity, and institutional rather than individual needs. This suggests that the prevalent school digital environment is unlikely to prepare learners and teachers well for a future, fundamentally transformed world. It also suggests that the teaching of computational thinking within schools may fall into the same trap as the type of school-based mathematics critiqued by Lockhart.

Are current, school-based digital environments suitable for preparing learners and teachers “to understand today’s digitalised society and foster 21st century skills like problem solving, creativity and logical thinking” (Balanskat & Engelhardt, 2015, p. 6)? Would an environment with the widespread availability of protean digital technologies better mirror this future world? What challenges exist in making school-based digital environments better mirror a future world that has been fundamentally transformed by digital technologies?

Discussion and Conclusions

This paper has posed the question “What if our digital technologies were protean?” To provide some initial responses to this question, the paper has explored what is meant by protean digital technologies and analysed stories of digital technology modification from a high-school teacher and a teacher educator. Analysis of these stories revealed that the educators were driven to modify the available digital technologies while attempting to improve aspects of learning and/or teaching. These attempts at improvement aimed to: fill holes in the functionality provided by the digital technologies; model effective practice with digital technologies; or better mirror real-world digital technologies. Only seven of the twenty-six stories of digital technology modification required the use of coding. The majority involved the configuration or combination of digital technologies, often to replace digital technologies provided by the organisation. Using this experience as a foundation, the paper has drawn on a range of literature to develop some initial suggestions for what might happen more broadly within education if our digital technologies were more protean. Given the complex nature of education and the difficulty of predicting the future, these suggestions are framed as questions for further exploration and confirmation, rather than as predictions. However, the authors do suspect that the impact of more protean digital technologies within education would be positive, both for the teaching of computational thinking and, more broadly, for the use of digital technologies to enhance learning and teaching.

Actually exploring whether or not this is the case will be quite a challenge, not least because the idea of protean digital technologies is diametrically opposed to the existing digital technology environment within most educational institutions, and indeed broader society. Enabling more protean digital technologies within education would need to engage with existing widely held perspectives and practices around difficult issues such as accountability, efficiency, resourcing, risk management, and student safety. This task is made more difficult by the question of whether those engaged with such discussions bring – as identified by Lankshear and Bigum (1999) – an ‘insider’ or ‘outsider-newcomer’ mindset. An ‘outsider-newcomer’ sees “the world as the same, but just more technologised”, where the insider sees how pervasive and protean digital technologies mean that the world – and subsequently educational institutions – “is radically different” (Lankshear & Bigum, 1999, p. 458). The insider view appears more in line with the espoused rationale behind the rise of computational thinking and coding in schools. However, there remain questions about how much of the rhetoric around digital technology-enabled transformation of society has been realised. More pragmatically, there is the question of how to provide protean digital technologies within educational institutions. This question might be answered by drawing on research on creating computationally rich environments for learners, such as Grover and Pea’s (2013) potential principles: low floor, high ceiling; support for the “use-modify-create” progression; scaffolding; enabling transfer; supporting equity; and being systemic and sustainable.
These principles might fruitfully be used to break education out of its traditional norms and structures and allow us to finally explore the question “What IF schools were not encumbered by traditional norms and structures, and technology, social capital and pedagogies were used to their true realisation or potential?”


Aaltio, I., & Heilmann, P. (2010). Case Study as a Methodological Approach. In A. J. Mills, G. Durepos, & E. Wiebe (Eds.), Encyclopedia of Case Study Research. (pp. 67–78). Thousand Oaks, CA: Sage Publications.

ACARA. (2014). Computational thinking – Glossary term. Retrieved 2 July 2016 from

Balanskat, A., & Engelhardt, K. (2015). Computing our future: Computer programming and coding – Priorities, school curricula and initiatives across Europe. Brussels. Retrieved from

Department of Education and Training. (2015). #codingcounts: A discussion paper on coding and robotics in Queensland schools. Brisbane, Australia. Retrieved from

Dron, J. (2013). Soft is hard and hard is easy: learning technologies and social media. Form@ Re-Open Journal per La Formazione in Rete, 13(1), 32–43.

Ertmer, P. A., & Ottenbreit-Leftwich, A. (2013). Removing obstacles to the pedagogical changes required by Jonassen’s vision of authentic technology-enabled learning. Computers & Education, 64, 175–182.

Grover, S., & Pea, R. (2013). Computational Thinking in K-12: A Review of the State of the Field. Educational Researcher, 42(1), 38–43.

Jones, D., Albion, P., & Heffernan, A. (2016). Mapping the digital practices of teacher educators: Implications for teacher education in changing digital landscapes. In Proceedings of Society for Information Technology & Teacher Education International Conference 2016 (pp. 2878–2886). Chesapeake, VA: Association for the Advancement of Computing in Education.

Jones, D., Heffernan, A., & Albion, P. (2015). TPACK as Shared Practice: Toward a Research Agenda,. In L. Liu & D. Gibson (Eds.), Research Highlights in Technology and Teacher Education 2015 (pp. 13–20). Waynesville, NC: AACE.

Kay, A. (1984). Computer Software. Scientific American, 251(3), 53–59.

Kay, A., & Goldberg, A. (1977). Personal Dynamic Media. Computer, 10(3), 31–41.

Koopman, P., & Hoffman, R. (2003). Work-arounds, make-work and kludges. Intelligent Systems, IEEE, 18(6), 70–75.

Lankshear, C., & Bigum, C. (1999). Literacies and new technologies in school settings. Pedagogy, Culture & Society, 7(3), 445–465.

Lockhart, P. (2009). A Mathematician’s Lament: How school cheats us out of our most fascinating and imaginative art form. New York: Bellevue Literary Press.

Maddux, C. D., & Lamont Johnson, D. (1997). Logo: A retrospective. Computers in the Schools, 14(1/2), 1–8.

Margolis, J., Estrella, R., Goode, J., Jullison Holme, J., & Nao, K. (2010). Stuck in the shallow end: Education, race, and computing. Cambridge, MA: MIT Press.

Martinez, S. L., & Stager, G. (2013). Invent to learn: Making, tinkering, and engineering in the classroom. Torrance, CA: Constructing Modern Knowledge Press.

Microsoft APAC News Centre. (2015). Three out of four students in Asia Pacific want coding as a core subject in school, reveals Microsoft study | Asia News Center. Retrieved January 20, 2016, from

Mishra, P., & Koehler, M. (2006). Technological pedagogical content knowledge: A framework for teacher knowledge. Teachers College Record, 108(6), 1017–1054.

Orlikowski, W., & Iacono, C. S. (2001). Research commentary: desperately seeking the IT in IT research a call to theorizing the IT artifact. Information Systems Research, 12(2), 121–134.

Papert, S. (1980). Mindstorms: children, computers, and powerful ideas. New York: Basic Books.

Resnick, M., & Rosenbaum, E. (2013). Designing for tinkerability. In M. Honey & D. Kanter (Eds.), Design, Make, Play: Growing the Next Generation of STEM Innovators (pp. 163–181). New York: Routledge.

Rowan, L., & Lynch, J. (2011). The continued underrepresentation of girls in post-compulsory information technology courses: a direct challenge to teacher education. Asia-Pacific Journal of Teacher Education, 39(2), 83–95.

Rushkoff, D. (2010). Program or be programmed: Ten commands for a digital age. New York: OR Books.

Selwyn, N. (2016). The digital labor of digital learning: notes on the technological reconstitution of education work. Retrieved January 25, 2016, from

Selwyn, N., & Bulfin, S. (2015). Exploring school regulation of students’ technology use – rules that are made to be broken? Educational Review, 1911(October), 1–17.

Stager, G. (2013). For the love of laptops. Adminstr@tor Magazine. Retrieved January 30, 2016, from

Stallman, R. (2014). Comment on “We can code IT! Why computer literacy is key to winning the 21st century.” Mother Jones. Retrieved January 26, 2016, from

Traxler, J. (2010). Will student devices deliver innovation, inclusion, and transformation? Journal of the Research Centre for Educational Technology, 6(1), 3–15.

Turkle, S. (1995). Life on the screen: Identity in the age of the Internet. New York: Simon & Schuster.

Turvey, K. (2012). Constructing narrative ecologies as a site for teachers’ professional learning with new technologies and media in primary education. E-Learning and Digital Media, 9(1), 113–126.

Voogt, J., Fisser, P., Good, J., Mishra, P., & Yadav, A. (2015). Computational thinking in compulsory education: Towards an agenda for research and practice. Education and Information Technologies, 20(4), 715–728.

Wardrip-Fruin, N., & Montfort, N. (2003). New Media Reader. Cambridge, MA: MIT Press.

Wing, J. M. (2006). Computational thinking. Communications of the ACM, 49(3), 33–35.

Zagami, J., Boden, M., Keane, T., Moreton, B., & Schulz, K. (2016). Female participation in school computing: reversing the trend. Retrieved from

Exploring "post adoptive usage" of the #moodle Book module – a draft proposal

For quite some time I’ve experienced and believed that how universities are implementing digital learning has some issues that contribute to perceived problems with the quality of such learning and its associated teaching. The following is an outline of an exploratory research project intended to confirm (or not) aspects of this belief.

The following is also thinking out loud and a work in progress. Criticisms and suggestions welcome. Fire away.

The topic of interest

Like most higher education institutions across the globe, Australian universities have undertaken significant investments in corporate educational technologies (Holt et al., 2013). If there is to be any return on any investment in information technology (IT), then it is essential that the technologies are utilised effectively (Burton-Jones & Hubona, 2006). Jasperson, Carter and Zmud (2005) suggest that the potential of most information systems is underutilised and that most “users apply a narrow band of features, operate at low levels of feature use, and rarely initiate extensions of available features” (p. 525).

While Jasperson et al (2005) are talking broadly about information systems, it’s an observation that is supported by my experience and is likely to resonate with a lot of people involved in university digital/e-learning. It certainly seems to echo the quote from Prof Mark Brown I’ve been (over) using recently about e-learning

E-learning is a bit like teenage sex. Everyone says they’re doing it but not many people really are and those that are doing it are doing it very poorly (Laxon, 2013)

Which begs the question, “Why?”.

Jasperson et al (2005) suggest that without a rich understanding of what people are doing with these information systems at “a feature level of analysis (as well as the outcomes associated with those behaviours)” after the adoption of those systems, then “it is unlikely that organizations will realize significant improvements in their capability to manage the post-adoptive life cycle” (p. 549). I’m not convinced that the capability of universities to manage the post-adoptive life cycle is as good as it could be.

My experience of digital learning within Universities is that the focus is almost entirely on adoption of the technology. A lot of effort is placed into deciding which system (e.g. LMS) should be adopted. Once that decision is made that system is implemented. The focus is then on ensuring people are able to use the adopted system appropriately through the provision of documentation, training, and support. The assumption is that the system is appropriate (after all it wouldn’t have been adopted if it had any limitations) and that people just need to have the knowledge (or the compulsion) to use the system.

There are only two main types of changes made to these systems. First, upgrades: when a new version of the adopted system is released, the institution upgrades to maintain currency. Second, strategic changes: senior management wants to achieve X, the system doesn’t do X, so the system is modified to do X.

It’s my suggestion that changes to specific features of a system (e.g. LMS) that would benefit end users are either

  1. simply not known about; or,
    Due to the organisation’s lack of any ability to understand what people are experiencing and doing with the features of the system.
  2. starved of attention.
    Since these are complex systems, changing them is expensive. Thus only strategic changes can be made. Changes to fix features used by small subsets of people can never be seen as passing the cost/benefit analysis.

I’m interested in developing a rich understanding of the post-adoptive behaviours and experiences of university teachers using digital learning technologies. I’m working on this because I want to identify what is being done with the features of these technologies and understand what is working and what is not. It is hoped that this will reveal something interesting about the ability of universities to manage digital technologies in ways that enable effective utilization and perhaps identify areas for improvement and further exploration.

Research Questions

From that, the following research questions arise.

  1. How do people make use of a particular feature of the LMS?
    Seeking to measure what they actually did when using the LMS for actual learning/teaching. Not what they describe they did, or what they intend to do.
  2. In their experience, what are the strengths and weaknesses of a particular feature?
    Seeking to identify what they thought the system did to help them achieve their goal and what the system made harder.

Following on from Jasperson et al (2005) the aim is to explore these questions at a feature level. Not with the system as a whole but with how people are using a specific feature of the system. For example, what is their experience of using the Moodle Assignment module, or the Moodle Book module?

Thinking about the method(s)

So how do you answer those two questions?

Question 1 – Use

The aim is to analyse how people are actually using the feature. Not how they report their use, but how they actually use it. This suggests at least two methods

  1. Usability studies; or,
    People are asked to complete activities using a system within a controlled environment that captures their every move, including tracking the movement of their eyes. On the plus side, this captures very rich data. On the negative side, I don’t have access to a usability lab. There’s also the potential for this sort of testing to be removed from context. First, the test occurs in the lab, a different location from where the user typically works. Second, in order to get between-user comparisons it can rely on “dummy” tasks (e.g. the same empty course site).
  2. Learning analytics.
    Analysing data gathered by the LMS about how people are using the system. On the plus side, I can probably get access to this data and there are a range of tools and advice on how to analyse it. On the negative side, the richness of the data is reduced. In particular, the user can’t be queried to discover why they performed a particular task.
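The learning analytics option can be sketched quite simply. The query below assumes Moodle’s standard log store (the `mdl_logstore_standard_log` table and its `component`/`action` columns follow common Moodle conventions, but should be checked against the actual institutional schema) and mocks it with an in-memory SQLite database so the sketch is runnable.

```python
import sqlite3

# Mock a minimal slice of Moodle's standard log store. Column names follow
# Moodle conventions (verify against the real schema before relying on them).
conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE mdl_logstore_standard_log (
        id INTEGER PRIMARY KEY,
        component TEXT,      -- e.g. 'mod_book'
        action TEXT,         -- e.g. 'viewed', 'created', 'updated'
        userid INTEGER,
        courseid INTEGER,
        timecreated INTEGER  -- unix timestamp
    )
""")
rows = [
    ("mod_book", "viewed",  101, 7, 1450000000),
    ("mod_book", "viewed",  102, 7, 1450000100),
    ("mod_book", "updated",  11, 7, 1450000200),
    ("mod_forum", "viewed", 101, 7, 1450000300),
]
conn.executemany(
    "INSERT INTO mdl_logstore_standard_log "
    "(component, action, userid, courseid, timecreated) VALUES (?, ?, ?, ?, ?)",
    rows)

# Feature-level analysis: count events per action for one feature (the Book),
# rather than aggregating usage of the system as a whole.
book_usage = dict(conn.execute("""
    SELECT action, COUNT(*)
    FROM mdl_logstore_standard_log
    WHERE component = 'mod_book'
    GROUP BY action
""").fetchall())
print(book_usage)
```

The point of the sketch is the `WHERE component = 'mod_book'` clause: following Jasperson et al (2005), the unit of analysis is a feature, not the LMS.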

Question 2 – Strengths and Weaknesses

This is where the user voice enters the picture. The aim here is to find what worked for them and what didn’t within their experience.

There appear to be three main methods

  1. Interviews;
    On the plus side, rich data. On the negative side, “expensive” to implement and scale to largish numbers and a large geographic area.
  2. Surveys with largely open-ended questions; or,
    On the plus side, cheaper, easier to scale to largish numbers and a large geographic area etc. On the negative side, more work on the part of the respondents (having to type their responses) and less ability to follow up on responses and potentially dig deeper.
  3. LMS/system community spaces.
    An open source LMS like Moodle has openly available community spaces in which users/developers of the system interact. Some Moodle features have discussion forums where people using the feature can discuss it. Content analysis of the relevant forum might reveal patterns.
    The actual source code for Moodle as well as plans and discussion about the development of Moodle occur in systems that can also be analysed.
    On the plus side, there is a fair bit of content in these spaces and there are established methods for analysing them. Is there a negative side?
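As a minimal sketch of the third method, the snippet below runs a crude term-frequency pass over some invented forum posts. A real study would harvest the actual moodle.org forum for the feature and apply a proper content-analysis coding scheme; this only shows the mechanical starting point.

```python
from collections import Counter
import re

# Invented example posts standing in for scraped Moodle community forum
# messages about the Book module.
posts = [
    "The Book module makes it easy to structure content into chapters.",
    "Printing a Book to PDF is painful and the editor is clunky.",
    "Importing chapters from Word into the Book module never works for me.",
]

# Drop a few high-frequency function words so the counts surface topic terms.
stopwords = {"the", "a", "to", "is", "and", "into", "for", "me", "from", "it"}

terms = Counter(
    word
    for post in posts
    for word in re.findall(r"[a-z]+", post.lower())
    if word not in stopwords
)
print(terms.most_common(3))
```

Even this toy pass hints at what a fuller analysis would formalise: which parts of the feature (chapters, printing, importing) dominate the community’s discussion.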

What’s currently planned

Which translates into an initial project that is going to examine usage of the Moodle Book module (Book). This particular feature was chosen because of this current project. If anything interesting comes of this, the next plan is to repeat a similar process for the Moodle Assignment module.

Three sources of data to be analysed initially

  1. The Moodle database at my current institution.
    Analysed to explore if and how teaching staff are using (creating, maintaining etc) the Book. What is the nature of the artefacts produced using the Book? How are learners interacting with the artefact produced using the Book?
  2. Responses from staff at my institution to a simple survey.
    Aim being to explore relationships between the analytics and user responses.
  3. Responses from the broader Moodle user community to essentially the same survey.
    Aim being to compare/contrast with the broader Moodle user community’s experiences with the experiences of those within the institution.

Specifics of analysis and survey

The analysis of the Book module will be exploratory. The aim is to develop analysis that is specific to the nature of the Book.
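As a hedged illustration of what that exploratory database analysis might look like, the sketch below mocks two Moodle-style tables (`mdl_book` and `mdl_book_chapters`; the names follow Moodle conventions but should be verified against the real schema) and computes a first, crude measure of the artefacts: chapters per Book and average chapter length.

```python
import sqlite3

# Mock Moodle-style Book tables in SQLite so the query is runnable.
# Real analysis would run against the institutional Moodle database.
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE mdl_book (id INTEGER PRIMARY KEY, course INTEGER, name TEXT);
    CREATE TABLE mdl_book_chapters (
        id INTEGER PRIMARY KEY, bookid INTEGER, title TEXT, content TEXT);
    INSERT INTO mdl_book VALUES (1, 7, 'Week 1'), (2, 7, 'Week 2');
    INSERT INTO mdl_book_chapters (bookid, title, content) VALUES
        (1, 'Intro', 'Short welcome.'),
        (1, 'Theory', 'A much longer discussion of the week''s ideas...'),
        (2, 'Intro', 'Another welcome.');
""")

# Chapters and average chapter length per Book: a crude first cut at
# describing the nature of the artefacts staff produce with the feature.
for name, chapters, avg_len in conn.execute("""
    SELECT b.name, COUNT(c.id), AVG(LENGTH(c.content))
    FROM mdl_book b JOIN mdl_book_chapters c ON c.bookid = b.id
    GROUP BY b.id
"""):
    print(name, chapters, round(avg_len, 1))
```

Measures like these would then be paired with the survey responses to explore relationships between what the analytics show and what users report.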

The aim of the survey is to generate textual descriptions of the users’ experience with the Book. Initial thought was given to using the Critical Incident Technique in a way similar to Islam (2014).

Currently the plan is to use a similar approach more explicitly based on the Technology Acceptance Model (TAM). The idea is that the survey will consist of a minimal number of closed questions mostly to provide demographic data. The main source of data from the survey will come from four open-ended questions, currently worded as

  1. Drawing on your use, please share anything (events, resources, needs, people or other factors) that have made the Moodle Book module more useful in your teaching.
  2. Drawing on your use, please share anything (events, resources, needs, people or other factors) that have made the Moodle Book module less useful in your teaching.
  3. Drawing on your use, please share anything (events, resources, needs, people or other factors) that have made the Moodle Book module easier to use in your teaching.
  4. Drawing on your use, please share anything (events, resources, needs, people or other factors) that have made the Moodle Book module harder to use in your teaching.

Future extensions

The analysis of Moodle usage might be usefully supplemented with interviews with particular people to explore interesting patterns of usage.

It’s also likely that the content analysis of the Moodle community discussion forum around the Book will be completed. That’s dependent upon time and may need to wait.

The Moodle source code repository and the tracker may also be usefully analysed. However, the focus at the moment is more on the user’s experience. The information within the repository and the tracker is likely to be a little too far away from most users of the LMS.

It would be interesting to repeat the institutionally specific analytics and survey at other institutions to further explore the impact of specific institutional actions (and just the broader contextual differences) on post-adoptive behaviour.


Burton-Jones, A., & Hubona, G. (2006). The mediation of external variables in the technology acceptance model. Information & Management, 43(6), 706–717. doi:10.1016/

Holt, D., Palmer, S., Munro, J., Solomonides, I., Gosper, M., Hicks, M., … Hollenbeck, R. (2013). Leading the quality management of online learning environments in Australian higher education. Australasian Journal of Educational Technology, 29(3), 387–402. Retrieved from

Islam, A. K. M. N. (2014). Sources of satisfaction and dissatisfaction with a learning management system in post-adoption stage: A critical incident technique approach. Computers in Human Behavior, 30, 249–261. doi:10.1016/j.chb.2013.09.010

Jasperson, S., Carter, P. E., & Zmud, R. W. (2005). A Comprehensive Conceptualization of Post-Adoptive Behaviors Associated with Information Technology Enabled Work Systems. MIS Quarterly, 29(3), 525–557.

Laxon, A. (2013, September 14). Exams go online for university students. The New Zealand Herald.

Anyone capturing users' post-adoptive behaviours for the LMS? Implications?

Jasperson, Carter & Zmud (2005)

advocate that organizations strongly consider capturing users’ post-adoptive behaviors, over time, at a feature level of analysis (as well as the outcomes associated with these behaviors). It is only through analyzing a community’s usage patterns at a level of detail sufficient to enable individual learning (regarding both the IT application and work system) to be exposed, along with the outcomes associated with this learning, that the expectation gaps required to devise and direct interventions can themselves be exposed. Without such richness in available data, it is unlikely that organizations will realize significant improvements in their capability to manage the post-adoptive life cycle (p. 549)

Are there any universities “capturing users’ post-adoptive behaviours” for the LMS? Or any other educational system?

There’s lots of learning analytics research (e.g. interesting stuff from Gasevic et al, 2015) going on, but most of that is focused on learning and learners. This is important stuff and there should be more of it.

But Jasperson et al (2005) are Information Systems researchers publishing in one of the premier IS journals. Are there University IT departments that are achieving the “richness in available data…(that) will realize significant improvements in their capability to manage the post-adoptive life cycle”?

If there is, what does that look like? How do they do it? What “expectation gaps” have they identified? What “direct interventions” have they implemented? How?

My experience suggests that this work is limited. I wonder what implications that has for the quality of system use and thus the quality of learning and teaching?

What “expectation gaps” are going ignored? What impact does that have on learning and teaching?

Jasperson et al (2005) develop a “Conceptual model of post-adoptive behaviour” shown in the image below. Post-adoptive behaviours can include the decision not to use, or to change how to use. A gap in expectations that is never filled is not likely to encourage on-going use.

They also identify that there is an “insufficient understanding of the technology sensemaking process” (p. 544). The model suggests that technology sensemaking is a pre-cursor to “user-initiated learning interventions”, examples of which include: formal or informal training opportunities; accessing external documentation; observing others; and, experimenting with IT application features.

Perhaps this offers a possible explanation for complaints about academics not using the provided training/documentation for institutional digital learning systems? Perhaps this might offer some insight into the apparent “low digital fluency of faculty” problem.

conceptual model of post-adoptive behaviours


Gašević, D., Dawson, S., Rogers, T., & Gasevic, D. (2015). Learning analytics should not promote one size fits all: The effects of instructional conditions in predicting academic success. The Internet and Higher Education, 28, 68–84. doi:10.1016/j.iheduc.2015.10.002

Jasperson, S., Carter, P. E., & Zmud, R. W. (2005). A Comprehensive Conceptualization of Post-Adaptive Behaviors Associated with Information Technology Enabled Work Systems. MIS Quarterly, 29(3), 525–557.

The CSCW view of Knowledge Management

Earlier this week I attended a session given by the research ethics folk at my institution. One of the observations was that they’d run training sessions but almost no-one came. I’ve heard similar observations from L&T folk, librarians, and just about anyone else aiming to help academics develop new skills. Especially when people spend time and effort developing yet another you beaut website or booklet that provides everything one would want to know about a topic. There’s also the broader trope developing about academics/teachers being digitally illiterate, which I’m increasingly seeing as unhelpful and perhaps even damaging.

Hence my interest when I stumbled across Ackerman et al (2013) a paper titled “Sharing knowledge and expertise: The CSCW View” with the abstract

Knowledge Management (KM) is a diffuse and controversial term, which has been used by a large number of research disciplines. CSCW, over the last 20 years, has taken a critical stance towards most of these approaches, and instead, CSCW shifted the focus towards a practice-based perspective. This paper surveys CSCW researchers’ viewpoints on what has become called ‘knowledge sharing’ and ‘expertise sharing’. These are based in an understanding of the social contexts of knowledge work and practices, as well as in an emphasis on communication among knowledgeable humans. The paper provides a summary and overview of the two strands of knowledge and expertise sharing in CSCW, which, from an analytical standpoint, roughly represent ‘generations’ of research: an ‘object-centric’ and a ‘people-centric’ view. We also survey the challenges and opportunities ahead.

What follows are a summary and some thoughts on the paper.

Thoughts? Possibilities?

The paper’s useful in that it appears to give a good overview of the work from CSCW on this topic, relevant to some of the problems being faced around digital learning.

All this is especially interesting to me due to my interest in exploring the design and impact of distributed means of sharing knowledge about digital learning

Look at Cabitza and Simone (2012) – two levels of information, and affording mechanisms – as informing design. Their work on knowledge artifacts (Cabitza et al, 2008) might also be interesting.

Brown and Duguid’s (2000) Network of Practice is a better fit for what I’m thinking here.

CSCW has a tendency to precede development with ethnographic studies.

Learning object repositories?

Given the fairly scathing findings re: the idea of repositories, what does this say about current University practices around learning object repositories?

Is digitally illiterate a bad place to start?

The “sharing expertise” approach would appear to assume that the people you’re trying to help have knowledge to share. Labeling teachers as digitally illiterate would appear to mean you couldn’t even conceptualise this as a possibility. Is this a core problem here?

The shift from system to individual practice

At some level the shift in the CSCW work illustrates a shift from focusing on IT systems to a focus on individual practices. The V&R mapping process illustrates some of this.

Context and embedding is important

Findings reinforce the contextual and situated nature of knowledge (is that a bias from the assumptions of these researchers?). Does this explain many of the problems currently being faced? i.e. what’s being done at the moment is neither contextual nor situated? Would addressing this improve outcomes?


A topic dealt with by different research communities (Information Systems, CSCL, Computer Science) each with their particular focus and limitations. e.g. CS has developed interesting algorithms but “Empirical explorations into the practice of knowledge-intense work have been typically lacking in this discourse” (p. 532).

The CSCW strength has been “to have explored the relationship between innovative computational artifacts and knowledge work – from a micro-perspective” (p. 532)

Uses two different terms that “connote CSCW’s spin on the problem” i.e.

that knowledge is situated in people and in location, and that the social is an essential part of using any knowledge…far more useful systems can be developed if they are grounded in an analysis of work practices and do not ignore the social aspects of knowledge sharing. (p. 532)

  1. Knowledge sharing – knowledge is externalised so that it can be captured/manipulated/shared by technology.
  2. Expertise sharing – where the capability/expertise to do work is “based on discussions among knowledgeable actors and less significantly supported by a priori externalizations”

Speak of generations of knowledge management

  1. Repository models of information and knowledge.
    Ignoring the social nature of knowledge, focused on externalising knowledge.
  2. Sharing expertise
    Tying communication among people into knowledge work. Either through identifying how best to “find” who has the knowledge or on creating online communities to allow people to share their knowledge. – expertise finders, recommenders, and collaborative help systems.
    Work later scaled to Internet size systems and communities – collectives, inter-organisational networks etc.

Repository model

started with attempts “to build vast repositories of what they knew” (p. 533).

it should be noted that CSCW never really accepted that this model would work in practice (p. 534)…Reducing the richness of collective memory to specific information artifacts was utopian (p. 537)

Findings from various CSCW repository studies

  • Standard issues with repository systems

    particular difficulty with motivating users to author and organize the material and to maintain the information and its navigation

  • Context is important.

    Some systems tackled the problem of context by trying to channel people to expertise that was as local as possible based on the assumption that “people nearby an asker would know more about local context and might be better at explaining than might experts”.

    Other research found “difficulties of reuse and the organisation of the information into repositories over time, especially when context changed…showed that no organisational memory per se existed; the perfect repository was a myth” (p. 534)

  • Need to embed.

    such a memory could be constructed and used, but the researchers also found they needed to embed both the system and the information in both practice and in the organizational context

  • situated and social.

    CSCWin general has assumed that understanding situated use was critical to producing useful, and usable, systems (Suchman 1987;Suchman and Wynn 1984) and that usability and usefulness are social and collaborative in nature (p. 537)

  • deviations seen as useful

    Exceptions in organizational activities, instead of being assumed to be deviations from correct procedures, were held to be ‘normal’ in organizational life (Suchman 1983) and to be examined for what they said about organizational activity, including information handling (Randall et al. 2007;Schmidt 1999) (p. 537)

  • issues in social creation, use, and reuse of information.

    • issues of motivation,
      Getting information is hard. Aligning reward structures a constant problem. The idea of capturing all knowledge clashed with a range of factors, especially in competitive organisational settings.
    • context in reuse,
      “processes of decontextualisation and recontextualisation loomed over the repository model” (p. 538). “This is difficult to achieve, and even harder to achieve for complex problems” (p. 539).
    • assessments of reliability and authoritativeness,
      de/recontextualisation is social/situated. Information is assessed based on: expertise of the author, reliability, authoritativeness, quality, understandability, the provisional/final nature of the information, obsolescence and completeness, is it officially vetted?
    • organizational politics, maintenance, and
      “knowledge sharing has politics” (p. 539). Who is and can author/change information impacts use. Categories/meta data of/about data has politics.
    • reification
      “repository systems promote an objectified view of knowledge” (p. 540)

Repository work has since been commercialised.

Some of this work is being re-examined/done due to new methods: machine learning and crowd-sourcing.

Boundary objects – “critical to knowledge sharing. Because of their plasticity of meaning boundary objects serve as translation mechanisms for ideas, viewpoints, and values across otherwise difficult to traverse social boundaries. Boundary objects are bridges between different communities of practice (Wenger 1998) or social worlds (Strauss 1993).” (p. 541)

“information objects that have meaning on both sides of an intra-organisational or inter-organisational boundary”.

CSCW tended to focus on “tractable information processing objects” (p. 542) – forms etc. – easier to implement but “over-emphasis on boundary objects as material artifact, which can limit the analytical power that boundary objects bring to understanding negotiation and mediation in routine work”

Example – T-Matrix – supporting production of a tire and innovation.

Cabitza and Simone (2012) identify two levels of information

  1. awareness promoting information – current state of the activity
  2. knowledge evoking information – triggering previously acquired knowledge or triggering/supporting learning and innovation

Also suggest “affording mechanisms”

Other terms

  1. “boundary negotiating” objects
    Less structured ideas of boundary objects suggested
  2. knowledge artifacts – from Cabitza et al (2013)

    a physical, i.e., material but not necessarily tangible, inscribed artifact that is collaboratively created, maintained and used to support knowledge-oriented social processes (among which knowledge creation and exploitation, collaborative problem solving and decision making) within or across communities of practice…. (p. 35)

    These are inherently local, remain open for modification. Can stimulate socialisation and internalisation of knowledge.

common information spaces – a common central archive (repository?) used by distributed folk. Open and malleable by nature. A repository is closed/finalised, a CIS isn’t. Various work to make the distinction – e.g. degrees of distribution; kinds of articulation work and artifacts required; the means of communication; and the differences in frames of participant reference.

Various points made as to the usefulness of this abstraction.


  • Assembly – “denote an organised collection of information objects”
  • Assemblages – “would include the surrounding practices and culture around an object or collection” (p. 545)

How assemblies are put together and their impacts is of interest.

Sharing expertise

Emphasis on interpersonal communications over externalisation in IT artifacts. “ascribed a more crucial role to the practices of individuals” (p. 547). A focus on sharing tacit knowledge – including contextual knowledge.

tacit/explicit – Nonaka’s mistake – explicit mention of the misinterpretation of Polanyi’s idea of tacit knowledge. The mistaken assumption/focus was on making tacit knowledge explicit, whereas Polanyi used tacit to describe knowledge that is very hard, if not impossible, to make explicit.

Tacit knowledge can be learned only through common experiences, and therefore, contact with others, in some form, is required for full use of the information. (p. 547)

Community of practice – can “roughly be defined as a group that works together in a certain domain and whose members share a common practice”.

Network of practice (from Brown and Duguid, 2000) – members do not necessarily work together, but work on similar issues in a similar way.

Community of Interest – defined by common interests, not common practice. Diversity is a source of creativity and innovation.

I like this critique of the evolution of use of CoP

Intrinsically based in their view of ‘tacit knowledge,’ the Knowledge Management community appropriated CoP in an interventionist manner. CoPs were to be cultivated or even created (Wenger et al. 2002), and they became fashionable as ‘the killer application for knowledge management practitioners’ (Su and Wilensky 2011, p. 10) with supposedly beneficial effects on knowledge exchange within groups. (p. 547)

CSCW didn’t use CoPs in an interventionist way – instead as an analytical lens.

Social capital – from Bourdieu – “refers to the collective abilities derived from social networks”. Views sharing “in the relational and empathic dimension of social networks” (p. 548).

Nahapiet and Ghoshal (1998) suggest it consists of 3 dimensions

  1. Structural opportunity (‘who’ shares and ‘how’);
    Which is where the technical enters the picture.
  2. Cognitive ability (‘what’ is shared);
  3. Relational motivation (‘why’ and ‘when’ people engage)

Latter 2 dimensions not often considered by system designers.

The sharing approach places emphasis on “finding-out” work, where knowledge is found by knowing/asking others and by finding the source, de-contextualising and then re-contextualising. Often involves “local knowledge” – which tends to have an emergent nature: what’s important is only known in the situation at hand, and who holds it evolves within a concrete situation.

People finding and expertise location

Move from focusing on representations of data to the interactions between people – trying to produce and modify them. Tackling technical, organisational and social issues simultaneously.

Techniques include: information retrieval, network analysis, topics of interest, expertise determination.
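As a toy illustration of the expertise-determination end of this work: the sketch below builds a term-frequency profile for each person from documents they have authored, then ranks candidate experts for a query term. The approach, names and data are illustrative only, not taken from the paper.

```javascript
// Build a per-person term-frequency profile from authored documents.
// docs: [{ author: "alice", text: "..." }, ...]
function buildProfiles(docs) {
  const profiles = {};
  for (const { author, text } of docs) {
    const profile = profiles[author] ?? (profiles[author] = {});
    for (const term of text.toLowerCase().match(/[a-z]+/g) ?? []) {
      profile[term] = (profile[term] ?? 0) + 1;
    }
  }
  return profiles;
}

// Rank people by how often a query term appears in what they authored.
function rankExperts(profiles, term) {
  return Object.entries(profiles)
    .map(([person, profile]) => [person, profile[term.toLowerCase()] ?? 0])
    .filter(([, score]) => score > 0)
    .sort((a, b) => b[1] - a[1])
    .map(([person]) => person);
}
```

Real expertise finders in the literature combine such retrieval signals with network analysis and activity traces; this sketch only shows the simplest retrieval part.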

Profile construction can be contentious – privacy, identification of expertise – especially given “big data” approaches to analysis and identification.

Expertise finding’s 3 stages: identification, selection, escalation.

Need to promote awareness of individual expertise and their availability – “based in ‘seeing’ others’ activities” (p. 551)

“people prefer others with whom they share a social connection to complete strangers” (p. 553) – no surprise there – but directly known people weren’t chosen, as they were deemed unlikely to have any greater expertise. Chosen helpers were often people 2 or 3 degrees of separation away.

Profiles also found by one study to be often out of date. Explored “peripheral awareness” as a solution.

Open issues

  • Development of personal profiles.
  • Privacy and control.
  • Accuracy.

Finding others

Lots of work outside CSCW.

CoIs in the form of web Q&A communities have arisen on the Internet, with research that has studied question classification, answer quality, user satisfaction, motivation and reputation.


  • more money = more answers, but not necessarily better quality.
  • charitable contributions increased credibility of answers “in a nuanced way”?
  • Altruism and reputation building two important motivations

Recent research looking at “social Q&A” – how people use social media to answer – two lines of research (echoing above)

  1. social analysis of existing systems;
    Looking at: impact of tie strength on answer quality, org setting, response rates when asking strangers – especially with quick, non-personal answers, community size and contact rate.
  2. technical development of new systems

Future directions

Interconnected practices: expertise infrastructures

Increasing inter-connectedness

  • may cause “experts” to become anonymous.
  • propel new types of interactions via micro-activities – microtasking environments make it easy/convenient to help
  • Collaboratively constructed information spaces – Wikipedia – numerous papers examine how it was constructed, including work looking more broadly at wikis
  • Other research has looked at GitHub, Mozilla bug reports etc.
  • And work looking at social media, microblogging etc and its use.


Ackerman, M. S., Dachtera, J., Pipek, V., & Wulf, V. (2013). Sharing Knowledge and Expertise: The CSCW View of Knowledge Management. Computer Supported Cooperative Work (CSCW), 22(4-6), 531–573. doi:10.1007/s10606-013-9192-8

Re-purposing V&R mapping to explore modification of digital learning spaces


Apparently there is a digital literacy/fluency problem with teachers. The 2014 Horizon Report for Higher Education identified the “Low Digital Fluency of Faculty” as the number 1 “significant challenge impeding higher education technology adoption”. In the 2015 Horizon Report for Higher Education this morphs into “Improving Digital Literacy” being the #2 significant challenge. While the 2015 K-12 Horizon Report has “Integrating Technology in Teacher Education” as the #2 significant challenge.

But focusing solely on the literacy of the teaching staff seems a bit short sighted. @palbion, @chalkhands and I are teacher educators working in a digitally rich learning environment (i.e. a large percentage of our students are online only students). We are also fairly digitally fluent/literate. In a paper last year we explored how a distributive view of knowledge sharing helped us “overcome the limitations of organisational practices and technologies that were not always well suited to our context and aims”.

Our digital literacy isn’t a problem, we’re able and believe we have to overcome the limitations of the environment in which we teach. Increasingly the digital tools we are provided by the institution do not match the needs we have for our learning designs and consequently we make various types of changes.

Often these changes are seen as bad. At best these changes are invisible to other people within our institution. At worst they are labelled as duplication, inefficient, unsafe, and feral. They are seen as shadow systems. Systems and changes that are undesirable and should be rooted out.


Rather than continue this negative perspective, @palbion, @chalkhands and I have just finished a rough paper that set out to explore if there was anything valuable or interesting to learn from the changes we made to our digital learning spaces. Our process for this paper was

  1. Generate a list of stories of the changes we made to our digital learning/teaching spaces.
    Using a Google doc and a simple story format (descriptive title; what change was made; why; and, outcomes) each of us generated a list of stories of where we’d changed the digital tools/spaces we use for our teaching.
  2. Map those stories using a modified Visitor and Resident mapping approach.
    The stories needed to be analysed in some way. The Visitors & Residents approach offered a number of advantages – more detail below.
  3. Reflect upon what that analysis showed and about potential future applications of this approach.

What follows is some reflection on the approach, a description of the original V&R map, and a description and example of our modified V&R map.

Reflection on the approach

In short, we (I think I can say we) found the whole approach interesting and could see some potential for broader use. In particular, the potential benefits of the approach include:

  1. Great way to start discussions and share knowledge.
    Gathering stories and analysing them using the V&R process appear to be very useful ways for starting discussions and sharing knowledge. Not the least because it starts with people sharing what they are doing (trying to do) now, rather than some mythical ideal future state.
    Reports from others using the original V&R mapping process suggest this is a strength of the V&R mapping approach. Our experience seems to suggest this might continue with the modified map we used.
  2. Doesn’t start by assuming that people are illiterate.
    Neither @palbion nor I think we’re digitally illiterate. We have formal qualifications in Information Technology (IT). @chalkhands doesn’t have formal qualifications in IT. Early on in this process she questioned whether or not she had anything to add – she wasn’t as “literate” as @palbion and I. However, as we started sharing stories and mapping them that questioning went away.
    The V&R approach is very much based on the idea of focusing on what people do, rather than who they are or what they know (or don’t). It doesn’t assume teaching staff are digitally illiterate and is just interested in what people do. I think this is a much more valuable starting point for engaging in this space. It appears likely to provide a method for helping universities follow observations from the 2015 Horizon Report that solving the “digital literacy problem” requires “individual scaffolding and support along with helping learners as they manage conflict between practice and different contexts” and “Understanding how to use technologies is a key first step, but being able to leverage them for innovation is vital to fostering real transformation in higher education” and “that programs with one-size-fits-all training approaches that assume all faculty are at the same level of digital literacy pose a higher risk of failure.”
  3. It accepts that the ability for people to change digital technologies is not only ok, it is necessary and unavoidable.
    Worthen (2007) makes the point that those in charge of institutional IT (including digital learning spaces) want to prevent change while the people using digital systems want the technology to change

    Users want IT to be responsive to their individual needs and to make them more productive. CIOs want IT to be reliable, secure, scalable, and compliant with an ever increasing number of government regulations

    Since the CIOs are in charge of the technology (they have the power) the practice of changing digital systems (without having gone through the approved governance processes) is deemed bad and something to be avoided. This is a problem, especially in learning and teaching, if you accept Shulman’s (1987) identification of the “knowledge base of teaching” as lying (emphasis added)

    at the intersection of content and pedagogy, in the capacity of a teacher to transform the content knowledge he or she possesses into forms that are pedagogically powerful and yet adaptive to the variations in ability and background presented by the students (p. 15)

The original V&R map

The original V&R map (an example in the image below) is a Cartesian graph with two axes. The X-axis ranges from visitor to resident and describes how you perceive and use digital technologies. A visitor sees a collection of disparate tools that are fit for specific purposes. When something has to be done the visitor selects the tool, gets the job done, and leaves the digital space without leaving a social trace. A resident, on the other hand, sees a digital space where they can connect and socialise with others. The Y-axis ranges from Institutional to Personal and describes where use of digital technologies fits on a professional or personal scale.

The following map shows someone for whom LinkedIn is only used for professional purposes. So it’s located toward the “Institutional” end of the Y-axis. Since LinkedIn is about leaving a public social trace for others to link to, it’s located toward the “Resident” end of the X-axis.

Our modified V&R map

Our purpose was to map stories about how we had changed digital technologies within our role as teacher educators. Thus the normal Institutional/Personal scale for the Y-axis doesn’t work: we’re only considering activities that are institutional in purpose. In addition, we’re focusing on activities that changed digital technologies, and we’re interested in understanding the types of changes that were made. As a result we adopted a “change scale” as the Y-axis. The scale was adapted from software engineering/information systems research and is summarised below.

  • Use – tool used with no change. Example: add an element to a Moodle site.
  • Internal configuration – change the operation of a tool using the configuration options of the tool. Example: change the appearance of the Moodle site with course settings.
  • External configuration – change the operation of a tool using means external to the tool. Example: inject CSS or JavaScript into a Moodle site to change its operation.
  • Customization – change the tool by modifying its code. Example: modify the Moodle source code, or install a new plugin.
  • Supplement – use another tool(s) to offer functionality not provided by existing tools. Example: implement course-level social bookmarking by requiring use of Diigo.
  • Replacement – use another tool to replace/enhance functionality provided by existing tools. Example: require students to use external blog engines, rather than the Moodle blog engine.
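Of these levels, “external configuration” is the most code-like. A minimal sketch, assuming a userscript or bookmarklet context; the selector is an invented example, not from any actual Moodle theme:

```javascript
// Pure helper: build a stylesheet that hides the given elements.
function cssToHide(selectors) {
  return selectors.map((sel) => `${sel} { display: none; }`).join("\n");
}

// Inject a <style> element into the current page (external configuration:
// the tool itself is untouched; only the rendered page changes).
function injectCss(rules) {
  const style = document.createElement("style");
  style.textContent = rules;
  document.head.appendChild(style);
  return style;
}

// Only runs in a browser/userscript context, e.g. to hide a side block
// region (".block-region" is a hypothetical selector).
if (typeof document !== "undefined") {
  injectCss(cssToHide([".block-region"]));
}
```

A userscript manager like Greasemonkey would run this on page load; keeping the stylesheet-building in the pure `cssToHide` helper makes the logic testable outside the browser.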

Since we were new to the V&R mapping process and were trying to do this work quickly without being able to meet, some additional scaffolding was placed on the X-axis (visitor–resident). This provided a common level of understanding of the scale and was based on a specific (and fairly limited) definition of “social trace”. The lowest level of the scale was “tools used by teachers”, meaning no social trace. The scale then gradually increased the number of people involved in the activities mediated by the digital technology: from “subsets of students in a course” to “all students in a course” and right on up to “anyone on the open web”.
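The two scales can be sketched as ordered lists, with a helper that turns a change story into (x, y) coordinates for plotting. Only the scale labels come from this post; the numeric positions (and the encoding itself) are an assumption for illustration, and the X-axis elides any intermediate levels between those named.

```javascript
// Y-axis, least to most change.
const CHANGE_SCALE = [
  "Use", "Internal configuration", "External configuration",
  "Customization", "Supplement", "Replacement",
];

// X-axis, visitor to resident, by breadth of social trace.
const SOCIAL_TRACE_SCALE = [
  "Tools used by teachers",
  "Subsets of students in a course",
  "All students in a course",
  "Anyone on the open web",
];

// Place a story on the modified V&R map.
// story: { title, change, audience }
function mapStory(story) {
  const x = SOCIAL_TRACE_SCALE.indexOf(story.audience);
  const y = CHANGE_SCALE.indexOf(story.change);
  if (x < 0 || y < 0) throw new Error("unknown scale value");
  return { title: story.title, x, y };
}
```

The “Know thy student” story below, for example, would land at x = 0 (tools used by teachers) and y = 5 (Replacement).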

The following image is the “template” map that each of us used to map out our stories of changing digital technologies.

Modified V&R map template

An example map and stories

The following image is the outcome of mapping my stories of change. A couple of example stories are included after the image.

My V&R change map

Know thy student

This story involves replacing/supplementing existing digital tools, but is something that only I use. Hence Visitor/Replacement.

What? A collection of Greasemonkey scripts, web scraping, and a local database/server designed to help me know my students and what they were doing in the Study Desk. Wherever there is a Moodle user profile link, the script adds a [ details ] link specific to each user. If I click on that link I see a popup window with a range of information about the student.

Why? Because finding out this information about a student would normally take 10+ minutes and require the use of multiple different web pages in two different systems. Many of these pages don’t exactly make it easy to see the information. Knowing the students better is a core part of improving my teaching.

Outcomes? It’s been a godsend, saving time and enabling me to be more aware of student progress.
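As a rough illustration of how such a script might work, the sketch below appends a [ details ] link after each Moodle user profile link, pointing at a hypothetical local server that serves the scraped data. The server address, and the assumption that profile links match `/user/view.php?id=NNN`, are mine for illustration, not the actual scripts.

```javascript
// Hypothetical local server holding the scraped student data.
const DETAILS_SERVER = "http://localhost:3000/student/";

// Derive the per-student details URL from a Moodle profile link,
// which typically carries the user id as ?id=NNN.
function detailsUrl(profileHref) {
  const id = new URL(profileHref).searchParams.get("id");
  return id ? DETAILS_SERVER + id : null;
}

// The DOM-walking part only runs inside a browser/userscript context.
if (typeof document !== "undefined") {
  for (const a of document.querySelectorAll('a[href*="/user/view.php"]')) {
    const url = detailsUrl(a.href);
    if (!url) continue;
    const link = document.createElement("a");
    link.href = url;
    link.textContent = "[ details ]";
    a.after(" ", link);
  }
}
```

Clicking the injected link would then open whatever the local server renders for that student.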

Using links in student blog posts

A fairly minor example of change. There’s a question of whether it’s just “use” or “internal configuration”? After all, it’s just using an editor on a web page to create some HTML. It was bumped up to “internal configuration” because of an observation that hyperlinks were not often used by many teachers. Something I’m hoping that @beerc will test empirically.

What? Some comments I write on student blog posts will make use of links to offer pointers to relevant resources.

Why? It’s more useful/easy to the students to have the direct link. Hence more likely to make use of the suggestion.

Outcomes? Minor anecdotal positive comments. Not really known

Early indications and reflection

The change scale worked okay but could use some additional reflection. In particular we raised some questions about whether many of the “replacement” examples of change (including those in my map above) are actually examples of supplement.

On reflecting on all this we made some initial observations, including

  1. Regardless of perceived levels of digital literacy we all engaged in a range of changes to digital technologies.
  2. Not surprisingly, the breadth/complexity of those changes increased with greater digital literacy.
  3. In the end very few of our changes were “replacement”. Almost all were focused more on overcoming perceived shortcomings with the provided tools, rather than duplicating their functionality.
  4. Most of the changes tended to congregate towards the “visitor” end of the X-axis. Not surprising given that none of the digital technologies provided by the institution are on the open web.
  5. Almost all of the stories that involved “replacement” were based on moving out onto the “open web”. i.e. they were all located toward the “resident” end of the X-axis.
  6. Changes were being made due to two main reasons: improving the efficiency of institutional systems or practices; or, customising digital technologies to fit the specific learning activities we wanted to implement.

Technology required by teachers to customise technology-enhanced units

This is the 2nd post (first here) looking at Instructional Science 43(2) on the topic of “Teachers as designers of technology enhanced learning”. This post looks at Matuk et al (2015)

In summary

  1. The claim is that the ability for teachers to customise is positive for learning.

    Teachers’ involvement in curriculum design is essential for sustaining the relevance of technology-enhanced learning materials. Customizing – making small adjustments to tailor given materials to particular situations and settings – is one design activity in which busy teachers can feasibly engage. Research indicates that customizations based in evidence from student work lead to improved learning outcomes (p. 229)

  2. Customisations by four middle/high school teachers are examined to see how these customisations were afforded
  3. Identified four types of customisations (the abstract says three but then proceeds to list these four)
    • “devising timely instructional interventions to provide individualised guidance”
    • “planning activities and adjusting milestones to align with students’ progress”
    • “modifying existing materials to better integrate content into overall curriculum plans”
    • incorporating scaffolds to better address students’ needs
  4. Identified 3 technology features that support customisations
    • A system that logs student work for teachers’ inspection;
    • tools for conducting dynamic, formative assessment; and,
    • an authoring environment that supports re-design of units at multiple levels of granularity

In this paper, we argue that teachers’ effectiveness in customizing TEL materials also relies on the affordances of the tools available to them, particularly in their ability to make students’ ideas visible (p. 232)

Preliminary design principles “for flexibly adaptive curriculum materials based on the premise of making student work visible as evidence to inform teachers’ customizations” (p. 250)

  1. Provide an interface for browsing logged responses;
    i.e. display responses and revisions “and give teachers a persistent record of their students’ thinking”.
  2. Integrate scaffolds that make student thinking explicit;
    i.e. make students’ thinking processes visible to teachers to enable formative advice. Strong link here with learning process analytics (Lockyer et al, 2013)
  3. Provide technologies to monitor real-time progress;
  4. Offer flexible, accessible authoring tools that support testing and refinement.

Challenges for future technologies: a research and design agenda

  1. How do we design interfaces and real-time displays that make students’ logged data both accessible to, and usable by teachers?
  2. How can we make the underlying instructional framework transparent such that the curriculum materials themselves guide teachers’ customizations?
  3. How can authoring tools be designed that both take advantage of teachers’ expertise and respect their time?

Some of the findings echo some of the ideas from learning analytics, but more directly from a teacher perspective.


  1. The participants and context for this study was fairly limited. What types of customisations and features to support customisations might be identified by examining the work of other teachers in other contexts. Especially contexts that make significantly greater use of digital technologies (e.g. largely online university courses)?
  2. This paper appears to focus on teachers redesigning technology-enhanced “curriculum materials” – almost a content focus. What differences do you have to consider if you see digital technology as part of the learning space? As the environment in which learning occurs, not just the curriculum?
  3. The idea of educative curriculum materials – “curriculum materials with additional tools and resources to aid teachers in attending to changing classroom dynamics, reflecting on their practices, and seeking new approaches to solving problems” (p. 233) – resonates with the idea of Context Appropriate Scaffolding Assemblages (CASA), including the idea of a CASA that allows course designers (teacher educators) to annotate their digital learning spaces (course sites) with explanations and rationalisations behind the designs. Perhaps something useful for other teacher educators, but also for pre-service teachers (links to an idea that @palbion has previously mentioned).
  4. How does this paper’s purpose/context

    existing research establishes that technology can support teachers’ customizations. It also characterizes broad categories of the kinds of customizations teachers make. Still, little is known about the specific ways by which technology enables customizations, especially those based in students’ ideas.

    link and inform the purpose/context of the paper(s) we’re thinking of?
    Links somewhat back to questions #1 and #2. A different context and a broader notion of digital technologies. Also perhaps a focus more on the type of digital knowledge required of teachers. “Affordance” as in “affordance of a technology for customisation” is a relational term: it’s dependent on the functionality of the technology and the teacher’s capability to perceive and perform tasks with that functionality.

  5. The technology-enhanced units being customised here can be customised “without the need for programming skills” (p. 234). Might not this limit the type of customisations that teachers can undertake? Might not teachers with programming skills want to make different customisations and thus require different affordances from the systems? The customisations identified in this paper are very dependent on the nature of the system and the affordances it offered. Would a more open system combined with a teacher with programming skills identify more and different customisations/technology features?
    Something that the authors identify later

    Our findings raise questions for future research about how teachers’ different prior knowledge of their students and of the subject matter, their individual skills with technology, and their personal orientations toward their roles as teachers and designers, influence their interpretations and responses to their students’ work. They also raise questions about how these interactions are manifested in teachers’ customizations. (p. 250)

  6. Is this observation

    A recent review of 30 technology-based inquiry-learning environments identified only eight, including WISE, that support teachers’ customizations (Donnelly et al. 2014) (pp. 234-235)

    indicative of a broader problem around digital technologies? i.e. they are generally not designed to be modified by teachers. There’s an aspect of that around the LMS, what about more broadly? How does this fit in with various perspectives about the (de-)professionalisation of teachers?

It’s all about putting the context back in

Reading the 4 types of customisation that were identified puts me in mind of the reusability paradox described as the tension between these two observations

  • “The more context a learning object has, the more (and the more easily) a learner can learn from it.”
  • “To make learning objects maximally reusable, learning objects should contain as little context as possible.”

And my current pet argument that the mindset underpinning the design and implementation of digital technologies for learning and teaching has a (strong) tendency to remove context and hence reduce pedagogical value.


What strikes me about the four customisations is that they are all about modifying the “technology-enhanced units” to insert more context. e.g. providing individual guidance, align with students’ progress, better integrate content into overall curriculum plans, and better address needs. All these talk about teachers modifying the “technology” to better respond to context.

Which resonates strongly with Shulman’s (1987) suggestion that

the key to distinguishing the knowledge base of teaching lies at the intersection of content and pedagogy, in the capacity of a teacher to transform the content knowledge he or she possesses into forms that are pedagogically powerful and yet adaptive to the variations in ability and background presented by the students (p. 15)

And also picks up a quote from this paper

The relationship between teachers and curriculum has been characterized as one between designers and their tools (Brown 2009). In designing curriculum, teachers combine available materials with their own knowledge and expertise to craft instructional experiences (Brown and Edelson 2003). (p. 232)

Animated gif of reusability paradox showing a trend to putting more context into the object



The authors argue that

materials that yield to teachers’ modifications better respond to the classroom’s changing needs, constraints, and resources…research finds that teachers who attend to students’ ideas design more effective instruction and formative feedback (Black and Wiliam 2010) (p. 230)

But the various constraints of the classroom setting mean that

their customization decisions tend to be driven by issues of practicality and feasibility (Boschman et al. 2014) rather than by evidence from students’ ideas

Reasons why materials are changed and how are outlined with some supporting references. Labelled as curriculum customizations (Brown and Edelson, 2003). Largely guided by experience, practicalities etc.

Customisation may be a process of differentiation leading to learning gains. “This process demands a degree of expertise” (p. 231). “Customisations based in students’ ideas have been shown to lead to improved learning outcomes (Ruiz-Primo and Furtak, 2007)… how teachers understand their students’ thinking also influences the kinds of customizations they make” (p. 232)

The role of technology in supporting customisation

The relationship between teachers and curriculum has been characterized as one between designers and their tools (Brown 2009)…Thus, by understanding how teachers use tools to aid their practice, we can further define their facilitating roles. (p. 232)

Apparently Schwartz et al (1999) make a point related to the need to provide flexibly adaptive materials that can support teacher customisation without losing integrity. Which brings up the interesting point

because whereas teachers’ adaptations of materials to local conditions can sometimes lead to improved student learning, it is also possible that they deviate from the intended value of the innovation (p. 232)

TEL materials can afford/guide customisations. There are many examples of TEL curriculum materials that have done this. Also mentions educative curriculum materials – materials with additional tools and resources, e.g. annotations on documents, viewable by a teacher, that offer suggestions for implementation and describe the rationale behind the designs.

The context

Case studies arise from use of the Web-based Inquiry Science Environment (WISE), a system used by 9900+ teachers and 80,000+ students, with 8,000 different customised WISE units (at the time of writing). Up-to-date statistics are available from the web site.

Essentially appears to be a collection of established units in the form of web pages, animations etc supported by various functions (e.g. concept maps). It does have an authoring environment that “allows users to copy and modify existing units without the need for programming skills” (p. 234).

This is interesting

A recent review of 30 technology-based inquiry-learning environments identified only eight, including WISE, that support teachers’ customizations (Donnelly et al. 2014) (pp. 234-235)

Cases of customisation and the role of technology

Much detailed description. Explaining how and why the four teachers customised the WISE units in response to their students. Shows the origins of the four types of customisation.


Teachers used different tools based on a range of factors:

students’ differing needs; the conceptual and linguistic challenges most prominent in teachers’ regard; teachers’ own instructional goals; and teachers’ orientations toward technology, pedagogy and their roles as designers with respect to the curriculum materials (p. 248)

There was variability in modes of customisation – variability in level of digital changes

These differences in customization mode might be explained by teachers’ familiarity with, and orientations toward technology; as well as to the support available for using that technology (Inan and Lowther 2009; Koehler and Mishra 2008; Zhao et al. 2002)….If teachers did indeed vary in their facilities and familiarities with technology, then with consistent amounts of training, their customization strategies would come to more closely resemble one another. But another explanation for teachers’ differences is their perceptions of themselves as designers (Cviko et al. 2013) and as research participants in curriculum development projects such as WISE. (p. 249)

The last point is perhaps interesting.

How can technologies offload the effort involved in giving individualised guidance?

“logistic constraints of the classroom can limit what teachers can do” (p. 253). Mainly talks about automation as the tactic. A fairly limited discussion, and something many machine intelligence researchers are working on.


Lockyer, L., Heathcote, E., & Dawson, S. (2013). Informing Pedagogical Action: Aligning Learning Analytics With Learning Design. American Behavioral Scientist, 57(10), 1439–1459. doi:10.1177/0002764213479367

Matuk, C. F., Linn, M. C., & Eylon, B.-S. (2015). Technology to support teachers using evidence from student work to customize technology-enhanced inquiry units. Instructional Science, 43, 229–257. doi:10.1007/s11251-014-9338-1

Shulman, L. (1987). Knowledge and teaching: Foundations of the new reform. Harvard Educational Review, 57(1), 1–21.

University e-learning: Removing context and adding sediment

The following outlines the core of the argument used in a talk to folk at UniSA today titled “The perceived uselessness of the Technology Acceptance Model (TAM) for e-learning”. The argument is that the mindset underpinning the implementation of institutional e-learning within universities focuses on widespread reuse across an institution (and sometimes beyond). As a result, institutional e-learning has a tendency to remove considerations of context, which in turn reduces or removes any chance of learners and teachers perceiving any usefulness or ease of use in the provided systems and processes.

The end result is that rather than enabling high quality learning experiences, institutional e-learning practices are creating sediment that clogs up any attempt to create high quality learning experiences. The following offers one possible explanation for why this is the case and offers a possible solution.

Example – “Know thy student”

The ability to know thy student is of central importance to learning and teaching. However, research around learning analytics has identified that institutional e-learning systems do a particularly poor job at supporting this fairly central task.

Seven years ago Dawson and McWilliam (2008, p. 3) found that

current LMS present poor data aggregation and similarly poor visualisation tools in terms of assisting staff in understanding…student learning behaviour

Two years ago Corrin et al. (2013, p. 204) found that

A common request that emerged across the focus groups was the ability to correlate data across systems

If I want to know who one of my students is, where they are located, what type of teacher they are studying to become (e.g. Early Childhood, Primary, Secondary, Special Education etc), what activities they’ve completed on the course site, and what course related posts they’ve written on their blog I have to (as summarised by the following image) spend 10+ minutes wandering around 3 different websites.


While the above diagram uses simple, consistent black boxes to represent each of the web pages I use to get the information, the reality is much more complex, as the following image shows. It’s a full-screen dump of the Activity Completion report in Moodle. Each row in the massive table represents a student in my course. Each column represents an activity they are asked to complete on the course site. A tick in a particular box indicates that they have completed that activity. Given the size and complexity of this representation, it’s actually quite hard to identify whether or not a student has completed an activity.


Lesson from TAM – people won’t use this

The Technology Acceptance Model (TAM) proposes that people are much more likely to use a system if they perceive the system to be

  1. easy to use; and,
  2. useful.

Do you perceive the above system to be useful and easy to use?

I don’t. Which is why I (and, I assume, most other teaching staff) don’t use it.

Given that this is blindingly obvious, and that both Dawson and McWilliam (2008) and Corrin et al. (2013) have already identified this problem, why hasn’t the problem been fixed?

SET mindset – removing context, usefulness, and ease of use

Institutional e-learning – like much in contemporary corporate Universities – is driven by a SET mindset.

Amongst the many problems with the SET mindset is that it must focus on reuse. The learning objects/systems that the SET mindset creates must be usable (at least) across an entire institution. This is contributed to by each of the components of the SET mindset.

  1. Strategic – the emphasis of the SET mindset is on strategic planning. Strategies that are important for the organisation. Strategic planning separates the planning from the doing. It separates the planning from the context.
  2. Established – the SET framework has an established view of digital technologies, i.e. it’s hard, expensive, and consequently wrong to modify or customise technology. Thus the organisation must use the same technology. It can’t be modified to respond to contextual needs.
  3. Tree-like – the SET framework breaks big, difficult problems down into lots of little parts that are solved separately. Each of the parts of the organisation is focused on their little part of the problem and doing it well. There can’t be sufficient focus on the useful whole (e.g. a learning experience). A learning experience is actually a combination of different parts (e.g. branded look and feel, maintaining uptime on Moodle, university policy on extensions, etc.), yet the people with the greatest focus on the whole/the learning experience (i.e. the learners and teachers) have the least capability to modify or control the parts and how they are put together.

In terms of the reusability paradox the SET mindset tends to focus on reuse at the expense of pedagogical value. It removes context (and thus usefulness and ease of use) from the learning objects/systems in order to be able to reuse them in different contexts.


BAD mindset – adding context, usefulness, and ease of use

On the other hand, the BAD mindset tends to put context back into the learning objects/systems. It responds to the needs of a specific context and focuses on maximising usefulness and ease of use within the confines of that context. As a result, the BAD mindset tends to reduce the capability to reuse the learning object/system.


This tendency is contributed to by each of the parts of the BAD mindset

  1. Bricolage – combines doing and planning/design within the specifics of the context. Takes what is available within the context and uses it creatively to scratch an itch. The nature of the solution is dependent on the context, the available resources, and the connections that can be made.
  2. Affordances – the BAD mindset sees digital technology as inherently “protean…to be shaped and exploited…the first metamedium, and as such it has degrees of freedom for representation and expression” (Kay, 1984, p. 59). i.e. digital technology is a hugely flexible resource that can and should be “shaped and exploited” to fit the requirements specific to a context.
  3. Distributed – rather than see the world as a collection of separate boxes never to be questioned, the BAD mindset sees the world as a distributed collection of connections and relationships that exist to be connected and re-connected in new and useful ways.

The BAD solution to “know thy student”

Using the BAD mindset, I’ve implemented a solution to the “know thy student” problem that I’ve used this year. Another post offers a more detailed description of that solution. The above argument suggests that the BAD solution should be hugely more contextually appropriate and thus useful and easy to use than the SET solution. The following table provides a simple comparison between the SET and BAD solutions.

Mindset | Where | How long
SET | 3 separate websites, 9+ web pages | 10+ minutes
BAD | Wherever I interact with students on the course web site | 3 mouse clicks

I certainly know which solution I use more.

Can this scale?


The “know thy student” solution I’ve been using can’t be used by anyone else as it depends on a mix of technologies and learning designs specific to my context. For example, I don’t think any other course (not taught by me) at USQ uses a combination of activity completion and the BIM module that I use in my course. But the whole point of the BAD mindset is that the specifics of the “know thy student” solution can and should be modified to suit the specifics of the design of your course and your context.

Some of the technologies I use won’t scale to other people. However, the general trend with digital technologies (e.g. the rise of API-centric architectures) is such that it can be easily re-created and scaled. The challenge at the moment is that the SET mindset is holding back the adoption and innovative use of these technological trends.

For example, Peter Albion has customised the Moodle Assignment activity to better suit his needs. Peter’s customisation should (with little or no modification) be able to be used by any teacher who is using the Moodle assignment activity. (It should be especially easy if you are using the Firefox browser, but not all that difficult if you are using another browser)

But not everyone can code. University e-learning systems currently have a starvation problem. That is, a range of projects that should get implemented can’t because there aren’t enough development resources. Customising everything to each specific context/learning design is never going to happen, unless perhaps everyone can do their own coding.

But that’s the point of a Distributed perspective. Not everyone needs to be able to code, though it might be a huge benefit. All you need to do is be connected through your various links and connections to someone who can code. You also need to be within an environment that actively enables people who can code to share what they do in a way that can be re-used, customised, and re-shared.

You need to be in an environment that recognises and responds to Anton Ego’s sentiment in the following image. Rather than a SET-based environment that believes a great artist (programmer) can only come from the IT division (if you’re lucky) or an external consultant (if you’re unlucky). E-learning’s starvation problem comes from too few people and too few perspectives being allowed and encouraged to engage in modification.

Not everyone can

CASA – Context-Appropriate Scaffolding Assemblages

While @beerc and I were enjoying the following view of Queenstown post the 2015 ASCILITE conference we started talking about a range of ideas.

Queenstown View

One of those was the idea of CASA – Context-Appropriate Scaffolding Assemblages – as a representation of the type of “systems” that a BAD mindset would produce. Not as a replacement for the types of systems that the SET mindset generates. CASA are meant to be the recombination, reconnection, and mashup of a range of different parts of SET systems in ways that respond to contextual requirements.

Today’s talk at UniSA included an attempt to move the CASA concept forward a bit and give a few more examples of what CASA might look like.

The BAD and CASA acronyms overlap (but not as neatly as the following suggests)

  • Context-Appropriate and Bricolage.

    The focus is to increase pedagogical value (e.g. ease of use and usefulness) by putting more and more context into the e-learning systems. To enable individual teachers (and learners) to scratch the itches they have, rather than having to wait on the organisation or beyond.

  • Scaffolding and Affordances

    The aim is to modify digital technologies and generally make connections that help learners and teachers accomplish tasks that are specific to the context. Echoing the idea of Electronic Performance Support Systems (Hannafin et al., 2001).

  • Assemblages and Distribution

    The focus is the on-going production, destruction, and re-construction of heterogeneous, productive, and desired socio-material connections/relationships. A naive and nascent channeling of Muller (2015) and Introna (2013).

Supplementary Assessment CASA

My faculty has a formal process for how supplementary assessments are meant to be managed using Moodle. To help academics implement this policy the faculty distributed version 2.7 of an 11 page PDF outlining the steps required.

This is an example of how the SET mindset is unable to insert additional context into its systems. Instead of modifying the tool to fit the task, the user has to modify their actions.

Rather than do this, why not implement a CASA that offers support to teaching staff to implement this policy? Support that is located exactly where they need it, i.e. in the Moodle assignment activity module, as shown in the following image.

supplementary CASA

The idea is that I know I need to set a supplementary assessment. I go to Moodle and add an assessment activity. But since I’ve installed this CASA in my browser, it modifies the traditional Moodle interface and adds a “Supplementary” button. If I click on that button it scaffolds me through the process for creating a supplementary assessment as per university policy.

The CASA is implemented via augmented browsing. It’s not a modification of Moodle. Moodle has an inherently tree-like structure and modifying it can be seen as problematic. Though modifying Moodle to implement this (or any) CASA is possible, it’s probably unlikely.
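To make the augmented-browsing idea concrete, here is a minimal sketch of what the userscript side of such a CASA might look like. The `modedit.php` URL pattern is how standard Moodle adds/edits an activity, but the helper name, the button, and the scaffolding stub are all hypothetical illustrations, not an existing implementation.

```javascript
// Sketch of a supplementary-assessment CASA as a userscript.

// Pure helper: does this URL look like Moodle's "add/edit assignment" form?
// Moodle uses /course/modedit.php?add=assign to add and ?update=CMID to edit.
function isAssignmentEditPage(url) {
  const u = new URL(url);
  return u.pathname.endsWith('/course/modedit.php') &&
    (u.searchParams.get('add') === 'assign' ||
     u.searchParams.get('update') !== null);
}

// In a real Greasemonkey script this would run on page load and inject a
// "Supplementary" button that walks the user through the policy's required
// settings (the scaffolding itself is left as a stub here).
if (typeof document !== 'undefined' &&
    isAssignmentEditPage(window.location.href)) {
  const button = document.createElement('button');
  button.textContent = 'Supplementary';
  button.addEventListener('click', () => {
    // scaffold the policy steps here, e.g. pre-fill form fields
  });
  document.querySelector('form')?.prepend(button);
}
```

The point of the pure helper is that the "where does this CASA apply?" decision stays testable outside the browser.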

Minute paper CASA

A minute paper is a fairly well-known, simple, and effective strategy for getting feedback from students. However, Stead (2005) found that

the one-minute paper is perhaps not used especially extensively…largely due to lack of knowledge of its existence and the perception that it would be too time-consuming to analyse the responses (p. 118)

If only we had access to a technology that allowed for the simple capture, analysis, visualisation, and querying of data. That might help solve this problem.

Well, what about a minute paper CASA connected with the Moodle Feedback activity? The Feedback activity is directly designed to serve this sort of purpose. However, because it’s designed for reuse across a range of feedback contexts, configuring the Feedback activity to implement a minute paper takes a bit of work. It is also unlikely that the Feedback activity provides the type of analysis functionality that would be specific to the minute paper.

A minute paper CASA could add information about the minute paper to the Moodle interface, thereby increasing awareness (a little). But it could also provide a scaffolded (and perhaps almost entirely automated) process for creating a minute paper. The minute paper CASA could also provide learning analytics specific to the minute paper.

minute paper CASA
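The minute-paper-specific analytics might start as simply as surfacing recurring terms in the “muddiest point” responses. The sketch below is a guess at a useful starting point, not anything the Feedback activity provides; the stop-word list and the crude word extraction are arbitrary choices for illustration.

```javascript
// Given an array of free-text "muddiest point" responses, return the
// topN most frequently occurring non-trivial terms.
function muddiestTerms(responses, topN = 5) {
  const stop = new Set(['the', 'a', 'an', 'of', 'to', 'and', 'i', 'is', 'in', 'it']);
  const counts = new Map();
  for (const response of responses) {
    // crude tokenisation: lowercase alphabetic runs only
    for (const word of response.toLowerCase().match(/[a-z']+/g) || []) {
      if (stop.has(word)) continue;
      counts.set(word, (counts.get(word) || 0) + 1);
    }
  }
  return [...counts.entries()]
    .sort((a, b) => b[1] - a[1]) // most frequent first
    .slice(0, topN)
    .map(([word]) => word);
}
```

Even something this simple addresses Stead’s (2005) “too time-consuming to analyse” objection: recurring points of confusion surface without reading every response.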

Ice-breaker process analytics CASA

Lockyer et al (2013) define learning process analytics as

data and analysis (that) provide direct insight into learner information processing and knowledge application…within the tasks that the student completes as part of the learning design (p. 1448)

One of the learning designs I use early in my course is an ice breaker activity using a discussion forum. Students are asked to post an introduction to themselves and then read through the introductions provided by other students. Their aim is to locate someone they think is the “same” as them and someone who is “different”. Once identified, they are asked to say “Hi” to those individuals.

It’s not a bad activity. However, because the Moodle discussion forum is designed to be re-used in the broadest possible collection of contexts, it provides no scaffolding specific to this particular learning design. This is where a CASA specific to this learning design could help. Either when I create the discussion forum, or perhaps later when I configure it, I would specify that this discussion forum is being used for this specific learning design.

From then on, when I view this specific discussion forum the interface is modified to provide context-appropriate scaffolding. For example, a “check progress” button might be added to allow me to see where in the process students are up to. It might also provide some scaffolding around how I might encourage some of the laggards.

iceBreaker CASA 1
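The “check progress” logic for this learning design could be as small as the following sketch. The post shapes are hypothetical stand-ins for whatever the forum data actually provides; the design expectation (one introduction plus two replies) comes from the activity described above.

```javascript
// Classify a student's progress through the ice-breaker learning design,
// given that student's posts in the forum. Each post is assumed to have a
// type of 'introduction' or 'reply' (a hypothetical data shape).
function iceBreakerProgress(posts) {
  const hasIntro = posts.some(p => p.type === 'introduction');
  const replies = posts.filter(p => p.type === 'reply').length;
  if (hasIntro && replies >= 2) return 'complete'; // intro + "same" + "different"
  if (hasIntro) return 'replies outstanding';
  if (posts.length > 0) return 'no introduction';
  return 'not started';
}
```

A CASA would run something like this per student and render the result as the legend described below, rather than leaving me to decode the generic forum view.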

The CASA might also modify my “know thy students” CASA so that while I’m within this specific discussion forum the display is modified to include information specific to the learning design. In this case, a simple legend showing whether or not the student has completed the three required posts.

iceBreaker CASA 2


The talk also briefly touched on the idea of a CASA for CASA. This idea was previously described in a post looking at a BAD approach to developing distributed TPACK.

What’s next?

The immediate focus (I hope) is on exploring how the “know thy student” “CASA” can be scaled, customised, and tested with other colleagues here at USQ. What challenges are likely to exist in trying to convince the SET-mindset parts of the institution that they need to break BAD? Which of the fears that they have about breaking BAD will be proven? What haven’t we predicted? Will it make any difference?


Introna, L. (2013). Epilogue: Performativity and the Becoming of Sociomaterial Assemblages. In F.-X. de Vaujany & N. Mitev (Eds.), Materiality and Space: Organizations, Artefacts and Practices (pp. 330–342). Palgrave Macmillan.

Müller, M. (2015). Assemblages and Actor-networks: Rethinking Socio-material Power, Politics and Space. Geography Compass, 9(1), 27–41. doi:10.1111/gec3.12192

Stead, D. R. (2005). A review of the one-minute paper. Active Learning in Higher Education, 6(2), 118–131. doi:10.1177/1469787405054237

Computers 'do not improve' pupil results, just like wood 'does not improve' houses

Here we go again. There’s an OECD report on “Students, Computers and Learning” that is doing the rounds. @palbion points to one media report

A report that starts with the observation

Investing heavily in school computers and classroom technology does not improve pupils’ performance, says a global study from the OECD.

The same OECD report has been mentioned in the Moodle research discussion forum.

Almost 30 years ago Seymour Papert (1987) wrote “Computer Criticism vs. Technocentric Thinking” in which he asked people to

Consider for a moment some questions that are “obviously” absurd. Does wood produce good houses? If I built a house out of wood and it fell down, would this show that wood does not produce good houses? Do hammers and saws produce good furniture? These betray themselves as technocentric questions by ignoring people and the elements only people can introduce: skill, design, aesthetics. Of course these examples are caricatures. In practice, hardly anyone carries technocentrism that far. Everyone realizes that it is carpenters who use wood, hammers, and saws to produce houses and furniture, and the quality of the product depends on the quality of their work. But when it comes to computers and LOGO, critics (and some practitioners as well) seem to move into abstractions and ask “Is the computer good for the cognitive development of the child?” and even “Does the computer (or LOGO or whatever) produce thinking skills?” (p. 24)

Beyond asking an “obviously absurd” question, there are other problems I have with this study.

What is meant by “use tablets and computers”

The BBC report quotes the OECD’s education director as saying

Those students who use tablets and computers very often tend to do worse than those who use them moderately.

But what does “use tablets and computers” mean?

Papert (1987) again

Stated abstractly, the two studies have the same explicit intention: the children are to be given “programming”– and the purpose of the experiments is to see what happens. But there is no such thing as “programming-in-general.” These children are not given “programming.” They are given LOGO. But there is no such thing as “LOGO-in-general” either. The children encounter LOGO in a particular way, in a particular relationship to other people, teachers, peer mentors, and friends. (4) They don’t encounter a thing, they encounter a culture. (p. 27)

Now read that quote again and replace “LOGO” with “tablets and computers”.

The culture in schools impacts how students are using the tablets and computers. At best the report has found a correlation, not the cause. The cause would seem likely to arise from the culture and how it impacts the nature and quality of students’ use of “tablets and computers”.

Given that the 2015 Horizon Report for K-12 identifies “Integrating technology in teacher education” as one of the problems facing the use of “tablets and computers” in K-12 education, might not this say something about the culture influencing the way “tablets and computers” are used in schools? Especially given that the 2015 Horizon Report for K-12 suggests that

the most important finding is that the level of a teacher’s digital competence directly correlates with students’ learning outcomes when technology is used

The OECD’s education director understands that how “tablets and computers” are used are important. The BBC report again

But Mr Schleicher says the findings of the report should not be used as an “excuse” not to use technology, but as a spur to finding a more effective approach.

But don’t start blaming the teachers and the teacher educators (at least not solely).

Warning: reliance on anecdote.

Every year I teach a course in “ICT and Pedagogy”, a course that aims to help pre-service teachers design effective ways to use tablets and computers for student learning. Each year around 400 of these students go out into schools throughout Australia and abroad. Each year there are positive and negative stories.

There are stories of classrooms with little or no technology that actually works. Stories of where the only technology available is in a computer lab that the class can access for an hour or two each week. Stories of a full curriculum and a focus on standardised tests. Stories of teachers and school leaders that are only now starting to use technology in their everyday life. What might these stories say about the culture in these schools?

Both the negative and positive stories feature the above elements. Typically the only difference between the negative and positive stories is effort put in by a mentor teacher and/or the pre-service teacher involved. Very rarely are the positive stories a result of the school culture as a whole.

How is “learning” measured?

The BBC report reports the OECD’s education director again

He said making sure all children have a good grasp of reading and maths is a more effective way to close the gap than “access to hi-tech devices”

Which brings me to the question of how “learning” is measured.

The summary of the report suggests that learning is being measured in this report “Based on results from PISA 2012”. PISA 2012 “assessed the competencies of 15-year-olds in reading, mathematics and science (with a focus on mathematics) in 65 countries and economies”.

Okay, so learning is being measured by a test on “reading, mathematics and science” and results indicate that “a good grasp of reading and maths is a more effective way to close the gap”.

Using the available data, not the meaningful data

Lastly, the report seems to have taken the available data (i.e. PISA data showing performance on the PISA tests and ICT use) and sought to identify patterns and draw conclusions from that data.

Data about how students actually use ICT, and data about the culture within schools around learning, teaching, and the use of ICT, doesn’t seem to have been available and hence wasn’t considered.


The comments/arguments above are based on the BBC report. I have not read the full report. Hence I’m liable to be committing the following offence.


Papert, S. (1987). Computer Criticism vs. Technocentric Thinking. Educational Researcher, 16(1), 22–30.

Helping teachers "know thy students"

The first key takeaway from Motz, Teague and Shepard (2015) is

Learner-centered approaches to higher education require that instructors have insight into their students’ characteristics, but instructors often prepare their courses long before they have an opportunity to meet the students.

The following illustrates one of the problems teaching staff (at least in my institution) face when trying to “know thy student”. It ponders whether learner experience design (LX design) plus learning analytics (LA) might help, shows off one example of what I’m currently doing to address this problem, and considers some future directions for development.

The problem

One of the problems I identified in this talk was what it took for me to “know thy student” during semester. For example, the following is a question asked by a student on my course website earlier this year (in an offering that included 300+ students).

Question on a forum

To answer this question, it would be useful “know thy student” in the following terms

  1. Where is the student located?
    My students are distributed throughout Australia and the world. For this assignment they should be using curriculum documents specific to their location. It’s useful to know if the student is using the correct curriculum documents.
  2. What specialisation is the student working on?
    As a core course in the Bachelor of Education degree, my course includes all types of pre-service teachers, ranging from students studying to be Early Childhood teachers, Primary school teachers, and Secondary teachers, to some looking to be VET teachers/trainers.
  3. What activities and resources has the student engaged with on the course site?
    The activities and resources on the site are designed to help students learn. There is an activity focused on this question; has this student completed it? When did they complete it?
  4. What else has the student written and asked about?
    In this course, students are asked to maintain their own blog for reflection. What the student has written on that blog might help provide more insight. Ditto for other forum posts.

To “know thy student” in the terms outlined above and limited to the tools provided by my institution requires:

  • the use of three different systems;
  • the use of a number of different reports/services within those systems; and,
  • at least 10 minutes to click through each of these.
Norman on affordances

Given Norman’s (1993) observations is it any wonder that perhaps I might not spend 10 minutes on that task every time I respond to a question from the 300+ students?

Can learner experience (LX) design help?

Yesterday, Joyce (@catspyjamasnz) and I spent some time exploring if and how learner experience design (Joyce’s expertise) and learning analytics (my interest) might be combined.

As I’m currently working on a proposal to help make it easier for teachers “know thy students” this was uppermost in my mind. And, as Joyce pointed out, “know the students” is a key step in LX design. And, as Motz et al (2015) illustrate there appears to be some value in using learning analytics to help teachers “know thy students”. And, beyond Motz’s et al (2015) focus on planning, learning analytics has been suggested to help with the orchestration of learning in the form of process analytics (Lockyer et al, 2013). A link I was thinking about before our talk.

Out of all this a few questions

  1. Can LX design practices be married with learning analytics in ways that enhance and transform the approach used by Motz et al (2015)?
  2. Learning analytics can be critiqued as being driven more by the available data and the algorithms available to analyse it (the expertise of the “data scientists”) than by educational purposes. Some LA work is driven by educational theories/ideas. Does LX design offer a different set of “purposes” to inform the development of LA applications?
  3. Can LX design practices + learning analytics be used to translate what Motz et al (2015) see as “relatively rare and special” into more common practice?

    Exceptionally thoughtful, reflective instructors do exist, who customize and adapt their course after the start of the semester, but it’s our experience that these instructors are relatively rare and special, and these efforts at learning about students requires substantial time investment.

  4. Can this type of practice be done in a way that doesn’t require “data analysts responsible for developing and distributing” (Motz et al, 2015) the information?
  5. What type of affordances can and should such an approach provide?
  6. What ethical/privacy issues would need to be addressed?
  7. What additional data should be gathered and how?

    e.g. in the past I’ve used the course barometer idea to gather student experience during a course. Might something like this be added usefully?

More student details

“More student details” is the kludge that I’ve put in place to solve the problem at the top of this post. I couldn’t live with the current systems and had to scratch that itch.

The technical implementation of this scratch involves

  1. Extracting data from various institutional systems via manually produced reports and screen scraping and placing that data into a database on my laptop.
  2. Adapting the MAV architecture to create a Greasemonkey script that talks to a server on my laptop that in turn extracts data from the database.
  3. Installing the Greasemonkey script in the browser I use on my laptop.
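The Greasemonkey side of step 2 can be sketched in outline as follows. Moodle’s profile links really do carry the user id in /user/view.php?id=…&course=…, but the local server address, its /details route, and the [details] markup are assumptions standing in for the MAV-derived implementation described above.

```javascript
// Pure helper: given a Moodle profile link href, build the URL for the
// local details server (running on my laptop in this implementation).
// Returns null for links that aren't user profile links.
function detailsUrl(profileHref, server = 'http://localhost:3000') {
  const u = new URL(profileHref);
  if (!u.pathname.endsWith('/user/view.php')) return null;
  const userId = u.searchParams.get('id');
  const courseId = u.searchParams.get('course');
  return `${server}/details?user=${userId}&course=${courseId}`;
}

// In the userscript proper, every profile link on the page gets a
// [details] link appended that opens the popup described below.
if (typeof document !== 'undefined') {
  for (const link of document.querySelectorAll('a[href*="/user/view.php"]')) {
    const url = detailsUrl(link.href);
    if (!url) continue;
    const details = document.createElement('a');
    details.textContent = ' [details]';
    details.href = url;
    details.target = '_blank'; // popup with the three tabs
    link.after(details);
  }
}
```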

As a result, when I use that browser to view the forum post at the top of this post, I actually see the following (click on the image to see a larger version). The red arrows have been added to highlight what’s changed: the addition of [details] links.

Forum post + more student details

Whenever the Greasemonkey script sees a Moodle user profile link, it adds a [details] link, regardless of which page of my Moodle course sites I’m on. The following image shows an excerpt from the results page for a quiz. It has the [details] links as well.

Quiz results + more student details

It’s not beautiful, but at the moment I’m the only one using it and I was after utility.

Clicking on a [details] link results in a popup window appearing, a window that helps me “know thy student”. The window has three tabs. The first is labelled “Personal Details” and is visible below. It provides information from the institutional student records system, including name, email address, age, specialisation, which campus or mode the student is enrolled in, the number of prior units they’ve completed, their GPA, and their location and phone numbers.

Student background

The second tab of “more student details” shows details of the student’s activity completion. This is a Moodle feature that tracks if and when a student has completed an activity or resource. My course site is designed as a collection of weekly “learning paths”. Each path is a series of activities and resources designed to help the student learn. Each week belongs to one of three modules.

The following image shows part of the “Activity Completion” tab for “more student details”. It shows that Module 2 starts with week 4 (Effective planning: a first step) and week 5 (Developing your learning plan). Each week has a series of activities and resources.

For each activity the student has completed, it shows when they completed it. This student completed “Welcome to Module 2” two months ago. If I hold the mouse over “2 months ago” it will display the exact time and date of completion.

I did mention above that it’s useful, rather than beautiful.

Student activity completion

The “blog posts” tab shows details of all the posts the student has written on their blog for this course. Each entry includes a link to the blog post and shows how long ago the post was made.

Student blog posts

With this tool available, when I answer a question on a discussion forum I can quickly refresh what I know about the student and their progress before answering. When I consider a request for an assignment extension, I can check on the student’s progress so far. Without spending 10+ minutes doing so.

API implementation and flexibility

As currently implemented, this tool relies on a number of manual steps and my personal technology infrastructure. To scale this approach will require addressing these problems.

The traditional approach might involve modifying Moodle itself to add this functionality. I think this is the wrong way to do it. It’s too heavyweight, largely because Moodle is a complex bit of software used by huge numbers of people across the world, and because most of the really useful information here is unique to different courses. For example, not many courses at my institution currently use activity completion the way my course does. Almost none use BIM and student blogs the way my course does. Beyond this, the type of information required to “know thy student” extends beyond what is available in Moodle.

To “know thy student”, especially when thinking of process analytics that are unique to the specific learning design used, it will be important that any solution be flexible. It should allow individual courses to adapt and modify the data required to fit the specifics of the course and its learning design.

Which is why I plan to continue using augmented browsing as the primary mechanism, and why I’ve started exploring Moodle’s API. It appears to provide a way to develop a flexible and customisable approach, allowing “know thy student” to respond to the full diversity of learning and teaching.
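The shape of such an API call is worth sketching. The following is a minimal, hypothetical Python sketch of building the REST request an augmented-browsing tool might use to pull a student’s activity completion data. The site URL, token, course id and user id are all placeholders; `core_completion_get_activities_completion_status` is one of Moodle’s documented web service functions, though whether it is enabled depends on the site’s service configuration.

```python
# Hypothetical sketch: constructing a Moodle REST web services request.
# SITE, TOKEN and the ids below are placeholders, not a real deployment.
from urllib.parse import urlencode

def build_ws_url(site, token, function, **params):
    """Return the URL for a Moodle REST web service call."""
    query = {
        "wstoken": token,                # per-user token issued by the site
        "wsfunction": function,          # the web service function to invoke
        "moodlewsrestformat": "json",    # ask for JSON rather than XML
        **params,
    }
    return f"{site}/webservice/rest/server.php?{urlencode(query)}"

url = build_ws_url(
    "https://moodle.example.edu", "SECRET_TOKEN",
    "core_completion_get_activities_completion_status",
    courseid=42, userid=1234,
)
print(url)
```

The response (one record per activity, with completion state and timestamp) is roughly the data behind the “Activity Completion” tab described above.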

Now, I wonder how LX design might help?

What might a project combining LX Design and Learning Analytics look like?

In a bit more than an hour I’ll be talking to @catspyjamasnz trying to nut out some ideas for a project around LX Design and Learning Analytics. The following is me thinking out loud and working through “my issues”.

What is LX Design?

I’ve got some vague ideas which I need to work on. Obviously start with a Google search.

Oh dear, the top result is for Learning Experience Design™ which is apparently

a synthesis of instructional design, educational pedagogy, neuroscience, social sciences, design thinking, and UI/UX—is critical for any organization looking to compete in the modern educational marketplace.

While I won’t dwell on this particular approach, it does link to some of my vague qualms about LX design. First, there’s a danger of it becoming another collection of meaningless buzzwords used to relabel the same old practice as conforming to the latest trend, mainly because the people adopting it don’t fully understand it and fail to transform their practice. Old wine, new bottles.

Second, there’s the problem of the “product focus” in learning, where the focus is on building the best product. Perhaps this says more about my biases, but I worry that LX Design will become just another tool (perhaps a very good tool) applied within the dominant SET mindset within institutional e-learning (which is my context). Not surprisingly, this is also one of my concerns about the direction of learning analytics.

And talking about old wine in new bottles, this post suggests that

Although LXD is a relatively new term in the field of design, there are some established best practices emerging as applied to creating online learning interfaces:

Mmm, not much there that I’d class as something LXD has provided to the world; most of it predates the term. See, for example, Donald Clark’s current sequence of “10” posts, including “10 essential rules on use of GRAPHICS in online learning”.

Needs and wants of the user?

This overview of User Experience Design (UX Design) – the foundation on which LX design is built – suggests

The term “user experience” was coined by Dr. Donald Norman, a cognitive science researcher who was also the first to describe the importance of user-centered design (the notion that design decisions should be based on the needs and wants of users).

As I wrote last week I’m not convinced that the “needs and wants of users” is always the best approach. Especially if we’re talking about something very new that the user doesn’t yet understand.

Which begs the question:

Who is the user in a learning experience?

The obvious answer from an LX design perspective is that the user is the learner. That the focus should be on the learner has been broadly accepted in higher education for some time now. But then all models are wrong, though some are useful. In critiquing the rise of the term Technology Enhanced Learning, Bayne (2014) draws on a range of publications by Biesta to critique the focus on learning and learners. I’ve only skimmed this argument for this post, but there is potentially something interesting and useful here.

Beyond this more theoretical question about the value of a “learner focus”, I’d also like to mention something a little closer to home. The context in which I’m framing this post is within higher education’s practice of formal learning. A practice that currently still assumes that there is some value in having a teacher involved in the learning experience. Where “teacher” may not be a single individual, but actually be a small team with diverse roles. Which leads me to the proposition that the “teacher” is also a user within a learning experience.

As I’m employed as a teacher within higher education, I can speak to the negative impact of the blindingly obvious, almost complete lack of user experience design in the tools and systems teachers are required to use for learning and teaching. Given the low quality of those tools, it’s no surprise to me that most learning in higher education has some flaws.

This is one of the reasons behind the 4 paths for learning analytics focusing on the teacher (as designer of learning, if you must) and not the learner.

Increasingly, I wonder if the focus on being learner-centred arises from a frustration with the perceived lack of quality of the learning experiences produced by teachers, combined with a deficit model of teachers. Which brings me to this quote from Bayne (2014)

points us toward a need to move beyond anthropocentrism and the focus on the individual, toward a greater concern with the networks, ecologies and sociomaterial contexts of our engagement with education and technology.

Impact of LX design for teachers?

What would happen to the quality of learning overall, if LX design were applied to the systems and processes that teachers use to design, implement, support, and revise learning and teaching? Would this help teachers learn more about how to teach better?

Learning analytics

I assume the link between LX design and learning analytics is that learning analytics can provide the data to better inform LX design. In particular, what Lockyer et al (2013) call “process analytics” would be useful

These data and analyses provide direct insight into learner information processing and knowledge application (Elias, 2011) within the tasks that the student completes as part of a learning design. (p. 1448)

One of the problems @beerc and I have with learning analytics is that it only ever focuses on two bits of the PIRAC framework, i.e. information and representation. It hardly ever does anything about affordances or change. This is why dashboards suck and are a broken metaphor. A dashboard without the ability to do anything to control the car is of no value whatsoever.

My questions about LXD

  1. Just another FAD? Old wine in new bottles?
  2. Another tool reinforcing the SET mindset? Especially the product focus.
  3. Does LX design have a problem because it doesn’t include complex adaptive systems theory? It appears to treat learner experience design as a complicated problem, rather than a complex problem.
  4. The “meta-learning” problem – can it be applied to teachers learning how to teach?
  5. Where does it fit on the spectrum of: sage on the stage, guide on the side, and meddler in the middle?
  6. How to make it useful for the majority of teachers and learners?
  7. What type of affordances can/should analytics provide LX design to help all involved?


Bayne, S. (2014). What’s the matter with Technology Enhanced Learning? Learning, Media & Technology, 40(1), 5–20. doi:10.1080/17439884.2014.915851

Exploring Moodle's API

An API-centric architecture is a coming thing in technology circles. It’s the way vendors and central IT folk will build systems. It is also going to be manna from heaven for institutionalised people who are breaking a little BAD.

Moodle has a growing web services API. The following documents some initial exploration of how and if you can “break BAD” with those APIs.


Web services API

Moodle allows plugins to define a Web services API. The question is how many plugins provide one, and how much of Moodle core exposes APIs. It’s likely to be quite a lot, given APIs are increasingly used for mobile devices.

A quick check of my basic Moodle 2.9 install reveals
[code lang="bash"]
dj:moodle david$ find . -name services.php
[/code]

Not a huge number, but at least enough to start playing with (assign and forum are likely to be particularly useful) and there may well be more.

Of course, I should be looking to add a Web services API to BIM. This page will apparently help with that.

That page also includes a template with a test client. Could be useful later on.

What about the Core APIs

Moodle defines a number of Core APIs that are used within Moodle. Are these available via Web services? Some (all?) wouldn’t make sense, but maybe…

External functions API

The external functions API apparently “allows you to create fully parameterised methods that can be accessed by external programs (such as Web services API)”. Searching for evidence of that in my Moodle install is a little more heartening

[code lang="bash"]
dj:moodle david$ find . -name externallib.php
[/code]

Just have to figure out if the presence of these implies connections with a Web services API and the ability to access them from a client.

Web Services

Which brings me to the Web Services category page. There’s also a web services forum and a related FAQ, which includes External services security, a page outlining the various ways services can be called and how security is handled.

Using web services on my Moodle instance

As per these instructions and elsewhere:

  1. Enabling web services.
  2. Enabling protocols.

    It appears REST is enabled by default (I don’t think I did this earlier).
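With REST enabled and a token in hand, a first end-to-end smoke test might look like the following minimal Python sketch. The site URL and token are placeholders; `core_webservice_get_site_info` is Moodle’s standard “does my token work” function. One wrinkle: on failure the REST server typically returns HTTP 200 with a JSON object carrying `exception`, `errorcode` and `message` keys, so the client has to check for that itself.

```python
# A minimal sketch, assuming web services and the REST protocol are enabled
# and a token has been created. SITE and TOKEN are placeholders.
import json
from urllib.parse import urlencode
from urllib.request import urlopen

SITE = "https://moodle.example.edu"   # placeholder Moodle site
TOKEN = "SECRET_TOKEN"                # placeholder token from Site administration

def raise_for_ws_error(payload):
    """Moodle's REST server signals failure with a JSON object holding
    'exception', 'errorcode' and 'message' keys (HTTP status is still 200)."""
    if isinstance(payload, dict) and "exception" in payload:
        raise RuntimeError(f"{payload['errorcode']}: {payload['message']}")
    return payload

def call_moodle(function, **params):
    """Call a Moodle web service function via the REST protocol."""
    query = urlencode({"wstoken": TOKEN, "wsfunction": function,
                       "moodlewsrestformat": "json", **params})
    with urlopen(f"{SITE}/webservice/rest/server.php?{query}") as response:
        return raise_for_ws_error(json.load(response))

# Smoke test (commented out: needs a live site and a valid token):
# info = call_moodle("core_webservice_get_site_info")
# print(info["sitename"], len(info["functions"]))
```

The `functions` list in that response is also a handy way to see exactly which web service functions the token’s service exposes, which answers part of the “how much is exposed” question above.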

Explore – Site administration / Plugins / Web Services – and its range of options

  1. Overview.
    Includes directions on steps for enabling web services for mobile devices and for external systems to control Moodle.
  2. User.
    Need to allocate permission to use web services to specified users.
  3. Add services to be used.
    Specifies which web services the user can use. In this case, a range of “built-in services” were already enabled for “all users” (assuming they have the required capabilities). This might be interesting to test and explore. It includes a broad array of interesting functionality (mod_assign_get_???) but is not overly long.

    Adding a service requires specifying the functions to be enabled.

  4. Each service can be configured to a particular user or multiple users.
  5. Create a token – select a user and the service.
  6. And then there’s