Assembling the heterogeneous elements for (digital) learning

Month: March 2017

WTF(udge) does EEI do?

The following is a collection of resources associated with a 15-minute presentation to the other teams within USQ’s Office for the Advancement of Learning and Teaching (OALT), explaining what people should know about the Educational Excellence & Innovation (EEI) team at this point in time, and what we’d like to know about the other members of OALT.


Update #1

Following slides updated with new thinking, but also customised to the audience.

Original presentation

Misc URLs

Some of the other resources mentioned in the presentation include

  • Building planes in the sky – used as a not-so-good metaphor for how EEI will operate: a heavy reliance on learning by doing.
  • USQ teaching space guide – produced by a USQ project to help staff using new teaching spaces.
  • TPACK framework.
  • S1, 2017 L&T orientation – resource page for the S1, 2017 teaching orientation for new academics. The first big official Professional Learning Opportunity (PLO) organised by EEI.
  • Teaching @ Sydney – an interesting approach to sharing resources and insights around learning and teaching.
  • Smallest Federated Wiki – a better (than a WordPress blog) example of a technology foundation on which to base the sharing of resources around learning and teaching.


Boehm, B., & Turner, R. (2004). Balancing agility and discipline: A guide for the perplexed. Addison-Wesley.

Brews, P., & Hunt, M. (1999). Learning to plan and planning to learn: Resolving the planning school/learning school debate. Strategic Management Journal, 20(10), 889–913.

Bush, V. (1945). As we may think. The Atlantic Monthly, 176(1), 101–108.

de la Harpe, B., & Mason, T. (2012). Not a Waste of Space: Professional Development for Staff Teaching in New Generation Learning Spaces. Melbourne, Australia.

Engelbart, D. C. (1962). Augmenting human intellect: A conceptual framework (Summary Report AFOSR-3223, SRI Project 3578 for Air Force Office of Scientific Research). Stanford Research Institute.

Hutchins, E. (1991). Organizing work by adaptation. Organization Science, 2(1), 14–39.

Jones, D., Heffernan, A., & Albion, P. (2015). TPACK as shared practice: Toward a research agenda. In L. Liu & D. Gibson (Eds.), Research Highlights in Technology and Teacher Education 2015 (pp. 13–20). Waynesville, NC: AACE.

Jones, D., Luck, J., McConachie, J., & Danaher, P. A. (2005). The teleological brake on ICTs in open and distance learning. In 17th Biennial Conference of the Open and Distance Learning Association of Australia. Adelaide.

Jones, D., & Muldoon, N. (2007). The teleological reason why ICTs limit choice for university learners and learning. In R. J. Atkinson, C. McBeath, S. K. A. Soong, & C. Cheers (Eds.), ICT: Providing choices for learners and learning. Proceedings ASCILITE Singapore 2007 (pp. 450–459). Singapore.

Kurtz, C., & Snowden, D. (2007). Bramble bushes in a thicket: Narrative and the intangibles of learning networks. In M. Gibbert & T. Durand (Eds.), . Blackwell.

March, J. (1991). Exploration and exploitation in organizational learning. Organization Science, 2(1), 71–87.

Mintzberg, H. (1989). Mintzberg on Management, Inside our Strange World of Organisations. New York: Free Press.

Mishra, P., & Koehler, M. (2006). Technological pedagogical content knowledge: A framework for teacher knowledge. Teachers College Record, 108(6), 1017–1054.

Motz, B. A., Teague, J. A., & Shepard, L. L. (2015). Know thy students: Providing aggregate student data to instructors. EDUCAUSE Review Online, (September).

Norton, L., Richardson, J., Hartley, J., Newstead, S., & Mayes, J. (2005). Teachers’ beliefs and intentions concerning teaching in higher education. Higher Education, 50(4), 537–571.

Putnam, R., & Borko, H. (2000). What do new views of knowledge and thinking have to say about research on teacher learning? Educational Researcher, 29(1), 4–15.

Sarlio-Lahteenkorva, S. (2007). Determinants of long-term weight maintenance. Acta Paediatrica, 96(s454), 26–28.

Seely-Brown, J., & Hagel, J. (2005). From push to pull: The next frontier of innovation. The McKinsey Quarterly. McKinsey & Company.

Tribble, E. (2005). Distributing cognition in the globe. Shakespeare Quarterly, 56(2), 135–155.

Trigwell, K. (2001). Judging university teaching. The International Journal for Academic Development, 6(1), 65–73.

Truex, D., Baskerville, R., & Travis, J. (2000). Amethodical systems development: the deferred meaning of systems development methods. Accounting Management and Information Technologies, 10, 53–79.

Weick, K., & Quinn, R. (1999). Organizational change and development. Annual Review of Psychology, 50, 361–386. 


Slide 1, 45: “Blowing Questions” by Brian Yap (葉) available at under Attribution-NonCommercial License

Slide 2, 7, 8, 9, 10, 11, 12, 13, 43: “fudge” by theilr available at under Attribution-ShareAlike License

Slide 3, 4, 5, 40: “construction” by Oregon State University available at under Attribution-ShareAlike License

Slide 6: “Blind Men and the Elephant” by Climate Interactive available at under Attribution-NonCommercial-ShareAlike License

Slide 14, 15, 16, 17: “The Big Picture” by Amanda Slater available at under Attribution-ShareAlike License

Slide 18, 19, 20, 24, 42, 44: “Abraham Lincoln Memorial 1” by Kevin Burkett available at under Attribution-ShareAlike License

Slide 21, 22: “Commandments” by James Perkins available at under Attribution-NonCommercial-ShareAlike License

Slide 23: “Chip Eats with Abandon 1/52 (explored 2015-01-04)” by Bas Bloemsaat available at under Attribution-NonCommercial-ShareAlike License

Slide 25, 26: “Traditional Professional Development” by Jen Hegna available at under Attribution-ShareAlike License

Slide 35: “fat cat” by 紫流 available at*/122530930 under Attribution-ShareAlike License

Slide 37, 38: “SC_Corelle Snowflake Garland (1974)” by catface3 available at under Attribution-NonCommercial-ShareAlike License

Slide 46, 47, 47, 49: “Private home reference library” by warwick_carter available at under Attribution-NonCommercial License

Observations on university L&T portals

At some stage soon I need to start developing a report on “learning and teaching portals”, i.e. how our institution deals with online resources around learning and teaching. There are a few issues with how we do it, and it appears that we’re not alone.

Over the last couple of weeks I’ve been helping organise a teaching orientation session for new academic staff. Consequently, I’ve had to spend a fair bit of time engaging with what is there at the moment. What follows is a collection of observations on that experience and on-going discussions.

This experience has been somewhat novel for me because, in the last 5+ years working here, I’ve tended to avoid the L&T portals: I could never remember where to find them, or I was more comfortable finding help elsewhere.

There are two sections here

  1. A list of observed problems with such spaces.
  2. An initial set of possible explanations.

Observed Problems

Can’t be found – poor discoverability

Simply finding what online resources are provided by institutions around learning and teaching can be an impossible challenge. There is poor discoverability.

This can be due to the quality of available search engines, meta-data etc. I know that most people avoid using the institutionally provided search engine to search the institutional websites, relying instead on Google’s site: capability.

The problem, however, also exists in category/organisational schemes. For example, trying to find where the link to L&T resources sits on the institutional staff portal has often been a task beyond people. But this problem extends to L&T-specific sites.

At least two local L&T “portals” have been developed. Both contain useful information and both have been organised in specific ways. For example, one is organised by week of semester, starting from about week -4 (4 weeks before semester) and running up to week 20 (3 weeks after semester). Resources are organised into the week when they are deemed most appropriate.

Sadly this only works if every teacher follows the same weekly schedule, which is highly doubtful.

The saving grace for this particular portal is that it is hosted in Moodle and the latest version here now provides a half-way decent (but not perfect) search engine.

The big model problem

It’s become quite widely accepted that an educational institution should have an L&T framework. A set of models or structures that help the whole institution understand how learning and teaching happens. That common understanding then helps professional learning, strategic planning, operations etc.

The problem is that developing an L&T framework/model within a University context is incredibly hard due to the diversity of the learning that occurs (amongst other factors). It’s so hard that it takes a long time and a lot of effort to develop that framework/model. This creates a problem.

For example, let’s take the draft L&T/LMS framework from RMIT that consists of 6 principles (connected, consistent, inclusive, aligned, clear, dynamic) and four stages:

  1. Plan & Design
  2. Build
  3. Review
  4. Go live

Take a look at the Plan & Design section and you’ll see an example of the consistent structure of the framework. A brief description of the stage and then a collection of tabs covering: Context; Roles & Expectations; Learning Activities & Content; Communications; and, Feedback & Assessment. Each of these tabs includes: a description; tips for success; and threshold standard. Each of the threshold standards is described, linked to the principles, and some have links to related resources.

That “some” is an indication of the problem. Visit the review stage and you’ll see a more explicit example:

This section will have resources, templates and suggestions of how peers can be used in the design, development and review process.

It’s unfinished. Given that Australian central L&T units (often those tasked with developing this sort of framework) are restructured every 3 years or so, it’s not hard to see a potential problem with completion, let alone embedding the framework long term as part of the institutional culture.

Not only is there a lot of work required to develop an institutional L&T framework, there’s a lot of work required to understand it and figure out how your practice fits within it. The RMIT framework is quite understandable and I’ve got quite a lot of knowledge in this area, but even I was starting to get overwhelmed by the long list of threshold standards (going by this page there are 80 of them).

I wonder how much effort teaching academics would invest in understanding the L&T framework, in light of other tasks.

Not there

Like many other institutions, my current institution is rolling out some quite nice, new technology enhanced learning spaces. I helped out with the orientation for new teaching staff in one of these new spaces recently.

It was a nice space for collaborative work. Spaces for small groups with a provided computer and large screen. Problem is we (amongst the more technically literate of staff) couldn’t initially figure out how to turn the computers on. There were no instructions in the room (that we could find) on how to turn these computers on. It was only through someone randomly pressing buttons that the method was discovered.

I’ve since searched for resources that would help provide pedagogical advice for these new rooms. Not there.

In the last couple of weeks a teaching staff member queried whether there were any professional learning opportunities that would help new casual tutoring staff develop some ideas around teaching in tutorials. Not there.

At least, given the poor discoverability around these resources, none can be found. Not quite the same as “not there”, but not far from it.

Out of date

As part of the work on the new orientation session it became apparent that a range of information was out of date. Examples included:

  1. An L&T portal identifying, as an Associate Dean (Learning and Teaching), a staff member who left the institution two years ago.
  2. Changes in an HR system for advertising staff training opportunities breaking the instructions in the institutional knowledge base for technology questions.

Not of the web

@cogdog writes the following in this blog post

Frankly I am not sure people should not be teaching online without some level of basic experience being and doing online. I have no idea if this is off base, but frankly it is a major (to me) difference of doing things ON the web (e.g. putting stuff inside LMSes) and doing things OF the web. I am not saying people have to be experts at web stuff, but the web should be like a place they feel like they inhabit, not just visit or witness through a glass plate window.

One assumes that the people helping people to teach online should also be able to do things “OF the web”, rather than just ON it.

There’s been a recent upswing at my institution in the use of Joomag – an “online publishing platform designed for you”. The products I’ve seen are online magazines viewed via Flash, a technology that is deemed by many to be

a fossil, left over from the era of closed standards and unilateral corporate control of web technology. Websites that rely on Flash present a completely inconsistent (and often unusable) experience for fast-growing percentage of the users who don’t use a desktop browser. It introduces some scary security and privacy issues by way of Flash cookies.

But even when the web is used, there appears to be limited basic awareness of fundamental web features, such as hyperlinks.

For example, my institution’s knowledge base for doing stuff with the institutional web-based systems takes this standard form.

  1. Go to insert name of some system
  2. Click on the insert a tab name
  3. Click on the insert name of link
  4. various other instructions

There is a sequence of instructions about how to find a particular system hidden away in the depths of the institutional systems, rather than simply providing a link directly to it, such as

  1. Visit insert name of link
  2. various other instructions

There’s a chance that this is due to a desire to develop in people the habit of going to the institutional staff portal and finding their way from there. Fine, do that, but still include a link once people have read the instructions.

Some possible explanations

What follows is an initial set of possible explanations for these problems. Hence they also provide some pointers to possible solutions.

Too much focus on the “do it for” path

When trying to help teachers (e.g. by developing an L&T portal), there are four broad paths you can walk

  1. DIT – Do it to the teachers

    You decide what’s best and do it to them.

  2. DIF – Do it for the teachers

    The teachers decide what’s best and you do it for them.

  3. DIW – Do it with the teachers

    You and the teachers work together to figure out what is needed and do it jointly.

  4. DIY – Enabling teachers to do it themselves

    The teachers are enabled to do it for themselves.

The L&T portals that I’ve seen appear to have largely adopted the first two paths, with very little evidence of the last two. In some cases, the DIT/DIF paths are used with the intent to build the foundation for DIW/DIY, but that intent rarely translates into actual DIW/DIY.

Perhaps in part because of the “big model problem”. i.e. the DIT/DIF designers build a wonderful model of how it all works with the assumption that the teachers will grok that model and use it as the basis for their DIW/DIY. The only problem is that it’s too much work to grok the model.

This suggests a need to start with where people are and to increase the level of DIW/DIY from there.

Perhaps the size of the model involved, its distance from the schema of teachers, and who designs it is the defining difference between DIF and DIW? In terms of the L&T portal, perhaps the defining difference is who can organise, contribute and modify the resources within the portal?

Limited technology – Kaplan’s law

In this institutional context, L&T portals have been developed with the technology that is available. Either Sitecore (designed for managing a marketing-controlled corporate web presence) or Moodle (designed to support learning and teaching in a formal course context).

Neither of these is a perfect fit for the provision of L&T professional learning opportunities to a hugely diverse set of teaching staff.

Not open

David Wiley has suggested 5Rs for open – retain, reuse, revise, remix, and redistribute – i.e. for a resource to be open, you should be able to

  • retain – Make and own copies.
  • reuse – Use in a wide range of ways.
  • revise – adapt, modify and improve.
  • remix – combine two or more.
  • redistribute – share with others.

None of the local portals I’ve seen appear to say anything about openness. Even if they used an open license, none of them actively provide support for (amongst others) the revise R.

In fact, increasingly the resources that are produced are going into an institutional Equella repository as PDFs, or are hosted on institutional systems where permissions are set up to prevent anyone (except a narrow set of people) from adapting, modifying and improving.

Organisational hierarchy

Mishra & Koehler (2006) argue that quality teaching requires (emphasis added)

developing a nuanced understanding of the complex relationships between technology, content, and pedagogy, and using this understanding to develop appropriate, context-specific strategies and representations. (p. 1029)

It needs a contextually appropriate integration of technical, content and pedagogical knowledge. Within most universities these bits of knowledge reside in (at least) three separate organisational groupings. None of these organisational groupings are experts in the specific combination of technical, content and pedagogical knowledge required to generate the best possible learning outcomes in any specific context.

Instead, each organisational grouping tends to focus on their own task/expertise.

Hence at one stage my current institution had a different “L&T portal” for each organisational grouping that offered some level of support for teaching.

Overcoming this is hard.

Not aware of the need

In many cases, the people responsible for L&T portals are simply not aware of the need. They don’t know what the teaching staff are trying to do, so they can’t provide the necessary support.

This is somewhat like the situation reported in this post from a few weeks ago. Instructions for some video-conference spaces haven’t caught up with the fact that there’s a trend toward using Zoom for video-conferencing, rather than the Cisco video conference gear. Even though the Director of ICT services has suggested that this move to Zoom for video-conferencing is being actively supported.

Focus on design and development

Most L&T portals and the resources developed for them appear to put almost all of their effort and consideration into the design and production of resources. Very little thought or resourcing appears to be invested in evaluating use, updating and improvement, archiving etc.

de la Harpe et al (2014) in developing some online resources for RMIT appear to give this some thought in the following (p. 36)

A library guide was chosen as the SpringShare software that the library guides uses has been adopted by most universities in Australia and would, therefore, be simple for other universities to adopt and adapt. It was also very easy to use and sustainable since the librarians agreed to curate it and ensure it was kept up to date with current links

Nice to plan, but how does reality pan out?

The original project’s web page (;ID=xnbgfx4a17h3) published on the front page of the report is a 404, as is the other RMIT-hosted site on the front page. The YouTube video still works.

This suggests something about the advisability of relying on an institutional web site.

The LibGuides of resources produced by the project are still available, but were last modified at the beginning of 2016. The resources all appear to work and at first glance appear quite valuable.

RMIT still appears to be using the LibGuides content management system, but it appears my institution doesn’t. As it happens, some of the features of LibGuides appear to be a good fit for some of what I had in mind.

But technology doesn’t solve all problems.

I was recently pointed to this section of the RMIT website. It’s maintained by the “Academic Development Group” (which appears to sit within the College of Business at RMIT) and is focused on teaching spaces.

It appears that no use is made of the teaching-spaces online resources generated by an RMIT-led OLT project, even though those resources are still hosted on the RMIT site.

Pattern entrainment

Earlier this week (I think) @timklapdor provided a link to this Medium article

The article touches on how the mental models people hold of the Web depend heavily on their experiences. It links to findings from Pew Internet showing that only a short amount of time has to pass before people’s experiences with technology become very different. It then maps a history of recent Internet technology to understand the different experiences people have had of the Internet. This is important because

These mental models will be diverse — and will keep evolving — but may not include many of the (primarily web-based) concepts and literacies we grew up with — including for some the usefulness and importance of URLs, web standards, markup, accessibility, search engines, and the browser as the primary access point to the online world.

For these users, Facebook (or WeChat in China) is now a primary method for finding, reading and sharing information online. Messenger, SnapChat, Instagram, and WhatsApp apps have become some of their preferred methods of communications. Some in fact have no concept of the internet outside of these platforms.

This touches a bit on the problem of being “on the web” rather than “of the web”.

But it is also indicative of the broader problem created by pattern entrainment, i.e. the tendency for how we think to be confined by our experiences.

If you’ve trained as a desktop publisher, as some types of multimedia designer, or in marketing, then your experience suggests a certain way of using the Internet (online magazine, branding, quality control etc). If you haven’t gained the experience of living with the web, then how can you be expected to design L&T portals that are of the web?

Adding a custom library and a semi-real application – Indicators notebook(s)

So the indicators notebooks/platform is on github. The one and only bit of analysis is almost completely useless and still requires a fair bit of set up code. The aims in this post are

  1. Add in a custom library for connecting to the data source.
  2. Add an indicator/notebook that does something kind of useful.

Hopefully, this will lay the groundwork to start converting old Perl-based stuff into this new environment and start doing something more useful.

Custom library

The aim here is to replace all of the following

import json
with open('../config.json') as f:
    conf = json.load(f)
from sqlalchemy.engine.url import URL 
from sqlalchemy import create_engine
engine = create_engine(URL(**conf))

To something like

import Indicators

or something slightly more in the correct Python idiom.

That’s done. It will still need to be refined:

  1. Better way to specify the path of the config file.

    Hard-coding it in a file kept in git is not a great start.

  2. Does it fit with the Python idiom/approach?

Something a little real

Aim here is to do something a little useful to people (or at least me) and to start playing with the visualisation options.

A need I’ve identified in my new role is to have some overall idea of the number of courses, number of teaching staff, number of students etc at my institution. There doesn’t seem to be any easy way to find out and nobody I talk to knows (with a few exceptions).

Aim here is to develop a notebook that shows the number of courses in Moodle per semester.

Lesson learned: in Python, when doing SQL using LIKE and a wildcard – typically % – you need to use %%, because a single % is otherwise read as a string-formatting/parameter marker, i.e.

shortname LIKE 'EDC3100_2015_%%'
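To illustrate (a sketch; the table name and pattern are illustrative, and the commented-out calls assume the engine set up earlier): doubling the % works, but passing the pattern as a bound parameter avoids the escaping entirely.

```python
from sqlalchemy import text

# Literal % doubled: a single % in a plain query string is otherwise
# treated as a parameter marker by the database driver
raw_query = "select count(id) from mdl_course where shortname like 'EDC3100_2015_%%'"
# df = pd.read_sql(raw_query, engine)

# Alternative: a bound parameter needs no escaping at all
safe_query = text("select count(id) from mdl_course where shortname like :pattern")
# df = pd.read_sql(safe_query, engine, params={"pattern": "EDC3100_2015_%"})
```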

Years and terms

The first question is how to capture the individual years and terms that I might want to capture individual data for.

I could hard-code this into the notebook, but it will be different at another institution – or for a different data set. So I’m going to try a kludge: add the data to the JSON config file, like this

  "allYears" : [ 2012, 2013, 2014, 2015 ],
  "allTerms" : [ "2012_1", "2012_2", "2012_3", 
                 "2013_1", "2013_2", "2013_3",
                 "2014_1", "2014_2", "2014_3",
                 "2015_1", "2015_2", "2015_3" ]

This is ugly and will need to be revised, but I’m in a hurry.

Though this raises the question of whether I can still access the data now that it’s behind the Indicators module.

That exploration leads to an additional function in the Indicators module to get this variable. This is probably how the problem with Moodle prefixes will get fixed.

Yep, done. Able to include a prefix in queries. The value is defined in a new config file lms.conf which looks like

  {
    "allYears" : [ 2012, 2013, 2014, 2015 ],
    "allTerms" : [ "2012_1", "2012_2", "2012_3",
                   "2013_1", "2013_2", "2013_3",
                   "2014_1", "2014_2", "2014_3",
                   "2015_1", "2015_2", "2015_3" ],
    "mdl_prefix" : "moodle.mdl_"
  }

Using the prefix in code looks like

import Indicators
import pandas as pd
engine = Indicators.connect()
configuration = Indicators.config()
prefix = configuration['mdl_prefix']
query = "select id,username,firstname,lastname from " + prefix + "user where id<30 "
df = pd.read_sql(query,engine)

Segue – grokking data frames

I’m still very new to Python and tend to bring my Perl/PHP frames to programming. I need to spend some time grokking “the Python way”. In writing this script it’s become obvious I haven’t yet grokked data frames. Hence reading this on data frames and the following.

Actually, I found this not at all easily accessible, but there are some nuggets there.

Indexing of data frames offers a number of different ways to access elements. iloc is the standard array approach, i.e. based on position; indexes can also be more hash-like, with loc accessing rows by label.
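A small self-contained illustration of the difference, using made-up course counts:

```python
import pandas as pd

# Index by year label rather than by position
df = pd.DataFrame({'courses': [410, 432, 455]}, index=[2013, 2014, 2015])

print(df.iloc[0]['courses'])    # positional, like an array: first row -> 410
print(df.loc[2014]['courses'])  # label-based, like a hash: year 2014 -> 432
```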

Mmmm, more work to do.

The kludgy solution

Have added a Course Offerings notebook that includes code like the following, which will produce a simple histogram showing the number of courses for each year/term within the database.

This code is the year portion; the term graph is almost identical.

yearCounts = {}
for year in configuration['allYears']:
    # %% because a single % would be read as a parameter marker
    query = "select count(id) from " + prefix + "course where " +\
            " shortname like '%%_" + str(year) + "_%%'"
    df = pd.read_sql(query, engine)
    yearCounts[year] = df.iloc[0, 0]  # the bare count, not a one-row Series
counts = pd.DataFrame.from_dict(yearCounts, orient='index')

The code for terms generates output like the following

Course per term

Still quite ugly; there are ways to improve the output. A later task, along with much more learning about Python etc.
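One likely improvement, sketched here with made-up counts standing in for the per-year query results: once the counts are in a DataFrame, pandas can draw the chart directly rather than the output being assembled by hand.

```python
import pandas as pd

# Made-up numbers standing in for the per-year query results
yearCounts = {2012: 410, 2013: 432, 2014: 455, 2015: 470}

counts = pd.DataFrame.from_dict(yearCounts, orient='index', columns=['courses'])
counts.index.name = 'year'

# In a notebook this one line draws the bar chart
# counts.plot(kind='bar', legend=False, title='Courses per year')
```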

Sharing “indicators platform” via github

Following on from the last post, the following documents how to share the “indicators platform” for analytics via github. It’s largely intended to help @beerc. I doubt there’s anything (at the moment) that makes this inherently interesting for anyone else.

End result

The (almost completely useless) end result of this work is this github repository.

Hopefully, this will form a foundation that will help it get much more interesting, quickly.

Jupyter notebooks and github

It’s not straightforward to share Jupyter notebooks via github, at least if you wish to maintain the privacy of the data.

For example, take a look at the first version of the first notebook placed in the repository. Can you see all the programming error messages at the bottom of the page?

Had the SQL command just before this worked, it would contain actual data of real people. This would be bad.

This is because the raw json file that is the notebook will include the data. This is a good thing when you’re viewing it on your computer. It’s a bad thing when you’re sharing it in a public space.

That’s fixed in a more recent version.

To achieve this, it was necessary to follow the instructions on this page, which involve

  1. Installing a python script on your computer so it can be run.
  2. Configuring git to use this script as a filter when sending stuff to github.
  3. Configuring the meta-data for the notebook to ensure that the content of data blocks would be removed before going to github.

    Suggesting that this step would need to be repeated for each notebook likely to have personal information show up in the data.

Testing that this works

My suggestion is

  1. Follow the instructions below.
  2. Modify the SQL in the notebook to make sure it generates an error (i.e. not real private data)
  3. Commit and push the error version back to github

If no error data shows up on github, then it’s working.

How to use this repository

Basic process should be

  1. Get your box set up to run Jupyter notebooks.

    It is generally a fairly simple process.

  2. Clone a copy of the github repository into the directory where your notebooks are stored – creating a directory called Indicators

    You might want to fork my repository first. This will give you your own github repository. We can then share good changes via pull requests.

  3. Create a file called config.json in the parent directory for Indicators with the following content (changed to suit your Moodle database)
      {
        "drivername": "postgresql",
        "database": "",
        "username": "",
        "host": "localhost",
        "port": "5432",
        "password": ""
      }
  4. Open up the notebook and run the cells.

Jupyter notebook indicators platform: Baby step #1 – Finding an ORM

The last post documented early explorations of Jupyter notebooks ending with a simple query of a Moodle database. This post takes the first baby steps toward some sort of indicators platform using Jupyter notebooks, Python and github. The focus here is to find some form of ORM or other form of database independent layer.

Problem: the code from the last post was specific to Postgresql. If your Moodle database runs on a different database engine, that code won’t work. The aim here is to enable some level of sharing of code/analysis/indicators, which means finding a way to keep the code independent of database specifics. This is where object relational mappers (ORMs) enter the picture. See this for an argument why this is a good idea.

Pandas and SQLAlchemy

This book offers some examples using sqlalchemy with pandas. A likely combination. This sqlalchemy cheatsheet offers some useful examples.

It appears sqlalchemy needs to be installed. Actually, it just needed updating:

conda install -c anaconda sqlalchemy

Oh dear, time wasted. Needed to restart the notebook server after that.

Process is

  1. Modify the config stuff to create an sqlalchemy engine.
  2. Read the data

Ends up with the following code

import json
import pandas as pd

# Put the config outside the Indicators directory, keep it out of git
with open('../config.json') as f:
    conf = json.load(f)
from sqlalchemy.engine.url import URL
from sqlalchemy import create_engine
engine = create_engine(URL(**conf))
df = pd.read_sql('select id,username,firstname,lastname ' +
                 'from moodle.mdl_user where id<100 ', engine)

The config.json file looks something like the following. The current plan is that this sits above this directory, as this directory and its contents will eventually end up in github

  {
    "drivername": "postgresql",
    "database": "someDbaseName",
    "username": "user",
    "host": "localhost",
    "port": "5432",
    "password": "myPassword"
  }

What’s next?

That works and seems reasonable. Some ideas for the next step

  • Figure out how to remove/handle the moodle schema that’s in the SQL above, not to mention the mdl_ prefix on the table.

    Linked to allowing the code to be run across different institutions easily.

  • Move the config code into a library for this work?
  • Set up the github repository, get this code shared and start working with @beerc on this.
  • Experiment with how the assumptions built into the Perl code I was using can be translated appropriately into this environment.

    Whether (and how) to develop the class hierarchy using sqlalchemy.

    How the Perl idioms translate into Python, sqlalchemy and pandas. Pandas has some nice features that might eliminate some practices.
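
One speculative way to handle the schema/prefix issue above: push both into config.json and build table names through a helper. The `schema` and `prefix` keys, and the `table()` helper itself, are assumptions for illustration, not part of the current code.

```python
# Hypothetical helper: map a generic table name to the institution's
# fully-qualified one. "schema" and "prefix" would sit in config.json
# alongside the connection details (both key names are made up here).
def table(name, conf):
    prefix = conf.get("prefix", "mdl_")   # Moodle's default table prefix
    schema = conf.get("schema")           # e.g. "moodle", or absent
    qualified = prefix + name
    return schema + "." + qualified if schema else qualified

conf = {"schema": "moodle", "prefix": "mdl_"}
sql = "select id,username from " + table("user", conf) + " where id<100"
# -> "select id,username from moodle.mdl_user where id<100"
```

An ORM would make this unnecessary, but a helper like this is a cheap first step while the SQL is still hand-written.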

Playing with Python and Jupyter notebooks for analytics


This is the third in a series of posts documenting “thinking” and progress around the next step of some bricolage with learning analytics and attempts to make some progress with the Indicators project.

The last post in this series revisited some work I did last year. The aim of this post is to consider and perhaps start playing with what’s next. At this stage, I think that’s Python and Jupyter notebooks, but I could be wrong.

Actually, nope. Jupyter notebooks appear to be the way to go – at least worthy of a bit more exploration and experimentation. The following explains and documents the process toward a Jupyter notebook that is actually accessing a reasonable Moodle database on postgresql – the main data source I’ll be working with. It seems possible.

Notes to self:

  • Defining indicators and connection with the literature.

    Remember to revisit and explore further the literature mentioned in section 1.4 of this report on teaching culture indicators. There is literature defining and expanding the conception of indicators.

  • How does this nascent approach fit with “learning analytics in Australia”

    This project has a number of frameworks/ways of thinking about learning analytics, especially within institutions. How does some of the early thinking driving this work fit within/contradict those? What can we learn?

    e.g. we obviously fit within cluster 2 – focus on pedagogy, curriculum, learning, but how well does what we’re doing fit within the pattern and what impact will that have on outcomes?

  • What can be borrowed/copied from the Loop tool?

    The Loop Tool is the product from an OLTC funded project. It’s open source, works with Blackboard and Moodle, and I believe uses Python.

    It includes a “data warehouse”, which I assume brings together Blackboard/Moodle data. This could provide a great platform upon which to build this work. Not only leverage work done by smart people, but also provide a cross-platform foundation.

    Only question is what limitations exist in this data because they couldn’t get direct access to the database. Appears it relies on csv and course export files. And some of the code still seems a little less than portable (hard coded paths with specific user home directories). Perhaps indicating a need to look for other cross-platform solutions.

    Beyond code re-use, some of the indicators/patterns/visualisations generated by that tool might serve as useful things to re-create.

  • How does Data Carpentry link with Software Carpentry and which is applicable to the institution?
  • Four facets of reproducibility as a tool to evaluate approaches to learning analytics.

    e.g. how well does a data warehouse informed approach meet these.

  • Link to intelligence amplification.

    A focus on helping people collaborate around the indicators, rather than simply accepting or using them.


When “learning analytics” is mentioned within an institution, it seems people are always looking for an application (usually an existing one) such as a data warehouse or tools such as Open Refine or Trifacta. That seems like a combination of Kaplan’s law of instrument and a tendency for most people to see themselves as users of digital technology. That is, to do something with digital technology there must be an application designed specifically to help them with that task, rather than writing code.

This post argues that “the three biggest positive contributions to reproducible research are iPython (Jupyter) Notebook, knitr, and Galaxy”. It positions Jupyter and knitr as fitting those with scripting (coding) experience, while Galaxy suits those who script not so much. Personally (and perhaps self-servingly), I see significant value in being able to script to provide more flexibility and broaden possibilities. Without scripting you are stuck with the model and capabilities of the chosen tool.

This leaves the question of whether to use Jupyter notebooks or R/knitr. Tony Hirst has begun thinking about comparing these, but it’s still early days. Given I have no real experience with either, it’s difficult to come to a meaningful decision, suggesting that I’ll tend toward the option that appears most familiar – Jupyter notebooks/Python.

Longer term, there is a glimmer of a hope that using Python to generate the indicators/patterns, might enable later integration of such indicators into a MAV-like tool. This would enable these indicators/patterns to be visible within the learning environment.

As it happens, the argument around Jupyter versus R might not be an argument after all. Apparently – at least at some stage – it is possible to include R code in iPython notebooks. But apparently, Knitr can also include Python code. The comments of that last post reveal a native R kernel for iPython.

Along the way I learned that Jupyter is actually an acronym of the three core languages that the Jupyter notebook approach was designed to support – JUlia, PYThon and R (it also supports 40 other programming languages).

Getting set up: Python and Jupyter

Until now I’ve been relying largely on Tony Hirst’s blog posts about his experiences for information on Jupyter notebooks. Time to check out other resources.

This appears to be the home for Jupyter notebooks and offers the following definition

The Jupyter Notebook is an open-source web application that allows you to create and share documents that contain live code, equations, visualizations and explanatory text. Uses include: data cleaning and transformation, numerical simulation, statistical modeling, machine learning and much more.

It also points to JupyterHub as a mechanism to run multiple single-user Jupyter notebook servers. This might be interesting for sharing within an institution. For later consideration.

It also offers advice on installing Jupyter. I’d played with anaconda/jupyter previously, so a quick test and I’m ready to go. Nothing to install.

Follow the instructions on running the notebook and I’m away.

Installing python from the course below.

Reproducible research and Jupyter Notebooks

Thanks again to Tony Hirst, I’m aware of the curriculum for a workshop titled “Reproducible Research using Jupyter Notebooks”. This seems a perfect resource for us to get started.

At this stage, I think I’ll turn this into a diary of my attempts to work through this curriculum (without the facilitated workshop). There are 5 modules; each of the following links to the git repo that contains the materials. Sadly, it appears that there isn’t a consistent format for each.

  • Introduction

    Reasonably accessible from the github site.

    Can find slides that provide a gentle intro to using notebooks.

  • Data and Project Organization

    Different format, more use of Jekyll and markdown. Not immediately obvious how to use. Based on a previous version, which is a bit more usable. A Google doc documents additional thinking moving beyond the previous version. It appears that the final version is incomplete.

    Some of it is focused on data coming in files. Section on how to structure notebooks.

  • Data Exploration

    Here’s the notebook with the content.

  • Automation

    Links to a brainstorming Google doc and to lesson notebooks: one, plus a second that’s currently basically empty. The first is not much better.

    References the software carpentry resource on Programming with Python

  • Publication and Sharing

    Covers a few interesting topics, has some content, but incomplete.

The documentation lesson includes mention of a gallery of notebooks, including

Other resources

Learning about reproducible research with Jupyter notebooks

Starting with this slide-deck, which is linked from the git repository for the first module in the workshop.

A jupyter notebook can be thought of as a lab/field diary.


  • front-end
    • Web application to develop notebook documents
    • Notebook document representation of the content, including I/O of computations, explanatory text, images, etc.

      These are JSON files saved with .ipynb extension.

  • back-end
    • Notebook server – communication between the kernel and the web browser
    • Kernel responsible for executing the code, different kernels support different languages.

Jupyter architecture

Espoused benefits: Jupyter notebooks for reproducible research

Documentation/literate programming. Exploration and development. Communication and collaboration. Publishing.

Working with notebooks

This slide deck assumes you have Jupyter notebooks installed.

And I’ve created my first notebook. Hello world, ahh markdown how I love you.

A notebook consists of a sequence of cells. Each cell has a type. The default types are: code, markdown, heading and “Raw NBConvert”. Not sure about the last one, but the others are fairly self-explanatory.

NBConvert is a mechanism to convert a notebook into other formats. It can also be used from within the python code to allow output to be downloaded in other formats. Explore this more.

Important: If the last line of a code cell produces output, then the output is embedded in the notebook just below the cell. If the last line is a variable, then the value of the variable is displayed, including rich output. As long as the semi-colon is not added to the last line. A semi-colon will prevent output generation.

Add a ! to the start of a cell and you have a command-line prompt.

And there are nifty shortcut keys – e.g. ESC-L turns on line numbering.

Slight problem with the workshop examples and my install. The example that I thought would be interactive isn’t. The server, however, is generating an error that I’ve just spotted:

[IPKernelApp] WARNING | Widget Javascript not detected.  It may not be installed properly. Did you enable the widgetsnbextension? If not, then run "jupyter nbextension enable --py --sys-prefix widgetsnbextension"

Mm, nice, the Jupyter notebook itself provides support for checkpoints. Simple, embedded version control? There’s also auto-complete.

Save, run the command suggested, restart and all is fixed and working.

The basics of Jupyter look fine. Doable. Need to upskill on Python, translate existing practices into Python and think about the processes.

Intro to reproducible research with Jupyter

This was meant to be the first of the “sessions” in the workshop. The neat thing this illustrates is that github will render Jupyter notebooks. Actually, thinking about the above, that’s obvious, but also limited. Notebooks = code + markdown. Github won’t render the code, but it will render the markdown, as github uses markdown.

Viewing the raw source of the notebook reveals that it’s a little more complex than that. Remember, notebooks are json. One cell = one json element (whatever it’s called in json). Most of the cells in this notebook are markdown. The markdown is rendered.
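
That structure is easy to poke at directly. Here's a minimal hand-built notebook as a sketch (real .ipynb files carry extra metadata; this is trimmed to the essentials):

```python
import json

# A minimal notebook as a Python dict -- one JSON object per cell,
# each tagged with its cell_type, just as described above.
nb = {
    "cells": [
        {"cell_type": "markdown", "metadata": {}, "source": ["# Hello\n"]},
        {"cell_type": "code", "metadata": {}, "outputs": [],
         "execution_count": None, "source": ["print('hi')\n"]},
    ],
    "metadata": {},
    "nbformat": 4,
    "nbformat_minor": 2,
}

cell_types = [cell["cell_type"] for cell in nb["cells"]]  # ['markdown', 'code']
as_json = json.dumps(nb, indent=1)  # what actually lands in the .ipynb file
```

Github renders the markdown cells of a structure like this and shows the code cells as plain source.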

Four facets of reproducibility

  1. Documentation
  2. Organisation
  3. Automation
  4. Dissemination

These aren’t a bad set of principles for promoting learning analytics. They also point to some weaknesses in standard organisational practices.

Data exploration

The module’s notebook starts with basics of Python libraries, dataframes. Includes sections on

  • assessing structure and cleanliness of data
  • data cleaning
  • tidy data
  • merging data
  • ..
  • visualisation with matplotlib and seaborn

Given the data we’re working with is largely from a database and is reasonably clean, this isn’t likely to be directly useful. The examples using dataframes could be.

Visualisation with Seaborn could also be useful.


The lesson covers

  • review of good practices – naming conventions…and basic

Not much else.

The model

An early start at grouping what I’ve derived from the above and apply to how we might work.

Jupyter notebooks are created via a web interface and so are stored in a web server’s file space. That space can have folders, links etc. On a laptop this is local, and could be managed using git.

Could try the following

  1. create new directory – Indicators
  2. create an index.ipynb

    This becomes the default notebook. Do some initial testing. It could point off to other .ipynb files.

    Initially, could use very basic python. Eventually evolve.

  3. Create a github repo and save it there.
  4. Others could make use of it and contribute
  5. Could export the notebooks as HTML etc and place online? maybe.

Getting started

Accessing postgresql from Python

Possible options

  • psycopg2.

    Only problem is that the connection details are hard-coded into the notebook. Not great security in a sharing environment.

    Need: to find a way to support local (secret) database configuration

  • doing it with Jupyter

    This explicitly mentions the config file idea and then connects it with psycopg2 and proceeds to show some good examples specific to pandas

Let’s try it. Mmm, psycopg2 not installed. This works.

conda install -c anaconda psycopg2

A bit of playing with the example and that is working. Success!!!

That’s working. A screen shot of the Jupyter notebook is shown below (data blurred out).

There remain a couple of problems with this in terms of abstraction, but I can see how it might work.
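
For the record, the config-file idea boils down to something like the following sketch. The JSON keys match the config.json used elsewhere in these posts; note that psycopg2 spells its connection keywords dbname/user rather than database/username, so they need remapping. The connect() call is left commented out since it needs a live database.

```python
import json

# In practice this would be json.load(open('../config.json')) --
# inlined here so the sketch is self-contained.
raw = '''{"drivername": "postgresql", "database": "someDbaseName",
          "username": "user", "host": "localhost",
          "port": "5432", "password": "myPassword"}'''
conf = json.loads(raw)

# Remap onto psycopg2's connection keywords ("drivername" isn't one).
kwargs = {"dbname": conf["database"], "user": conf["username"],
          "host": conf["host"], "port": conf["port"],
          "password": conf["password"]}

# import psycopg2
# conn = psycopg2.connect(**kwargs)   # requires a live Postgres database
```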

Reflecting on playing with learning analytics


This is the 2nd in 3 posts thinking about learning analytics and how we might engage with it better. The first rambled on about reproducible research and hunted for what we might do.

This post is an attempt to reflect on some work I did last year trying to design a collection of analytics to explore engagement. The aim is to remind myself what was done and try to distill what worked and what didn’t. Hopefully to inform what we might do this year.


The work last year involved working

with a group of fellow teacher educators here at USQ to explore what is happening around student engagement with our online courses

The aim was to generate a range of different engagement related analytics, show them to the teacher educators, explore what if anything was revealed about our courses and think about what that might mean for the courses, but also the exploration of analytics. The hope was that the cycle would help identify more contextually useful analytics that could actually reveal something interesting.

What was done

Version 1 of process

The aim was to generate and share analytics based on the diagram to the right (click on the diagram to expand), via a website where those involved could see the different analytics applied to their respective courses (and each other’s). This was intended to drive discussion.

A website was generated to do this. Access was limited to those involved in the project. The following links illustrate the generated analytics using a course I taught.

You may see some menus with other course codes, but there won’t be any data for other courses. The menus – at times – don’t work all that well.

  • click/grade and other basic indicators
  • time/grade – nada
  • networks & paths
    • Replies to posts network – normal Q&A forum or announcements forum or ice-breaker activity

      You can zoom in and out with trackpad. Click on network map to move it around.

      Click on the red (or any) hexagon and move it around to more clearly see the number of connections.

      Should be able to see a significant difference between the different types of forums.

    • content & sentiment analysis – nada
    • learning design – not a chance

    So only the simple ones done.

    The educators involved could navigate through the generated site and compare their course with others. This was scaffolded lightly using a blog with links to specific analytics and questions. It was never done well or in a concerted manner.

    How was it done

    At a high level, I had a perl script that would loop through an array that specified which type of analytic to perform (e.g. clickGrades, dhits, dhitsGrades) and for which course offerings and modes. The array looked a bit like this

    EDC3100 => {
      "2015_1" => {
        clickGrades => [ qw/ all mode=Online mode=Springfield mode=Toowoomba mode=Fraser_Coast / ],
        dhits => [ qw/ all mode=Online mode=Springfield mode=Toowoomba mode=Fraser_Coast staff / ],
        dhitsGrades => [ qw/ all mode=Online mode=Springfield mode=Toowoomba mode=Fraser_Coast staff / ],
      },
    },

    The script would loop through these entries and execute the following code (modified just a little for clarity/space)

    my $model = $factory->getModel( OFFERING => "${course}_$offering",
                                    ANALYTIC => $analytic );
    my $view = $factory->getView( MODEL => $model, VIEW => $analytic );
    my $string = $view->Display( SUBSET => $subset, COURSE => $course, 
                                 OFFERING => $offering, ANALYTIC => $analytic );
    writePage( OFFERING => "${course}_$offering",
               ANALYTIC => $analytic, SUBSET => $subset,
               STRING => $string );

    Analytics classes

    Underpinning the above were a collection of analytics classes that could also be called directly. For example, the following.

    my $model = QILTers::Analytics::dhits->new( OFFERING => "EDC3100_2015_1" );
    my $view  = QILTers::Analytics::dhits_View->new( MODEL => $model );
    my $string = $view->Display(
                    COURSE => "EDC3100", OFFERING => "2015_1",
                                SUBSET => "mode=Toowoomba,grade=F" );
    print $string;

    This type of MVC architecture was used largely because I had a large code-base (and related experience) from my PhD.

    In this context, the model classes encapsulated institution-specific notions. This included

    • that EDC3100_2015_1 could be used to identify a course
    • access to additional information about users (gpa, grade etc) not stored in Moodle.
    • support the SUBSET mechanism to allow focus on particular campuses, grades etc.

    The view classes were responsible for translating the gathered data into a format that could be handed over to a couple of different client libraries (plotly and cytoscape).
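
As a speculative sketch of how the SUBSET mechanism might translate from Perl into Python (dicts stand in for rows here; with pandas this would become boolean-mask filtering instead):

```python
# Parse a SUBSET string like "mode=Toowoomba,grade=F" into equality
# filters and keep only the rows that match every clause.
def apply_subset(rows, subset):
    clauses = [clause.split("=", 1) for clause in subset.split(",")]
    return [row for row in rows
            if all(row.get(col) == val for col, val in clauses)]

rows = [{"mode": "Toowoomba", "grade": "F"},
        {"mode": "Online", "grade": "C"}]
matches = apply_subset(rows, "mode=Toowoomba,grade=F")
# -> [{'mode': 'Toowoomba', 'grade': 'F'}]
```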

    Reflections and lessons learned

    Misc reflections include

    • It took time to get all this set up, hence the lack of progress.
    • The subset functionality was useful.
    • The production of the web pages was all a bit kludgy
    • Need better support for integrating the analysis into discussions with others.
    • Tightly integrating the navigation with the content was a mistake.
    • The client libraries were quite useful, but required some familiarisation.
    • The code here relies on quite outdated technologies.
    • Once the structure was in place, different analysis could be done fairly quickly (given time)
    • Exploration of different analytics or modes, still required intervention by a coder

    Next step, what are others doing in this area? What might be a better way to do this?

How we are rethinking L&T grants, awards and professional learning

The team I work with have been tasked to rethink how our institution does teaching awards, grants, and professional learning opportunities. i.e. the funding our institution provides to teaching focused staff in recognition of quality work (awards), to develop a good idea (grants), or learn something new (professional learning opportunities). Since I’m strange, we’re using a slightly different approach (at least based on reactions so far) to what others are used to. The following is an attempt to explain the approach and what passes for the thinking behind it.

The process goes something like the following

  1. Identify what’s been done before, here and elsewhere with [grants | awards | PLOs]?
  2. Identify and analyse the features and attributes involved in [grants | awards | PLOs]
  3. Identify a collection of attributes that might work for the institution.
  4. Get some consensus around a particular collection of attributes.
  5. Implement it.

This approach isn’t a thousand miles removed from a standard product design process. Though in this context, there’s already an expectation of the broad type of product (we will have teaching awards, grants, and professional learning opportunities); the question is what shape (features and attributes) these specific products will have.


  • While the process steps are sequentially numbered, this will never be a truly sequential process.
  • This is not meant to be a process that our group does by ourselves. At the very least, we will be performing this process in as open a way as possible. However, we will also be actively engaging with others throughout.

Why are we doing this?

In short, to avoid following the crowd and to actually try and have an impact.

For example, until recently L&T awards at Australian universities have almost all tended to focus on aligning with national teaching awards. There now appears to be a small but growing trend toward a more explicit focus on aligning institutional L&T awards with institutional strategic priorities. I’m not convinced that following what everyone else is doing is the right thing. Nor am I convinced about the benefits of strategic alignment as the only measure of effectiveness.

There have also long been complaints about the impact of institutional practices around grants, awards, and professional learning. It would be nice to be able to demonstrate some impact.

How does it happen

So how do we do it? Given the current context, our existing processes are extremely responsive (i.e. we’re making it up as we go along). Hopefully we can improve upon that, especially given that we’ll be iterating around this cycle many times (hopefully) over the years.

#1: What’s been done before

The two main methods here are a sector scan and literature review. This has already largely been done for teaching awards. It’s underway with teaching grants and yet to commence with professional learning.

I’m hoping we’ll be able to release some (all?) of these under an open license.

The aim is to get a good grounding in what’s been done before and what was learned.

#2: Analyse features and attributes

The aim here is to try to break apart the “product” (awards, grants or professional learning opportunities) into its various features, and then to identify all of the possible attributes/values of those features. The table below is an illustration of this: a very early attempt at doing this with teaching awards.

Feature Attributes
Who can apply?
  • Individuals, teams, programs, projects, services.
  • Full-time staff, part-time staff, casual staff, teaching staff, professional staff, students
Who nominates?
  • Self-nomination
  • Students
  • Peers
  • (semi-)automatically via systems & processes evaluating criteria.
  • L&T experts.
  • Institutional leadership.
Who writes the application?
Required evidence?
  • None
  • SELT data
  • Other institutional data
  • Peer review
  • Testimonials
Award categories
  • Various category of staff: new, early career, mid-career, casual, full-time, professional….
  • Institutional organisational unit: faculty, school, discipline.
  • Specific categories aligned with current institutional strategic priorities
  • Levels of difficulty: citation, award
What criteria?
  • # of criteria: ranging from none up to lots
  • Different criteria for each category of grant, especially those with strategic priorities.
  • Same criteria for all grants.
Who judges?
  • internal/external people
  • Institutional leadership
  • teaching experts
  • different judges per organisational unit, or one set for whole institution
  • Popular vote: students and/or staff
What’s the reward?
  • purely financial
  • financial + some in-kind support
  • automatic increment promotion
What can be done with the reward?
  • Ranging from: no constraints how money spent, up to very specific constraints on how it can be spent.
Dissemination? ???

This type of table is informed by the literature and what’s gone before. It’s also informed by trying to think outside the box. Identifying methods that can help scaffold this process, especially broadening participation would be useful.

#3: Identify the collection of attributes that works

The next step is to figure out which collection of features/attributes would best fit the institutional need (raising big questions about whether an institution can have needs and how you identify what those are). This has to be done by a combination of

  1. Understanding the organisational context.
  2. Identifying the aims/needs of the organisation.
  3. Drawing on theory and assumptions (are those different) to identify which attributes would work in the given context and achieve the aims.

It should also involve a lot of discussion with a broad array of people, getting lots of diversity involved.

Hopefully, this stage ends with a reasonable model of how it might work. Perhaps with some implementation details.

#4: Get some consensus

At this stage, there is a need to get institutional buy-in. Agreement that this is a useful way forward and that it should be done.

Exactly what form that will take will depend on the “product”. Teaching awards – at this institution – connect into a formal policy framework. As such, there is a need to get formal consensus around a specific design. Professional learning opportunities largely don’t have the same formal connection, hence a more emergent approach can be taken.

#5: Implement it (and evolve it)

The last step is to get it implemented, evaluate it, evolve it and repeat.

Communication and professional learning for teaching at Universities

One of the significant challenges I’ll be facing in the new position are around how to help the institution effectively engage teaching academics in relevant communication/professional learning that will help them. One of the immediate challenges from this task is identifying examples of good practice that have attempted to unify communication and professional learning for teaching at Universities into something user-centered and contemporary.

One example from Sydney University is used to illustrate what this might look like, are there others?

There is some good work

There are lots of good examples of individual projects. For example, way back in 2012 (5 years ago!!) @sthcrft kicked off the idea of Coffee Courses at UNE. That idea has been adopted and run with at ANU with apparently great success. So much so that an upcoming Coffee Course on Open Educational Practice is being run by some colleagues of mine from USQ.

What about integration at the institutional level?

But the ANU coffee courses appear to be run (quite effectively) by ANU Online. I wonder how well it integrates/connects with the rest of ANU’s activity around helping teaching academics improve their teaching. If I visit the ANU Centre for Higher Education, Learning and Teaching, I can’t immediately see any mention of coffee courses. If I visit ANU Online, I also can’t immediately see mention of coffee courses, but I can see evidence of a raft of other professional learning opportunities. Just as at many institutions (including mine), there doesn’t seem to be any integration of communication and professional learning for teaching.


  • Though ANU is my alma mater, I’ve no understanding of the organisational structure or other supports that may be provided to ANU teaching staff to help them navigate. I could have missed something simple.
  • The ANU online environment seems to be quite a few steps ahead of my own institution in terms of supporting professional learning for teaching.

Teaching@Sydney: A better example?

Earlier this week I saw the following tweet (once again proving even a cursory glance at a poorly curated twitter feed can provide significant benefit)

This led me to discover and explore Teaching@Sydney, which is described as

Teaching@Sydney is a blog, website, newsletter and place for staff and students to contribute and share anything related to teaching and learning at Sydney.

I’ve not talked with anyone behind the design of Teaching@Sydney, and there may well be a range of limitations or issues that only become apparent when using it for a period of time. However, there are numerous aspects of this approach that I think are worthy of copying and adapting. This is not to say it is without its flaws.

An integrated approach that supports separation

It takes an institutional focus (“branded” as Teaching@Sydney) but brings together all (or at least many) of the parts of the institution and provides space for them.

Roll your mouse over the Faculties heading and then roll over the different faculty names and you see a list of recent posts relevant to those faculties. It allows individual groups to focus on what’s immediately relevant, but at the same time allows everyone to see what other people in other faculties are doing, breaking down silos.

The site also provides links to what appear to be most of the other relevant resources around learning and teaching.

It’s social

From the “about”

blog, website, newsletter and place for staff and students to contribute and share anything related to teaching and learning at Sydney

It has a defined process for writing and given the number of posts it seems to be working.

Though, on my quick skim, I didn’t see much evidence of comments on posts, so I’m wondering whether conversations are generated. Perhaps that happens face-to-face.

The content is openly available

e.g. I came across this site via this tweet, which comes from an associated Twitter account (with 169 followers). Since it was open, I – as someone outside Sydney Uni – have been able to go in, look around and gain inspiration from the work.

I’m also able to see quite detailed information about the teaching environment at Sydney Uni. e.g. the post titled “Knowing your students – Deep insight into your cohort”, which provides some discussion of features specific to the institution.

Technology used to improve user experience

Visit the registrations section and you’ll see a list of workshops. Each one has a register button. Click on it and you can register. The same process at my institution is a bit more involved.

What’s missing?

There’s probably quite a list of features that are missing or could be improved, many of them context specific. There aren’t that many immediately apparent to me.


An open license

Open is a priority at my institution. Hence we’d probably lean toward applying an open license on posts.

Social curation of posts

I wonder whether any thought has been given to offering a variety of curated lists of resources for specific purposes. In particular, building into the approach active support for people to create, share and improve specific curated lists. The Sydney site provides various categories for different purposes that appear to be created automatically via WordPress categories. No categorisation scheme is going to suit every purpose. Enabling social curation might increase variety and also enable people to order posts in ways other than date of publication.
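To sketch the idea, social curation might amount to letting curators supply their own ordering over a category’s posts. This is purely a hypothetical illustration (Teaching@Sydney runs on WordPress, not this; the data shape and function names here are invented):

```python
def curated_list(posts, category, key=None, reverse=True):
    """Select the posts carrying a category and order them.

    The defaults mimic a date-ordered category archive. A curator could
    instead pass a different key (e.g. a hand-maintained ranking) to
    order posts in ways other than date of publication.
    """
    if key is None:
        key = lambda p: p["date"]  # default: newest first
    selected = [p for p in posts if category in p["categories"]]
    return sorted(selected, key=key, reverse=reverse)

# Invented sample posts
posts = [
    {"title": "Peer review of teaching", "categories": ["assessment"], "date": "2017-02-01"},
    {"title": "Knowing your students", "categories": ["analytics", "assessment"], "date": "2017-03-01"},
]
print([p["title"] for p in curated_list(posts, "assessment")])
```

The design choice being illustrated: the automatic category archive is just one possible `key`, and opening that parameter up to people is what turns categorisation into curation.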

Integration with professional learning events

An early approach we’re working on is that every formal, face-to-face professional learning event will have a specific URL where all associated resources can be found. For example, this page around the application process for AAUT awards. Currently we’re doing this using my blog; it would make more sense to do this on an institutional site (if there were one).

Explore situated reuse

At the moment, it appears that the content on Teaching@Sydney is only visible on that site. I’m wondering if there’s value in being able to reuse relevant resources within specific teaching contexts. e.g. having information about the pedagogical use of a particular LMS feature visible from within that LMS feature; or, perhaps a mobile app that will provide technical and pedagogical advice on how to use the technology in a given teaching space.

Explore federated approaches

I’m also keen to explore how a more federated approach to making these types of resources available might work.

Thinking about (more) reproducible research and learning analytics

There are moves afoot to revisit some of the earlier research arising from the Indicators project and, hopefully, to move beyond it. To do that well we need to give some thought to updating the methods we use to analyse, share and report what we’re doing, in particular because we’re now a cross-institutional project. The following captures some early exploration of this problem.

This has actually become the first of 2 or 3 blog posts, as this one was getting too large. The sequence of posts looks likely to be

  1. (this post) Think a bit more about the requirements: what do we want to share? with whom?
  2. Reflecting on some recent past experience.
  3. Looking at some possibilities

After that, hopefully, we’ll actually do something.


So what would we like to be able to do? In a perfect world we might be aiming for reproducible research. But that’s beyond the scope of what we need right now. At the very least we want to develop a research method/environment that helps make the research that we’re doing more reproducible.

What will be more reproducible?

The Reproducibility spectrum provided by Peng (2011) appears to include the following components

  1. data;

    The data from institutional systems that is input into the analysis.

  2. code;

    What actually does that analysis.

  3. results (indicators/patterns); and,

    The final correlations, patterns etc. produced by the analysis. Back in the day we used the term pattern.

  4. interpretations/hypotheses/conclusions.

    Upon the results we’ll be layering our own interpretations/conclusions.
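To make the four components concrete, they might map onto a project layout along the following lines. This is only a sketch of one possible convention, not something the project has settled on:

```python
# Hypothetical layout: one folder per component of the spectrum above.
COMPONENTS = {
    "data/": "raw extracts from institutional systems (kept private to each institution)",
    "code/": "the scripts that actually do the analysis, under version control",
    "results/": "the indicators/patterns the code produces, anonymised before sharing",
    "interpretations/": "the hypotheses/conclusions layered on top of the results",
}

for folder, purpose in COMPONENTS.items():
    print(f"{folder:<18}# {purpose}")
```

Separating the folders like this would let each component have its own sharing rules (e.g. code shared openly, data never).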

These components could be thought of as circular, especially if learner and/or teacher action in response to interpretations is added. Learners and teachers may take action in response to interpretations, which in turn generates more data to be analysed by code… Researcher action also plays a part, through going back to re-analyse the data with a different lens, or designing experiments to generate new data. There are echoes of the Siemens (2013, p. 13) data loop here, but with a slightly different focus.

Siemens (2013) Learning Analytics Model

Making this cycle more reproducible would be useful.

For example, the following graph has the following components

  1. data;

    Data from a Blackboard LMS over numerous years showing the number of times students had clicked on discussion forums. The data was limited to courses where staff participation (measured by clicks on the course site) was high. Combined with student final result (HD, D, C, P, F).

  2. code;

    Either some messy PHP code written by @beerc, some better structured but still messy Perl code written by me, or a combination of both. Probably with the addition of Excel spreadsheets for visualisation.

  3. results;

    The graph below, indicating that the average # of clicks on a discussion forum tends to increase the better a student’s final grade is.

  4. interpretations/hypotheses/conclusions.

    One interpretation is that students get better grades by being more diligent/expending more effort, hence they click more. Of course, this is a hypothesis.

Average student hits on course site/discussion forum for high staff participation courses
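The original analysis was done with the PHP/Perl code mentioned above. Purely as a hypothetical sketch of the "code" component, the grouping behind such a graph might look like this (grades, function name and numbers invented for illustration):

```python
from collections import defaultdict
from statistics import mean

GRADES = ["F", "P", "C", "D", "HD"]  # final results, worst to best

def average_clicks_by_grade(records):
    """Average per-student forum clicks for each final grade.

    records: iterable of (grade, clicks) pairs, one per student.
    Returns {grade: mean clicks}, ordered worst grade to best.
    """
    by_grade = defaultdict(list)
    for grade, clicks in records:
        by_grade[grade].append(clicks)
    return {g: mean(by_grade[g]) for g in GRADES if by_grade[g]}

# Invented numbers standing in for the Blackboard extract
sample = [("HD", 120), ("HD", 90), ("D", 70), ("C", 45), ("P", 30), ("F", 10)]
print(average_clicks_by_grade(sample))
```

Making this sort of script shareable (minus the raw data) is the kind of reproducibility being aimed for.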

To whom?

Indicators architecture

Way back in 2009, the presentation for the first indicators project paper included the following image to the right as an early explanation of what we were doing.

This image can be mapped somewhat onto the components of the ‘what’ from the previous section

  • data = the institutional databases and the LMS independent data.

    At this point in time, I’m guessing we’ll be working with our institutional data. The LMS independent data idea will have to wait.

    That said, the recently announced Project Inspire from Moodle HQ might provide some assistance here.

  • code = typically the arrows between components.
  • results = useful data/information
  • interpretations = useful data/information

The T&L practice and Research ‘clouds’ in the above image point to the Who question

  • T&L practice
    1. Teaching staff – interested in reproducing the research on their course, students, learning designs.
    2. Teaching support staff/leadership – interested in reproducing the research on groups of courses.

    Largely interested in working with the results/interpretations.

  • Research

    Could divide research into various groups based on research interests, methods etc.

    Would potentially be interested in accessing the data, code and results/interpretations.

The useful data/information captures the results and interpretations aspects of the ‘what’ from the previous section.

Potentially, we’d like aspects of this work to be more reproducible (/accessible) for the following broad groups

  1. researchers within the project;

    Given we’re at different institutions, we wouldn’t be sharing data, at least not until a lot more necessary work is done.

    However, we would like to share code, results (suitably anonymised), and interpretations.

    The sharing here also includes co-construction/modification of these artefacts, suggesting a need for version control and collaboration.

  2. researchers outside the project;

    If we take a more open approach, probably the same sharing as with ourselves. However, early on some of us might like to share more final conclusions of what we do.

    Sharing code tends to imply we’re using the same technology. We might be able to agree to this within the project, but outside is never going to happen. But we should perhaps allow for the possibility.

  3. teaching academics
  4. misc. support groups

What about the learner?

I’m very aware that the above discussion (and much of what I’ve written previously) suffers from the learner being absent. At some level, this is for pragmatic reasons, linked to the problem that the ‘Placing some limits’ section below tries to deal with. That said, the above would be enhanced by broadening consideration to the student, something which this project – The Learning Analytics Report Card – appears to be giving a lot of thought to.

I came to that project from this talk by Amy Collier. A talk which offers Lesson 2: Solutionism decomplexifies too much. Much of what I do likely comes across as solutionism (given my closeness to the subject, I’m not a reliable judge, what do you think?). However, my conception of this post and the purpose of this project is to start a process that results in methods, tools and thinking that allow more people to be engaged with the work we do.

The title of the talk – It should be necessary to start: Critical digital pedagogy in troubled political times – arises from a book We Make the Road by Walking and Collier makes this connection

In the early stages of the book, as Freire and Horton are discussing the spoken format of the book, Freire remarks that they will make the road by walking, meaning that the book will emerge from their conversations, and he adds what seems like an offhand remark that “in order to start, it should be necessary to start.”

This quote connects with my views on this work in two ways. First, this is just a start. Second, and perhaps more importantly, the rationale behind being able to more easily share how and what we do is to make it easier for us and others to “make the road by walking”.

Placing some limits

Trying to support all of the above would be bigger than Ben Hur. Picking up on the idea that this is just a start and that we’ll make the road by walking, the first step is to focus on a subset. The subset we’ll start with – I think – should be

  1. Practices that help the researchers at two different institutions share code, results and interpretations.

    In theory, this could eventually expand beyond our little project and be used by others. But that will/may come.

  2. Practices that help the researchers share (appropriately anonymous) results and interpretation with others.

    Initially, this is liable to be limited to people within our respective institutions, but may extend more broadly.

    Actually, I’m thinking there’s value, institutionally and from a research perspective, in setting up some sort of mechanism by which we can engage more people in asking questions and drawing conclusions about what is found: a platform that not only helps analysis, but promotes sharing and collaboration around interpretation.

So maybe we’ll start there.

Next post should reflect back on what was done last year.


Peng, R. D. (2011). Reproducible research in computational science. Science, 334(6060), 1226–1227.

Siemens, G. (2013). Learning Analytics: The Emergence of a Discipline. American Behavioral Scientist, (August). doi:10.1177/0002764213498851

The teleological reason why ICTs limit choice for university learners and learning

Jones, D., & Muldoon, N. (2007). The teleological reason why ICTs limit choice for university learners and learning. In R. J. Atkinson, C. McBeath, S. K. A. Soong, & C. Cheers (Eds.), ICT: Providing choices for learners and learning. Proceedings ASCILITE Singapore 2007 (pp. 450–459). Singapore.

This version is slightly modified. The descriptions in the tables have been refined a touch and the video is now embedded


The application of information and communication technologies (ICTs) to support and enhance learning and teaching (e-learning) provides the potential to significantly increase the flexibility and choice for university learners and learning. The evidence, however, seems to indicate that these advantages are not evident in the majority of e-learning practice. This paper argues that the teleological design process which underpins almost all e-learning within higher education significantly limits the flexibility and choice ICTs can provide. The contribution of this paper is to illustrate how organisational implementation of e-learning has become imprisoned by a dominant and unquestioned epistemological foundation that is limiting understanding. It seeks to improve the understanding that informs e-learning implementation, in order to increase the level of flexibility and choice provided by the institutional implementation of e-learning for learners and learning.

Keywords: e-learning, learning management system, teleological, ateleological


It has been suggested that the application of information and communication technologies (ICTs) to support and enhance learning and teaching (e-learning) is a major force for change in higher education institutions, which will potentially have a profound effect on the structure of higher education (Green & Hayward, 1997). The ASCILITE 2007 call for papers suggests that “informed use of ICT by institutions and their teachers supports flexibility and choice in what is to be learned, how it is learned, when it is learned and how it will be assessed”. There appears to be evidence in the literature, however, that the adoption of e-learning is limited in terms of level of adoption, diversity and the level of flexibility and choice it provides to the majority of students.

The almost universal approach to the adoption of e-learning at universities has been the implementation of Learning Management Systems (LMS) such as Blackboard, WebCT, Moodle or Sakai. Salmon (2005) suggests that almost every university, if it has not already adopted an LMS, is planning to do so. Indeed, the speed with which the LMS strategy has spread through universities is surprising (West, Waddoups, & Graham, 2006). Even more surprising is the almost universal adoption within the Australian higher education sector of just two commercial LMSs, which are now owned by the same company. Interestingly, this sector has traditionally aimed for diversity and innovation (Coates, James, & Baldwin, 2005). Conversely, the mindset in recent times has focused on the adoption of the one-size-fits-all LMS (Feldstein, 2006).

While the majority of perceived drivers are arguably contestable, the need for an LMS remains entrenched in the university sector (Wise & Quealy, 2006). It is important to note, however, that even with the universal implementation of the LMSs, the level of adoption of those systems within many institutions has been limited. Vodanovich and Piotrowski (2005) for example report that of the 74% of faculty surveyed as being positive to use the Internet for education, 70% viewed it as effective but only 47% actually used it for education. Other best practice implementations, recommended by LMS vendors, report no more than 55% staff adoption rates (Sausner, 2005). Most universities are struggling to engage a significant percentage of students and staff in e-learning (Salmon, 2005). Drawing from the experience of Central Queensland University (CQU), only 56.7% of courses offered during the second half of 2006 had course websites (Gonch, 2007), even after almost six years of adopting centralised learning management systems. The limitations inherent in standardised products like the LMS (Black et al, 2007), coupled with the less than encouraging views amongst the academic community about the value of e-learning, explain some of the impediments to widespread institutional adoption. As Allen and Seaman (2005) suggest, only a small minority of academic leaders agree that their faculty accept the value and legitimacy of online education.

The growth of e-learning has evidently been incremental but it has not fundamentally challenged the face-to-face classroom (OECD, 2005, p. 68). Amongst those who have made use of LMS, most have adopted an approach where existing pedagogy is retained and simply transferred to the new medium, the LMS (Salmon, 2005). Evidence suggests that universities are primarily using the LMS for administrative purposes with only a limited impact on pedagogy (OECD, 2005). Most notably, a range of research has found that these systems are used predominantly to transmit course documents to students (Dutton, Cheong, & Park, 2004; Malikowski, Thompson, & Theis, 2006; Morgan, 2003). Data at CQU support these findings, i.e. in the second half of 2006 courses with course websites contained, on average, 23.7 course documents, with a maximum of 165 (Gonch, 2007). Benson and Palaskas (2006) report a similar observation, stating that the majority of usage for LMS involves fairly unsophisticated application of the available tools. Such practices serve to validate other commentators’ view that to date the outcomes for the adoption of LMSs and e-learning have not quite measured up to the hype (Reeves, Herrington & Oliver, 2004; Twigg, 2001; Zemsky & Massey, 2005 cited in Wise & Quealy, 2006).

Cradler (2003), however, points out that whether or not e-learning is an effective intervention and resource depends on how it is used and the context in which it is used. This paper argues that many of the limitations of university e-learning implementations are due to the adoption of a design process for the institutional implementation of e-learning that is arguably unsuitable for the context of higher education and especially for the implementation of e-learning. By understanding the characteristics and implications of this dominant design process, this paper seeks to generate insight which can help improve the use of e-learning and increase the level of flexibility and choice available to university learners and learning.

The paper starts with a characterisation of the dominant design process (i.e. teleological design) universities commonly use to implement e-learning. This is done by briefly describing the nine attributes of a teleological design process developed by Introna (1996). The paper then offers brief descriptions and examples of how each of these nine attributes of teleological design work restricts and limits flexibility and choice in e-learning. The paper draws on the authors’ experiences and observations within a single organisation, though it is believed that these experiences will resonate with others, to illustrate some of the limitations of teleological design and their subsequent ramifications. It concludes with some discussion of the implications and suggestions for future work.

Teleological design

Many, if not most, universities follow, or at least profess to follow, a purpose driven approach to setting strategic directions (McConachie, Danaher, Luck, & Jones, 2005). Purpose driven organisational change is concerned with setting and achieving goals or objectives. In such a teleological process or organisation, the essence of success is in the setting and achievement of goals (Introna, 1996). Strategic goals, operational plans, working parties, benchmarking and quality assurance are all common examples of concepts and processes from a teleological design perspective.

In attempting to identify the shortcomings of the teleological approach to design and suggest an alternative, Introna (1996) identifies nine attributes of a design process. These attributes are summarised in Table 1. In order to provide a more concrete example of these attributes Table 1 also draws on the herding cats metaphor which is often used to characterise change management processes in universities (cf. Butler, 1997; Hort, 1997) and also used as the basis for a popular EDS commercial.

How teleological design limits flexibility and choice

The underlying premise of this paper is that a teleological design process is inappropriate for the problem of encouraging widespread, flexible and diverse adoption of e-learning amongst academic staff within a university. This mismatch directly limits the flexibility and choice that e-learning can provide university learners and learning. In the following sections the nine attributes of a teleological design process described in Table 1 are examined in more detail. A brief description of each attribute is given and an example from a single institution is used to demonstrate how the attribute can limit flexibility and choice. While it may be argued that this approach may simply demonstrate the mistakes made at an individual institution, the literature suggests similar challenges at other institutions (e.g. see Siemens, 2006).

Table 1: Attributes of a teleological design process (adapted from Introna, 1996)

Attribute | Teleological | Herding cats
Ultimate purpose | The identified goal or purpose | Getting the cats to town
Intermediate goals | Efficiency and effectiveness | Minimise cost and maximise condition of cats when they arrive
Design focus | Achieving the end result | How do we get the cats to town? (Hence the heroism of the herders in the video above.)
Designers | Explicit designer | Cat herders and their support crew
Design scope | Individuals concentrate on their part of the problem | The various herders and support crew focus on doing their specific task
Design process | Problem solving | At each step of the journey, the aim is to figure out how to get to the identified destination in the most efficient way and to solve any problems standing in the way
Design problems | Complexity and conflict | The cats don’t want to go to town. There might be different perceptions amongst cat herders and support staff around how to get there. Conflict arises
Management | Centralised | There is a hierarchy amongst the cat herders, with a trail boss in charge who has the final say. The cats don’t have a say
Control | Master plan | The trail boss, perhaps in consultation with others, has an established plan. It may even be explicit. The plan is referred to

Ultimate purpose

Teleological design is based on the idea of modernism where human rationality and methods of inquiry can achieve their ultimate purpose of discovering and identifying universal truths (Baskerville, Travis, & Truex, 1992). The very definition of a teleological design process is to set and achieve objectives, to be purpose driven (Introna, 1996). When an institution engages in selecting an LMS, the purpose is typically set by a small group, usually organisational leaders, who draw on expert knowledge to perform a diagnosis of the current situation in order to identify some ideal future state and how to get there. Such approaches often involve formal working parties, identification of user types, the development of evaluation rubrics and other rational tools and devices.

This purpose driven approach can only work if the system’s behaviour is stable and predictable (Introna, 1996). If the context of the problem to be solved is neither stable nor predictable, then the most appropriate purpose will likely change along with changes in the context. Universities, however, are currently in an environment of intense change (McNaught, 2003). The accelerating tempo of digital technology poses great challenges to institutions, including disruptions to conventional planning processes (Duderstadt, Atkins, et al., 2002). Higher education’s characteristic continuing change, combined with the diverse nature of its students and the range of courses offered, compounds the complexity inherent in higher education (Jones & O’Shea, 2004).

The teleological approach to the analysis and selection of a solution to e-learning is a representation of the dominant, traditional approach to information systems development (ISD). Such an approach relies on a long period of stable information systems maintenance to recoup the costs of the upfront analysis, design and implementation phases (Truex, Baskerville, & Klein, 1999). This approach to ISD leads to stable systems drag, a situation where the information system actually inhibits the organisation’s ability to adapt (Truex et al., 1999). Drawing once again from the experience at CQU, in the mid-1990s CQU had significant organisational structures, processes, expertise and infrastructure set up to support print-based distance education. All of this created inertia which slowed down CQU’s capability to adapt to e-learning. A similar inertia can be observed with the widespread adoption of learning management systems slowing down the likely adoption of Web 2.0, Personal Learning Environments (PLEs) and other technologies.

Intermediate goals

Once the ultimate purpose is chosen, the intermediate goal of a teleological design process is to achieve that purpose with efficiency and effectiveness. By definition, any activity or idea that does not move the organisation closer to achieving its stated purpose is seen as inefficient. The best return on investment on a learning management system comes when the cost is diluted to favourable levels, i.e. when nearly all courses and students are served by it (Warger, 2003). The importance of achieving the stated purpose becomes enshrined in institutional policies and practices which constrain activity within accepted bounds. Individual educators embracing more innovative uses of e-learning face constraints put in place through institutional structures and policies, as well as copyright and intellectual property rights (Dutton & Loader, 2002).

For example, with the 2001 implementation of Peoplesoft at CQU, the task of generating a list of all students in a course became a 20 minute, 26 step process requiring the use of three different applications on a computer running a specific version of Windows located on the CQU network. A significantly simpler web-based alternative was developed and made available to staff, and was widely adopted. Since this system was not implemented as part of Peoplesoft, nor developed by the organisational unit responsible for Peoplesoft, it was seen as inefficient, i.e. duplicating functionality (Jones, Behrens, Jamieson, & Tansley, 2004).

The resulting restrictions on experimentation, or on activities which are not directly seen to achieve the stated purpose in an efficient manner, limit the capability of the system to learn and to expand its scope of actions (Introna, 1996). The focus on efficiency leads to systems that users perceive to be not sufficiently dynamic and often completely inappropriate for their needs.

Design focus

The focus of teleological design becomes how to achieve the stated ends, how to reduce the distance between the current state and the stated purpose. Such a system can fall into an on-going process of “problem solving” and does not engage in systemic thinking and reflection (Introna, 1996). Pondering whether or not the continual march towards the stated end continues to be sensible is considered to be inefficient and not contributing to the stated end.

Some examples of this limitation from current institutional e-learning practice associated with LMSs include:

  • Job positions and required skills;
    Position descriptions are often written and people selected based on their experience with the specific LMS employed at the institution. It is argued that while the value of skills with the existing system is important, the knowledge is confined to a specific system and can limit considerations of other approaches, which may be more coherent and practical.
  • Focus on workarounds to problems; and,
    The version of Blackboard (6.3) used at CQU has a grade book which performs poorly for large classes. After much problem solving, the identified solution is to develop workarounds. In this case, splitting students into smaller, artificial groups to work within the limitations of the software. This workaround is deemed acceptable even though it creates additional workload for academic staff. The question of whether or not an alternative grade book is possible cannot even be considered or is simply rejected.
  • Influencing long-term strategy.
    In 2002, for various reasons, it was decided that WebCT was no longer a viable solution for e-learning at CQU (Danaher, Luck, & McConachie, 2005; Sturgess & Nouwens, 2004). The name given to the working group set up to decide on the future direction of CQU was the “Learning Management Systems Working Party”. The idea that e-learning equals learning management system had become unquestioned at some levels within the organisation. Within these levels of the organisation there was extremely limited awareness or recognition that there were alternative approaches.

These assumptions are intrinsically teleological, subscribing to the belief that there is a truly right way to do things or achieve goals and thus the focus is on discovering that right way and converge to it (Introna, 1996).

Design scope

Large-scale teleological design, such as the evaluation, selection and implementation of a learning management system within a university, is an incredibly complex process. The traditional solution to this complexity is logical decomposition, i.e. the recursive reduction of the large and complex problem into smaller and more manageable problems that can be solved separately. Logical decomposition artificially separates groups of workers into formally segmented elements of a linear and rationalist process (Jones, Luck, McConachie, & Danaher, 2005). Such decomposition is problematic because the division, and subsequent isolation, of the resulting organisational components do not reflect the rich interdependencies of organisational reality (Truex, Baskerville, & Travis, 2000). This decomposition contributes to the tendency of a teleological design process to lose sight of the whole as each decomposed entity focuses on its much smaller design scope.

For example, in late 2006 a faculty (one entity) at CQU decided to award a collection of supplementary exams, to be sat at a much later date, and that the Blackboard sites for affected courses should be retained until after that date to assist students’ preparations for the exams. When students began to study for the supplementary exams, reports trickled in about students having problems accessing the course sites. The Blackboard user support team (another entity) could see no reason within Blackboard why the students could not access the courses. The Blackboard technical support team (another entity) initially could not identify the problem. Indeed, a significant proportion of the students sitting supplementary exams could successfully log in. Eventually it was identified that the problem being experienced by about 40 students arose because student administration (another entity) had made a decision, many years ago, that once a student is no longer enrolled in courses, even if they have an outstanding supplementary exam, that student becomes “inactive”. This changes an entry in the student enrolments database, which in turn changes the CQU student authentication and authorisation system which students must use to access Blackboard courses.

The division of the entities and the resulting emphasis on their part of the puzzle makes communication difficult, contributing to the organisation losing sight of the whole. Increasing use of enterprise technologies is creating highly interdependent relationships between organisational entities and, consequently, the traditional stove-piped organisational structures are inadequate to manage the information-based institution (Hawkins, 2006). The vertical structures created by logical decomposition prevent broader conversations which are important for the quality of a university and its learning. This is particularly true if one subscribes to Laurillard’s (2002) view that the definition of a university is the quality of its academic conversations and not the technologies that service them.

Design process

The design process adopted during teleological design is that of problem solving, i.e. the removal of problems to ensure the organisation is moving closer to the stated ultimate purpose. The traditional problem solving process separates the analysis of needs from the actual act of intervention. It assumes that the designers can fully analyse the situation, determine appropriate goals and then manipulate the system to achieve those goals. It places emphasis on rationality, linearity and clarity of purpose.

However, information systems development (and other interventions in human organisations) is not a rational, purposive or goal-driven process; it is instead subject to human whims, talents and the personal goals of those involved (Truex et al., 2000). Technology is not, of itself, liberating or empowering but serves the goals of those who guide its design and use (Lian, 2000). The tools themselves are never value-neutral but are replete with values and potentialities which may cause unexpected responses (Westera, 2004). Learning management systems can potentially affect teaching and learning in unanticipated ways (Coates et al., 2005). Social systems cannot be “designed” in the same way as technical systems; at best they can be indirectly influenced (Introna, 1996). The emphasis on rationality, linearity and clarity of purpose embodied within planned change models means that they are unlikely to be successful within higher education (Kezar, 2001).

Decision making about the implementation of information systems is not a techno-rational process with many decision makers relying on intuitions or instincts and simple heuristics to simplify decision making (Jamieson & Hyland, 2006). People are not rational in that their decision-making is influenced by a range of cognitive and other biases. For example, the sub-group of CQU’s LMS Working Party which was responsible for performing the technical evaluation of potential learning management systems recommended Blackboard, and came to the following conclusion:

…[The Working Party] strongly feels that the Blackboard product has the best overall technical fit and provides the best opportunity available to meet our tactical needs while minimising support problems and costs (Central Queensland University, 2003).

This was in spite of the observation that a locally grown “system” had been working on existing CQU technical infrastructure for at least six years with significant success (Danaher et al., 2005). Blackboard on the other hand had never been used at CQU. Referring to Ausband, Introna (1996, p. 34) contends that myth “is an active force for ordering reality”, which could explain the conflict of the above assertion with the implementation outcomes. In 2004, another working party was set up to report on problems being experienced with the implementation of Blackboard at CQU. The report of this working party concluded that Blackboard turned out to be a less reliable product than expected, became less stable and had a range of persistent problems to which CQU, as an organisation, was unable to respond (Central Queensland University, 2004). Eventually, CQU expended significant funds in order to purchase a new server platform, from a different vendor than previously used at CQU, to manage the increasing requirements inherent in running Blackboard.


In teleological design the act of designing a solution is typically separated from the actual use of the solution. Design is typically the responsibility of a small group of individuals selected for their understanding, seniority and ability to apply rational analysis to the problem and move the organisation closer to the stated ultimate purpose. Problems associated with this include the limited diversity represented by a small group of designers and the potential for bounded rationality to limit outcomes.

Even when a concerted effort is made to include broad representation from different organisational entities, it is typically those who have been successful within the current systems and approaches who are selected. For example, academic staff participating in the evaluation of a new LMS are commonly selected from the academic power users of the current LMS. Consequently, the successful experiences of the past can play a role in limiting future possibilities.

In the late 1990s when the CQU Online project was tasked with identifying what CQU was doing in terms of online learning and with identifying where it should head, it was a teleological design process. Of the 35 people involved in the formulation of the report, no more than four had any significant experience of online delivery; only 20% of the participants were teaching regularly; there were no students involved; and more than 60% of the participants were senior management or technical support staff (Jones et al., 2005). All could be considered successful within the current organisational practice. The level of understanding and rationality this group could bring to the question of the future of online learning was limited, especially given the relative novelty of online learning in the late 1990s. Incidentally, evidence from the literature suggests a number of institutions have experienced similar dilemmas in terms of limitations of LMS selection models used, e.g. placing emphasis on “what works for me” versus “how does this align with larger organisational learning objectives” (Siemens 2006, p. 11).

A further example of the problems created by a limited set of designers and the subsequent limitation of rationality is demonstrated by the “faddish” adoption of LMSs within universities. Surprise has been expressed at how quickly university learning and teaching, commonly known for its reluctance towards change, has been modified to incorporate the use of learning management systems (West et al., 2006). Pratt (2005) finds connections between the Australian university sector’s adoption of e-learning during the 1990s and the concept of management fashions, fads and bandwagons where a relatively transitory collection of beliefs can legitimise the exercise of mindlessness with respect to innovation with information technology (Swanson & Ramiller, 2004). In particular, given conditions of uncertainty about prevailing technologies organisations may rely on imitation to guide decision making (Pratt, 2005). Is the current trend amongst universities to move towards open source learning management systems (e.g. Moodle) the most recent e-learning fashion? Will an open source learning management system, especially one that is supported within an institution in the same way as a commercial product, really make a significantly different impact than use of a commercial product?

Design management and control

In a teleological design process, a centralised group fulfils the necessary management and control requirements through direct intervention in line with a master plan. These requirements are typically encapsulated in the traditional institutional policies and governance structures. What can be done is typically limited by what is outlined in the policies or recognised, understood and allowed by the participants in the governance structure. Such structures take considerable time to respond to local contextual requirements, if they become aware of them at all, especially if they are a poor fit for those requirements.

For example, CQU has never had a separate policy or governance framework for the use of ICTs in learning and teaching. These issues have always been dealt with as part of the wider ICT policy and governance framework, which is predominantly concerned with administrative processes. Consequently, the manner in which ICTs are acquired and used for learning and teaching has essentially been handled by IT professionals who were simultaneously considering corporate systems. An imbalance in the participation and power within the governance structure between corporate requirements and learning and teaching requirements has led to a significant imbalance in funding. Gartner Consulting, when aiding CQU to develop an ICT strategic plan, observed:

While recognising that the investment in the People Soft ERP system is a necessary pre cursor to expansion of flexible learning to multiple geographies, the significant investment in this system has concentrated attention away from core teaching and learning applications. (Central Queensland University, 2001)

Behaviour such as this is intrinsic within the teleological world-view wherein “holistic behaviour is traded for reductionist behavior, at severe cost to the system as a whole” (Introna, 1996, p. 36).

Design problems

Apart from the complexity that comes from attempting to implement large-scale change, the other major design problem teleological design must face is conflict resolution. In teleological design some group selects the “right” goal. Those who disagree with the chosen goal must be convinced through various change management strategies to agree to it. The change management strategies used to achieve this are themselves teleological and in turn create more conflict. Subsequently, a great deal of organisational effort is expended trying to resolve internal conflict (Introna, 1996), effort that is then no longer available for other, more productive activities.

For example, Sturgess and Nouwens (2004) report on a comment demonstrating the differing views around change management, and one destined to create conflict. One of the members of the technical group of CQU’s Learning Management Systems Working Group suggested “that we should change people’s behaviour because information technology systems are difficult to change”. This also illustrates the problems that can arise when the design focus of different entities within an organisation is on “their” part of the problem, together with their common desire to simplify their own difficulties at the expense of the whole.

Discussion and conclusions

Teleological design has, to a large extent, become a single domineering and often-unquestioned concept that underpins much organisational implementation of e-learning within higher education. The adoption of such a single domineering concept imprisons not only thinking about implementation but also thinking about thinking about implementation (Truex et al., 2000). This is especially troubling given the perspective that:

The unique characteristics of higher education are in conflict with the assumptions of teleological models, which assume a clear vision, unambiguous plans, a decision-making chain of command, clear delegation of responsibility, decisions based on facts, and rationality (Kezar, 2001)

A number of authors have pointed out that there is another extreme (Baskerville et al., 1992; Introna, 1996; Kurtz & Snowden, 2007; Seely-Brown & Hagel, 2005). Introna (1996) in discussing the attributes of teleological design offers ateleological design as an alternative. Seely Brown and Hagel (2005) make the distinction between push and pull systems. Kurtz and Snowden (2007) contrast idealistic and naturalistic approaches to sense-making. These differing approaches, to some extent, all represent a similar set of divergent, even dichotomous, world-views.

This paper seeks to demonstrate the misalignment and limitations of practices in a university context from a teleological extreme and, in particular, to demonstrate how such an extreme significantly limits flexibility and choice for learners and learning. It does not, however, follow as a matter of course that the other extreme world-view is more appropriate. Such an extreme would be particularly problematic for e-learning because of the unquestioned dominance of the teleological view throughout the majority of the community. The education and training of all professionals involved with e-learning, almost without exception, is entirely teleological. There are currently significant government requirements for universities to engage in quality assurance, strategic plans and other very teleological practices. It is extremely difficult to adopt an ateleological approach within a heavily teleological context. In addition, an extreme ateleological approach might lead to organisational anarchy, with no overarching plan for bringing together localised energies and initiatives (Jones et al., 2005).

Introna (1996) suggests that perhaps “there is a continuum with complete teleological behavior on the one end and absolute ateleological behavior at the end”. Moreover, the works of Baskerville and his colleagues (1992); Introna (1996); Kurtz and Snowden (2007); and Seely-Brown and Hagel (2005) examining the alternate world views offer some insight. Another perspective is to consider the design attributes of a teleological process and, when implementing e-learning, seek to minimise the limitations inherent in teleological design. Space constraints in the current paper prevent any reasonably informative discussion of this view and how it might be merged into existing practice. However, some initial discussion of, and experience with, merging this perspective within e-learning has taken place in other venues (Jones & Gregor, 2006; Jones et al., 2005), but much more consideration and empirical work is required.


Allen, I. E., & Seaman, J. (2005). Growing by Degrees: Online Education in the United States, 2005. Needham, MA: The Sloan Consortium.

Baskerville, R., Travis, J., & Truex, D. (1992). Systems without method: the impact of new technologies on information systems development projects. In K. E. Kendall (Ed.), The Impact of Computer Supported Technologies on Information Systems Development (pp. 241-251). Amsterdam: North-Holland.

Benson, R., & Palaskas, T. (2006). Introducing a new learning management system: An institutional case study. Australasian Journal of Educational Technology, 22(4), 548-567.

Black, E. W., Beck, D., Dawson, K., Jinks, S., & DiPietro, M. (2007). The other side of the LMS: Considering implementation and use in the adoption of an LMS in online and blended learning environments. TechTrends, 51(2), 35-53.

Butler, J. (1997). Which is more frustrating: achieving institutional change or herding cats? Active Learning(6), 38-40.

Christensen, C. (2003). The Innovator’s Dilemma. Collins.

Coates, H., James, R., & Baldwin, G. (2005). A Critical Examination of the Effects of Learning Management Systems on University Teaching and Learning. Tertiary Education and Management, 11(1), 19-36.

Central Queensland University. (2001). Information Technology and Telecommunications Strategic Plan. Rockhampton: Central Queensland University.

Central Queensland University. (2003). Learning management system working party. Rockhampton: Central Queensland University.

Central Queensland University. (2004). Report on the operation of the Learning Management System (Blackboard) at Central Queensland University. Rockhampton: Central Queensland University.

Cradler, J. (2003). Research on E-learning. Learning & Leading with Technology, 30(5), 54-57.

Danaher, P. A., Luck, J., & McConachie, J. (2005). The stories that documents tell: Changing technology options from Blackboard, Webfuse and the Content Management System at Central Queensland University. Studies in Learning, Evaluation, Innovation and Development, 2(1), 34-43.

Dutton, W., Cheong, P., & Park, N. (2004). The social shaping of a virtual learning environment: The case of a University-wide course management system. Electronic Journal of e-Learning, 2(1), 69-80.

Dutton, W., & Loader, B. (2002). Introduction. In W. Dutton & B. Loader (Eds.), Digital Academe: The New Media and Institutions of Higher Education and Learning (pp. 1-32). London: Routledge.

Feldstein, M. (2006). Unbolting the chairs: Making learning management systems more flexible. eLearn Magazine, 2006 (1).

Gonch, B. (2007). An inventory of Term 2 2006 online courses at CQU. Rockhampton: Central Queensland University.

Green, M., & Hayward, F. (1997). Forces for Change. In M. Green (Ed.), Transforming Higher Education: Views from Leaders Around the World (pp. 3-26). Phoenix, Arizona: The Oryx Press.

Hawkins, B. (2006). 12 habits of successful IT professionals. EDUCAUSE Review, 56-66.

Hort, L. (1997). Herding cats – Managing higher education. Australian Universities Review, 40(2), 45-46.

Introna, L. (1996). Notes on ateleological information systems development. Information Technology & People, 9(4), 20-39.

Jamieson, K., & Hyland, P. (2006). Factors that influence Information Systems decisions and outcomes: A summary of key themes from four case studies. Paper presented at the 17th Australasian Conference on Information Systems, Adelaide, Australia.

Jones, D., Behrens, S., Jamieson, K., & Tansley, E. (2004). The rise and fall of a shadow system: Lessons for enterprise system implementation. Paper presented at the Managing New Wave Information Systems: Enterprise, Government and Society, Proceedings of the 15th Australasian Conference on Information Systems, Hobart, Tasmania.

Jones, D., & Gregor, S. (2006). The formulation of an Information Systems Design Theory for E-Learning. Paper presented at the First International Conference on Design Science Research in Information Systems and Technology, Claremont, CA.

Jones, D., Luck, J., McConachie, J., & Danaher, P. A. (2005). The teleological brake on ICTs in open and distance learning. Paper presented at the Conference of the Open and Distance Learning Association of Australia’2005, Adelaide.

Kezar, A. (2001). Understanding and Facilitating Organizational Change in the 21st Century: Recent Research and Conceptulizations. ASHE-ERIC Higher Education Report, 28(4).

Kurtz, C., & Snowden, D. (2007). Bramble Bushes in a Thicket: Narrative and the intangibles of learning networks. In Gibbert, Michel, Durand & Thomas (Eds.), Strategic Networks: Learning to compete. Blackwell.

Laurillard, D. (2002). Rethinking university teaching: a framework for the effective use of educational technology. London: Routledge.

Lian, A. (2000). Knowledge transfer and technology in education: Toward a complete learning environment. Educational Technology & Society, 3.

Malikowski, S., Thompson, M., & Theis, J. (2006). External factors associated with adopting a LMS in resident college courses. Internet and Higher Education, 9(3), 163-174.

McConachie, J., Danaher, P., Luck, J., & Jones, D. (2005). Central Queensland University’s Course Management Systems: Accelerator or brake in engaging change? [Electronic Version]. International Review of Research in Open and Distance Learning, 6. Retrieved March 13, 2006 from

Morgan, G. (2003). Faculty use of course management systems: Educause Centre for Applied Research.

OECD. (2005). E-Learning in Tertiary Education: Where do we stand? Paris, France: Centre for Educational Research and Innovation, Organisation for Economic Co-operation and Development.

Pratt, J. (2005). The Fashionable Adoption of Online Learning Technologies in Australian Universities. Journal of the Australian and New Zealand Academy of Management, 11(1), 57-73.

Salmon, G. (2005). Flying not flapping: a strategic framework for e-learning and pedagogical innovation in higher education institutions. ALT-J, Research in Learning Technology, 13(3), 201-218.

Sausner, R. (2005). Course management: Ready for prime time? University Business.

Seely-Brown, J., & Hagel, J. (2005). From push to pull: The next frontier of innovation [Electronic Version]. The McKinsey Quarterly. Retrieved 26 October, 2005 from

Siemens, G. (2006). Learning or management system? A review of learning management system reviews. Learning Technologies Centre, University of Manitoba. Retrieved 17 September 2007 from

Sturgess, P., & Nouwens, F. (2004). Evaluation of online learning management systems. Turkish Online Journal of Distance Education, 5(3).

Swanson, E. B., & Ramiller, N. C. (2004). Innovating mindfully with information technology. MIS Quarterly, 28(4), 553-583.

Truex, D., Baskerville, R., & Klein, H. (1999). Growing systems in emergent organizations. Communications of the ACM, 42(8), 117-123.

Truex, D., Baskerville, R., & Travis, J. (2000). Amethodical systems development: the deferred meaning of systems development methods. Accounting Management and Information Technologies, 10, 53-79.

Vodanovich, S. J., & Piotrowski, C. (2005). Faculty attitudes towards web-based instruction may not be enough: Limited use and obstacles to implementation. Journal of Educational Technology Systems, 33(3), 309-318.

Warger, T. (2003, July 2003). Calling All Course Management Systems. University Business Retrieved 30 December, 2006, from

West, R., Waddoups, G., & Graham, C. (2006). Understanding the experience of instructors as they adopt a course management system. Educational Technology Research and Development.

Westera, W. (2004). On strategies of educational innovation: between substitution and transformation. Higher Education, 47(4), 501-517.

Wise, L., & Quealy, J. (2006). LMS Governance Project Report. Retrieved 30 December, 2006 from
