Assembling the heterogeneous elements for (digital) learning


Why do (social) networks matter in teaching & learning?

After a week of increasingly intermittent engagements with Twitter I stumbled back into the Twitterverse this afternoon and one of the first things I see is this post from @marksmithers. It is Mark’s response to the call for help from @courosa for his keynote at the Melbourne PLE conference next week. Alec’s question is

Why do (social) networks matter in teaching & learning?

What follows is my response.

Apparent serendipity

It is largely serendipitous that I am posting this. Without Mark being in my network, and me happening to dip back into that network today, I would probably have missed this thread altogether. I echo Mark's thought that, with a network of the appropriate make-up (the balance between similarity and diversity so difficult to achieve), answers to questions crop up in the network as you need them.

For example, I've just about finished teaching this course for the first time. There were a number of times during the course when perusing my Twitter feed would highlight some really good resource or example of a topic I had to "teach" that week.

Teaching by learning

Which brings me to a slight disagreement with Mark, though it is achieved by the typically academic practice of arguing about definitions. Mark wrote:

I’m going to ignore the ‘teaching’ word and just concentrate on the ‘learning’ word because that is far more important and far more enabled by the network.

I’m a fan of Downes’ basic theory of teaching and learning

to teach is to model and demonstrate, to learn is to practice and reflect

The courses I’m currently teaching are focused on the inter-connection between Information and Communication Technologies (ICTs), networks and pedagogy (teaching/learning). So I am trying to “teach” these courses by showing the students how I learn about using ICTs and networks to teach them. The main approach to doing this is visibly engaging with and constructing networks. This includes reflections on my teaching posted on this blog, comments on Twitter, bookmarks shared via Diigo etc.

I don’t do this as much as I’d like. It’s difficult, but without these (social) networks it would be far more difficult to share this activity with the students.

Teaching as making connections

The flipside of Downes' definition is "to learn is to practice and reflect". Having students engage in (social) networks while they are learning is a great way of making this visible. It's something I'm struggling with, and hoping to increase significantly over time.

I've often thought Erica McWilliam's concept of the "meddler in the middle" (as opposed to the "sage on the stage" or the "guide on the side") might be an apt metaphor for this, at least as I conceptualise it potentially working in a networked "classroom" (which is really not separated at all from the broader world). That is, with students actively engaged in (social) networks, their practice and reflection – perhaps their knowledge – becomes visible. This enables a teacher to meddle in an individual student's network, but also more broadly in the whole class' network, by encouraging students to engage in activities that lead them to make new connections.

Perhaps more importantly, it opens up the possibility of other students, and of people from outside the class, encouraging students to engage in activities that lead them to make new connections.

Reducing isolation

Earlier this term, in a tutorial, I observed a student in my course struggling with a particular task, one that most other students had completed. As I watched, the student continued struggling, not making connections with the knowledge needed to complete the task, or with other students. I did intervene, but I do wonder just how many of the 250+ students in this course had moments like this?

Following on from the above, I believe/hope that by making students' (social) networks more visible it is possible to reduce this sense of isolation.

Podcast for presentations at the PLEs & PLNs symposium

The following outlines the rationale and approach used to create an (audio) podcast of the presentations from the Personal Learning Environments & Personal Learning Networks Online symposium on learning-centric technology.

I don't know if anyone else has already done this, but will share just in case.

If you don’t want to be bored by the background, this is the link for the podcast.


I've hated the idea of the LMS for quite some time. I even had the chance to briefly lead a project investigating how PLEs could be grown and used within a university, at least before the organisational restructure came. In its short life the project produced a symposium, a number of publications, various presentations and a little bit of software.

Given this background, I had significant interest in the symposium being organised by George Siemens and Stephen Downes. However, due to other responsibilities, odd times (given my geographical location) for the Elluminate presentations and the low speed of my home Internet connection, I knew I was unlikely to actively engage. Some of these factors have already prevented my on-going engagement with CCK09.

I probably would have left it there, however, over the last 24 hours two separate folk have mentioned the symposium and almost/sort of guilted me into following up. The one thing I can do at the moment, due to a fitness kick involving a great deal of walking, is listen to mp3s. So, I wanted an easy way to get the mp3s. A podcast sounds ideal for my current practices.

The podcast

Last night I did a quick google and found this page, which seems to provide a collection of links to video and audio recordings of presentations associated with the CCK09 course, including some mp3s from the presentations at the PLEs & PLNs symposium.

Rather than download and play silly buggers with iTunes, I decided to recreate an approach we used on our first "Web 2.0 course site", where students and staff in the course could tag audio/video for inclusion in a podcast created by Feedburner.

So I followed the same process for these:
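Whatever tagging service sits in front of it, the end product of that process is just an RSS feed whose items carry enclosure elements pointing at the mp3 files; that is all a "podcast" is to a client like iTunes. A minimal sketch of generating such a feed (in Python rather than anything BAM-related, with placeholder titles and URLs):

```python
import xml.etree.ElementTree as ET

def build_podcast(title, episodes):
    """Build a minimal RSS 2.0 podcast feed.

    episodes: list of (title, mp3_url) tuples.
    """
    rss = ET.Element("rss", version="2.0")
    channel = ET.SubElement(rss, "channel")
    ET.SubElement(channel, "title").text = title
    for ep_title, mp3_url in episodes:
        item = ET.SubElement(channel, "item")
        ET.SubElement(item, "title").text = ep_title
        # The <enclosure> element is what turns an RSS item into a
        # podcast episode that clients such as iTunes will download.
        ET.SubElement(item, "enclosure",
                      url=mp3_url, type="audio/mpeg", length="0")
    return ET.tostring(rss, encoding="unicode")

feed = build_podcast("PLEs & PLNs symposium", [
    ("Opening presentation", "http://example.com/opening.mp3"),
])
```

Feedburner does the equivalent transformation for you; the point is only that the format underneath is this simple.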

I just hope now that I have the time to reflect and write about what I listen to.

Thank you Deidre and Maijann for the encouragement to engage with the symposium. Thanks to those organising the symposium and CCK09 for the resources.

Some potential updates to BAM – a step towards breaking the LMS/CMS orthodoxy

The Blog Aggregation Management (BAM) system was designed and used, in part, to try out approaches that leverage the protean nature of information technology. A major part of this is a move to something different, and hopefully better, than the current, broken e-learning orthodoxy within universities, which is stuck on the idea of course management systems (CMS – aka learning management systems) as the only possible solution.

The vast majority of what BAM does was designed and implemented over a couple of months almost 3 years ago. Since then we've learned a bit about using BAM and also had some time to extend it in appropriate ways. This post seeks to explain the next major expansion of BAM, which will see it move further away from CMS orthodoxy. In particular, the plan is to expand BAM's generation of RSS/OPML feeds so academic staff can avoid badly designed web-based management interfaces and use an RSS reader of their choice as the major interface to BAM.

Current limitations of BAM

One of the assumptions underpinning BAM was significant doubt about the ability of a university to provide a blogging service that could compete with existing free blog services in terms of reliability, quality of features, and quality of support services and resources. This is an extension of one of the principles behind the design of the Webfuse e-learning system (Jones and Buchanan, 1996), within which BAM is currently implemented. This principle is talked about under the heading "Flexibility and don't reinvent the wheel"

The design of the M&C OLE (online learning environment) will attempt to maximise adaptability by concentrating on providing the infrastructure required to integrate existing and yet to be developed online learning tools. The M&C OLE will provide the management infrastructure and consistent interface to combine existing tools such as WWW servers, online quizzes, assignment submission etc. into a single integrated whole. While a number of the component systems will be developed at CQU, the emphasis is on integrating existing tools into the OLE.

At the moment, BAM provides a management interface for academic staff around existing blogging engines. Actually it is designed so that students can maintain a reflective journal in anything that will produce an RSS feed. The only direct interaction with BAM by students is at the start of term when they register their blog using the interface shown in the next image.

BAM blog registration

Academic staff currently use a web-based interface provided by BAM to track student blog registration and posts, view student posts and mark student posts. See the screenshots in this paper for what they look like. That is, BAM is still stuck in the CMS orthodoxy.

Moving to RSS readers and OPML feeds

Late last year there was a simple extension of BAM to allow academic staff to obtain an OPML feed pointing to all their students' blogs. This could be imported into an RSS reader of their choice so they could keep track of posts by their students. This approach had a number of limitations:

  • It wasn’t automated.
    Someone had to run a script, generate the OPML feed, send it to the staff member who could then import it. They should be able to do it themselves.
  • It only provided access to the student posts, none of the other BAM services.
    The OPML was using the RSS feeds straight from individual student blogs. It did not go through BAM and consequently could not provide any additional BAM/institutional related information. For example, which students hadn’t yet registered their blog, no direct access to the BAM marking interface, etc.
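For reference, the OPML file described above is a very simple format: one outline element per student feed, and any RSS reader can import it. A hypothetical sketch (Python, with invented student names and URLs) of generating one:

```python
import xml.etree.ElementTree as ET

def build_opml(title, feeds):
    """Build an OPML outline that an RSS reader can import.

    feeds: list of (student_name, feed_url) tuples.
    """
    opml = ET.Element("opml", version="2.0")
    head = ET.SubElement(opml, "head")
    ET.SubElement(head, "title").text = title
    body = ET.SubElement(opml, "body")
    for name, url in feeds:
        # One outline element per student blog; type="rss" plus xmlUrl
        # is all a reader needs to subscribe to the feed.
        ET.SubElement(body, "outline", text=name, type="rss", xmlUrl=url)
    return ET.tostring(opml, encoding="unicode")

opml_doc = build_opml("Student blogs", [
    ("Student One", "http://student1.wordpress.com/feed/"),
    ("Student Two", "http://student2.wordpress.com/feed/"),
])
```

Automating the step simply means a script like this runs on demand for the logged-in staff member, rather than someone generating and emailing the file by hand.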

Implementing BAM “cooked” feeds

The premises on which this extension of BAM is based are:

  • Increasingly people will have an application they use to access, manipulate and store RSS, OPML and other feeds.
    e.g. I believe even Outlook, the spawn of the devil, supports feed reading now.
  • Using this application(s) to track information of interest will become part of their daily life.
    The “come to me” web will become increasingly important. It’s certainly part of my everyday life at the moment and it is an improvement over the “I go get web”. These two assumptions were the basis for the work in this presentation aimed at adding RSS feed generation to discussion forums in Blackboard course sites.
  • BAM's interface does provide some additional information about the student, the course etc. that isn't provided in the "raw" RSS feeds from each student's blog.

The fundamental idea is that BAM will generate “cooked” RSS feeds and that academic staff will be able to access the feeds for their students via their choice of RSS reader. The outstanding questions are:

  1. What ingredients need to go into the cooking?
  2. What’s the best (and easiest) technical approach to implementing “cooked” feeds?
  3. Why are academics going to use this?
    This is the big one: if they don't want to use it, then there's no point doing it. The aim is for this to be easier and more effective than using the BAM interface.

What the cooked feeds need to do

At the very least cooked feeds will need to support all of the existing functionality provided by BAM and where possible provide additional functionality.

Existing functionality

  • Which students have registered their blogs and which haven’t.
  • A method to view photos and details about the students who fall into either group (registered or not).
  • Provide a link to a “mail merge” facility for those students who fall into either group.
  • View statistics about student blogs – e.g. how many posts in total, when was the last time they posted an entry and a link to the student blog.
  • A marking interface for each post.
  • A question allocation interface for each post.
    BAM was originally designed to implement individual student reflective journals where students are expected at fixed times during a term to answer specific questions. BAM automatically examines each student post and attempts to determine if it is a response to one of these fixed questions.
  • Information about whether the post has been marked or allocated.
  • Whether or not a student has answered all of the necessary questions.
  • Ability for the course coordinator (academic in charge of a course) to view and track student posts for other teaching staff and also the staff’s marking progress.

Potential new features

  • Indication of what new posts there have been since the academic last visited.
    This is essentially what would be provided by an RSS reader.
  • Addition of institutional/student based information to individual blog posts.
    Currently a post to a student blog does not include any information about who the student is, their institutional student number, whether or not the post matches one of the required questions they must answer, or a link to the marking and question allocation interfaces for BAM posts.
  • On the fly copy detection of student posts.
    Currently there is a half-baked script that will compare all student posts against each other to check for copying. It is of questionable educational value, but it is something staff perceive to be useful.
  • A daily summary of activity by related staff and students.
    Each staff member could see a single post that summarises activity by their students. For example, who posted, which questions they answered, who still hasn't registered, who did register, what copy detection incidents were identified etc. In addition, staff who are supervising other staff could receive a daily post on the progress of those staff. For example, how many of each staff member's students haven't registered, haven't posted, haven't been marked etc.

    This idea could serve as the basis for a broader service associated with courses, and perhaps attached to some current work around indicators.
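The copy detection mentioned above could, at its simplest, be a pairwise similarity check across student posts. A sketch using Python's difflib; the 0.8 threshold and the sample posts are my own assumptions for illustration, not BAM's actual algorithm:

```python
from difflib import SequenceMatcher
from itertools import combinations

def detect_copies(posts, threshold=0.8):
    """posts: dict of student -> post text. Return suspiciously similar pairs."""
    flagged = []
    for (a, text_a), (b, text_b) in combinations(posts.items(), 2):
        # ratio() is 1.0 for identical strings, approaching 0.0 for
        # completely different ones.
        ratio = SequenceMatcher(None, text_a, text_b).ratio()
        if ratio >= threshold:
            flagged.append((a, b, round(ratio, 2)))
    return flagged

posts = {
    "s1": "Reflection on week three readings about networks.",
    "s2": "Reflection on week three readings about networks.",
    "s3": "My own thoughts this week were quite different.",
}
result = detect_copies(posts)  # flags the identical s1/s2 pair
```

Doing this "on the fly" would just mean running the check against existing mirrored posts each time a new post arrives, rather than over the whole set in a batch.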

Technical implementation

Initial quick ideas might include

  • The provision of two top level feeds for each staff member.
    1. Activity summary – this is the daily summary of activity by related staff and students. Staff and students might be updated as separate feed items. Non-supervisory staff simply wouldn't see an item of that type. Alternatively, they would see such a feed; it would simply summarise what they did over the last day or so. Staff who are supervising other staff would also see posts summarising the activity of the staff they are supervising.
    2. Student posts – similar to the existing feed, this would consist of numerous feeds (one per student) summarising what they have posted to their blog.
  • Perhaps a “staff activity” feed.
    Supervisory academic staff might also have a third collection of feeds. The “Staff student activity” feed would include one collection of feeds for each staff member being supervised. This collection of feeds per staff member would be exactly the same as “Student Posts” feed that the supervised staff member would see. This would allow supervisory staff to see the detail, if they wanted to.
  • Cook the individual student blogs
    The individual student feeds would not be from the blog feeds. They would be cooked versions from BAM that will have added a range of additional institutional and BAM related links and information.
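To illustrate what "cooking" might involve, here is a sketch (Python, not BAM's actual Perl code) that takes a raw student feed and appends institution-specific information, such as the student number and a link to a hypothetical marking interface, to each item's description:

```python
import xml.etree.ElementTree as ET

def cook_feed(raw_rss, student_number, mark_url):
    """Return a copy of raw_rss with BAM/institutional info added to each item."""
    root = ET.fromstring(raw_rss)
    for item in root.iter("item"):
        desc = item.find("description")
        if desc is None:
            desc = ET.SubElement(item, "description")
        # Append an HTML fragment; in the serialised feed it will be
        # entity-escaped, as HTML in RSS descriptions normally is.
        desc.text = (desc.text or "") + (
            '<p>Student {0}: <a href="{1}">mark this post</a></p>'
            .format(student_number, mark_url))
    return ET.tostring(root, encoding="unicode")

raw = """<rss version="2.0"><channel>
<item><title>Week 1 reflection</title><description>My post</description></item>
</channel></rss>"""
cooked = cook_feed(raw, "S0123456", "http://bam.example.edu/mark?post=1")
```

The feed content, student number and URL here are invented; the real cooked feeds would pull this information from the institutional databases BAM already talks to.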

Initial implementation ideas might include:

  • Each of the individual feeds would be implemented as simple RSS files on the institution’s server
    i.e. static files that are updated by BAM, but staff are requesting the static files, not a script or similar. The drawback here is that a Perl access module will have to be written to control access to the appropriate folk. The advantage is that some of the “cooking” will require some significant processing (e.g. copy detection). Also the OPML feeds that bring these feeds together for staff could be implemented in a simple hierarchical file system.

    For example, BAM/YEAR/PERIOD/COURSE/Staff/username/{all.opml|summary.rss|students.rss}. And for each student …./COURSE/Students/STUDNUMBER.rss

  • Updating of these feed files would be done at the end of the current BAM “mirror” process.
    Every hour or so BAM goes out and checks each student’s blog for any new entries. If it finds any, it updates a local mirror of the raw RSS file. Could add to the end of this process all of the necessary steps required to “cook” the feeds.
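That mirror step might look something like the following sketch (Python rather than BAM's Perl; the change detection by comparing file contents is my assumption about how "if it finds any" could be implemented):

```python
import os
import urllib.request

def mirror_feed(feed_url, local_path):
    """Fetch feed_url and update the local mirror only when it has changed.

    Returns True if the mirror was (re)written, False if unchanged.
    """
    new = urllib.request.urlopen(feed_url).read()
    if os.path.exists(local_path):
        with open(local_path, "rb") as f:
            if f.read() == new:
                return False  # unchanged; nothing to re-cook
    with open(local_path, "wb") as f:
        f.write(new)
    return True  # mirror updated; the "cooking" steps would run here
```

A cron job calling something like this for each registered student, then regenerating the cooked feeds for any that returned True, is all the hourly process needs to be.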


Some questions on which I'd welcome input:

  • What features are missing?
  • What potential implementation approaches have I missed?
  • What problems exist with the above implementation plan?
  • Is the cost/benefit ratio sufficient for me to implement these plans given the PhD etc.?

Reliability – an argument against using Web 2.0 services in learning? Probably not.

When you talk to anyone in an "organisational" position (e.g. IT or perhaps some leadership positions) within a university about using external "Web 2.0" tools to support student learning, one of the first complaints raised is

How can we ensure its reliability, its availability? Do we have as much control as if we owned and managed the service on our servers? Will they be as reliable and available?

My immediate response has been, "Why would we want to limit them to such low levels of service?". Of course, it's a little tongue in cheek and, given my reputation in certain circles, not one destined to win friends and influence people. There is, however, an important point underpinning the snide, flippant comment.

Just how reliable and available are the services owned and operated by universities? My anecdotal feeling is that they are not that reliable or available.

What about web 2.0 tools?

Paul McNamara has a post titled “Social network sites vary greatly on availability, Pingdom finds” that points to a Social network downtime in 2008 PDF report from Pingdom. The report discusses uptime for 15 social network tools.

A quick summary of some of the comments from the report

  • Only 5 social networks managed an overall uptime of 99.9% or better: Facebook (99.92%), MySpace (99.94%), (99.95%), Xanga (99.95%) and Imeem (99.95%).
  • Twitter – 99.04% uptime
  • LinkedIn – 99.48% uptime
  • Friendster – 99.5% uptime
  • – 99.52% uptime
  • Bebo – 99.56% uptime
  • Hi5 – 99.75% uptime
  • Windows Live Spaces – 99.81% uptime
  • LiveJournal – 99.82% uptime
  • – 99.86% uptime
  • Orkut – 99.87% uptime
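To put those percentages in perspective, converting annual uptime into hours of downtime per year is simple arithmetic:

```python
def downtime_hours(uptime_percent, hours_per_year=365 * 24):
    """Convert an annual uptime percentage into hours of downtime per year."""
    return (100 - uptime_percent) / 100 * hours_per_year

# 99.04% uptime (Twitter) is roughly 84 hours of downtime over a year,
# while 99.92% (Facebook) is roughly 7 hours.
twitter = downtime_hours(99.04)
facebook = downtime_hours(99.92)
```

So even the "worst" service on the list was unavailable for only a few days spread across an entire year, which is the figure worth comparing against institutional systems.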

Is it then a problem?

The best you can draw from this is that if you’re using one of the “big” social network tools then you are probably not going to have too much of a problem. In fact, I’d tend to think you’re likely to have much more uptime than you would with a similar institutional system.

The social network tool is also going to provide you with a number of additional advantages over an institutionally owned and operated system. These include:

  • A much larger user population, which is very important for networking tools.
  • Longer hours of support.
    I know that my institution struggles to provide 10 or 12 x 5 support. Most big social network sites would do at least 10 or 12 x 7 and probably 24×7.
  • Better support
    Most institutional support folk are going to be stretched trying to maintain a broad array of different systems. Simply because of this spread, their knowledge is going to be weak in some areas. The support for a social network system is targeted at that system; they should know it inside and out. Plus, the larger user population is also going to help. Most of the help I've received has come from users of the service, not the official support.
  • Better service
    The design and development resources of the social network tool are also targeted at that tool. They aim to be the best they can, their livelihood is dependent upon it in a way that university-based IT centres don’t have to worry about.

One reason people don't take to new e-learning technology

In a recent post I started my collection of quotes on this blog. I also talked about the “mere exposure effect” and suggested it’s one reason behind the horseless carriage approach to using new technology. It’s also one reason why people resist new technology – especially e-learning/computer technology.

In working on another post, one directly related to the PhD, I came across this article from EDUCAUSE Quarterly titled “The Three-E Strategy for Overcoming Resistance to Technological Change “.

One of the quotes it uses as evidence of why adoption of new technology is hard is from a book by Carolyn Marvin

For if it is the case, as it is fashionable to assert, that media give shape to the imaginative boundaries of modern communities, then the introduction of new media is a special historical occasion when patterns anchored in older media that have provided the stable currency for social exchange are reexamined, challenged, and defended.

The EDUCAUSE Quarterly article also says the following

As technology professionals, we often fail to see how intimidating technology can be to the user community.

I'd expand this out to include instructional designers and management. Instructional designers often don't see how intimidating many of their pedagogical innovations (forget the use of technology) are to many academic staff. Many management folk I've seen make similar mistakes, though generally worse. Management generally don't see that new pedagogy and technology, if used effectively, need a radically different approach to teaching and learning practice. More importantly, they don't see or engage with the fact that this type of radical change often brings into question many of the accepted administrative processes, policies and organisational structures within institutions.

The article also quotes an EDUCAUSE Review article titled "My Computer Romance". An expanded quote from this article:

What kept me from seeing and acting on those benefits? The question interests me, and not only out of self-regard. The question is at the heart of “faculty development,” a crude, even misleading phrase that cannot suggest the trick of imagination needed to bring substantial, important knowledge into plain sight and to develop in faculty the resolve and courage to risk failure. For an academic, “failure” is often synonymous with “looking stupid in front of someone.” For many faculty, and maybe for me back in the 1980s, computers mean the possibility of “pulling a Charlie Gordon,” as the narrator poignantly terms it in Daniel Keyes’s Flowers for Algernon.

This has significant implications for personal learning environments, which surely represent a significant shift in practice created by the capabilities of a new medium, and offer an even greater opportunity for academics to "pull a Charlie Gordon". The Quarterly article finishes its introduction with the following paragraph

Consider for a moment the impact of Web 2.0 on a professor working in academia for 20 or 30 years. The flattening of knowledge production and the ease of access to information represented by Web 2.0 technologies in many ways negates the concept of the “sage on the stage” or even traditional notions of scholarship. This world is not what most professors are used to, and many are threatened by and therefore resist this kind of change.

The solution

The Quarterly article suggests that the solution is a strategy for gaining acceptance of technology that embodies "Three Es"

  1. Evident – as potentially useful in making life easier.
  2. Easy to use – to avoid feelings of inadequacy.
  3. Essential – as part of going about their business.

Sort of sounds a bit like the insights from TAM and Diffusion of Innovations.

The wrong view

The Quarterly article finishes with this sentence

Only then will faculty effectively use the complex technical infrastructure that we technologists labor so hard to put into place.

God I hate the mindset that underpins that sentence. Or at least the common mindset amongst “support” folk in higher education. This isn’t limited to just information technology people. Instructional designers, quality folk and management all suffer from this view from time to time.

How do we get these poor ill-informed and/or obstinate academics to use the great technology/idea. If only we could do this we would solve all the problems of learning/teaching/research in one fell swoop.

This has been a problem with most people peddling innovation. Indeed, diffusion theory (Rogers, 1995), one of the best-known innovation theories, has been criticised for having a pro-innovation bias that, amongst other effects, can separate members of a social system into a superior innovators group and an inferior recalcitrants group (McMaster and Wastell, 2005).

In this paper (Jones and Lynch, 1999) we talk about

  • developer-based approaches; and
    A developer-based focus assumes that the new product will automatically replace the old, and that adopters will see the benefits of the new product automatically and in the same way as the developers.
  • adopter-based approaches to software development.
    These approaches focus on the adopters and their setting in order to understand the social context and the social function the innovation will serve.

The Three Es approach strikes me as coming from someone in a developer-based culture taking the first steps towards a more adopter-based approach, but who still holds the same underlying belief: we build it and they use it.


Jones, D. and T. Lynch (1999). A Model for the Design of Web-based Systems that supports Adoption, Appropriation and Evolution. First ICSE Workshop on Web Engineering, Los Angeles.

Rogers, E. (1995). Diffusion of Innovations. New York, The Free Press.

Carolyn Marvin, When Old Technologies Were New: Thinking About Electric Communication in the Late Nineteenth Century (New York: Oxford University Press, 1988), p. 4.

BAM – making e-learning technology more protean

In a post yesterday I talked about how most applications of e-learning within universities seem to actively prevent students and staff from leveraging the protean nature of information technology, that is, the nature of computer software to be flexible, malleable and customisable.

The rise of "web 2.0" and related concepts has made it easier to put in place e-learning technology that is designed to be more protean. In this post I talk about the Blog Aggregation Management (BAM) project and reflect on some of its ideas and implications for making e-learning technology more protean.

What is BAM?

It’s a research project aimed at extending ideas around how to implement e-learning technology at universities, particularly with an emphasis on what the rise of “web 2.0”, software as a service and other related concepts might mean for this practice.

BAM is a set of Perl scripts that aggregates RSS feeds (it matters little where those feeds originate) registered to individual students, and then provides a range of additional services required by university educators. For example (click on the screenshots to see larger images):

  • Link with the institutional teaching responsibilities database so that staff can see which of their students have (or haven’t) registered their RSS feed.
    BAM show student blog details
  • Show how many posts each staff member’s students have made.
    BAM show all student posts page
  • If required, award a mark and make comments on a student’s posts.
    BAM mark post page

Because of the original context in which BAM was designed (explained in some of the publications and presentations listed in the next section) there are also some scripts to detect plagiarism between student posts.

Origins of BAM

BAM started life as a more flexible way of implementing student journals in a particular course with the intent of encouraging reflection, increasing interactions between students and staff and hopefully increasing student performance. The initial use of BAM for this purpose is talked about in a number of places including:

  • Two presentations given at CQUniversity in 2006 that are available on Google Video. The first talks about the initial design ideas while the second reflects on the experience about half way through the course.
  • A paper describing the use in the initial course.
  • This initial use was also covered in the ELI Guide To Blogging as one of the three case studies.

System usage – examples of the protean nature

The initial application of BAM was intended to encourage student reflection and interaction between staff and students. It succeeded to varying degrees, depending on the staff involved. BAM has been used in all 8 offerings of that course from 2006 to 2008.

It has also been used in 8 other course offerings for a variety of different purposes. The most different was in the course EDED11448, Creative Futuring. EDED11448 was CQUni’s first “Web 2.0 course site” where all of the services used by students and staff in the courses were hosted on external services including,, Wetpaint, and RedBubble.

For EDED11448, BAM was used, in conjunction with Yahoo Pipes, to create the Portfolio and Weblog pages. This was done by:

  • Students creating RedBubble accounts and using them to create their portfolio and their blog.
  • Students registering their RedBubble account with BAM.
  • BAM aggregating both the portfolio and blog and producing aggregated RSS feeds.
  • Pipes being used to turn those RSS feeds into a bit of JSON data that can be used by the Javascript on the course website to present the data.
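The Pipes step essentially converts an RSS feed into JSON that the course site's Javascript can consume. A rough equivalent in Python (the sample feed content is invented for illustration):

```python
import json
import xml.etree.ElementTree as ET

def rss_to_json(rss_text):
    """Convert the items of an RSS feed into a JSON string for client-side use."""
    root = ET.fromstring(rss_text)
    items = [{"title": i.findtext("title"), "link": i.findtext("link")}
             for i in root.iter("item")]
    return json.dumps({"items": items})

sample = """<rss version="2.0"><channel>
<item><title>Portfolio piece</title><link>http://example.com/p1</link></item>
</channel></rss>"""
result = rss_to_json(sample)
```

Pipes did this (and the aggregation) as a hosted service; the same transformation is a few lines wherever it runs.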

The same idea has been used to create RSS feeds from BAM that aggregate all of a staff member's students' posts into one feed. A number of the courses that use BAM can have hundreds of students and tens of staff.

These applications of BAM have moved beyond the original design. The protean nature of BAM includes the following:

  • There is choice in what application is used to generate the RSS feeds.
    In some courses that choice is left to the student. In EDED11448 the course designer made a specific choice – RedBubble – for her purpose.
  • There is choice in what BAM is used for.
    The original use was aimed specifically at individual student reflective journals reviewed and marked by staff. EDED11448 aggregated and made public to all students the work of individual students. In some cases the blog posts haven’t been marked.


I think BAM and the way it operates have the following implications for the practice of e-learning within universities.

  • Increase efficacy and agility while liberating institutional resources.
    This isn’t my view. It’s the one expressed by the authors of the ELI Guide to Blogging. When talking about BAM they say

    One of the most compelling aspects of the project was the simple way it married Web 2.0 applications with institutional systems. This approach has the potential to give institutional teaching and learning systems greater efficacy and agility by making use of the many free or inexpensive—but useful—tools like blogs proliferating on the Internet and to liberate institutional computing staff and resources for other efforts.

  • There is no need to pre-determine and specify all of the technology that staff and students must use.
    Most of the students who have used BAM haven't really known what a blog is, and very few have already had one. This lack of knowledge is not a reason to say we must use the blog provided by our LMS in order to minimise confusion. With BAM we recommend a particular free blogging service to students who aren't sure what to do, but we enable those with more knowledge to use their own.
  • Small pieces loosely joined works.
    To me this is a fundamental characteristic of Web 2.0, the ability to create something larger out of a bunch of small pieces that are all loosely joined. Where each small piece can be replaced or re-tasked depending on the contextual needs. This is simply not possible with traditional enterprise software such as a course management system.
  • There are potential problems, but they can be solved, and generally more cheaply and easily, with this approach.
    The most common question that is asked about BAM is “What happens if a student’s blog provider goes belly up and we can’t access the student’s work?”. This is the “can we depend on external providers” question. The assumption is that organisationally provided systems are more reliable. While that is somewhat questionable, the concern can be mitigated quite easily.

    In BAM’s case, this is done by mirroring. Every hour BAM

    • Visits each student’s RSS feed.
    • If there have been any changes, creates or updates a local copy of the feed.

    If the external blog provider ever disappears, we have a copy. These types of problems can be solved.

  • Making existing systems more protean is a good thing.
    A number of benefits arise from systems being more protean. For example, the tools are able to be used for a number of unintended applications and the users are able to use tools that they are familiar with and have a sense of ownership over. For me, this means that making existing systems more protean is a good and worthy thing to do.
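The hourly mirroring step described earlier — visit each feed, update the local copy only when something has changed — can be sketched in a few lines. This is an illustrative Python sketch, not BAM's actual code; the injectable `fetch` callable and the cache layout are my assumptions:

```python
import hashlib
from pathlib import Path

def mirror_feed(feed_url, cache_dir, fetch):
    """Update the local mirror of one student's feed if it has changed.

    fetch is any callable that takes a URL and returns the feed body as a
    string (e.g. a thin wrapper around urllib.request.urlopen).
    Returns True if the mirror was created or updated.
    """
    content = fetch(feed_url)
    # One cache file per feed, named by a hash of the feed's URL.
    name = hashlib.sha1(feed_url.encode("utf-8")).hexdigest() + ".xml"
    cache_file = Path(cache_dir) / name
    if cache_file.exists() and cache_file.read_text() == content:
        return False  # nothing changed since the last run
    cache_file.write_text(content)
    return True
```

Run from cron every hour, something like this gives you the "if the external provider ever disappears, we have a copy" guarantee.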

Future work

Future work might include:

  • Making BAM more self-serve.
    Currently, setting up BAM requires some additional input from technical folk. It wouldn’t be too hard to make this self-serve.
  • Extending the RSS generation capabilities in BAM.
    These are still fairly limited. They need extending, especially to increase their protean nature.
  • Improvements to the BAM interface.
    It was designed by me. Enough said.
  • Enabling more complex group-based manipulation, tagging and commenting within BAM.
    Beyond simple aggregation there is little that can be done. Even marking is not performed with RSS but with databases. One extension might be to create RSS feeds that include comments/marks from markers. Enabling peer marking, commenting and tagging and a range of more complex approaches might also be useful.
  • Looking at supporting privacy capabilities in BAM.
    At the simplest form adding the ability for the student’s RSS feed to be password protected might be useful. At the moment the RSS feed fed into BAM must be freely available. Supporting broader privacy settings makes the tool more flexible.
  • Making existing systems more protean.
    Add RSS feeds to the discussion forums and other features of a learning management system to enable staff and students to start mashing up.
  • Integrating BAM into an existing LMS.
    BAM’s current use is limited to CQUniversity. BAM, at the moment, is essentially a set of scripts that integrate RSS feeds with several CQUniversity systems (online assignment marking, results processing, staff teaching responsibilities, student enrollment etc). This means it doesn’t make sense to sell or release BAM’s code (beyond having people look at it). Another institution would have to rewrite all of BAM to fit with its systems and practices.

    One solution to this might be to integrate BAM with a system like Moodle. Such systems should already have data about which staff are responsible for which students, which students are in which course, and so on.

  • Working closely with a range of different staff to explore and enable different applications of BAM, to extend its protean capabilities and leverage them to improve the learning and teaching experience.
    This is where the real benefit is. Working with staff with different purposes and problems to collaboratively identify approaches and necessary changes to BAM.

VoiceThread as a mechanism for feedback to students

Scott has a post discussing the potential benefits of using VoiceThread as a mechanism for providing feedback to students – both by staff and students. Based on some experience, I agree there is some potential, but I also think there are some issues to be looked at. Details follow.

Scott mentions VoiceThread in light of some discussion that arose at a session on course analysis and design. One of the participants raised using recorded voice as a way to mark/annotate student assignments. This was part of a session on teaching strategies, in particular those framed by Chickering and Gamson’s seven principles for good practice in learning and teaching.

If I’d been organised I would have mentioned the following within that session and given participants a chance to look at some of the example work. However, given time constraints and an increasingly forgetful mind I missed the opportunity. I hope this post might make up for that.

What we’ve done already

In the second half of last year I worked with Markus Themessl-Huber in a 3rd year special topic course for undergraduate psychology students. Our plans to use VoiceThread are sketchily detailed in this post.

Since then the posters have been prepared by students, they’ve been put in VoiceThread and been viewed by some local industry folk at a face-to-face session. But, sadly, due to some local institutional “issues” I have not followed up on this as much as I would like.

For the face-to-face session with local industry folk we made use of this page on our wiki to provide pointers to the students’ posters.

Perhaps one of the best posters (a very subjective measure) is this one on post-natal depression. It includes an introduction from the student and a comment from one of the industry folk attending the face-to-face session.


For various reasons this experiment didn’t achieve all of the goals we wished. However, it did suggest that there was enough of a benefit to continue to explore further uses of VoiceThread as a tool. As a first step towards that we’ve purchased a higher education manager account.

In terms of this experiment some thoughts include

  • Are the students prepared for this?
    I believe some of the students had some problems learning sufficient technical skills to prepare the posters using Word, Powerpoint etc. The students didn’t have to use VoiceThread directly to upload their posters. I wonder how difficult they would’ve found this, or perhaps how much extra work they would have perceived this to be and whether it was worth it.
  • Making things public?
    These posters are publicly available. Some people have some issues with making this work public.
  • Account management – especially to comment.
    VoiceThread requires the creation of yet another account, even if all you wish to do is comment. There are reasons for this, but I wonder if it further increases the perception of difficulty.
  • Other uses beyond feedback and presentation.
    Already someone else at CQU is keen to use VoiceThread for other purposes. She’s including VoiceThread as an alternate approach to developing learning materials for students, the first of which will be used in the first half of this year. It will be interesting to see how this goes.
  • The management interface isn’t there.
    The interface VoiceThread uses is pretty good for presenting work and getting comments on it, if you are working with individual presentations. In setting up the web page for the face-to-face session we had to deal with all the students’ posters. This was not easy. The interface didn’t provide the affordances necessary to easily work with large numbers of presentations. This potentially has negative implications for using it as a method for markers to make comments on student assignments. When you are marking assignments, efficiency is important.

    This is exactly one of the problems the BAM project had to deal with for individual student blogs. The added effort and the novelty of marking blogs caused some backlash from markers.

    I am not confident that VoiceThread is going to be as “mashable” as blogs were. A major enabler for BAM was that the output of blogs could be easily mashed up with other software for different purposes. I’m not sure that VoiceThread, with its Flash interface, is going to allow this.

What is a PLE? More than a suite of tools? More than social media?

Jocene and I are having a bit of a chat about PLEs, and in her last comment in that discussion she raises a number of questions and perspectives that are worthy of thought. So I’m starting a new blog post here, rather than making a comment (the inequity in power and ease of use between the editing tools/interface used to create a post and those used to make a comment – very limited – makes an interesting comment about the assumptions and affordances of a blog).

The issues raised by Jocene I want to consider include:

  • Is the notion of PLEs separate from those of Web 2.0/social media tools?
  • The connection of PLE with a set course.

A PLE is not a collection of Web 2.0 tools

In her comment Jocene makes the following point

I keep trying to separate the notion of PLEs and that of Web 2.0 social media tools. The latter may be used to construct various PLEs, but even the sum of these tools, in any PLE context, is still not the PLE itself. A suite of Web 2.0 tools is not a PLE.

My response/current belief can be summarised in two points:

  1. Yes I agree, but how else do you engage in this work?
  2. The tools have to come first, don’t they? No?

How do you do it? or Is a Web 2.0-based PLE better?

This section turned into something different as I was writing. The original point was to ask what more a university could do to enable students’ use of PLEs without focusing on Web 2.0-based technologies. I’ll get to that, but first I’m reflecting on my experience and wondering whether or not a Web 2.0-based PLE is better than a traditional one.

A personal collection of tools to support your learning is nothing new. We’ve all done it. I had a collection of folders and loose leaf paper that I used at University back in the 80s to supplement my textbooks and handouts from the academics. Managing this collection of resources/tools effectively was half the battle of learning. I imagine I did some, perhaps many, of the tasks that Graham Atwell outlined in the slidecast that kicked off this conversation.

The following wasn’t planned. It arose out of thinking about this problem and the idea of comparing my old PLE with my new PLE using the tasks outlined by Graham Atwell in his slidecast.
Those tasks included:

  • access and searching;
    This was incredibly time consuming and poorly supported in the old style. The Internet, Google and my blog provide a much better set of tools to support this. Both individually and also socially, collaboratively with others.
  • aggregating and scaffolding;
    In the 80s this was called photocopying and placing in folders – rarely to be accessed again. What I did access was put into structures and frameworks that perhaps helped me understand.
  • manipulating;
    Much of the learning I have undertaken has always been around ideas and information – computer science, information systems, learning and teaching, philosophy etc. – in part due to the nature of the disciplines but also the nature of teaching/learning. Manipulating the artifacts associated with this learning in my old PLE was laborious. In fact, if I attempt to do this now – e.g. write more than my signature – my body rebels, aches and generally says “stop!”. As a techie, the manipulations I can perform in my new electronic PLE are so much easier, more powerful and more interesting.
  • analysing;
    Given that I eventually graduated there must have been some analysis of the content of my old PLE. I must have worked out what some of it meant. I believe my new PLE is orders of magnitude better at helping me in this analysis. The ability to access hugely diverse opinions and have tools like Google, Wordle and many others to perform various forms of low-level analysis is a great help.
  • storing;
    The question of long-term viability is still open. Moving from my old website to this blog has probably led to some loss of information. But keeping information is getting easier. I certainly have very little of the content from my 80s PLE. The multimedia nature of the new PLE, however, is a significant improvement. On my laptop I have videos and audio that I consider important. I also think there is something to be said for the way that my new PLE makes it much easier to store information/learning in a fragmented form, which is a good thing, really it is.
  • reflecting;
    Did my old PLE help in terms of reflection? To some extent. But the private nature, and the difficulty of manipulating, storing, accessing and searching that old PLE, certainly placed significant constraints on it. My new Web 2.0 PLE makes reflection much easier. It lets me find and link my thoughts together. The form of a blog and its connection to a diary also help encourage reflection. (Not to say that the technology determines this – it takes discipline and motivation on my part – but the affordances of the new PLE help.)
  • presenting;
    My old PLE led to presentations only in the form of formal, necessary presentations. To some extent that remains true, but even this post is a form of presentation, perhaps of representation. Trying to show my understanding. Even this bit of reflection is available as a “presentation” for others. The combination of presentation and reflection add meaning, at least for me as the author.
  • representing;
    I’m not sure I got Graham’s meaning on this one. However, the word suggests to me representing the meaning and identity I place on what I’ve learnt in my PLE. Have I got it wrong? My old PLE had very little connection with me. If someone picked up the collection of folders and textbooks there wouldn’t be a lot in it that represented me. The odd comment, not a lot of reflection. With my blog, it’s a different story. There are photos of what I experience; there are small personal stories intermixed with the work and learning. Does there need to be a separation between learning and other aspects of life? Certainly in my PLE (my blog) there isn’t one.
  • sharing;
    With my old PLE I could do little or none of this. Access to those folders and their contents was not readily available to other folk, and the searching was poor. With my blog I’m currently averaging around 50 or so “sharing events” a day as people visit the resources on it. They generally come here through links on WordPress, elsewhere on the web, or through Google searches.

However, during my undergraduate education I certainly didn’t engage in the move from “sanctioned knowledge” to “collaborative forms of knowledge construction”. I didn’t talk to many folk; I worked on my own with the “sanctioned knowledge” and my own constructions. I was almost certainly the poorer for it, and am now engaging differently through the Web 2.0 tools. Why the difference?

I’m certainly more mature and open about learning, perhaps I just wasn’t ready for it as a kid. I also know that the Web 2.0 tools, like blogs, have a much greater affordance for the type of “collaborative forms of knowledge construction” that I prefer. i.e. I don’t particularly like synchronous, group-based, warm and fuzzy co-operation. I prefer to be on my own, considering what lots of others have said and done and working through my own ideas.

On the basis of the above, it looks like, at least for me, a Web 2.0-based PLE is a tremendous improvement over a traditional non-Web 2.0 one. Too many of the tasks which Graham Atwell suggests you want to do in a PLE are much easier and more effective with the assistance of Web 2.0 technology. If it is better, should we look at helping people use it?

Question: Is this part of the difficulty we face with PLEs? A Web 2.0/social media enabled PLE is, because of the affordances of the technology, a completely different kettle of fish. Think of the difference between written, personal communication implemented in the 17th century and implemented now in the 21st century. It’s a completely different ball game. Perhaps the whole PLE thing is getting too bogged down with the “yea, we’ve always done it” stuff. Perhaps we haven’t always done it; perhaps it is so different that relying on the old patterns of thought is preventing innovation?

I’m still thinking about this myself.

Back to the original point I was thinking of making. If you decide that students have always made use of a PLE via traditional approaches, just like I did back in the 80s, then what more can we do to support students in using that traditional form of PLE? And if you assume that in some institutions, like CQUniversity, more and more of the learning experience will be moving online, then is there anything we can do?

I’ve always believed that it’s not the task of the university to build or specify a PLE for students. Whether it be “traditional” or Web 2.0. The services a university could perform to help students use PLEs, seem to me, to be:

  • Open up its learning activities, resources and services so the student can use the tools they select to perform the tasks Atwell points out.
  • Because this idea is somewhat novel, scaffold and aid the development of individual PLEs, in whatever form, to learn some lessons and see where things go.
  • Learn from what worked and from what didn’t and continue.

PLEs and university courses

In her comment Jocene makes the following point

Conceptually, there is no reason why my PLE needs to service, or make me accountable to a set course (in which I may be enrolled) if my way of knowing (principle 2) does not match that of the course designer. Conceptually, I will learn when I am ready to learn, and I will select the evidence I need from seemingly infinite data, to bring me to the realisation that I know something.

I agree entirely with this perspective. I believe that the amount of learning an individual will go through with a formal learning organisation (like a university) pales almost into insignificance against the amount of informal learning.

The “services” I listed at the end of the previous section seem to allow for this. The emphasis is on opening up the university’s courses to allow students to use the PLEs they choose, the assumption being that this is eventually the same PLE students use to service the broader array of learning experiences they have. In addition, the “opening up” of university courses may also include developing and helping academics use course designs that allow for more freedom and diversity in how a student travels through a course.

Back to watching the Super Bowl.

How do you implement PLEs into higher education courses?

Jocene reflects a bit upon a slidecast (titled “Personal Learning Environments: The future of education?”) by Graham Atwell.

I tend to sense a touch of frustration in the post. Which I don’t think is at all surprising since the question “How do you implement PLEs into higher education courses?” is extremely complex. Doing anything to change learning and teaching within higher education is extremely difficult. This is made almost impossible when it is something that potentially brings into question not only the pedagogical practice of individual academics, but also the assumptions underpinning the administrative and technological bureaucracies that have accreted within tertiary institutions.

Institutions of higher education have essentially failed to implement “enterprise e-learning” in a way that caters for and values the diversity inherent in university teaching. I’m somewhat pessimistic about its ability to implement e-learning that caters for the diversity of university students – an order of magnitude (or two) greater level of diversity.

A way forward

That diversity is, for me, a clue to a way forward. Implementing PLEs within higher education is about a focus on the potential adopters, both the teaching staff and the students – answering questions like “What do they want?”, “What do they do?” and “What is a problem you can solve for them that makes a difference?” with something that is related to, or at least moves them towards, the ideals of a PLE.

However, answering these questions should not be done by simply asking them. When it comes to something brand new, something that challenges established ways of doing things, asking people “what would you like to do with X?” is a waste of time. If they tell you anything, they will tell you what they’ve always done.

I wonder if this explains the current suggestion that the next generation of students don’t want their university life mixed in with their social life. They don’t want universities getting into Facebook and other social spaces. If the students haven’t seen good examples of how this might work, you can’t really rely on their feedback, they don’t know yet.

I’ve talked about the 7 principles of knowledge management and in particular principle #2

We only know what we know when we need to know it. Human knowledge is deeply contextual and requires stimulus for recall.

Which is why I like this comment from Jocene

He talks about the need to contextualise the PLE. Well, yes. My colleague and I have decided to push ahead with our own contextualised understanding, so we can start to reflect upon rather than speculate about our PLE work.

Get stuck in, try a few things and then reflect upon what worked and what didn’t. What did the students like, what might be better? This sounds like a much more effective way than researchers and prognosticators extrapolating what they think people will need and how they should use it. However, I think Jocene’s next quote highlights the difficulty in drawing a boundary between teleological and ateleological design.

But we still keep getting stuck, half way over the implementation hurdle! If we teleologically suggest a way forward for any group of learners, then we are not facilitating a PLE, we are imposing our values.

Traditionally, e-learning within universities is teleological, and because the nature of teleological design is a complete and utter mismatch with the requirements of e-learning, problems arise. Some colleagues and I have pointed these problems out in two publications (Jones, Luck, McConachie and Danaher, 2005; Jones and Muldoon, 2007).

One defining characteristic of teleological design is that the major design decisions are made by a small group of experts and/or leaders. Their decisions are meant to be accepted by the rest of the group and are meant to be the best decisions possible. I think Jocene is worried about this type of “imperialism” within PLEs. If she and her colleague make decisions about what should be done, aren’t they being teleological?

They don’t have to be, but it does depend on how you go about it. To my mind you reduce the teleological nature of your decisions by doing the following

  • make small changes to existing practice;
  • ensure that the changes solve problems or provide new services that will be valued by the participants;
  • ensure that you will learn lessons/try new things/make different mistakes than you have before;

i.e. you are doing safe-fail probes rather than fail-safe design. This is a distinction from Dave Snowden, which is talked about more here.

What does that mean for PLEs in higher education

Some quick thoughts on what this might mean in concrete form for implementing PLEs in higher education:

  • Modify existing e-learning infrastructure to enable it to work with the Web 2.0/mashup/PLE technology and approaches.
    e.g. generate RSS feeds out of various course management system (CMS) features and make them available to students and staff.
  • Use the “web 2.0’ifying” of the CMS to build features that solve problems or provide better services for staff and students.
  • Build some examples using these services (of PLEs) in existing social media applications – the obvious one is probably Facebook – but this decision should be guided by some of the following.
  • Don’t be exclusionary, don’t focus all efforts on one type of PLE or social media application.
  • Implement strategies and techniques to really engage with the students and staff to learn about what they do. NOT what they say they do, but what they actually do and experience. Use this insight to guide the above.
  • Use the strategies and techniques in the previous point to observe what happens when staff and students do (or do not) use the PLE services and use that insight to identify the next step.
  • Ensure a tight connection with and awareness of what other interesting folk are doing in this area and use it to inform the design of the next safe-fail probes you are doing.
  • Try not to do too much for staff or students. The whole notion of the PLE is that you are empowering them to do things for themselves. If the “instructional assistant/designer” does too much for them it breaks this ideal and it also doesn’t scale.
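The first point above — generating RSS feeds out of existing CMS features such as discussion forums — is mostly a matter of rendering what the CMS database already holds. As a hedged sketch (the function name and the shape of the post records are my inventions, standing in for whatever the CMS actually stores):

```python
import xml.etree.ElementTree as ET
from email.utils import format_datetime

def forum_to_rss(forum_title, posts):
    """Render discussion-forum posts as a minimal RSS 2.0 feed.

    posts is a list of dicts with 'title', 'author', 'body' and
    'posted' (a timezone-aware datetime) keys -- a stand-in for
    rows from the CMS forum tables.
    """
    rss = ET.Element("rss", version="2.0")
    channel = ET.SubElement(rss, "channel")
    ET.SubElement(channel, "title").text = forum_title
    for post in posts:
        item = ET.SubElement(channel, "item")
        ET.SubElement(item, "title").text = post["title"]
        ET.SubElement(item, "author").text = post["author"]
        ET.SubElement(item, "description").text = post["body"]
        # RFC 822 date format, as RSS 2.0 expects.
        ET.SubElement(item, "pubDate").text = format_datetime(post["posted"])
    return ET.tostring(rss, encoding="unicode")
```

Once the CMS exposes feeds like this, staff and students can pull forum activity into whatever aggregator or mashup suits them.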


David Jones, Jo Luck, Jeanne McConachie and P. A. Danaher (2005). The teleological brake on ICTs in open and distance learning. In Proceedings of ODLAA 2005.

David Jones and Nona Muldoon (2007). The teleological reason why ICTs limit choice for university learners and learning. In ICT: Providing choices for learners and learning. Proceedings ASCILITE Singapore, pp. 450-459.

Some resources around blogs and discussion forums

Empty party room

In a previous blog post I tried/am trying to kick off an experiment in using a blog for a multi-person discussion as an attempt to answer a question we will have to address as part of the PLEs@CQUni project.

I’m hoping this is a party to which a few others will come.

This post is an attempt to illustrate one answer to the “mechanics” question, i.e. how might you do this and also to provide some pointers to existing information on this topic.

The mechanics

I’m posting this on my own blog. If I include a link to the previous post (as I did in the first sentence of this post), WordPress automatically tells the other blog, which then adds a link to the new blog post. The author of the original blog post gets an email from WordPress saying that someone has linked to the post. The linkage also shows up in the management interface of WordPress.

If you visit the previous blog post you should now see a link back to this post towards the bottom.

In theory, this lets each of the participants know when someone comments on their posts. It provides a set of connections between the different blogs – a way of generating a view of the discussion.
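Under the hood, the notification WordPress automates here is the pingback mechanism: a single XML-RPC call, `pingback.ping(sourceURI, targetURI)`, sent to the target blog's XML-RPC endpoint. As a rough sketch of the mechanics (the helper name is mine), the request body can be built with Python's standard library:

```python
import xmlrpc.client

def pingback_request_body(source_url, target_url):
    """Build the XML-RPC payload one blog sends to another to say
    'source_url now links to target_url' (the pingback.ping call)."""
    return xmlrpc.client.dumps(
        (source_url, target_url), methodname="pingback.ping")
```

The receiving blog parses this payload, verifies that the source page really does link to the target post, and only then records the link you see at the bottom of the post.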

Some resources

This is not a new exercise; some existing information includes:

An experiment in blog-based discussions

One of the major tools used (and mis-used) in most university-based e-learning is the discussion forum, or mailing list, or some other form of software for managing/creating multi-person dialogue. The PLEs@CQUni project is attempting to figure out and experiment with social media tools as a way to improve existing practice. An obvious need is to identify if, how and with what limitations these tools can be used to manage/create multi-person dialogues of the sort most academic staff associate with discussion forums.

The perceived need for this type of identification is mostly pragmatic. It is based on the observation that the decisions and actions people take are mostly based on patterns formed by previous experience. This is why most e-learning continues to be of the “horseless carriage” type. Being able to show academic staff that a new technology can re-create aspects of previous practice is an important step in getting them to move. This is the first step in the journey. We have to help get them out the door.

The question and assumptions

Chances are that blogs are going to be a major component of a PLE. Some of the more interesting work in this area certainly suggests this. So the modified question for this post and the activity I hope will arise from it is

What are the mechanics, benefits and limitations of using individual blogs to manage a multi-person discussion?

The idea is not to have a single blog on which everyone posts. The idea is to encourage the PLE type assumption where each participant in the discussion has their own blog and uses their blog to make their contribution.

The assumption should be that, if possible, each participant in the conversation can have their own blog on a different provider, i.e. everyone shouldn’t have to get a blog from the same provider to engage in the discussion.


This should be a type of action research. We’re not going to talk about it. We’re actually going to try and do it, and learn from the doing. This blog post will serve as the first part of an artifact that will arise out of this process. Those who participate will attempt to use this post as the first part of a conversation, by using their own blog.

Your task is to use whatever blog-based means you like to continue this conversation. The aim of the conversation is to discuss and come to some conclusion about the question listed above.

I will kick the ball rolling by sharing some resources arising from a quick google. A link to it should appear below ASAP.

Kant – separation of reason and experience


I’m slowly working through some PhD related work (the post on the paper I’m reading will come out later today) and that brought me across the following description of an argument of Kant’s from the wikipedia page on Kant

Kant argues, however, that using reason without applying it to experience will only lead to illusions, while experience will be purely subjective without first being subsumed under pure reason.

I haven’t time to follow up on this or to go to the original source, so the following may suffer from that. However, I interpret this as being very connected to what I’m currently doing and writing about.

Separation of expert analysis/design and lived experience

My understanding is that Kant was arguing against both the empirical and the rational view of the world/philosophy. To some extent (possibly doubtful in its validity) I see a connection here with some of the problems I’ve been writing about.

The rational world, in my thinking, can be ascribed to aspects of the “expert designer” approach. An expert/consultant/designer in information technology, curriculum, organisational structure applies a range of theories and rules of thumb to design a solution. Such an expert has varying but only small amounts of experience with what actually goes on in the context.

For example, a curriculum designer doesn’t really know what goes on in a course. What the students experience, what the staff say and do etc. What knowledge they do have is based on less than perfect methods such as observation, evaluation results and self-reporting of the students and staff.

The lack of understanding of the lived experience limits what they can see and do. They generally aren’t aware of, or abstract away, the complexities of connections between elements within such a system (I should point out that I’m talking primarily about design that happens within human organisations).

As a result of this lack, any solution is likely to be less than perfect.

On the other hand, the academic who is teaching the course (typically) has a large amount of lived experience. A deep understanding of what happens in the course. However, it will be somewhat limited by their patterns and what they are trained to see. In addition, (typically) they will also have no understanding of the various theories and rules of thumb that can help understand what happens and design new interventions.

So, as the Wikipedia author ascribes to Kant: solutions developed purely by an expert designer, without experience, will lead to illusion, while a solution based solely on experience will be purely subjective.

There needs to be a strong and appropriate mix of reason and experience. The right mix of practice and theory.

Implications for information technology

I wonder what this perspective would say about information technology development projects that develop entire systems divorced from experience/reality until they are completed and ready to be put into place?

Implications for the PLE project

For the PLEs@CQUni project this implies that the research project, in order to encourage use of PLE related concepts within learning and teaching, needs to be informed by both theory and experience.

More on the expert designer – efficiency and effectiveness

A previous post has received a comment which I want to follow up on. The interface for writing a post gives more opportunity to be creative than that provided for adding comments.

A clarification of the intent

Due to a few factors my intent may not have been clear. So one more attempt at clarity.

Let’s concentrate on one level, rather than the 3 or 4 I used in the original post. Perhaps the most connected to my current work is that of teaching and the common saying that modern teachers need to “not be the sage on the stage, but become the guide on the side”.

Sage on the stage

This is the age-old image of the university course and its face-to-face sessions. The primary purpose of the professor is to analyse the topic area, identify what is important and deliver it to the students. The professor is the expert designer: the sage on the stage.

The content of the course is packaged into a format entirely controlled by the professor, a format that fits the expert designer’s conception of what it should look like.

Guide on the side

The alternative recognises that the learner needs to be much more in control of their learning. They have to actively construct learning through activities, tasks and approaches that are most suitable for them.

In this model, the professor gives up much of the control associated with the expert designer approach. Instead they concentrate on providing scaffolding, encouragement and guidance to aid the learner in their journey through the content. The design of the specific learning experience is largely the responsibility of the learner.

Spectrum not a dichotomy

It should be pointed out that this is not a dichotomy. You don’t have two extreme boxes: at one end the expert designer option in which the designer controls all, and at the other each individual doing their own design.

Instead there is a full spectrum of approaches in between, where the control of the designer becomes less and less.

A software example

A software example would be WordPress without plugins: any and all new features for WordPress would be under the control of the WordPress software developers, the expert designers.

By providing support for plugins, WordPress allows aspects of control and design to be broadened to a more diverse group.
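The plugin idea can be sketched in a few lines. This is a minimal illustration of the general pattern (in Python rather than WordPress’s PHP, and with all names invented), not WordPress’s actual API: the “core” defines hook points, and anyone can attach new behaviour without the core developers being involved.

```python
# A minimal plugin registry: the "core" defines hook points,
# plugin authors attach callbacks without touching core code.

from collections import defaultdict

class PluginHost:
    def __init__(self):
        self._hooks = defaultdict(list)

    def register(self, hook_name, callback):
        """A plugin author calls this to attach behaviour to a hook."""
        self._hooks[hook_name].append(callback)

    def run_hook(self, hook_name, value):
        """The core calls this at its extension points, piping the
        value through every registered plugin (filter-style)."""
        for callback in self._hooks[hook_name]:
            value = callback(value)
        return value

host = PluginHost()

# A third-party "plugin" the core developers never saw.
host.register("render_post", lambda text: text + "\n-- shared via MyPlugin")

print(host.run_hook("render_post", "Hello world"))
```

The point of the pattern is the one made above: the designers of the host give up total control of the feature set, and a more diverse group can do its own design.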

The comments

There will always be experts because it is more efficient for an organization to use division of labour techniques to maximize greater skill levels and greater productivity as a whole.

There are three ways I’d respond to this:

  1. There is more to life than efficiency.
  2. The measurement of efficiency is a highly questionable exercise.
  3. I’m not sure the “expert” route is always more efficient.

More to life than efficiency

I can think of two characteristics that are often in competition with efficiency.

  1. Effectiveness

    Teaching a course with a single academic is considerably more efficient than teaching it with five. However, doing it with five might produce a more effective outcome: more academics means more people marking, which might mean quicker turnaround on feedback and better quality and quantity of feedback, which probably means better learning.

  2. Ability to adapt
    When things change you have to have some “fat” to enable change. In terms of organisations and innovation, Christensen’s disruptive innovation work seems to indicate that having and allowing different approaches is actually a good thing.

Measurement of efficiency is questionable

How and who measures what is efficient?

About 6 or 7 years ago I was fighting battles with a group of “expert designers” responsible for the institutional ERP. The group I worked with had created a web-based system for academic staff to view data in the ERP (i.e. student records). The system did a lot more than this, but this was the focus of the ERP group.

One of their arguments was that having two systems was inefficient. Instead of using our duplicate (shadow) system, academics should be using the ERP provided system. It was more efficient this way. The university didn’t have to support and maintain two different systems.

That sounds right, doesn’t it? If you based your assumptions solely on what appeared in the university accounting system, you would be right.

However, if you knew the organisation in a little more detail than is captured in the accounts, you would be aware that the ERP system’s approach was taking academic staff 20 minutes to generate a simple list of students in a course. And this is one of the simplest tasks academics needed to do.

The web-based duplicate system we’d developed could do it in under a minute.

Reliance on the ERP system was requiring at least one faculty to employ additional staff to perform this task for the academics. In other faculties, academics were having to waste their time performing this task or weren’t doing it.

Is that efficient?

The expert route isn’t always efficient

I think the above story also illustrates how the expert route isn’t always more efficient. Sometimes (often?) the experts get caught up in the law of instrument. They did in the above case: all they had was an ERP, so they had to solve every problem with the ERP, even though it was inefficient and terrible.

Trust and the expert

The organisation has to trust the experts to provide the information from their area of expertise.

One of the problems with experts is the law of instrument: they start to see every problem through the lens of their expertise, even when it isn’t appropriate.

Experts, especially those in support/service positions, tend to over-emphasise the requirements of their expert area over the broader needs of the organisation.

PLEs and experts

I would have thought that PLEs would lead to MORE specialization as it is far easier to build a targeted learning path to turn out experts.

I think we’re getting back to the area of confusion.

Currently, when it comes to providing the tools for students to use for e-learning, most institutions use the expert designer approach. The IT unit goes out and evaluates all the available tools, makes the most appropriate choice, and everyone uses it.

The extreme PLE approach is that the institutional experts don’t select or design anything. Each individual student takes on the role of designer: they are more familiar with what they have used before and what they want to do, so they do the design.

In reality, at least in the work we’ve done so far, the truth is somewhere in between. The institution minimises the design of technology but still provides some scaffolding, some direction and support to help the learners make their own choices.

Tool users, research, hammers and the law of instrument

The following quote is from Hirschheim (1992) and questions the practice of research/the scientific method:

Within this context the researcher should be viewed as a craftsman or a tool builder – one who builds tools, as separate from and in addition to, the researcher as tool users. Unfortunately, it is apparent that the common conception of researchers/scientists is different. They are people who use a particular tool (or a set of tools). This, to my mind, is undesirable because if scientists are viewed in terms of tool users rather than tool builders then we run the risk of distorted knowledge acquisition techniques. As an old proverb states: ‘For he who has but one tool, the hammer, the whole world looks like a nail’. We certainly need to guard against such a view, yet the way we practice ‘science’ leads us directly to that view.

Using a hammer to make an omelette

I’ve used this image in a recent presentation as a background to an important point that I’ve hammered again and again and again. “If all you have is a hammer, then everything is a nail”.

Apparently this is called the law of instrument and comes from Abraham Kaplan’s The Conduct of Inquiry: Methodology for Behavioral Science, first published in 1964.

Information technology

There is a false dichotomy often trotted out in the practice of information technology: buy versus build. The impression being that “building” (being a tool builder) is a bad thing as it is wasteful. It’s seen as cheaper and more appropriate for the organisation to be a tool user.

As the “buy” option increasingly wins over the “build” option I believe I am increasingly seeing the law of instrument raise its ugly head within organisations. The most obviously bad example of this I’ve seen is folk wanting to use a WebCT/Blackboard course site for a publicity website. But there are many, many others.


You can see this in the group of staff (and institutions) who have “grown up” in e-learning with learning management systems. Their hammer is the LMS, and every learning problem gets beaten with it because every problem is seen as a nail.

This is especially true of LMS support staff who do not have a good foundational knowledge of technology and of learning and teaching. Every problem becomes a question of how to solve it within the LMS, even though the LMS may be the worst possible tool: like making an omelette with a hammer.

Asking tool users what they’d like to do

A common research method around new types of technology in learning and teaching sees the researcher developing a survey or running focus groups. These are targeted at a group of people who are current tool users, for example students and staff of a university currently using an LMS. The research aim is to ask these “tool users” what they would like to do with a brand new tool, often one based on completely different assumptions or models from the tool they are using.

This approach is a bit like giving stone-age people a battery-powered knife (I was going to say an electric knife, but there’s no electricity; the point is that the knife is powered and cuts “by itself”). They’d simply end up using it like they use their stone axes, banging on whatever they are cutting. They have been shaped by their tool use. They will find it difficult to imagine the different affordances the new tool provides until they’ve used it.


I believe this was the context in which Kaplan first formulated the law of instrument: folk who get so caught up in a particular research methodology that they continue to apply it in situations where it no longer works.

"Big" systems – another assumption "PLEs" overthrow

This is a continuation of my attempt to develop a list of the assumptions underpinning existing practice around learning and teaching at universities which the concepts surrounding the Personal Learning Environments (PLE) term bring into question. The list started with this post and is continuing (all the posts should be linked from the bottom of the original post).

“Big” systems

Since the mid to late 1990s most higher education institutions have been adopting the “big” system fad. The “big” system fad is the adoption of really expensive information systems that do everything. What I call the “one ring to rule them all” approach.

The positive spin on these systems is that they are “enterprise” systems: they embody best practice and promise a single system to unite all required tasks. It’s all neat and tidy, and a load of crap.

Most of this came out of interest in enterprise resource planning (ERP) systems, which grew out of systems to manage planning for manufacturing. These morphed into systems to manage the entire operations of organisations.

Around the mid-1990s these started being applied to universities. Around the same time learning management systems (LMS) arrived on the scene. They were soon sold as “enterprise” learning management systems: single systems to manage all of the “e-learning” of an organisation.

Senior management like these big systems because of the promise. Buy the “big” systems and all your problems will be over. You won’t have to do anything else. You will be able to get the entire organisation using this system. Everything will be consistent. You will be able to understand it, because it is simple, and subsequently manage and control it.

Information Technology units like “big” systems, at least at the beginning, because, for various reasons, it gives IT control. They are given the power to require people to comply and use the big system because it is so expensive everyone must use it.

Because it is expensive it is important. Because it is an IT system that is important, the IT unit is then important to the business.

Problems with big systems

The Wikipedia page on ERPs offers a good list of the disadvantages of these systems. Many of these become hugely problematic when a “big” system is applied outside of its sweet spot, i.e. contexts similar to manufacturing. Contexts which are not standardised in process, components or outputs (e.g. learning and teaching) suffer under the weight of a “big” system.

Some examples, drawing on the disadvantages listed on the Wikipedia page:

  • Customisation of the ERP software is limited
    Standard best-practice advice with “big” systems is “do not customise”. This means your organisation and everyone in it must use the system in exactly the same way as everyone else. It is almost impossible to customise the system for local contextual needs.
  • Loss of competitive advantage
    This means your organisation cannot differentiate itself from any other competitor that is using the same “big” system.
  • Once established, switching costs are very high
    They cost so much to implement you can’t easily change. This leads to “stable systems” drag (Truex, Baskerville and Klein, 1999) where the organisation struggles against the problems of an inappropriate system because it is too expensive/difficult to change. Eventually the huge investment of resources is made and the change is made. Usually to another “big” system and the problem starts all over again.

    Supposedly, open source “big” systems are the solution to this problem. Sorry, no they aren’t. The cost of the actual software, which is the bit that is “free” with open source, is the smallest part of the cost of a “big” system. You end up saving this very minor portion of the cost and still retain all the other problems of a “big” system.

  • The system may be too complex for the needs of the customer
    A University I know of selected Peoplesoft as its ERP. The Peoplesoft ERP system grew out of human resource management. It was, I’m told, known for being a good HRM system. From there it grew to have a number of other components added – finance, student administration etc. Guess which part of the ERP this university did not implement – the HRM system. It was too hard given existing contextual constraints.

    How much of the functionality of an “enterprise” LMS is actually used by the staff and students of a university? If my experience is anything to go by, bugger all.

The biggest problem with these systems is that they attempt to do everything. This means that a single supplier, or in the case of open source a single community, must provide all of the necessary functionality. If you want a kerfuffle widget for your “big” system, you have to wait for the vendor/community to develop a kerfuffle widget.

Small pieces, loosely joined

The PLE concept builds on the idea of small pieces loosely joined. That is, there are lots of different Web 2.0 tools, and all (most?) of them concentrate on doing one thing, providing one service. Flickr helps folk share photos, WordPress helps folk blog, etc. Each of these systems is loosely joined through a combination of feeds and open APIs.

Take a look at the home page of my blog. Down the right hand column you will see two collections of photos. One collection taken by me and another containing those photos on Flickr that I’ve used in my presentations. How do I get these into my blog?

  • Upload them into my account on Flickr/tag them as a favourite in Flickr.
  • Use FlickrRiver to create two HTML badges of random photos: one with the most interesting photos I’ve taken, the other from my favourites.
  • Use WordPress’ features to add the HTML for these two badges to my blog.

Each of these tools does one thing well. They are loosely joined via an open API (Flickr to FlickrRiver) and HTML/“API” (FlickrRiver to WordPress).
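As a rough illustration of this loose joining, the sketch below (in Python, with a hard-coded sample feed standing in for a real syndication feed; the feed structure and all names are simplified inventions, not Flickr’s actual schema) joins two “small pieces”: a feed producer and an HTML badge consumer.

```python
# "Small pieces loosely joined": one tool publishes a feed,
# another consumes it and emits an HTML badge for a blog sidebar.

import xml.etree.ElementTree as ET

# Simplified stand-in for a real photo-sharing feed.
SAMPLE_FEED = """<rss version="2.0"><channel>
  <item><title>Sunset</title><link>http://example.com/1</link></item>
  <item><title>Harbour</title><link>http://example.com/2</link></item>
</channel></rss>"""

def feed_to_badge(feed_xml):
    """Turn any feed with <item><title>/<link> entries into an
    HTML badge. The consumer knows nothing about the producer
    beyond the feed format: that is the loose joint."""
    root = ET.fromstring(feed_xml)
    links = [
        '<a href="{}">{}</a>'.format(item.findtext("link"),
                                     item.findtext("title"))
        for item in root.iter("item")
    ]
    return "<div class='badge'>" + " ".join(links) + "</div>"

print(feed_to_badge(SAMPLE_FEED))
```

Swapping in a different feed producer (or a different badge consumer) requires no change to the other piece, which is exactly why waiting on a single “big” system vendor for the same functionality compares so poorly.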

Imagine how much effort, how much expense and how long you would have to wait for the vendor/community of a “big” system to provide that functionality. Does anyone think that such a tool would be easier to use and have more functionality than Flickr, FlickrRiver and WordPress?

Moving beyond the expert designer

Another major advantage of this approach is that it enables the “death of the expert designer”. Adding the FlickrRiver HTML badge to get my random collection of Flickr photos did not require me to go to the software designers of WordPress and ask them to add the ability to add random photos to my blog.

I could do it myself.

This removes a bottleneck within organisations.

Not without its problems

As others have pointed out the idea of small pieces loosely joined is not without its problems. These have to be looked at and worked out. Based on the work I’ve done I don’t see the potential problems as unsolvable and I certainly see the benefits far outweighing the problems.


Truex, D., Baskerville, R., & Klein, H. (1999). Growing systems in emergent organizations. Communications of the ACM, 42(8), 117-123.

Expert designer: Another assumption PLEs question

In a series of blog posts (starting with this one) I’ve been trying to develop a list of fundamental assumptions about learning and teaching at universities which the various concepts associated with personal learning environments (PLEs) bring into question.

This post attempts to add another.

The expert designer

Within the practice of learning and teaching at universities there are a number of levels that assume the need for an expert designer (or a small group thereof). These include:

  • Senior management (and their consultants);
    Any important decision must be made by the small group of senior managers. Typically they will draw on “experts” to provide analysis and recommendations and then the senior management (or manager) will make the decision.

    Senior management is difficult and requires great skill and foresight, and consequently couldn’t just be left to normal people. They don’t have the skill.

  • Learning design; and
    The design of a university course is performed by the academic (or a small group thereof) with demonstrable discipline expertise in the form of a PhD. They might be aided by their consultants, the instructional designers and other technical staff, but in the end it is the academic staff who make the decisions.

    After all, learning all about a discipline area is difficult. It requires great depth and breadth of knowledge to understand how best to do this. You couldn’t leave this sort of thing up to the learners. They don’t have the knowledge to do this.

  • Provision of information technology systems.
    Information technology is complex and complicated. There is a broad chasm between looking after your home PC and managing large, complex and important enterprise systems. Such a task requires enterprise IT experts, and their consultants, to make these difficult decisions and ensure that the organisation isn’t losing money.

    You can’t simply leave information technology decisions up to the end-user. They don’t have this breadth and depth of knowledge. They would make mistakes. It would waste resources.

And the list could go on for each of the professional groups or divisions that infest a modern university. Ordering textbooks, booking travel, looking after gardens and buildings, all of these activities, as implemented in a modern organisation, assume that there is a need for the experts to take control.

Problems with this approach

The main problem with this tendency is “one size fits all”. The central, small group of designers can never fully understand the diversity of all of their clients and in many cases could never efficiently provide a customised service to each of them. For example, a university course is never customised for an individual student’s pre-existing knowledge – even though this is one of the things we know is important for learning.

Web 2.0, social media and other advances in technology are bringing this practice into question. Increasingly there are abundant, inexpensive and simple to use tools which users can adopt, and more importantly, adapt to their own preferences. These tools, through the use of standards, can be used to access organisational services (if they are configured appropriately).

It’s becoming possible for end-users to use the tools they already know, rather than being forced to use the tools selected by the central IT folk of the organisation they work for or study at.

Related to this, the approach assumes that the “non-experts” actually need the input of the experts. Increasingly with IT they don’t. Similarly, many folk can learn things quite effectively through informal learning, without the need for the discipline expert. The supposed rise of, and need for, lifelong learning means this trend should only increase.

Following on from this is the assumption that the experts really are experts. I’m sure anyone that has worked within an organisation can point to organisational decisions which demonstrably suggest that the experts weren’t so expert.
