Assembling the heterogeneous elements for (digital) learning

Month: October 2009

Lectures, alternatives, poll everywhere and unexpected events

This Wednesday I’m involved with an experiment and presentation that is seeking to test out some alternatives for lectures/presentations. As it happens, the last week has brought a couple of events that are (so far) helping the case for the experiment. These are described below.

And now for a word from our sponsors…

The aim of the experiment is to break out of the geographic limitations of participation in lectures/presentations. Anyone with a web browser can participate (a Twitter account and mobile phone will increase your ability to participate, but aren’t necessary). The more people who use these media, the better. So you are invited.

More detail on the experiment/presentation page.

We return now to your regularly scheduled program

Being bumped

I work at CQUniversity. The university has 4/5 regional campuses spread across a fairly broad geographic area. A significant number of courses are offered across all of those campuses. A common approach for some years has been for lectures for these courses to be given from one campus and broadcast across the other campuses via the Interactive System-wide Learning (ISL) system. Essentially a video-conference system with specially built rooms at each of the campuses.

This approach is becoming embedded in the operations of the institution, to such an extent that the ISL rooms are becoming a resourcing bottleneck. Apart from teaching, these rooms are also used for research presentations and meetings. It’s getting to the stage where booking one of these rooms is simply impossible.

Originally, the experiment was scheduled to use one room on each of the campuses:

Rockhampton – 33/G.14. Bundaberg – 1/1.12. Gladstone – MHB 1.09. Mackay – 1/1.01.

On Friday I was told that we’ve been bumped from the Mackay room. Apparently someone senior needs the Mackay room for an ISL session that is more important than my experiment.

Normally, this would have meant Mackay staff would miss out on the live presentation. They’d have to rely on the recorded presentation.

Not now. Theoretically, they should be able to participate the same as people off campus. I’m actually happy about this, it gives me a practical story to tell about why this approach might be useful. It will be interesting to see what problems arise.

PollEverywhere Polls and results

Over the weekend, while avoiding work on the presentation, I came across this post from Wes Fryer. It describes how they used PollEverywhere in a conference presentation. PollEverywhere is essentially a commercial version of Votapedia, which I plan to use on Wednesday.

Some things I found interesting:

  • The graphs.
    The PollEverywhere graphs look much nicer than Votapedia’s (minor point).
  • A comment that students like this approach because it is a legitimate use of their mobile phones in class.
  • The idea that this type of experiment was an “a-ha” moment for some.

The bureaucratic model and the grammar and future of universities

Last week I attended a presentation by a colleague at CQUniversity titled The Bureaucratic Model of Adult Instructional Design. The stated purpose of the presentation was

present and explore the Bureaucratic Model as a narrative that we must understand if we are to influence the direction of adult education.

The talk resonated with me as much of my current struggle/work is about trying to make folk aware of a range of unstated assumptions that guide their thinking about learning and teaching within a university context. As Jay says, we have to understand those assumptions before we can think of influencing the future of learning and teaching – and somewhere in that, universities.

Since Jay’s talk I’ve come across and/or been reminded of a range of related work. Please feel free to add more here.

A vision for the future

Tony Bates has recently posted the second of his blog posts, titled Using technology to improve the cost-effectiveness of the academy: Part 2, in which he gives his vision for the future of universities.

A number of his implications seek to remove many of the basic assumptions that underpin university operation (e.g. semesters, fixed exams). However, a number of them show connections with an existing orthodoxy (e.g. all PhD students will have 6 months training in L&T).

That’s one of the problems I have with visioning. Too often it excludes interesting possibilities because it is held back by the background, preferences, ideas and prejudices of the people doing the visioning. My preference would be to let it emerge through an institution/setting that is flexible, open and questioning. I think much more interesting things can emerge from that situation than can ever happen because of the visioning of experts.

That’s because, no matter who you are, you have unstated assumptions that define what you can think of. Often this is addressed by having lots of different people do the visioning, but too often such attempts use approaches that too quickly focus on a particular vision, closing out future possibilities.

The grammar of school

In this post I mentioned a 1995 article by Seymour Papert on Why school reform is impossible. In this article Papert draws on Tyack and Cuban’s (1995) idea of the “grammar of school”

The structure of School is so deeply rooted that one reacts to deviations from it as one would to a grammatically deviant utterance: Both feel wrong on a level deeper than one’s ability to formulate reasons. This phenomenon is related to “assimilation blindness” insofar as it refers to a mechanism of mental closure to foreign ideas. I would make the relation even closer by noting that when one is not paying careful attention, one often actually hears the deviant utterance as the “nearest” grammatical utterance, a transformation that might bring drastic change in meaning.

This sounds very much like what is happening in Jay’s bureaucratic model.

The need for experiments

A lot of the current debate about the future of universities is built on the comparison with print media. i.e. look, newspapers are a long-running institution that is dying. Look, universities, they are a long-running institution, they must be dying also.

Clay Shirky has written a long blog post titled “Newspapers and Thinking the Unthinkable”. A major point that he makes in his post seems to apply directly to the future of universities and the limitations of attempts at visioning like those of Tony Bates. In particular, this

Revolutions create a curious inversion of perception. In ordinary times, people who do no more than describe the world around them are seen as pragmatists, while those who imagine fabulous alternative futures are viewed as radicals. The last couple of decades haven’t been ordinary, however. Inside the papers, the pragmatists were the ones simply looking out the window and noticing that the real world was increasingly resembling the unthinkable scenario. These people were treated as if they were barking mad. Meanwhile the people spinning visions of popular walled gardens and enthusiastic micropayment adoption, visions unsupported by reality, were regarded not as charlatans but saviors.

He then draws on the development of the printing press to talk about revolutions

That is what real revolutions are like. The old stuff gets broken faster than the new stuff is put in its place. The importance of any given experiment isn’t apparent at the moment it appears; big changes stall, small changes spread. Even the revolutionaries can’t predict what will happen

Dede’s metaphors of learning

Lastly, the following recording is of a talk by Professor Chris Dede on some metaphors of learning. My biggest bugbear is the underlying assumption of consistency in the delivery of learning that underpins much of what universities are currently doing. It’s what is contributing to university learning and teaching approaching what Dede describes as “the worst of fast food”.

Chris Dede: Human behaviours and metaphors for learning

Participation, impact, collecting data and connecting people

A couple of colleagues and I are trying to kickstart a little thing we call the Indicators project. We’ve developed a “tag line” for the project which sums up the core of the project.

Enabling comparisons of LMS usage across institutions, platforms and time

The project is seeking to enable different people at different institutions to analyse what is being done with their institution’s learning management system (LMS, VLE, CMS) and compare and contrast it with what is happening at other institutions with different LMS.

To some extent this project is about improving the quality of the data available to decision makers (which we define to include students, teaching staff, support staff and management). In part this is about addressing the problem identified by David Wiley

The data that we, educators, gather and utilize is all but garbage.

But it’s not just about the data. While the data might be useful, it’s only going to be as useful as the people who are seeing it, using it and talking about it. David Warlick makes this point about what’s happening in schools at the moment

not to mention that the only people who can make much use of it are the data dudes that school systems have been hiring over the past few years.

And then this morning George Siemens tweeted the following

“Collecting data less valuable than connecting people” http://bit.ly/3SMJCT agree?

If it’s an either/or question, then I agree. But with the indicators project I see this as a both/and question. For me, the indicators project is/should be collecting data in order to connect people.

What follows is an attempt to map out an example.

The link between LMS activity and grades

There is an established pattern within the literature around data mining LMS usage logs. That pattern is essentially

the higher the grade, the greater the usage of the LMS

The statement could just as easily be reversed, as I don’t think anyone has firmly established a causal link; it’s just a pattern. My belief (yet to be tested) is that it’s probably mostly that good students get good grades and do everything they can to get good grades, including using the LMS.

With our early work on the indicators project we have found some evidence of this pattern. See the two following graphs (click on them to see bigger versions).

The X axis in both graphs is student final grade at our current institution. From best to worst the grades are high distinction (HD), distinction (D), credit (C), pass (P), and fail (F).

In the first graph the Y axis is the average number of hits on either the course website or the course discussion forum. Hopefully you can see the pattern: students with better grades average a higher number of hits.

Average student hits on course site/discussion forum for high staff participation courses

In the next graph, the Y axis is the average number of posts (starting a discussion thread) and the average number of replies (responding to an existing discussion thread) in the course discussion forum. So far, the number of replies is always greater than the number of posts. As you can see, the pattern is still there, but it is somewhat less evident for replies.

Average student posts/replies on discussion forums for high staff participation courses
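
For the curious, what follows is a minimal sketch (not the project’s actual scripts) of how averages like those in the two graphs above could be computed, assuming a hypothetical CSV export of the usage logs with one row per student per course.

    # Hypothetical input columns: student_id, course_id, grade (HD/D/C/P/F),
    # hits, posts, replies. File name and columns are assumptions.
    import pandas as pd

    GRADE_ORDER = ["HD", "D", "C", "P", "F"]  # best to worst, as in the graphs

    usage = pd.read_csv("student_course_usage.csv")

    # Average hits, posts and replies per student, grouped by final grade.
    averages = (usage.groupby("grade")[["hits", "posts", "replies"]]
                     .mean()
                     .reindex(GRADE_ORDER))
    print(averages)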

Importance of staff participation

Fresen (2007) identified the level of interaction or facilitation by teaching staff as a critical success factor for web-supported learning. We thought we would test this out using the data from the project by dividing courses up into categories based on the level of staff participation.

The previous two graphs are actually for the 678 courses (the high staff participation courses) for which teaching staff had greater than 3000 hits on the course website during the term. The following two graphs show the same data, but for the super-low staff participation courses (n=849). A super-low course is one where teaching staff had fewer than 100 hits on the course website during term.
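
As a rough illustration (again with assumed file and column names, not our actual code), the categorisation itself is just a threshold test on staff hits per course, followed by the same per-grade averaging:

    import pandas as pd

    courses = pd.read_csv("course_staff_usage.csv")   # course_id, staff_hits (hypothetical export)
    usage = pd.read_csv("student_course_usage.csv")   # student_id, course_id, grade, hits, posts, replies

    high = courses[courses.staff_hits > 3000]["course_id"]      # high staff participation
    super_low = courses[courses.staff_hits < 100]["course_id"]  # super-low staff participation

    for label, ids in [("high", high), ("super-low", super_low)]:
        subset = usage[usage["course_id"].isin(ids)]
        print(label, "staff participation:", len(ids), "courses")
        print(subset.groupby("grade")[["hits", "posts", "replies"]].mean())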

What do you notice about the pattern between grade and LMS usage?

First, the hits on the course site and the course discussion forum

Average student hits on course site/discussion forum for super low staff participation courses

Now, the average number of posts and replies in the course discussion forum

Average student posts/replies on discussion forums for super low staff participation courses

For me, the pattern is not there. The HD students appear to have decided there’s no value in the course website and that they need to rely upon themselves. They’ve still been able to get an HD in spite of the super-low staff participation. More work needs to be done.

I’m also interested in what the students in these super low courses might be talking about and what networks they are forming. The SNAPP tool/work at Wollongong might be useful here.

How to bring people together

My fear is that this type of finding will be used to “bring people together” in a way that is liable to be more destructive than anything. i.e. something like this:

  • The data mining dweebs (I do recognise that this probably includes my colleagues and me) will bring it to the attention of university management.
    After all, at least at my institution it’s increasingly management that have access to the dashboards, not the academic staff.
  • The data mining dweebs and management will tell stories about these recalcitrant “super-low” academics and their silliness.
  • A policy will be formulated, probably as part of “minimum standards” (aka maximum requirements), that academics must average at least X (probably 3000 or more) hits on their course website in a term.
  • As with any such approach task corruption will reign supreme.

While the indicators project is a research project focused on trying to generate some data, we also have to give some thought and be vocal about how the data could be used appropriately. Here are some initial thoughts on some steps that might help:

  • Make it visible.
    To some extent making this information visible will get people talking. But that visibility can’t be limited to management or even teaching staff. All participants need to be able to see it. We need to give some thought about how to do this.
  • Make it collaborative.
    If we can encourage as many people as possible to be interested in examining this data, thinking about it and working on ways to harness it to improve practice, then perhaps we can move away from the blame game.
  • Be vocal and critical about the blame game.
    While publicising the project and the resulting data, we need to continuously, loudly and effectively criticise the silliness of the “blame game”/policy approach to responding to the findings.
  • Emphasise the incompleteness and limitation of the data.
    The type of indicators/data we gather through the LMS is limited and from some perspectives flawed. An average doesn’t mean a great deal. You can’t make decisions with a great deal of certainty solely on this data. You need to dig deeper, use other methods and look closer at the specifics to get a picture of the real diversity in approaches. There may be some cases where a super-low staff participation approach makes a lot of sense.

References

Fresen, J. (2007). A taxonomy of factors to promote quality web-supported learning. International Journal on E-Learning, 6(3), 351-362.

Alternate ways to get the real story in organisations

I’ve just been to a meeting with a strangely optimistic group of people who are trying to gather “real stories” about what is going on within an organisation through focus groups. They are attempting to present this information to senior management in an attempt to get them to understand what staff are experiencing, to indicate that something different might need to be done.

We were asked to suggest other things they could be doing. For quite some time I’ve wanted to apply some of the approaches of Dave Snowden to tasks like this. The following mp3 audio is an excerpt from this recording of Dave explaining the results of one approach they have used. I recommend the entire recording or any of the others that are there.

Why do we shit under trees?

Imagine this type of approach applied to students undertaking courses at a university as a real alternative to flawed smile sheets.

Choosing a research publication outlet

I’m reluctant to post this. It’s part of a pragmatic approach to figuring out where, as an Australian academic, I should try and target publications. It seeks to identify publications in the higher education and educational technology areas that would be “best”.

I’m well aware of the questionable aspects of this approach, but if this is the game…. Especially when your institution is starting to discuss definitions of “research active” staff – the implication being that if you aren’t research active you don’t get time to do research – definitions that include requirements for fixed numbers of A and A* journal papers within a 3 year period.

My mitigation strategy against this type of pragmatism is that I am fairly open when it comes to my research. Much of it gets an airing here first. It’s not much, but better than nothing (or at least that’s what I keep telling myself).

For my immediate purposes, it looks like AJET is a good fit. A journal that is open access.

Work to do

  • Find out how much value is placed on the difference between A and A* journals.
  • Check the final lists from the government to see if rankings have changed.

What’s your suggestion?

What’s the “best” publication outlet?

I’m assuming that when it comes to writing a paper based on that research, the first step is to choose the outlet. Which journal or conference are you aiming the paper at? I think you need to answer this question as there is a part of the writing process that has to respond to the specifics of the outlet (e.g. address the theme of a conference etc.).
In answering this question, I can think of at least the following dimensions to consider:

  1. Quality.
    There are two common strategies I’ve heard: top down or bottom up. Bottom up folk go for the “worst” journal in the hope that their poor article will get accepted. The top down folk suggest starting at the top because you never know, you might get lucky, and if you don’t you will at least get good feedback to improve the paper before preparing it for submission to outlet #2.
  2. Fit.
    i.e. the one which best fits the topic or point of your paper. Which may be a desire to visit Hawaii (conference) or might be a topic match (the paper “Gerbil preferences in social software” might be a good fit for the journal “Studies in Gerbil Selection of Social Software”).
  3. Speed of review.
    How quickly will the journal review and publish your paper?
  4. Openness.
    Are the papers published in a closed or open manner? Can you circulate copies? Is the journal open access?

The rankings approach that is increasingly prevalent tends to suggest that “Quality” is the first choice. The following focuses on the quality dimension, however, in operation there needs to be an appropriate balance with the other factors.

How to judge the top quality publication?

The “top quality publication” dimension begs the question, “How do you know what is the top quality publication?”. In some disciplines this is a clear cut thing. You can’t be a researcher within a field without knowing. The trouble is that in some other fields, it’s not so clear. Especially if you’re new to the field.

Those wonderful folk in the Australian government, following the lead of their British colleagues, are making it easier for us poor Australian academics. As part of this work they are developing “discipline-specific tiered outlet rankings”. i.e. if you want to play the game, you follow their rankings – while trying to balance the other dimensions.

While the Oz government lists are still under development, John Lamp is providing a nice interface to view the rankings as part of his broader site. You can browse by field of research or use a search. This is provided for two lists from the Australian Research Council – an early draft and a more recent one. The more recent one isn’t as well integrated into the database, so the following information is a bit out of date, but it gives an indication.

In the following I’ve selected those journals of potentially most interest to me – I could be mistaken and have left some important ones out – but it’s a start. I’ve added a link to the journal home page and made some comments from my look at their online information.

My main interests are in educational technology within higher education, so that’s the focus. Suggestions and comments welcome.

One of the outstanding tasks I have, is to determine how much of a difference folk are making between A and A* journals.

Higher education

Most of these are selected from this list

  • A* – Higher Education Research and Development
    Max 7000 words; closed access; 6 issues a year.
  • A* – Studies in Higher Education
    Max 7000 words; closed access; 8 issues a year.
  • A – Higher Education Quarterly
    Associated with the Society for Research into Higher Education; closed access.
  • A – Higher Education Review
    5K to 10K words. Copyright is assigned to Tyrrell Burgess Associates with a fee? to cover all rights; authors are allowed to circulate copies with acknowledgement. This is interesting:

    HIGHER EDUCATION REVIEW is committed to a problem-based epistemology. In all countries there is an urgent need to formulate the problems of post school education, to propose alternative solutions and to test them. The policy and practice of governments and institutions require constant scrutiny. New policies and ideas are needed in all forms of post school education as new challenges arise.

  • A – International Journal of Teaching and Learning in Higher Education

    The specific emphasis of IJTLHE is the dissemination of knowledge for improving higher education pedagogy.

    Review process ~3 months; open access; 4K to 7K words; 3 types of article: research, instructional (designed to explain and clarify innovative higher education teaching methods) and review.
  • A – Journal of Higher Education
    6 issues a year; paper-based submission!!!; max 30 pages, double-spaced; (usually) 12 months from submission to publication; closed access.
  • A – Teaching in Higher Education
    One aim of the journal is to “identify new agendas for research”; 3K to 6K words; 6 issues a year; closed access.

Coming out of that table, the International Journal of Teaching and Learning in Higher Education sounds interesting, at least for me. It’s open access, has shortish review times and the promise of good feedback, accepts a couple of types of articles, and is related to the scholarship of learning and teaching, which connects to an aspect of my current position.

Educational technology journals

Most of these came from this list

  • A* – British Journal of Educational Technology
    Closed; various suggestions that it’s the top journal in this sort of field; only 4000 words; not clear about hosting copies on your own site; 6 issues a year.
  • A* – Computers & Education
    Closed; 8 issues a year; impact factor higher than BJET?; apparently horrible restrictions on reuse; authors suggest reviewers!; no max length.
  • A – ALT-J
    3 issues a year; basically closed; 5K words.
  • A – Australasian Journal of Educational Technology
    Open access; 5K to 8K words, with occasional flexibility.
  • A – Australian Educational Computing
    2 issues a year; closed.
  • A – Educational Technology & Society
    Open access; 7K words; about 4 issues a year.
  • A – Educational Technology Research and Development
    Closed; claimed two month review process; 5K to 8K words.
  • A – Journal of Computer Assisted Learning
    Closed; 3K to 7K words.
  • A – Technology Pedagogy and Education
    Closed; 3 issues a year.
  • B – International Journal on E-Learning
    Closed; AACE journal.
  • B – Internet and Higher Education
    Closed; 10 to 30 pages, double-spaced.
  • C – Studies in Learning Evaluation Innovation and Development
    Open access; 3K to 6K words. Disclaimer: I’m associated with this journal.

Discipline specific and curriculum

Sometimes I do work with discipline folk, so some of the following might be interesting. More of these journals can be found here. I’ve only included links for these.

  • A* – Management Learning
  • A* – Nursing Outlook
  • A* – Science Education
  • A – Computer Science Education
  • A – Journal of Engineering Education

Podcast for presentations at the PLEs & PLNs symposium

The following outlines the rationale and approach used to create an (audio) podcast of the presentations from the Personal Learning Environments & Personal Learning Networks Online symposium on learning-centric technology.

I don’t know if anyone else has already done this, but just in case, I’ll share.

If you don’t want to be bored by the background, this is the link for the podcast.

Rationale

I’ve hated the idea of the LMS for quite some time. I even had the chance to briefly lead a project investigating how PLEs could be grown and used within a university, at least before the organisational restructure came. In its short life the project produced a symposium, a number of publications, various presentations and a little bit of software.

Given that background, I had some significant interest in the symposium being organised by George Siemens and Stephen Downes. However, due to other responsibilities, odd times (given my geographical location) for the Elluminate presentations and the low speed of my home Internet connection, I knew I was unlikely to actively engage. Some of these factors have already prevented my on-going engagement with CCK09.

I probably would have left it there, however, over the last 24 hours two separate folk have mentioned the symposium and almost/sort of guilted me into following up. The one thing I can do at the moment, due to a fitness kick involving a great deal of walking, is listen to mp3s. So, I wanted an easy way to get the mp3s. A podcast sounds ideal for my current practices.

The podcast

Last night I did a quick google and found this page that seems to provide a collection of links to video and audio recordings of presentations associated with the CCK09 course, including some mp3s from the presentations at the PLEs & PLNs symposium.

Rather than download and play silly buggers with iTunes I decided to recreate an approach we used on our first “Web 2.0 course site”. Using del.icio.us the students and staff in the course could tag audio/video for inclusion in a podcast created by Feedburner.
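
If you would rather not rely on del.icio.us and Feedburner, the underlying idea is simple enough to sketch. The following (with placeholder titles and URLs, not the actual symposium recordings) writes a bare-bones RSS feed with an enclosure per mp3, which is essentially all a podcast is:

    from xml.sax.saxutils import escape

    # Placeholder episodes - swap in the real mp3 links you have collected.
    EPISODES = [
        ("Presentation 1", "http://example.com/presentation1.mp3"),
        ("Presentation 2", "http://example.com/presentation2.mp3"),
    ]

    items = "\n".join(
        "  <item>\n"
        f"    <title>{escape(title)}</title>\n"
        f'    <enclosure url="{escape(url)}" type="audio/mpeg"/>\n'
        "  </item>"
        for title, url in EPISODES
    )

    feed = f"""<?xml version="1.0"?>
    <rss version="2.0">
    <channel>
      <title>PLEs &amp; PLNs symposium</title>
      <link>http://example.com/</link>
      <description>Audio from the symposium presentations</description>
    {items}
    </channel>
    </rss>
    """

    with open("podcast.xml", "w") as out:
        out.write(feed)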

So I followed the same process for these:

I just hope now that I have the time to reflect and write about what I listen to.

Thank you Deidre and Maijann for the encouragement to engage with the symposium. Thanks to those organising the symposium and CCK09 for the resources.

Thoughts about the next steps for the indicators project

This post is an attempt to capture some ad hoc, overnight thoughts about how the indicators project might move forward.

Context

Currently the indicators project is an emerging research project at CQUniversity. There are currently three researchers involved and we’re all fairly new to this type of project. I’d characterise the project as being at the stage where we’ve laid a fair bit of the groundwork, done some initial work, identified some interesting holes in the literature around analytics/LMS evaluation and made the observation that there are a lot of different ways to go.

The basic aim is to turn the data gathered in Learning Management System (LMS, aka CMS, VLE) usage logs into something useful that can help students, teaching staff, support staff, management and researchers using/interested in e-learning make sense of what is going on so they can do something useful. We’re particularly interested in doing this in a way that enables comparisons between different institutions and different LMS.

The process

A traditional approach to this problem would be big up front design (BUFD). The idea is that we spend – or at least report that we spent – lots of time in analysis of the data, requirements and the literature before designing the complete solution. The assumption is that, like gods, we can learn everything we will ever need to know during the analysis phase and that implementation is just a straightforward translation process.

Frankly, I think that approach works only in the most simplistic of cases, and generally not even then because people are far from gods. The indicators project is a research project. We’re aiming to learn new things.

For me this means that we have to adopt a more emergent, agile or ateleological approach. Lots of small steps where we are learning through doing something meaningful.

Release small patterns, release often

So, rather than attempt to design a complete LMS- and institution-independent data schema and associated scripts to leverage that data, let’s start small, focus on one or two interesting aspects, take them through to something final and then reflect. i.e. focus on a depth-first approach, rather than breadth-first.

As part of this we should take the release early, release often approach. Going breadth first is going to take some time. Depth first we should be able to have something useful that we can release and share. That something will/should also be fairly easy for someone else to experiment with. This will be important if we want to encourage other folk from other institutions to participate.

We should also aim to build on what we have already done and also build on what other people have done. I think that the impact on LMS usage by various external factors might be a good fit.

External factors and LMS usage

First, this is a line of work in which others have published. Malikowski, Thompson & Theis (2006) investigated what effect class size, level of class and the college in which a course was offered had on feature adoption (only class size had a significant impact). Hornik et al (2008) classified courses as high or low paradigm development and examined how this, plus the level of the course, impacted on outcomes in web-based courses. There are some limitations of this work we might be able to fill. For example, Malikowski et al (2006) manually checked course sites and because of this are limited to observations from a single term.

Second, we’ve already done some work in this area in our first paper.

This sort of examination of external factors and their impact on LMS usage is useful as it helps identify areas of interest in terms of further research and also potential insights for course design. It’s also (IMHO) somewhat useful in its own right without any need for additional research. So it’s something relatively easy for us to do, but also should be fairly easy for others to experiment with.

Abstracting this work up a bit

The first step in examining this might be an attempt to abstract out the basic principles and components of this sort of work. If we can establish some sort of pattern/abstraction this can guide us in the type of work required and some sort of move towards a more rigorous process. The following is my initial attempt.

There have been two main approaches we’ve taken in the first paper:

  1. Impacts on student performance.
  2. Impacts on LMS feature adoption.

Impacts on student performance

An example is the impact of an instructional designer. The following graph compares the level of student participation mapped against final result between courses designed with an instructional designer and all other courses.

Instructional Designer Designed Courses vs Overall Average

In this type of example, we’ve tended to use three main components:

  1. A measure of LMS usage.
    So far we have concentrated on:
    • the average number of hits by the student on the course website and discussion forum; and
    • the average number of posts and replies by the student on the discussion forum.
  2. A measure of student performance.
    Limited to grade achieved in the course, at the moment.
  3. A way to group students.
    This has been done on the basis of mode of delivery/type of student (i.e. a distance education student, an Australian on-campus student, an international student) or by different types of courses.

Having identified these three components we can actively search for alternatives. What alternatives to student performance might there be?

For example, in the paper we use Fresen’s (2007) taxonomy of factors to promote quality web-supported learning as a way to group students. Staff participation, for instance, should promote quality; hence, is there any difference in courses with differing levels of staff participation?

Are there other theoretical insights which could guide this work?
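
To make that abstraction a bit more concrete, here is a minimal sketch of the idea that any analysis of this type boils down to a choice of (usage measure, performance measure, grouping). The file and column names are hypothetical, not an agreed schema:

    import pandas as pd

    # Assumed columns: grade, hits, posts, replies, mode_of_delivery, ...
    usage = pd.read_csv("student_course_usage.csv")

    def indicator(data, usage_measure, performance_measure, grouping):
        """Average a usage measure for each (group, performance) combination."""
        return (data.groupby([grouping, performance_measure])[usage_measure]
                    .mean()
                    .unstack())

    # e.g. average forum replies by final grade, split by mode of delivery
    print(indicator(usage, usage_measure="replies",
                    performance_measure="grade", grouping="mode_of_delivery"))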

Impacts on LMS feature adoption

We’ve used the LMS independent framework for LMS features developed by Malikowski et al (2007) to examine to what level different features are used within courses. We’ve looked at this over time and between different LMS. The following shows the evolution of feature adoption over time within the Blackboard LMS used at CQU.

Blackboard Feature Adoption

Under this model, the components could be described as:

  • Framework for grouping LMS features.
  • Definition of adoption.

A mixture of the two?

I wonder if there’s any value in using the level of feature adoption as another way of grouping courses to identify if there’s any connection with student outcome. e.g. do courses with just content distribution have different student outcomes/usage than courses with everything?
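
Here is a rough sketch of how that could be tried, assuming a hypothetical per-course export of feature hits and a deliberately naive definition of adoption (any hits at all on a feature category):

    import pandas as pd

    features = pd.read_csv("course_feature_hits.csv")  # course_id, category, hits (hypothetical)
    usage = pd.read_csv("student_course_usage.csv")    # student_id, course_id, grade, hits

    # A course "adopts" a category if it recorded any hits for that category.
    adopted = (features[features["hits"] > 0]
               .groupby("course_id")["category"]
               .apply(lambda cats: ",".join(sorted(set(cats))))
               .rename("adoption_profile")
               .reset_index())

    usage = usage.merge(adopted, on="course_id", how="left")

    # Average student hits by grade for each adoption profile.
    print(usage.groupby(["adoption_profile", "grade"])["hits"].mean().unstack())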

Next steps

Some quick ideas:

  • Look at improving the two abstractions above and identifying alternatives within them.
  • Look at focusing on developing a platform-independent database schema to enable cross-LMS and cross-institutional comparison of the above two abstractions (a first sketch follows this list).
    This would include:
    • the database schema;
    • some scripts to convert various LMS logs into that database format;
    • some tools to automate interesting graphs.
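
As a starting point for discussion, here is one hypothetical stab at such a schema, using sqlite purely for illustration. The table and column names are my assumptions, not an agreed design:

    import sqlite3

    SCHEMA = """
    CREATE TABLE course (
        course_id   TEXT PRIMARY KEY,
        institution TEXT,
        lms         TEXT,    -- e.g. Blackboard, Moodle
        term        TEXT
    );
    CREATE TABLE enrolment (
        student_id  TEXT,
        course_id   TEXT REFERENCES course(course_id),
        mode        TEXT,    -- e.g. distance, on-campus, international
        grade       TEXT     -- HD, D, C, P, F
    );
    CREATE TABLE activity (
        student_id  TEXT,
        course_id   TEXT REFERENCES course(course_id),
        role        TEXT,    -- student or staff
        feature     TEXT,    -- e.g. content, discussion, assessment
        action      TEXT,    -- e.g. hit, post, reply
        occurred_at TEXT
    );
    """

    conn = sqlite3.connect("indicators.db")
    conn.executescript(SCHEMA)
    conn.commit()
    conn.close()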

References

Fresen, J. (2007). “A taxonomy of factors to promote quality web-supported learning.” International Journal on E-Learning 6(3): 351-362.

Hornik, S., C. S. Saunders, et al. (2008). “The impact of paradigm development and course level on performance in technology-mediated learning environments.” Informing Science 11: 35-58.

Malikowski, S., M. Thompson, et al. (2006). “External factors associated with adopting a CMS in resident college courses.” Internet and Higher Education 9(3): 163-174.

Malikowski, S., M. Thompson, et al. (2007). “A model for research into course management systems: bridging technology and learning theory.” Journal of Educational Computing Research 36(2): 149-173.

The indicators project and what it means for me

After at least a decade of “wouldn’t it be a good idea if” and at least one aborted attempt (hint: an organisational restructure in which you are a loser, is not a great context for a new project with intra-organisational implications), the Indicators Project is getting started. This post is my attempt to define what the project means for me. What I hope to get out of the project, and what I hope others might get out of the project.

The main aim is to let people know about the project and encourage feedback, either here or on the project blog.

Absence of data and poor decision making

Dave Snowden defines sensemaking as

How do we make sense of the world, so that we can act in it.

When it comes to education, David Wiley makes this point which resonates strongly with me

The data that we, educators, gather and utilize is all but garbage. What passes for data for practicing educators? An aggregate score in a column in a gradebook. A massive, course-grained rolling up of dozens or hundreds of items into a single, collapsed, almost meaningless score. “Test 2: 87.” What teacher maintains item-level data for the exams they give? What teacher keeps this data semester to semester, year-to year? What teacher ever goes back and reviews this historical data?

For a long time I have believed that the absence and/or poor quality of the data available has meant that universities have been particularly bad at sensemaking around learning and teaching, and especially e-learning.

For me, a major consequence of this “garbage data” is that decisions made within universities (I work within a university, I’m paid to help improve learning and teaching at that institution, so my focus is on universities) about learning and teaching, and especially about e-learning, are made with very little sense of what is happening within the real world. This situation is increasingly getting worse as, at least within my experience, management at universities are attempting to adopt a more top-down, “corporate” approach to decision making.

Such an approach to decision making means that when management make decisions about learning and teaching, not only do they lack good data to base those decisions on, they are also making decisions on the basis of one of the following categories of teaching experience:

  • only taught recently with a significant amount of support;
    (this means they don’t have to experience all the low level “make work” that consumes so much time)
  • haven’t taught for a number of years;
  • have never taught within the local context; or
  • have never taught.

For individual academics, they are stuck with the “garbage data” from their own courses and their own gut feel. Since teaching at University is mostly a solo activity, there is little or no opportunity to compare and contrast with the experience of others. Even when the opportunity does arise, it has to be done with “garbage data”.

Support staff, be they instructional designers, academic developers or IT folk, are almost entirely without data, which means they can’t target their assistance. They have to take a one size fits all (i.e. one size that helps no-one) approach, mainly because what data is available about learning and teaching is only available to the teacher or their line supervisor.

Students, well, they are at the bottom of the pile. They get essentially no indication of where they sit with respect to other students.

The indicators project aims to provide better data to teaching staff, management, support staff and students.

What can be done?

David Wiley believes that

using technology to capture, manage, and visualize educational data in support of teacher decision making has the potential to vastly improve the effectiveness of education.

A lot of the work by Dave Snowden is based around the idea of achieving

A synthesis of technology and human intelligence

Using technology for what it is good for in order to generate indicators that can help people do what they are good at – pattern matching.

David Wiley’s long term goal is huge, difficult and expensive. You can read more about it on his blog. That goal is beyond the scope of our little indicators project. I think the aims for our project can be summarised as:

  • Identify potentially interesting indicators from LMS usage data and some other institutional data (e.g. student characteristics etc.).
  • Make that information available to students, teaching staff, management, support staff and researchers.
    We aren’t likely to achieve all this at once, different folk will get it at different times.
  • Engage in additional research around the indicators, how they are used, how they can be used and what they can tell us about learning and teaching.
  • Return to step #1.

Cross platform and cross institution

Importantly, we’re aiming for/hoping for the project to identify, encourage and enable use of the indicators across different institutions and different LMS. As we progress, we’ll be looking for people interested in partnering with the project.

Graphical representation

In an attempt to understand what we have to do and where the interesting work might be we developed the following graphical representation of the project.

Figure 1. Project Overview

Working from the bottom up, the figure includes:

  • LMS and institutional specific data.
    Each institution will have its own LMS and also some other data in the form of information about the students (e.g. age, country of origin, type of student) and the courses (e.g. discipline, number of campuses offered at etc.).
  • We need to do some “research” to identify the knowledge necessary to effectively convert this institutional and LMS dependent data into something that is independent of LMS and institution (a sketch of this conversion follows the list).
  • The LMS & institutional independent data forms the main data source of the indicators. At the very least, partner institutions will be able to perform comparisons. In a perfect world, the data will be in such a form as to enable free sharing, anyone who has an interest can get the data and perform analysis.
  • We then need to do some research to generate knowledge to convert the LMS and institution independent data into indicators. The indicators abstract the data into a form that provides useful knowledge for students, teachers, managers, support staff or researchers.
    One simple example is the percentage of courses within an LMS that have adopted specific features.
  • Some of the “useful knowledge” will be passed onto the institutional business intelligence folk who are responsible for institutional data warehouses, dashboards and the like.
  • Some of the useful knowledge will be used by a variety of people (teaching staff, support staff, students and management) to improve the practice of learning and teaching.
  • Some of the useful knowledge will be used as the basis for additional research to identify the whys and wherefores of the indicators.
    For example, Why do international students “break” the link between LMS activity and student grades?
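
To give a feel for the bottom layer of the figure, here is a sketch of mapping a single LMS-specific log row into the LMS/institution-independent form. The Blackboard-ish field names are placeholders only; each real LMS would need its own mapping worked out:

    # Hypothetical mapping from a Blackboard-style activity row to a common format.
    FEATURE_MAP = {"CONTENT": "content", "DISCUSSION_BOARD": "discussion"}

    def to_independent(row, institution, lms):
        return {
            "institution": institution,
            "lms": lms,
            "student_id": row["user_id"],
            "course_id": row["course_id"],
            "feature": FEATURE_MAP.get(row["handle"], "other"),
            "action": row["event_type"].lower(),  # e.g. "hit", "post", "reply"
            "occurred_at": row["timestamp"],
        }

    example = {"user_id": "s0123456", "course_id": "EXAMPLE101_T2_2009",
               "handle": "DISCUSSION_BOARD", "event_type": "POST",
               "timestamp": "2009-10-05 09:30:00"}
    print(to_independent(example, institution="CQUniversity", lms="Blackboard"))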

Problems

This is by no means a simple task. There are any number of problems that will impact the project. Here are some.

Online only is rare

David Wiley is in the somewhat rare situation of having an online only context

The Open High School of Utah is the first context in which I’m studying this use of technology. Because it is an online high school, every interaction students have with content (the order in which they view resources, the time they spend viewing them, the things they skip, etc.) and every interaction they have with assessments (the time they spend answering them, their success in answering them, etc.) can all be captured and leveraged to support teachers.

It is very rare for my institution, and I’m assuming many other universities, to have courses that are entirely online. In our situation a large percentage of our students must attend on-campus sessions and another large percentage believes they are missing out on something important if they don’t get face-to-face. So, in our situation the online data is only ever going to tell part of the story. It is going to have to be supplemented with other approaches and methods.

Data quality

David Warlick, in a blog post that responds to David Wiley’s post (is it me, or are there a lot of Davids in this post?), identifies the problems with data quality

even in the best of situations, the data is scarce, shallow, grainy, and awfully expensive to collect

He is perhaps talking about a different context with high schools, but some of these limitations apply in existing work. Much of the research into LMS usage has focused on the use of surveys, interviews or manual examination of course sites to generate insight. Where data mining is done on system data it is often for limited time frames (e.g. 1 term or 1 year) and is usually communicated in an LMS-dependent way that makes comparisons between systems and institutions difficult.

Who will use it?

David Warlick makes another important point in his blog post. This time the question is “who will use all this data?”

not to mention that the only people who can make much use of it are the data dudes that school systems have been hiring over the past few years.

This is a problem I’ve seen at universities with the rise of data warehouses and dashboards. Unless there is a particularly motivated and well-resourced team, such information systems become the toys of the “data dudes”, occasionally the weapons of managers who wish to make a particular point, or a resource for a small group of researchers to publish papers. They rarely become embedded into the day to day practice of learning and teaching.

The LMS problem

The LMS is based on the assumption that all “learning” – or at least content access and discussion forum use – occurs within the LMS. This “one ring to rule them all” approach does provide one benefit. All of this data is in the one place, the one system, the one database.

This “one ring to rule them all” approach is also, in my opinion and that of many others, the main problem with the LMS. It removes choice from the student and the teacher about what tools can be used. However, if alternatives such as personal learning environments become prevalent, then the sort of approach being adopted by the indicators project will no longer be possible. The focus will have to change to the type of question Stephen Downes raised when pointing to Wiley’s post

Shouldn’t we be devising ways for students to organize and track their own learning?

This is an important point. If I had my way we wouldn’t be using an LMS. The trouble is that the LMS is the almost universal response to e-learning by universities. To get them to change, we’re going to have to – at the very least – provide lots of meaningful data that encourages management and others to recognise the limitations of the LMS approach. Certainly one of my aims in being involved with the indicators project is to illustrate the inherent limitations and problems with the LMS approach.

Where to from here?

The project is starting to gather some momentum. We’ve had our first paper accepted at a conference. We’re talking about research and ALTC grants. We’ve started identifying additional work we need to make progress on, in particular making a start on the cross-LMS comparisons. We’re talking about making connections with various folk to help the project move on.

So, feel free to share your comments and thoughts.

The learning pyramid: true, false, hoax or myth?

The aim of this post is to investigate the question of whether or not the learning pyramid (see following figure – click to expand) is true or false, or perhaps a hoax, myth, misdirection, useful model and/or theory based on verifiable research.

In the end, I confirm my belief that it is a hoax/myth. I don’t believe it is useful in guiding the design of learning and teaching; in fact, I believe it to be destructive. It provides a simplistic and wrong basis on which to guide design, when such design should be guided by and engage with a recognition that teaching is complex, difficult and contextual and can’t be improved by silver bullets.

What do you think? (I do recognise that my direct opposition in the last paragraph is likely to significantly limit alternate perspectives, but I thought I’d best be clear about my view given the prevalence of the figure.)

Origins of the post

A colleague from my current institution has recently been attending the Jossey-Bass Online Teaching and Learning conference and has been blogging her reflections. In her first post on the conference Wendy mentions a presentation by Rena Palloff and Keith Pratt (who, from their website, seem to be very informed folk around online learning etc.) entitled “Assessing the online learner: resources and strategies for faculty”.

Wendy mentions

They put up a pyramid I quite liked that had retention rates for lectures at 5% at the top through to teaching others as 90% effective for retaining information (see book, p. 19) and suggested assessments should be aimed at the bottom half of the triangle (discussion activities, practice by doing, teaching others).

This sounds an awful lot like the above pyramid.

Quite some time ago I came across this post by Will Thalheimer. The post essentially seeks to argue that the pyramid is not based on any published research and suffers from a number of major flaws. I was convinced by this post and have since taken the view that the pyramid is false/a myth. I believed this to the extent that when another colleague used the learning pyramid in a blog post, I posted a comment linking back to the naysayer post by Thalheimer.

I was going to post a similar response to Wendy’s post but couldn’t remember some of the resources, so I revisited my comment on Scott’s post. To my surprise, I discovered that Scott had responded to my comment. The surprise arose both from the fact that I don’t remember receiving a notification of the reply (though that may say more about my memory than the technology); and that Scott was claiming that the learning pyramid was based on research that addressed some of the problems. i.e. that there was some basis. In addition, Scott suggests that the questions raised about the pyramid may arise from folk with questionable motives and also suggests that the naysayers don’t provide evidence or experimental research.

I’m going to spend a bit of time seeing what I can find about this difference of perspective. Is the pyramid based on some research? Have I been basing my dismissal of the pyramid on work by people with an axe to grind? Is there evidence to suggest that the pyramid is wrong?

Origins of the pyramid

One obvious place to start is to find out whether the proposed research actually exists. Does the research institute that is supposed to have done this research exist?

Lalley and Miller (2007) claim

No specific credible research was uncovered to support the pyramid, which is loosely associated with the theory proposed by the well-respected researcher, Edgar Dale. Dale is credited with creating the Cone of Experience in 1946.

This is from the abstract of their paper displayed on this ERIC page. My institution’s library doesn’t have access to the full text in electronic form, I’m chasing up a paper copy. (Of course the library website is currently down so I can’t log a request to get a copy of the paper…). Annoyingly, the institution I’m doing my PhD through has digital access to the journal, but not for 2006 through 2008.

Further web research has found a copy of the Lalley and Miller (2007) paper online here. The aim of this article is

Therefore, it is our intention to examine the following: the source of the general structure of the pyramid, Dale’s Cone of Experience; available research on retention from the methods identified by the pyramid; and consider the relationship(s) among the methods.

Rather than Bell Laboratories being the source of research, the research is generally referenced back to the National Training Laboratories in Bethel Maine. From information on the web it appears that this organisation is now known as the NTL Institute. Lalley and Miller (2007) quote from a response from the NTL Institute to a query about the pyramid

Institute at our Bethel, Maine campus in the early sixties when we were still part of the National Education Association’s Adult Education Division. Yes, we believe it to be accurate–but no, we no longer have–nor can we find–the original research that supports the numbers. We get many inquiries every month about this–and many, many people have searched for the original research and have come up empty handed. We know that in 1954 a similar pyramid with slightly different numbers appeared on p. 43 of a book called Audio-Visual Methods in Teaching, published by the Edgar Dale Dryden Press in New York. Yet the Learning Pyramid as such seems to have been modified and always has been attributed to NTL Institute.

Lalley and Miller (2007) go on to give some arguments about why it appears questionable that this research was ever/could ever be done.

The origins and data for the pyramid look very questionable. So, is there data or research to suggest that the pyramid is wrong?

What’s the literature say?

Lalley and Miller (2007) then go on to review the literature about each of the different methods of instruction included in the pyramid, the aim being to find out what the literature says about retention rates. I have not read all of what they have written (I have a thesis to get back to), but in summary they say (emphasis added)

The research reviewed here demonstrates that use of each of the methods identified by the pyramid resulted in retention, with none being consistently superior to the others and all being effective in certain contexts.

Lalley and Miller’s (2007) final conclusion is that direct instruction, such as a lecture, remains very important as part of the mix of approaches required. They close the article with

Not surprisingly, this returns us to the assertions of Dale (1946) and Dewey (1916) that for successful learning experiences, students need to experience a variety of instructional methods and that direct instruction needs to be accompanied by methods that further student understanding and recognize why what they are learning is useful.

Rutger van de Sande from a university in the Netherlands has a blog post that connects with this myth. He supervised some students (physics teachers) in an experiment to test retention. The rationale and results are explained on this knol. In a small scale study, likely to have all sorts of limitations, they established different percentages to the pyramid, which they conclude “to be an all too simplistic model”.

This, admittedly small, collection of research (though Lalley and Miller draw on a significant body of research) seems to provide evidence and experimental research to disprove the ideas of the pyramid.

Axe to grind?

Do the folk questioning the pyramid have an axe to grind? That’s a difficult question to answer without significant knowledge of who they are. So, let’s start with the question of who they are.

  • James Lalley
    According to this page, he is the Acting Chair of the Education Department at D’Youville College. He’s also an author of a book published by SAGE, who publish this author’s bio. ERIC lists these publications.
  • Rutger van de Sande – an “experienced educational researcher and teacher educator”.
    Looks like a keen academic trying to make his way in the world.
  • Will Thalheimer – consultant and researcher
    Okay, a consultant, which potentially means there’s some potential benefit in getting more people to his site (which has ads). Attacking a broadly accepted idea is a good way to attract attention. Given the challenge to the effectiveness of learning styles, you could argue that there is a trend developing here. (I should note that academics in search of citations have the same motivation)
  • Christopher Harris – a librarian/educator/administrator

I don’t think these guys form a cabal aimed at attacking the legitimacy of an idea based on sound empirical research. You could argue that the attention gained by attacking such a widely accepted idea might be a motivation, but the data seems to suggest that the pyramid is based on questionable to non-existent data.

Why does this continue to get air play?

A number of the folk who have written about this pyramid or commented on blogs about it have asked the question “Why does it continue to get air play?”. I have a preference for two explanations:

  • “looking for a silver bullet, a simplistic approach to a complex issue” (Metiri Group, 2008)
    Teaching and learning is a wicked problem, especially in some of the increasingly diverse contexts people are facing. For some/many folk it’s easier to believe in a simple, universal solution than engage in the full complexity of the problem. This is, I suggest, encouraged to extreme ends in the increasingly “corporate world” of higher education.
  • Confirmation bias – “an irrational tendency to search for, interpret or remember information in a way that confirms preconceptions or working hypotheses”.
    i.e. a lot of education folk don’t like lectures. A lot of education folk have a barrow to push in terms of problem-based learning, discovery learning, authentic learning…..etc. The pyramid confirms the biases these folk have and hence they are more ready to accept than critique.

    I don’t like the way most lectures are given, they are very poor. I like even less that most of the focus of many courses is on giving lectures. But I don’t believe there’s a silver bullet.

Of course, the idea that I don’t believe there is a silver bullet – i.e. I don’t think the application of authentic learning will save a course, a program, an institution or the world – means that I have a confirmation bias that leans towards thinking the learning pyramid is a hoax.

References

Lalley, J. and R. Miller (2007). “The learning pyramid: Does it point teachers in the right direction?” Education 128(1): 64-79.

Metiri Group (2008). Multimodal learning through media: What the research says, Cisco Systems: 24.

Is there value in strategic plans for educational technology

Dave Cormier has recently published a blog post titled Dave’s wildly unscientific survey of technology use in Higher Education. There’s a bunch of interesting stuff there. I especially like Dave’s note on e-portfolios

eportfolios are a vast hidden overhead. They really only make sense if they are portable and accessible to the user. Transferring vast quantities of student held data out of the university every spring seems complicated. Better, maybe, to instruct students to use external services.

Mainly because it aligns with some of my views.

But that’s not the point of this post. This morning Dave tweeted for folk to respond to a comment on the post by Diego Leal on strategic plans for educational technology in universities.

Strategic plans in educational technology are a bugbear of mine. I’ve been writing and thinking about them a lot recently. So I’ve bitten.

Summary

My starting position is that I’m strongly against strategic plans for educational technology in organisations. However, I’m enough of a pragmatist to recognise that – for various reasons (mostly political) – organisations have to have them. If they must have them, they must be very light on specifics and focus on enabling learning and improvement.

My main reason for this is a belief that strategic plans generally embody assumptions about organisations and planning that simply don’t hold within universities, especially in the context of educational technology. This mismatch results in strategic plans generally creating or enabling problems.

Important: I don’t believe that the problems with strategic plans (for edtech in higher education) arise because they are implemented badly. I believe problems with strategic plans arise because they are completely inappropriate for edtech in higher education. Strategic plans might work for other purposes, but not this one.

This mismatch leads to the following common problems (amongst others):

  • Model 1 behaviour (Argyris et al, 1985);
  • Fads, fashions and band wagons (Birnbaum, 2000; Swanson and Ramiller, 2004)
  • Purpose proxies (Introna, 1996);
    i.e. rather than measure good learning and teaching, an institution measures how many people are using the LMS or have a graduate certificate in learning and teaching.
  • Suboptimal stable equilibria (March, 1991)
  • Technology gravity (McDonald & Gibbons, 2009)

Rationale

Introna (1996) identified three necessary conditions for the type of process embedded in a strategic plan to be possible. They are:

  • The behaviour of the system is relatively stable and predictable.
  • The planners are able to manipulate system behaviour.
  • The planners are able to accurately determine goals or criteria for success.

In a recent talk I argued that none of those conditions exist within the practice of learning and teaching in higher education. It’s a point I also argue in a section of my thesis.

The alternative?

The talk includes some discussion of the principles of a different approach to the same problem. That alternative is based on the idea of ateleological design suggested by Introna (1996), an idea that is very similar to those in broader debates in various other areas of research. This section of my thesis describes the two ends of the process spectrum.

It is my position that educational technology in higher education – due to its diversity and rapid pace of change – has to be much further towards the ateleological, emergent, naturalistic or exploitation end of the spectrum.

Statement of biases

I’ve only ever worked at the one institution (for coming up to 20 years) and have been significantly influenced by that experience. That experience has included spending 6 months developing a strategic plan for Information Technology in Learning and Teaching that was approved by the Academic Board of the institution, used by the IT Division to justify a range of budget claims, then thrown out/forgotten; now, about 5 years later, many of its recommendations are being actioned. The experience also includes spending 7 or so years developing an e-learning system from the bottom up, in spite of the organisational hierarchy.

So I am perhaps not the most objective voice.

References

Argyris, C., R. Putnam, et al. (1985). Action science: Concepts, methods and skills for research and intervention. San Francisco, Jossey-Bass.

Birnbaum, R. (2000). Management Fads in Higher Education: Where They Come From, What They Do, Why They Fail. San Francisco, Jossey-Bass.

Introna, L. (1996). “Notes on ateleological information systems development.” Information Technology & People 9(4): 20-39.

March, J. (1991). “Exploration and exploitation in organizational learning.” Organization Science 2(1): 71-87.

McDonald, J. and A. Gibbons (2009). “Technology I, II, and III: criteria for understanding and improving the practice of instructional technology ” Educational Technology Research and Development 57(3): 377-392.

Swanson, E. B. and N. C. Ramiller (2004). “Innovating mindfully with information technology.” MIS Quarterly 28(4): 553-583.

Call for participation: Getting the real stories of LMS evaluations?

The following is a call for participation from folk interested in writing a paper or two that will tell some real stories arising from LMS evaluations.

Alternatively, if you are aware of some existing research or publications along these lines, please let me know.

LMSs and their evaluation

I think it’s safe to say that the idea of a Learning Management System (LMS) – aka Course Management System (CMS), Virtual Learning Environment (VLE) – is now just about the universal solution to e-learning for institutions of higher education. A couple of quotes to support that proposition

The almost universal approach to the adoption of e-learning at universities has been the implementation of Learning Management Systems (LMS) such as Blackboard, WebCT, Moodle or Sakai (Jones and Muldoon 2007).

LMS have become perhaps the most widely used educational technologies within universities, behind only the Internet and common office software (West, Waddoups et al. 2006).

Harrington, Gordon et al (2004) suggest that higher education has seen no other innovation result in such rapid and widespread use as the LMS. Almost every university is planning to make use of an LMS (Salmon, 2005).

The speed with which the LMS strategy has spread through universities is surprising (West, Waddoups, & Graham, 2006).

Even more surprising is the almost universal adoption of just two commercial LMSes, both now owned by the same company, by Australia’s 39 universities, a sector which has traditionally aimed for diversity and innovation (Coates, James, & Baldwin, 2005).

Oblinger and Kidwell (2000) comment that the movement by universities to online learning was to some extent based on an almost herd-like mentality.

I also believe that increasingly most universities are going to be on their 2nd or perhaps 3rd LMS. My current institution could be said to be on its 3rd enterprise LMS. Each time there is a need for a change, the organisation has to do an evaluation of the available LMS and select one. This is not a simple task. So it’s not surprising to see a growing collection of LMS evaluations and associated literature being made available and shared. Last month, Mark Smithers and the readers of his blog did a good job of collecting links to many of these openly available evaluations through a blog post and comments.

LMS evaluations, rationality and objectivity

The assumption is that LMS evaluations are performed in a rational and objective way. That the organisation is demonstrating its rationality by objectively evaluating each available LMS and making informed decisions about which is most appropriate for it.

In the last 10 years I’ve been able to observe, participate in, and hear stories about numerous LMS evaluations from a diverse collection of institutions. When no-one is listening, many of those stories turn to the unspoken limitations of such evaluations. They share the inherent biases of participants, the cognitive limitations and the outright manipulations that occur. These are stories that rarely, if ever, see the light of day in research publications. In addition, there is a lot of literature from various fields suggesting that such selection processes are often not all that rational. A colleague of mine did his PhD thesis (Jamieson, 2007) looking at these sorts of issues.

Generally, at least in my experience, when the story of an institutional LMS evaluation process is told, it is told by the people who ran the evaluation (e.g. Sturgess and Nouwens, 2004). There is nothing inherently wrong with such folk writing papers. The knowledge embodied in their papers is, generally, worthwhile. My worry is that if these are the only folk writing papers, then there will be a growing hole in the knowledge about such evaluations within the literature. The set of perspectives and stories being told about LMS evaluations will not be complete.

The proposal

For years, some colleagues and I have regularly told ourselves that we should write some papers about the real stories behind various LMS evaluations. However, we could never do it because most of our stories came from a small set (often n=1) of institutions. The stories and the people involved could be identified simply by association. Such identification may not always be beneficial to the long-term career aspirations of the authors. There are also various problems that arise from a small sample size.

Are you interested in helping solve these problems and contribute to the knowledge about LMS evaluations (and perhaps long term use)?

How might it work?

There are any number of approaches I can think of; which one works best might depend on who (if anyone) responds to this. If there’s interest, we can figure it out from there.

References

Coates, H., R. James, et al. (2005). “A Critical Examination of the Effects of Learning Management Systems on University Teaching and Learning.” Tertiary Education and Management 11(1): 19-36.

Harrington, C., S. Gordon, et al. (2004). “Course Management System Utilization and Implications for Practice: A National Survey of Department Chairpersons.” Online Journal of Distance Learning Administration 7(4).

Jamieson, B. (2007). Information systems decision making: factors affecting decision makers and outcomes. Faculty of Business and Informatics. Rockhampton, Central Queensland University. PhD.

Jones, D. and N. Muldoon (2007). The teleological reason why ICTs limit choice for university learners and learning. ICT: Providing choices for learners and learning. Proceedings ASCILITE Singapore 2007, Singapore.

Oblinger, D. and J. Kidwell (2000). “Distance learning: Are we being realistic?” EDUCAUSE Review 35(3): 30-39.

Salmon, G. (2005). “Flying not flapping: a strategic framework for e-learning and pedagogical innovation in higher education institutions.” ALT-J, Research in Learning Technology 13(3): 201-218.

Sturgess, P. and F. Nouwens (2004). “Evaluation of online learning management systems.” Turkish Online Journal of Distance Education 5(3).

West, R., G. Waddoups, et al. (2006). “Understanding the experience of instructors as they adopt a course management system.” Educational Technology Research and Development.

LTERC, finally a research centre – shameless plug

Sorry, but the purpose of this post is entirely selfish. My host institution has recently established a research centre around learning, teaching and education. Given my background and interests, I’ll be doing work under the auspices of the centre and will occasionally need to access its website (even though the website is implemented with CQU’s home-grown content management system).

The full name of the centre is the Learning and Teaching Education Research Centre, which is abbreviated to LTERC. As yet, despite promises to the contrary, you can’t Google LTERC and get a pointer to the site. To make matters worse, the home-grown content management system doesn’t maintain human readable URLs. The URL for the centre is http://www.cqu.edu.au/lterc/, but once you’re looking at the page this meaningful URL gets replaced with http://content.cqu.edu.au/FCWViewer/view.do?site=779. Obviously a URL that encourages one to remember the site. For example, I’ve only just (through trial and error) realised that the www.cqu.edu.au/lterc URL will work.
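
As an aside, the underlying fix isn’t complicated. The following is a minimal, purely hypothetical sketch (Python standard library only; nothing to do with CQU’s actual CMS) of how a handful of human-readable aliases could be redirected to a content management system’s internal URLs, so that a short address like /lterc stays usable and shareable:

    # Hypothetical sketch: map friendly paths to the CMS's internal URLs and
    # answer each request with a redirect. Names and URLs are illustrative only.
    from http.server import BaseHTTPRequestHandler, HTTPServer

    ALIASES = {
        "/lterc": "http://content.cqu.edu.au/FCWViewer/view.do?site=779",
    }

    class AliasHandler(BaseHTTPRequestHandler):
        def do_GET(self):
            # Normalise trailing slash and case before looking up the alias.
            path = self.path.rstrip("/").lower() or "/"
            target = ALIASES.get(path)
            if target:
                self.send_response(302)              # temporary redirect
                self.send_header("Location", target)
                self.end_headers()
            else:
                self.send_error(404)

    if __name__ == "__main__":
        HTTPServer(("", 8080), AliasHandler).serve_forever()

The same effect could obviously be achieved with a web server rewrite rule; the point is only that readable URLs and an ID-based CMS aren’t mutually exclusive.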

Yes, I know I could book mark it. I could even be innovative and use del.icio.us. But that doesn’t help make it easy to tell external folk about the site. Rather than use URLs, these days I tend to say “google it”. For example, rather than give the URL for the video of my talk last week (au.video.yahoo.com/watch/6075473/15784044) I can say google “herding cats losing weight”.

The point is ‘google “lterc”‘ doesn’t work yet. And it’s not as if there a massive amounts of people using “lterc” – it’s not like “david jones”.

So the point of this blog post is to get a few pages on the web pointing to the LTERC website so that Google might rank the site a bit higher.

Is this mis-use of the blog?


Using Votapedia

In the next couple of weeks I’m going to be giving a presentation that will also serve as an experiment in alternate technologies for presentations. One of those technologies will be Votapedia.com – an Australian-based, free SMS/Web audience response system. This post is meant to capture the process I went through in learning about how to use Votapedia.

Accounts

Votapedia is based on Mediawiki. To create quizzes on Votapedia you need to get an authorised account. This consists of two steps:

  1. create an account;
    Using the normal mediawiki approach.
  2. get the account authorised or known.
    This entails sending an email to a person with some blurb about what you’re using it for. I got a response in a few hours.

Participants using the service to “vote” don’t need accounts. Those voting via the Web can create an account to use when voting, but they don’t need to.

There are limits on how many times participants can “vote”; these are enforced on the basis of IP address (if voting via the web) or phone number (if voting via phone).
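
To make that concrete, here’s a rough, illustrative sketch of how per-identifier vote limiting might work. It’s my guess at the general idea, not Votapedia’s actual implementation:

    from collections import defaultdict

    # Illustrative only: limit votes per identifier (IP address for web voters,
    # phone number for SMS voters). Not Votapedia's actual code.
    MAX_VOTES = 1

    votes_used = defaultdict(int)   # identifier -> votes recorded so far
    tallies = defaultdict(int)      # option -> running tally

    def record_vote(identifier, option):
        """Accept the vote only if this identifier is still under its limit."""
        if votes_used[identifier] >= MAX_VOTES:
            return False
        votes_used[identifier] += 1
        tallies[option] += 1
        return True

    # A web voter (keyed by IP) and a phone voter (keyed by number).
    print(record_vote("203.0.113.7", "Option A"))     # True
    print(record_vote("+61400000000", "Option B"))    # True
    print(record_vote("203.0.113.7", "Option A"))     # False - limit reached
    print(dict(tallies))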

Creating a survey

Surveys exist on their own page on the Votapedia installation of MediaWiki. You can create surveys by using some specific markup or using one of a number of “forms” which automate the process. Let’s create one.

A part of the Votapedia home page is shown in the following image. The links in the first left-hand menu are how you create surveys. There are 6 different types of surveys.

  • Simple survey
  • Questionnaire – a survey with more than one question. With this type of survey you don’t need to wait for everyone to finish a question before moving on.
  • Quiz – essentially a questionnaire, but with other features (allocate points, can’t see the quiz page before it starts…) to allow it to be used for student assessment.
  • Anonymous text – participants submit whatever they want.
  • Identified text
  • Rank expositions

Votapedia home page

Each link takes you to a basic HTML form that guides you in the information required to create the chosen survey. The following image is for the simple survey (click on it to see more).

Creating a simple survey on votapedia

Well, that’s not good. It didn’t work. Filled out the form, all good, hit submit and I get “There is currently no text in this page”.

So, I try to create another survey. Very simple and don’t do anything to upset the gods. Same error. Not good. Go looking and see there’s a link “My Surveys”, perhaps that might give me the link. Yep, the two surveys do show up on that page, see the following image.

My Surveys on Votapedia

Okay, if I click on the link for one of the surveys on “My Surveys” page I get a page with the same “error”. Now, there is a “Choose Number” link for each survey, maybe I need to select that first.

It appears that Votapedia has a limited set of phone numbers, choose number means selecting from that collection of numbers, one number for each response. Some are in red – which means you can’t use them – some are in green.

Trouble is that you can only choose one number at a time, and it always asks me to choose the first number. What about the others?

It would seem that I am missing something important.

Tried to create an “anonymous text” survey, same problem. There are other surveys that seem to be working….

Mmmm, now they are working. The main thing that changed was that I changed the password for my account. I don’t think that will have changed anything. Here’s the proof.

My first Votapedia quiz

Still only able to choose one phone number. Well, let’s try and start the survey. Hit the “Start survey” button….that seems to start it. The numbers are already there. Let’s try. Phone the number for my response, and hey presto it works. Engaged tone and the graph is updated in front of my eyes. That’s neat. Time to tell some other folk.

Ahh, I’ve now got an SMS from Votapedia thanking me for my vote and giving me some details to login to the website.

Now I did want to try and change the survey while it was running. I wanted to remove the results from the page and enable web voting. But it didn’t look like I could change it while the survey was running. So I’ve turned it off and will reset the survey and see if the changes work.

But of course, this was because I was viewing the survey through my account. Not what a visitor would see. Silly David.

Running a survey

Just briefly, asked a few colleagues to take the survey – all up 7 participants. The experience highlighted:

  • Getting the engaged signal when dialing the number gave the impression of failure – this needs to be made clear at the start.
  • The web interface for taking the poll suffered the same problem that I described above when creating the question.

Results looked like the following image. Interestingly, it seems at least some of the participants missed the “least” modifier in the question.

Results of Votapedia question

Conclusions

Still a bit dodgy in places via the web interface. Phone side worked well. Will need the right sort of preparation of participants.

The question of the phone numbers and how long it takes to dial a response is also an issue.

Thoughts on "Insidious pedagogy"

The following is a reflection on and response to a paper by Lisa Lane (2009) in First Monday titled “Insidious pedagogy: How course management systems impact teaching”. I’ve been struggling to keep up with reading, but this topic is closely connected to my thesis and the presentation I’ll be giving soon.

The post starts with my thoughts and reactions to the paper and has a summary of the paper at the end.

My Thoughts

In summary, the paper basically seems to be based on

  • observing a problem; and
    In summary, the problem is that because most academics are not expert online technology users they seek to use course management systems (CMS) at a basic level by using system defaults. The system defaults in some CMS (e.g. Blackboard) are seen to encourage limited use and also to encourage academic staff to continue as novices. These novice staff produce learning environments that are less than appropriate, but they are also happy with the CMS.
  • proposing two bits of a solution.
    The two solutions are:
    • start novices with pedagogy;
      When introducing a CMS to technically novice academic staff, don’t start by examining the technical features of the CMS. This encourages them to stick with those features without considering pedagogy. Instead, start with pedagogy and work towards the tools.
    • have the CMS use opt-in, rather than opt-out.
      The default setting for an opt-out CMS is that all of the options are there, in the face of the academic. This can be confronting and can lead novices to take more pragmatic approaches. An opt-in approach has fewer defaults, which encourages/requires the academic to think more holistically.

I like the paper, especially its description of the problem. This is an important problem that is often overlooked. However, while there is some value in the solutions – the distinction between opt-in and opt-out is especially interesting – I wonder about the practicality of the “start with pedagogy” solution. Also, not surprisingly given such a complex problem, I think there are other factors to be considered.
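
To illustrate the opt-in/opt-out distinction in the simplest possible terms (the tool names below are hypothetical, not any particular CMS’s configuration), the difference is essentially which default a new course starts from:

    # Hypothetical tool names; the point is only the difference in defaults.
    ALL_TOOLS = ["announcements", "content", "discussion", "quiz",
                 "gradebook", "wiki", "chat", "blog", "glossary"]

    # Opt-out: everything is switched on and the novice must decide what to hide.
    opt_out_course = {tool: True for tool in ALL_TOOLS}

    # Opt-in: the course starts nearly empty and tools are added deliberately.
    opt_in_course = {tool: False for tool in ALL_TOOLS}
    opt_in_course.update({"content": True, "discussion": True})

    print(sum(opt_out_course.values()), "tools visible by default (opt-out)")
    print(sum(opt_in_course.values()), "tools visible by default (opt-in)")

The novice facing the first set of defaults sees nine decisions to undo; the novice facing the second sees two things to think about.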

Practicality of “start with pedagogy”

My institution is currently in the midst of adopting Moodle. It has implemented the organisationally rational approach of having compulsory training sessions in Moodle run by both IT trainers and curriculum designers. For various reasons, a number of the staff attending these sessions have asked a common question: “What’s the minimum I need to know?”. Such staff aren’t that interested in starting with pedagogy.

This raises an interesting point that I hadn’t thought of before. Given our institutional context, I believe that the number of true novices (i.e. those that have never used a CMS) amongst academic staff is very low. Many of these staff may well have very limited conceptions of e-learning from a pedagogical perspective; however, they have started to develop “their way” of teaching online. They are comfortable with that, and all they want to know is how to replicate it in the new system.

In addition to this, most of the staff I know don’t start with pedagogy when they are designing their teaching. This can be due to not knowing much about pedagogy, or for very pragmatic reasons. For example, if you are a casual, part-time staff member employed to teach an existing course, you are going to stick with what has been done before. You’re not being paid to do something different, and any problems that arise because of “difference” will not be treated well.

Other solutions

There are many other potential solutions, I will be talking about the main ones in a couple of weeks. Some other misc ones before I get back to work:

  • Engage web novice academics in uses of the Internet – especially social media – that further their career.
    e.g. using social media to connect with other researchers, using blogs to become a “public intellectual”. This provides them with experience that makes them aware of different possibilities.
  • Modify the context of most universities to appropriately encourage a focus on improving learning and teaching.
    Are instructors motivated to spend more time on improving their teaching? What if they believe the following (Fairweather, 2005)

    More time teaching is a negative influence on academic pay….The trend is worsening most rapidly in institutions whose central mission focuses on teaching and learning

    Until universities truly value learning and teaching and treat it as such…….

  • Adopt a best of breed approach for the CMS.

Other thoughts

Other thoughts/responses include

Is Moodle really different?

Lane (2009) writes (emphasis added)

This is particularly true of integrated systems (such as Blackboard/WebCT), but is also a factor in some of the newer, more constructivist systems (Moodle).

This seems to accept the view that Moodle, being designed on social constructivist principles, is somehow different from and better than Blackboard, WebCT etc. I’m sorry to say that I haven’t seen anything significant while using Moodle that strongly shows those social constructivist principles.

I think there’s a really interesting research project around investigating this claim, how/if it is visible in the design of Moodle and how that claimed strength influences use of Moodle.

Today’s CMS can be customized

There’s a quote in the paper (Lane, 2009)

Today’s CMSs can be customized, changed and adapted

I question this a little. I think the point of the quote in the paper is made from the perspective of the academic, i.e. that when designing your course there is choice – an ability to customize your course in a variety of ways through the breadth of additional functionality that CMS vendors have provided.

I agree with that to some extent, however, this customization has some limits:

  • don’t break the model;
    All systems have an in-built model. You can only customize to the extent that you fit within the model. We had an experience in one course where we couldn’t create enough discussion forums in the right places for one pedagogical design. This was entirely due to the assumptions built into the CMS about how discussion forums would work. It broke the model, so we couldn’t do it.
  • your installation allows it.
    There is an important distinction to be made between what the CMS allows you to customize and what the particular installation of the CMS you are using allows you to customize. The decisions made by specific institutions can further constrain the level of customization. The simplest example is the choice at the institutional level not to install “module X”. But in some CMS there are also installation level configuration decisions that constrain customization.

I’ve argued elsewhere that the basic model of a CMS is based on that of an integrated, enterprise system – a product model well known to be inflexible. In fact, best practice information systems literature suggests that for such systems you must “implement vanilla” to minimise costs.

Designed to focus on instructor efficiency?

The paper (Lane, 2009) includes the following claim about the design of CMSs

Today’s enterprise–scale systems were created to manage traditional teaching tasks as if they were business processes. They were originally designed to focus on instructor efficiency for administrative functions such as grade posting, test creation, and enrollment management.

My position is that most of them were very badly designed to do this, if they were at all designed to do this. I’ve heard lots of folk explain that if you have a class for 30 or 40, then the commercial CMSs work fine, but if you have 800, you are buggered.

The first version of WebCT installed at my institution had an internal limit on the number of students that could be managed within the gradebook – 999. If you had more than 1000 students in a class, you were stuffed. My institution had classes that big.

The nature of my current institution – courses having upwards of 20 different teaching staff spread across the eastern coast of Australia – means that online assignment submission and management is an important task. Experience of staff here is that the assignment submission system in Blackboard is really bad in terms of efficiency. Early indications are that the default Moodle system is just as bad. A locally produced system is significantly more efficient.

All of this seems to bring into question the “efficiency” aspect of CMS. They don’t even do that well. We should write something on this.

Paper summary

The following is a quick summary of the paper

Introduction

Nice quote from Thoreau which I might have to steal

But lo! men have become the tools of their tools.

Draws on an historian’s view to argue that technologies tend to have a purpose/objective that can limit or even determine their use.

Course Management Systems (CMS) also do this, through the defaults in those systems. Other literature tends not to focus on this. The paper suggests that

A closer look at how course management systems work, combined with an understanding of how novices use technology, provides a clearer view of the manner in which a CMS may not only influence, but control, instructional approaches.

The inherent pedagogies of CMSs

CMS designed mostly for administrative purposes. Built-in pedagogy is essentially based on presentation and assessment. The design of these systems makes it simple to perform presentation and assessment tasks.

That said, CMSs have been expanded to include other features, and this expansion continues. Suggests that CMSs can be customized, changed and adapted. But why aren’t faculty tinkering with the CMS to make their individual pedagogies work online?

Novice web users and the CMS

Most academics are not web-heads. Most are drafted to teach online. It’s based on top-down directives. Lots more references to explain that they aren’t savvy with technology. At the same time, most have established successful learning approaches over time.

Interesting points about how much academics use the same research methods they learned in graduate school. Can expand here.

Experts and novices are different.

The fault of the defaults

Basically argues that the defaults of the CMS aren’t designed to make things easy for academics or to fit with their expectations/experience. As they spend more time with the system, they become comfortable with the defaults.

Important: makes the point about the difference of perspective between educational technologists and academics, especially how they view the CMS.

Novices are happier with CMS because – to put it bluntly – they don’t know better. It’s the folk pushing the boundaries that are less satisfied with CMS.

Solutions to CMS dominance

Treat novices differently from advanced instructors. With novices, emphasise pedagogy first. Argues that starting with technology features focuses on the novice instructor’s weakness (technological literacy) at the expense of their main strength (expertise in their discipline and their teaching).

Also suggests that “opt-out” systems – systems that show all the tools and features and expect users to choose which they don’t want – are too overwhelming for novices. Suggests that opt-in systems – like Moodle – are better, especially in the way they give similar emphasis to discussions and content transmission.

References

Fairweather, J. (2005). “Beyond the rhetoric: Trends in the relative value of teaching and research in faculty salaries.” Journal of Higher Education 76(4): 401-422.

Lane, L. (2009). “Insidious pedagogy: How course management systems impact teaching.” First Monday 14(10).

Lectures and the LMS: Alternatives and experiments

This post stores information about an experiment/presentation seeking to examine alternatives for both the lecture and the LMS. Information about the session is available below.

Resources

The video of the talk is available on ustream. Slides are below.

When

The following experiment/presentation will take place on Tuesday the 10th of November from 1pm-3pm. The time is based on the “Australia – Queensland – Brisbane” timezone; you can use this converter to make it meaningful for you, or the following table might help.

Where           When (start time)
London          Nov 10, 3am
Washington DC   Nov 9, 11pm
New Delhi       Nov 10, 8:30am
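
If you’d rather compute your local start time than rely on the table, a small sketch using Python’s standard zoneinfo module (assuming Python 3.9 or later) does the conversion:

    from datetime import datetime
    from zoneinfo import ZoneInfo

    # Session start: 1pm, 10 November 2009, Brisbane (Queensland has no DST).
    start = datetime(2009, 11, 10, 13, 0, tzinfo=ZoneInfo("Australia/Brisbane"))

    for label, tz in [("London", "Europe/London"),
                      ("Washington DC", "America/New_York"),
                      ("New Delhi", "Asia/Kolkata")]:
        local = start.astimezone(ZoneInfo(tz))
        print(f"{label}: {local.strftime('%b %d, %I:%M %p')}")

Swap in your own timezone name from the IANA database to get your local start time.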

What

The experiment/presentation will occur in two forms:

  1. Physically in a number of rooms on the campuses of CQUniversity; and
    Rockhampton – 33/G.14. Bundaberg – 1/1.12. Gladstone – MHB 1.09. Mackay – 1/1.01.

    IMPORTANT: Originally the Mackay room was not going to be available. Due to the change in time the Mackay room is now available.

  2. Virtually through ustream, twitter and Votapedia.
    The ustream will probably be through this channel. More details on the twitter and votapedia usage will be given during the presentation.

The current session structure will be:

  • Introduction and background – no more than 30 minutes.
    Explain the rationale for the experiment and get people using Votapedia and twitter.
  • Presentation – 50 minutes.
    A dry run of the EDUCAUSE’09 conference presentation.
  • Discussion and questions.
    Whatever time is left will be for discussion amongst the participants.

Abstract

Postman’s (1998) fifth of five things to know about technological change is that media or technology tends to become mythic. That is, some technologies come to be thought of as part of the natural order of things. It becomes difficult to imagine life without the technology. Postman suggests that this is dangerous because such technology becomes accepted as is and is consequently not easily modified or changed. Such difficulty is a contributing factor to what Truex et al (1999) label as stable systems drag, where an organisation battles against its constraining technologies as it seeks to adapt to an ever-changing environment. There can be no doubt that universities operate in a continuously changing environment (CQU, 2005).

This session consists of a talk and an experiment. Both aim to explore and open up for modification two mythic technologies within higher education: the lecture and the learning management system. The talk will argue for the need for alternatives to learning management systems and describe the implementation and results of such an alternative. The experiment will use various technologies (ustream, Votapedia and Twitter) to demonstrate methods to significantly modify the mythic attributes of lectures and presentations.

You will be able to participate in the talk and the experiment either by coming to one of the ISL rooms on campus or via your web browser. If you do participate, please be sure to bring your mobile phone. If you’re really keen, you may also wish to create yourself an account on Twitter for use during the presentation.

Additional Background

The talk will be a trial run of a presentation to be given at EDUCAUSE’09 in early November. The title is “Alternatives for the institutional implementation of e-learning: Lessons from 12 years of Webfuse”. The abstract for the talk follows.

The practice of e-learning in universities suffers from a number of unquestioned perspectives that limit outcomes. This presentation describes a framework for understanding the full diversity of alternate perspectives and examines one successful set of perspectives arising out of 12+ years of designing, supporting and competing with the Webfuse system.

An extended abstract of the talk is also available.

The talk will be used as the test bed for an experiment with a range of different technologies that seek to question many of the mythic attributes of the lecture or presentation. The technologies being experimented with include:

  • ustream – a live interactive video broadcast platform.
    ustream provides a free, simple to implement and easy to use approach that allows anyone with a web browser to watch the presentation live.
  • Votapedia – a web and SMS audience response system (clickers)
    Votapedia allows the presenter to pose questions and poll participants’ answers during a presentation. Votapedia will allow anyone with a mobile phone or web access to participate in these questions and answers.
  • A back channel.
    Using a combination of Twitter and features of ustream, participants will be able to share a conversation about the presentation while it is happening.

References

CQU. (2005). “CQU Strategic Plan: 2006-2011 – Creating an opening to a different future.” Retrieved 31 October, 2005, from http://policy.cqu.edu.au/Policy/policy.jsp;policyid=607.

Postman, N. (1998). Five things we need to know about technological change. NewTech, Denver, CO.

Truex, D., R. Baskerville, et al. (1999). “Growing systems in emergent organizations.” Communications of the ACM 42(8): 117-123.
