Assembling the heterogeneous elements for digital learning

Another perspective for the indicators project

The indicators project is seeking to mine the system logs of a learning management system (LMS) in order to generate useful information. One of the major problems the project faces is how to turn the mountains of data into something useful. This post outlines another potential track based on some findings from Lee et al (2007).

The abstract from Lee et al (2007) includes the following summary:

Sample data were collected online from 3713 students….The proposed model was supported by the empirical data, and the findings revealed that factors influencing learner satisfaction toward e-learning were, from greatest to least effect, organisation and clarity of digital content, breadth of digital content’s coverage, learner control, instructor rapport, enthusiasm, perceived learning value and group interaction.

Emphasis on learner satisfaction???

This research seeks to establish factors that impact on learner satisfaction: not on the actual quality of the learning itself, but on how satisfied students are with it. For some folk, this emphasis on student satisfaction is not necessarily a good thing and is, at best, only a small part of the equation, mainly because it's possible for students to be really happy with a course but to have learnt absolutely nothing from it.

However, given that most evaluation of learning at individual Australian universities, and within the entire sector, relies almost entirely on "smile sheets" (i.e. low-level surveys that test student satisfaction), an emphasis on improving student satisfaction may well be a pragmatically effective pastime.

How might it be done?

The following uses essentially the same process as a previous post that described another method for informing the indicators project's use of the mountains of data. At least that suggested approach had a bit more of an emphasis on the quality of learning.

The process is basically (a rough sketch in code follows the list):

  • Identify a framework that claims to illustrate some causality between staff/institutional actions and good outcomes.
  • Identify the individual factors.
  • Identify data mining techniques that can help test for the presence or absence of those factors.
  • Make the results available to folk.
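
For concreteness, here is a minimal sketch of what that process might look like in code. Everything in it is hypothetical: the factor names, the `forum_participation` indicator, and the shape of the log records are assumptions for illustration, not the indicators project's actual implementation.

```python
# Hypothetical sketch only: map factors from a framework (step 2) to
# functions that mine LMS logs for evidence of them (step 3), then
# pull the results together for people to look at (step 4).

def forum_participation(logs):
    """Proportion of users seen in the logs who posted to a forum.
    Assumes each log entry is a dict with 'user' and 'action' keys."""
    users = {entry["user"] for entry in logs}
    posters = {entry["user"] for entry in logs
               if entry["action"] == "forum_post"}
    return len(posters) / len(users) if users else 0.0

# Factors from the framework mapped to the tests that mine for them.
indicators = {
    "group interaction": forum_participation,
    # further factors would map to further indicator functions
}

def report(logs):
    """Run every indicator over a course's logs: one number per factor."""
    return {factor: test(logs) for factor, test in indicators.items()}
```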

In this case, the framework is the empirical testing performed by the authors to identify factors that contribute to increased student satisfaction with e-learning. The individual factors they’ve identified are:

  • organisation and clarity of digital content;
  • breadth of digital content’s coverage;
  • learner control;
  • instructor rapport;
  • enthusiasm;
  • perceived learning value; and
  • group interaction.

Now some of these can't be tested for by the indicators project, but some can. For example (a rough sketch of possible measures follows this list):

  • Organisation of digital content
    Content is usually placed into a hierarchical structure (weeks/modules and then resources). Is the hierarchy balanced?
  • Breadth of content coverage
    In my experience, it's not unusual for the amount of content to reduce significantly as the term progresses. If breadth is more even and complete, is student satisfaction greater?
  • Group interaction
    Participation in discussion forums.
  • Instructor rapport
    Participation in discussion forums and presence in the online course.
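
The first two of these reduce to much the same question: how evenly are resources spread across containers (modules for organisation, weeks for breadth)? Here is a minimal sketch of how that, and a crude instructor-presence measure, might be computed. The data shapes (lists of tuples) and function names are assumptions for illustration, not the LMS's actual tables or the project's actual code.

```python
from collections import Counter
from statistics import mean, pstdev

def evenness(resources):
    """Coefficient of variation of item counts per container.
    `resources` is assumed to be a list of (container, resource_id)
    pairs, where the container is a module (for organisation) or a
    week (for breadth). 0.0 means perfectly even; bigger is lumpier."""
    counts = list(Counter(c for c, _ in resources).values())
    if not counts:
        return 0.0
    return pstdev(counts) / mean(counts)

def instructor_presence(posts, staff_ids):
    """Share of forum posts made by staff, a crude proxy for
    instructor rapport. `posts` is assumed to be a list of
    (author_id, ...) records."""
    if not posts:
        return 0.0
    staff = sum(1 for author, *_ in posts if author in staff_ids)
    return staff / len(posts)

# e.g. content that tails off after the early weeks scores as uneven:
by_week = [(1, "r1"), (1, "r2"), (1, "r3"), (2, "r4"), (3, "r5")]
print(evenness(by_week))  # > 0: coverage is not even across weeks
```

Whether these numbers actually track the satisfaction factors is, of course, exactly what the project would need to test empirically.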

Questions

I wonder if the perception that there is a lot of content covering the entire course is sufficient. Are students happy enough simply knowing the material is there? Does whether or not they actually use it become academic?

References

Lee, Y., Tseng, S., et al. (2007). “Antecedents of Learner Satisfaction toward E-learning.” Journal of American Academy of Business, 11(2), 161-168.
