In the course I’m currently teaching the students will be expected to keep a learning journal. They’ll keep this on a personal blog that they register on the course Moodle site (but not with BIM, as the institutional process to get BIM installed is still underway). The following floats some initial ideas about how the “assessment” of the learning journal will be automated.

Reactions, criticisms and pointers welcome.

Learning journals and assessment

Moon (2006, p. 1) sums it up:

A learning journal is essentially a vehicle for reflection.

Beyond reflection, this course uses a blog as the medium for the learning journal for a variety of reasons, including:

  • Broaden the students’ ICT experience by requiring the use of blogs.
    The course is trying to help pre-service teachers understand how to integrate ICTs into their students’ learning.
  • Encourage the students to produce some online content that may support the formation of connections between students and others.
  • Encourage the students to make their learning and thinking visible.
    Too many of the students coast by listening, but never really applying.

But a problem remains: how to assess it?

In a chapter titled “Assessing journals and other reflective writing”, Moon (2006) mentions some of the following points:

  • How do you mark someone’s reflection or personal development? What is a suitable set of criteria for “reflectiveness”?
  • Limitations of available instruments: intercoder reliability and unsuitability for use with a large number of students.
  • The potential impact on students’ reflections by the assessment process.
    e.g. will they say what they think, or what they think we want to hear them say?
  • The issue of “strategic students” not engaging in tasks that aren’t assessed.

In my context, perhaps the most relevant considerations are:

  • Why expect time-poor students to engage in a task that doesn’t contribute to their result?
  • How to avoid the problem of students’ writing to the criteria?
  • How can time-poor teaching staff mark 280+ learning journals?

The current plan

In short, the current plan is to:

  1. Assign the learning journal a small percentage (2–5%) of each of three assignments.
  2. Automate the “marking” of students’ journals, i.e. have it done by a program.
  3. Supplement this with a sampling approach to double-check any attempts at subterfuge.
    And yes, I do wonder about the “catch them out” mentality that underpins this type of thought.

The criteria will focus on attributes that can be automatically analysed and that may encourage positive practices. The current list includes:

  • Regularity of posting – reflective blog posts occur regularly during the semester (e.g. they aren’t all done the night before).
    A problem here is that “life happens”. There will be reasons why regularity isn’t possible.
  • Size – posts have to be of a reasonable size.
    The quantity over quality problem applies.
  • Links – the percentage of posts that link to other resources.
    The assumption being that linking to other resources implies commenting or reflecting on the work of others. At the very least it’s indicative of a certain level of technical competence.
  • Links to the blogs of other students – the percentage of posts that link to posts of other students.
    Suggesting some attempt at building on or responding to the work of other students. A practice I’d like to encourage.
  • Copy detection – doing various forms of copy detection on the posts.

Given that BIM isn’t installed on the institutional version of Moodle, the current plan is to

  1. Have the students register their blogs with a Moodle database activity.
  2. Export that data into a version of Moodle on my computer and place it into BIM.
  3. Distribute the BIM generated OPML files to each of the teaching staff.
  4. Modify BIM to support the types of analysis listed above.
  5. Generate and distribute marking spreadsheets with the results of this automated analysis to each of the teaching staff.

Future possibilities

Given time, there’s a fair bit of extra analysis/analytics that could be added to this:

  • Readability scores
  • Explore and integrate the work of Ullmann et al. (2012) in automatically detecting reflective texts.
  • Use BIM to share the insights from this automated analysis with the students.
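Readability scores are the low-hanging fruit here. For instance, Flesch Reading Ease can be approximated in a few lines; this is a generic heuristic sketch (the syllable counter is a crude vowel-group count), not something BIM currently does:

```python
import re

def flesch_reading_ease(text):
    """Approximate Flesch Reading Ease: higher scores = easier reading.
    The syllable count is a rough heuristic, so treat results as indicative."""
    sentences = max(1, len(re.findall(r"[.!?]+", text)))
    words = re.findall(r"[A-Za-z']+", text)
    if not words:
        return 0.0
    # Count groups of consecutive vowels as syllables, minimum one per word.
    syllables = sum(max(1, len(re.findall(r"[aeiouy]+", w.lower())))
                    for w in words)
    return (206.835
            - 1.015 * (len(words) / sentences)
            - 84.6 * (syllables / len(words)))
```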

References

Moon, J. (2006). Learning journals: A handbook for reflective practice and professional development. Abingdon: Routledge.

Ullmann, T. D., Wild, F., & Scott, P. (2012). Comparing Automatically Detected Reflective Texts with Human Judgements. 2nd Workshop on Awareness and Reflection in Technology-Enhanced Learning. 7th European Conference on Technology-Enhanced Learning (pp. 101–116). Saarbruecken, Germany.