Assembling the heterogeneous elements for (digital) learning


How does BIM allocate blog posts to prompts

The following is a response to this query.

https://twitter.com/UOWMoodleLAB/status/729511696054681600

Background

BIM is a plugin for the Moodle LMS. BIM is “Designed to support the management of individual student blogs (typically external to Moodle) as personal learning/reflective journals”. Students create their individual blogs (or anything that produces a RSS/Atom feed) and register it with BIM. BIM then mirrors all posts within the Moodle course and provides functionality to support the management and marking.

A part of that functionality allows the teacher to create “prompts”. The design of the original tool (BAM) assumed that students would write posts that respond to these prompts. These posts would be marked by teaching staff.

BAM (and subsequently BIM) was designed to do very simple pattern matching to auto-allocate a student post to a particular prompt. It also provides an interface that allows teaching staff to manually change the allocations.

Defining a prompt

A prompt in BIM has the following characteristics

  • title;
    A name/title for the prompt. Usually a single line. The original design of BIM assumed that this title was somewhat related to the title of a blog post. The advice to students was to include the title of the prompt in the title of their blog post, or in the body of the blog post.
  • description; and,
    A collection of HTML intended to describe the prompt and the requirements expected of the student’s blog post.
  • minimum and maximum mark.
    Numeric indication of the mark range for the post. Used as advice only. If the marker goes outside the range, they get a reminder about the range and it’s up to them to take action.

Auto-allocation

Auto-allocation only occurs during the mirror process. This is the process where BIM checks each student’s feed to see if there are new posts.

When BIM finds a new post from a student blog it will loop through all of the unallocated prompts, i.e. if this student already has a blog post allocated to the first prompt, BIM won't try to allocate any more posts to that prompt.

BIM will allocate the new post to an unallocated prompt if it finds the prompt title in either the body of the blog post, or the title of the blog post. BIM ignores case and it tries to ignore white space in the prompt title.
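BIM itself implements this matching in PHP inside the mirror process; as a rough sketch (in Python, purely illustrative, not BIM's actual code), the rule amounts to a case- and whitespace-insensitive substring test:

```python
import re

def matches_prompt(prompt_title, post_title, post_body):
    """Case- and whitespace-insensitive check: does the prompt title
    appear in either the post title or the post body?"""
    def normalise(text):
        # Collapse runs of whitespace to single spaces and lowercase.
        return re.sub(r"\s+", " ", text).lower().strip()

    needle = normalise(prompt_title)
    return needle in normalise(post_title) or needle in normalise(post_body)
```

Normalising both the prompt title and the post text first is what lets a prompt like "DOES    BIM ALLOCATE   BLOG POSTS" match a post titled "How does BIM allocate blog posts to prompts".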

For example, if this blog post is the new blog post found by BIM, then BIM will make the following decisions

  1. ALLOCATE: the post to a prompt with a title of “does BIM allocate blog posts”.
    This matches exactly the title of this blog post.
  2. ALLOCATE: the post to a prompt with a title of “DOES    BIM ALLOCATE   BLOG POSTS”.
    BIM ignores case and white space, hence this matches the title of this blog post.
  3. ALLOCATE: the post to a prompt with a title of “Auto-allocation”.
    The body of this post includes the word Auto-allocation.
  4. DO NOT ALLOCATE: the post to a prompt with a title of “does BAM allocate blog posts”.
    (Assuming that the above line didn’t appear in this post) This particular phrase (note the A in BAM) would not occur in the title or the body of this post, and hence would not be matched.

 

Exploring BIM + sentiment analysis – what might it say about student blog posts

The following documents some initial exploration into why, if, and how sentiment analysis might be added to the BIM module for Moodle. BIM is a tool that helps manage and mirror blog posts from individual student blogs. Sentiment analysis is an application of algorithms to identify the sentiment/emotions/polarity of a person/author through their writing and other artefacts. The theory is that sentiment analysis can alert a teacher if a student has written something that is deemed sad, worried, or confused; but also happy, confident etc.

Of course, the promise of analytics-based approaches like this may be oversold. There’s a suggestion that some approaches are wrong 4 out of 10 times. But I’ve seen other suggestions that human beings can be wrong at the same task 3 out of 10 times. So the questions are

  1. Just how hard is it (and what is required) to add some form of sentiment analysis to BIM?
  2. Is there any value in the output?

Some background on sentiment analysis

Sentiment analysis tends to assume a negative/positive orientation, i.e. good/bad, like/dislike: the polarity. There are various methods for performing the analysis/opinion mining, and there are challenges in analysing text (my focus) alone.

Lots of research going on in this sphere.

Of course, there are also folk building, and some selling, tools. e.g. Indico is one I’ve heard of recently. Of course, they all have their limitations and sweet spots; Indico’s sentiment analysis is apparently good for

Text sequences ranging from 1-10 sentences with clear polarity (reddit, Facebook, etc.)

That is perhaps starting to fall outside what might be expected of blog posts. But may fit with this collection of data. Worth a try in the time I’ve got left.

Quick test of indico

indico provides a REST-based API that includes sentiment analysis. Get an API key and you can throw data at it, and it will give you a number between 0 (negative) and 1 (positive).

You can even try it out manually. Some quick manual tests

  • “happy great day fantastic” generates the result 0.99998833
  • “terrible sad unhappy bad” generates 0.000027934347704855157
  • “tomorrow is my birthday. Am I sad or happy” generates 0.7929542492644698
  • “tomorrow is my birthday. I am sad” generates 0.2327375924840286
  • “tomorrow is my birthday. I am somewhat happy” 0.8837247819167975
  • “tomorrow is my birthday. I am very happy” 0.993121363266806

With that very ill-informed testing, there are at least some glimmers of hope.
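To get a feel for what a polarity score is doing, here's a toy lexicon-based scorer. This is a deliberately naive sketch: indico's service uses a trained model, not word counting, and the word lists below are invented for illustration.

```python
# Invented-for-illustration sentiment lexicons; a real system learns these.
POSITIVE = {"happy", "great", "fantastic", "good", "confident"}
NEGATIVE = {"terrible", "sad", "unhappy", "bad", "worried"}

def polarity(text):
    """Return a score between 0 (negative) and 1 (positive)."""
    words = [w.strip(".,!?") for w in text.lower().split()]
    pos = sum(w in POSITIVE for w in words)
    neg = sum(w in NEGATIVE for w in words)
    if pos + neg == 0:
        return 0.5  # no sentiment-bearing words: call it neutral
    return pos / (pos + neg)
```

Even this crude counting reproduces the flavour of the manual tests above: all-positive words score 1, all-negative words score 0, and mixed text lands in between.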

Does it work on blog posts? Actually, not that bad. Certainly good enough to play around with some more and as a proof of concept in my constrained circumstances. Of course, indico is by no means the only tool available (e.g. meaningcloud).

But for the purpose of the talk I have to give in a couple of weeks, I should be able to use this to knock up something that works with the more student details script.

Evaluating the use of blogs/reflective journals

The use of blogs in one of the courses I teach is now into its fourth semester. Well past time to explore how it’s all going, evaluate some of the design decisions, and make some decisions about future developments. In preparation for that it’s time to look at some of the extant literature for findings and methods. The following is the first such summary and is focused on @spalm et al’s

Palmer, S., Holt, D., & Bray, S. (2008). The learning outcomes of an online reflective journal in engineering, 724–732.

Summary

Looks at the use of an “online reflective journal” (implemented using discussion forum in WebCT) in a 4th year engineering course. Combines a survey-based evaluation of student perceptions with “an analysis of student use of the journal…to investigate its contribution to unit learning outcomes”

Findings are

  • Most students understood the purpose and valued the journal in their learning
  • Most read the entries of others and said this helped their learning
  • Two most useful things about the journal
    • The need to continuously revise course material
    • Ability to check personal understanding against that of others
  • Least useful related to problems with using WebCT – the difficulty of the interface and “problems with CMS operation”.
  • Significant contributors to final mark
    • Prior academic performance
    • Number of journal postings
    • Mode of study

Thoughts

The quantitative nature of this is interesting as it’s something I need to do more of. If only to gain some experience of this approach and to tick the box (which is a great reason).

There is always going to be a bit of a “so what” issue with this type of thing as research. What really does a study in a single course reveal that’s new or easily transferable? But perhaps doing almost a replication addresses that somewhat.

Of course the real interest in doing this research is just to find out more about what’s going on in the course, its impact and what the students think so it can inform further development.

To do

  • Can I get a copy of the evaluation survey used?
  • Can I apply the evaluation survey to past students?
  • Do I need to get student permission to analyse the data around their use of their blogs and final results?

Comparison with EDC3100

My impression is that prior academic performance would be the most significant factor in EDC3100. This raises the question of how much value there is in what teachers do: if prior academic performance is a big contributing factor, are we “failing” the weaker students?

Question: Could the students’ performance on the course be included in the evaluation survey in some form?

I think mode of study might play a role. My guess is that the online students would see the value in the blogs more than some of the on-campus students.

Question: What impact does mode of study play in EDC3100?

Given that the blog posts were only marked based on number of posts, average word count and number of links it would be interesting to explore what impact that had on the final mark. Especially given the quote about “assessment” as a “strategic tool for creating student engagement”.

Question: Exactly what type of student engagement is the assessment of the blogs in EDC3100 creating?

Question: What patterns exist in the learning journal marks?

The journaling contributes 5% of the total course mark for each of the 3 assignments. Is there a pattern in the marks for each assignment and the final outcome?

Question: Does the medium used make any difference?

In EDC3100 students are using their own blog hosted on their choice of external service. Very different from an LMS forum. Does this make a difference? Is it more their space?

Is it seen as too difficult or a waste of time creating a blog?

Question: Is the collection of technologies used to complete this task too difficult?

To get marks students have to create their blog (e.g. WordPress) and follow the blogs of others (Feedly, WordPress “follow” mechanism, WordPress reader etc). This leads to problems with creating links to posts. E.g. students link to the post in Feedly or the WordPress reader, not in the student’s blog.

But that said, the interface is “better” (subjective) and might be seen as more realistic – not a Uni tool, something broader.

Question: Does the time of posting make any difference? What are the different patterns of posting visible? Any patterns indicating task corruption?

Palmer et al (2008)’s journal is essentially a weekly task. In EDC3100 students need to post at least 3 posts a week to get full marks. There is no specific direction as to what or when to post. What patterns are there?
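A first step towards answering the pattern questions would be to bucket each student's post timestamps into semester weeks. A sketch only (the function and data here are illustrative, not part of BIM):

```python
from datetime import date

def posts_per_week(post_dates, semester_start):
    """Count posts falling in each (1-indexed) week of semester."""
    counts = {}
    for posted in post_dates:
        week = (posted - semester_start).days // 7 + 1
        counts[week] = counts.get(week, 0) + 1
    return counts
```

A student steadily meeting the "3 posts a week" expectation would show a flat distribution; a burst of posts just before an assignment is due would show up as one large bucket, which is the sort of pattern that might indicate task corruption.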

Question: How does the perception of reflection/journaling in the discipline impact thoughts?

Palmer et al (2008) describe how a work journal is a common practice for engineers. Hence doing this in the course can be linked to professional practice.

The same doesn’t apply to the teaching profession. While reflection is seen as important, there doesn’t appear to be an accepted practice of regularly keeping a work journal.

Making BIM ready for Moodle 2.6

The very nice folk from my institution’s ICT group warned me back in March that

I have started work on the moodle 2.6 upgrade that will be happening midyear and have come across some deprecation warning from BIM. Just giving you plenty of notice that an updated version will be needed before release.

That was just as my first use of BIM on the institution’s servers was getting underway. That’s gone reasonably well and it will be continuing (and hopefully expanding as I learn more about what’s required and possible with the approach) next semester, so I better get BIM playing nicely with 2.6. That’s what this post is reporting on.

BIM for Moodle 2.6 (and also 2.5) is available from the BIM plugin database entry and also from GitHub.

Get Moodle 2.6 running

Let’s get the latest version of Moodle 2.6 – 2.6.3 – and install that.

So that’s the first change. PHP setting for caching. Not that I’ll need that for testing. Looks like I can ignore it for now.

Get BIM installed

I’m doing this so irregularly now it’s good that I actually documented this last time.

That all appears to be working. Ahh, but I haven’t turned the debugging all the way up to annoying yet.

That’s better

get_context_instance() is deprecated, please use context_xxxx::instance() instead.

And about this stage it was always going to be time to….

Check the Moodle 2.6 release notes

The Moodle 2.6 release notes and then the developer notes. Nothing particularly related to this warning.

Do it manually

As outlined in this message it appears that this particular usage has been deprecated for a few versions. The deprecatedlib.php suggests this gets removed in 2.8.

So the changes I’m doing appear like this
[code language=”php”]
#$context = get_context_instance( CONTEXT_COURSE, $course->id );
$context = context_course::instance( $course->id );
[/code]

I can see this is needed in the following

  • ./coordinator/allocate_markers.php
  • ./coordinator/find_student.php
  • ./index.php **done?**
  • ./lib/groups.php
  • ./lib/locallib.php
  • ./marker/view.php
  • ./view.php – this one had actually been done earlier
    #$context = get_context_instance( CONTEXT_MODULE, $cm->id );
    $context = context_module::instance( $cm->id );

That all seems to be working.

Do a big test

Will back up a large BIM activity with a temp course from my Moodle 2.5 instance and restore it under Moodle 2.6.

Some more issues

print_container() is deprecated. Please use $OUTPUT->container() instead. Done

Identifying some immediate changes to BIM

I have until the 21st of February to get BIM tested and ready for installation into the institutional Moodle instance. The following is some initial planning of what I’d like to get done in that time frame. A list that will then need to be further whittled away to what I can get done in that time frame. There are three categories of changes

  1. Changes to better support the pedagogy I’m currently using.
  2. Changes from the BIM issues list.
  3. Changes to ensure correct functioning.

Better support the pedagogy

The pedagogy/learning design that informed the initial design of BIM is fairly limiting. The learning design/pedagogy I’m currently using isn’t directly supported by BIM. I found myself last year doing a range of programming kludges to get it to work. This won’t work in the second half of this year when a non-technical academic takes over as course coordinator. BIM better fitting the current learning design saves me time and enables other people to use this approach.

  • Allow more than one post to be allocated to a question

    Already had this as an issue. Would allow “questions” to be thought of as modules (in EDC3100 speak) or time periods.

  • Allow student allocation of posts to a question.

    Mentioned in this issue (amongst other extensions).

  • The student interface would also need to be changed to handle the display of multiple posts to a question.
  • Have BIM generate statistics about length of posts, number of links and number of links to other student posts. Display this to the student and generate a CSV file for the marker.
  • Also seems to suggest some sort of auto mark generation based on the statistics.
  • Have a default allocation of posts to questions/topics based on the date OR just have it set to a particular value (i.e. all posts should now be allocated – by default – to Module 1).

The basic idea behind these changes is that students are required to make a number of posts per module in the course (one module = about 3 weeks). Students’ marks for the posts for each module are based solely on the number of posts and on the posts meeting a certain set of statistics (length of posts, links etc). These marks are added as part of the bigger assignment that is associated with the module.
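The statistics themselves (word count, number of links, links to other students' posts) are straightforward to compute from a post's HTML. BIM is PHP, so this is only a Python sketch of the idea; the "link to another student" test is an assumption, approximated here by matching against known class blog URLs:

```python
from html.parser import HTMLParser

class PostStats(HTMLParser):
    """Collect rough statistics from the HTML body of a blog post:
    word count and the URLs it links to."""
    def __init__(self):
        super().__init__()
        self.links = []
        self.words = 0

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            for name, value in attrs:
                if name == "href" and value:
                    self.links.append(value)

    def handle_data(self, data):
        self.words += len(data.split())

def post_statistics(html, class_blog_urls):
    parser = PostStats()
    parser.feed(html)
    # Approximation: a link "to another student's post" is any link whose
    # URL starts with a known class blog URL. BIM would need the set of
    # registered feed URLs to do this properly.
    to_students = [link for link in parser.links
                   if any(link.startswith(url) for url in class_blog_urls)]
    return {"words": parser.words,
            "links": len(parser.links),
            "links_to_students": len(to_students)}
```

Displaying these numbers to students as they post, and exporting them as a CSV for markers, would cover both requirements above.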

Students need to be able to see their progress. Markers need to be able to access the statistics to mark assignments.

Changes required would likely include

  1. Changes to student interface
    1. Show question allocation as a row (not a column).
    2. Replace the “Status” column with a “Statistics” column that includes the number of words, links etc statistics
    3. What options exist with parsing HTML and extracting links in PHP and in Moodle.
    4. Show unallocated posts together in a separate set of rows – perhaps even a special form – where the questions allocation drop box is available.
  2. Changes to the coordinator interface
    1. Configuration option for multiple questions

      database change required

    2. Change the default allocation of posts.
    3. Somehow deal with questions that aren’t questions.
    4. How to specify statistics for auto marking.
    5. What if any changes are required for the “Manage Marking” page.

      May not need it. As this page simply shows counts. With multiple posts to a question, the count should still work.

    6. What to do about the workflow and the idea of marked/suspended/released if posts aren’t being marked, simply analysed.
  3. Changes to the marker interface
    1. “Mark posts” will still need to be used to allocate posts (or should this feature be added to “View student details”) even though “marking” may not make sense.
    2. “Mark posts” cell with a question would need to show the number of posts for that question implying no way to mark directly. Or perhaps list each post? Needs thought here
    3. “Allocate posts” page will need to retain all question names in the “Choose one” drop box.
    4. “Allocate posts” page should also have an additional heading to group multiple posts to the one question — this may enable doing without the database change for the new configuration option.
    5. May need to add to “Allocate posts” a link to “mark this post” so that “Mark posts” page can point to a list of posts and one can be chosen for marking. Mmmm.
    6. One of these pages should have a link to export a CSV file containing marks for students against posts.

From the BIM issues list

The BIM source code is hosted on github and I’ve been using the associated issue list to record any ad hoc improvements/fixes. The following are the issues that would be nice

  • No response to find student – a bug, has it been fixed yet?

    Seems to be specific to Moodle 2.5, so not directly applicable to the institutional context.

  • Warn of summary feeds

    Problem from last year: blogs configured to show just the first few lines of posts, not the whole post.

  • A couple of issues about updating posts – allow students/markers to update posts stored in BIM (mostly to fix errors or recent changes).

Correct functioning

  • Check out any and all warnings being generated by BIM now.
  • Bulk email

    Used in a number of places. This is not working. Either on mootest or my box. A missing parameter. Not sure when this cropped up.

  • User search

    Search for a student within BIM isn’t working on mootest, it does work on my box. My initial guess is some SQL type queries in BIM that are MySQL specific.

  • All teaching staff are coordinators

    The distinction between coordinator and marker isn’t kicking in as it should.

Reflective Blogging as part of ICT Professional Development to Support Pedagogical Change

I am planning to do some more work on BIM in preparation for using it in teaching this year, including finishing some analysis of how the blogging went in last year’s two offerings.

As luck would have it, I skimmed one of my feeds and came across Prestridge (2014). What follows is a summary and some thoughts. It’s nice to be reading an open access journal paper after a few very closed off articles.

Aside: I am wondering whether or not, in the new world order, being someone who reads feeds and has students blog is becoming somewhat old fashioned.

Abstract

The abstract for Prestridge (2014) is

Reflection is considered an inherent part of teacher practice. However, when used within professional development activity, it is fraught with issues associated with teacher confidence and skill in reflective action. Coupled with anxiety generally associated with technological competency and understanding the nature of blogging, constructive reflection is difficult for teachers. This paper focuses on the reflective quality of school teachers’ blogs. It describes teachers’ perceptions and engagement in reflective activity as part of an ICT professional development program. Reflective entries are drawn from a series of blogs that are analysed qualitatively using Hatton and Smith’s (1995) three levels of reflection-on-action. The findings suggest that each level of reflective action plays a different role in enabling teachers to transform their ICT pedagogical beliefs and practices. Each role is defined and illustrated suggesting the value of such activity within ICT professional development, consequently reshaping what constitutes effective professional development in ICT.

This appears to be relevant to what I do as the course I teach is titled “ICTs and Pedagogy” and reflection through blogging is a key foundation to the pedagogy in the course. Of course, this appears to be focused on in-service, rather than pre-service teachers.

Introduction

In the Australian education context various government policies illustrate that ICTs are important. There’s a move to 1-to-1 student/computer ratios. However, “success with regard to technology integration has been based on how extensive or prominent the use of it has been in schools rather than on whether the teacher has been able to utilize it for ‘new’, ‘better’, or more ‘relevant’ learning outcomes (Moyle, 2010)” (Prestridge, 2014). Suggests a need to “reconceptualise both the intentions and approaches to professional development” if there’s going to be an ROI on this government investment and if we’re to help teachers deal with this.

PD is “an instrument to support change in teacher practice”. Long held view that PD should move from “up-skilling in the latest software” to a deeper approach that focuses on pedagogy and context rather than technology; building teachers’ confidence in change; development of teachers’ metacognitive skills; and, a philosophical revisioning of ICT in learning (references attached to each of these). References work by Fisher et al (2006) as requesting “a cultural change in the teaching profession”, the principles of which need to be “activated within ICT professional development if we are going to move from retooling teachers to enabling them to transform their practices”.

Note: I wonder how well this academic call for a cultural change matches the perceptions of teachers and the organisations that employ them? I have a feeling that some/many teachers are likely to be more pragmatic. As for the organisations and the folk that run them….

And now onto the importance/relevance of reflection to this. Schon gets a mention. As does the action research spiral, teacher-as-researcher, inquiry based professional development, reflective action, Dewey. Leading to research suggesting “that reflection brings automatic calls in the improvement of teaching” and other work suggesting there’s a lack of substantive evidence.

This paper aims to investigate “the role of written reflection as a central process in a ‘hearts and mind’ approach to ICT professional development”.

Note: The mix of plural and singular in “hearts and mind” is interesting/a typo in this era of standardised outcomes/curriculum and increasing corporatisation.

Methods to framing the research

Background on a broader ARC funded project that aims to develop “a transformative model of teacher ICT professional development”. With “one or two teachers” volunteering from each school it would appear to suffer the problem of pioneers versus the rest. Teachers engaged in classroom inquiries, in particular the “implementation of an ICT application in regard their pedagogical practices and student learning outcomes”. Supported through a local school leader, outside expert, online discussion forum and personal blogging.

Has a table that lists the inquiry questions of the 8 teachers. Questions range from “How can students be supported when creating an electronic picture book using the theme ‘Program Achieve’?” to “What strategies need to be employed to promote effective/productive ICT practices that encourage intellectual demand and recognise difference and diversity?”

Teachers were encouraged to blog after teaching. Provided with a framework for reflecting after teaching (5R framework). Weekly blog mandatory. School leaders asked to encourage blogging.

This work focuses on

  1. teachers’ perceived value of the reflective activity

    Data from teachers’ final interviews and reports analysed using constant comparative method

  2. the role of written reflection in enabling change in pedagogy

    Blog posts analysed with Hatton and Smith’s (1995) three types of writing: descriptive reflection; dialogic reflection; and, critical reflection.

Results

  • 1 teacher reflected consistently over the year’s implementation
  • 4 teachers had spasmodic entries, mostly at the beginning
  • the other teachers’ writing could be seen as simply record keeping

Finds similar results in other use of reflective blogs and suggests that “teachers’ lack of understanding on how to reflect limits their reflective writing abilities”.

Note: Not a great result perhaps, but not entirely unexpected. Might get some idea of this from my students’ posts in 2013 later today.

Perceived value of reflective writing

Not surprisingly, the “consistent reflection” person liked blogging. Others didn’t.

A major theme on the value was “a lack of understanding on how to reflect”.

Note: I have a feeling this may be one factor for my students. Though I wonder how much of a role pragmatism plays, and especially the fact that reflective blogging falls outside the realm of standard practice for many.


“What to write in the reflective blog and then what to do with these reflections were issues raised by the teachers”

Note: Raising the issue of BIM being better at providing “prompts” to students.

Ahh, a quote from a participant brings back the “realm of standard practice” issue

I think because this inquiry thing was such a different way of doing things I’ve ever done before, it took me a while to get fair dinkum about it. I still couldn’t get the blog…..that’s one positive that’s come out of it because if I were asked to do something like this again then I would do it much more readily.

Picks up on the idea of “reflection as description” through a number of quotes. An apparent lack of priority given to analysing what had happened and going beyond description.

This is even though the teachers were given the 5R framework and a range of questions/prompts in project documentation and comments by the outside PD expert.

Note: Given this difficulty in understanding how to write reflectively, what impact does it have on the next part of the paper “examining the role of written reflections to identify how reflection supports teacher change in pedagogy”?

The role of reflective writing

The obvious “solution” is to focus on the 1 teacher who consistently blogged, generating the problem of a sample of 1.

The posts were analysed in chronological order. Emphasis on linking the type of reflection and the role it plays in “improving and or supporting teachers in transforming the beliefs and practices”

The most common type of reflection is descriptive, which really isn’t reflection. But this descriptive reflection “provides a leverage for dialogic reflection” which may or may not be pursued. As it turns out, generally not chosen. Only when a critical friend provides some additional prompting does it appear.

When it did occur, it helped shape the teacher’s pedagogical beliefs and practices. Descriptive reflection made conscious the connection between pedagogical beliefs and actual practice, but more as a justification.

Only spasmodic evidence of critical reflection.

Suggests that data supports the conclusion that there’s a developmental sequence to reflection. Start with descriptive and then the more demanding forms emerge.

The role played by each type of reflection in transforming pedagogical beliefs and practices

  1. Descriptive reflection – a connector, making conscious the links between pedagogical beliefs, current teaching practices, and student learning outcomes.
  2. Dialogical reflection – the shaper, where the connections were examined and explored, enabling transformation.
  3. Critical reflection – a positioner. Placing the role of the teacher in the broader context and critically evaluating that role.

Conclusion

If how to reflect in written form is understood, then “reflective action plays a significant part in enabling them to change their pedagogical beliefs and practices”. Each type of reflection plays a different role.

A lack of guidance and support were found to affect reflective action.

References

Prestridge, S. J. (2014). Reflective Blogging as part of ICT Professional Development to Support Pedagogical Change. The Australian Journal of Teacher Education, 39(2).

#moodle, blogs, Moodle blogs and #bim

BIM is a Moodle activity module that I developed and use. BIM evolved from an earlier system called Blog Aggregation Management (BAM). BIM’s acronym is BAM Into Moodle. As the name suggests, BIM is essentially a port of all of BAM’s functionality into Moodle. Both BAM and BIM are designed to help with the task of managing students in a course writing and reflecting on their own individual web blogs. In particular, it was designed to do this for courses with hundreds of students.

The aim of this post is to explore and explain a comment that often arises when BIM is first mentioned. i.e. doesn’t Moodle already offer blog integration? The following tweet from @tim_hunt is an example of this.

https://twitter.com/tim_hunt/status/397489330169458688

The aim here is to answer the question, “What does BIM offer that Moodle’s existing blog integration doesn’t already provide?”

In short,

  • Blogs in Moodle are focused at providing a way for authors to create a blog inside of a Moodle instance.
  • BIM is focused on supporting teaching staff in managing a course where all students are expected to write on their own externally hosted blog.

Blogs in Moodle

Each user in Moodle has their own blog. i.e. the user’s (student, teacher or other) blog resides in Moodle. The functionality used to create and edit blog posts is provided by Moodle.

Each user’s blog can have an RSS feed if configured (by default this is turned off). However, standard advice appears to be to have RSS feeds secured (i.e. only people who can login to Moodle can access the feed).

There is support for “course tags” which allow particular posts to be associated with a course. Posts associated with courses in this way are still visible elsewhere.

If the Moodle administrators have enabled it, users can register their external blog with their Moodle blog. For example, if I registered this blog with a Moodle blog, then anything I post to this blog would also appear in my Moodle blog. Posts from an external blog can be deleted from a Moodle blog, but can’t be edited.

Summary

Moodle’s blog functionality is focused on helping users create and maintain a blog that sits within a Moodle instance.

It is user-focused, not course-focused. e.g. it appears to offer no functionality for teaching staff to find out which students have blogged or haven’t, and no functionality to mark blog posts.

The problem here (at least for some) is that

Reflective learning journals are demanding and time-consuming for both students and staff (Thorpe, 2004, p. 339)

Blogs with BIM

BIM doesn’t provide any functionality for students or teachers to create a blog. Instead, BIM relies on the author creating a blog on their choice of blogging platform (e.g. I always recommend WordPress.com). This means that the students’ blogs (it’s almost always student blogs that BIM works with) are hosted external to the LMS. Each student’s blog is their individual blog.

What BIM does is

  • Make a copy within the LMS of all the posts students make on their blogs, just in case the dog eats it.
  • Provide a couple of aggregated views that show who has blogged, how much they’ve blogged and how recently they’ve blogged.
  • Allow different teaching staff to see these aggregated views for the students they are responsible for (while the “in charge” teacher can see all).
  • Show which students haven’t registered their blogs yet and provide a mail merge facility to remind them to do it.
  • Provide an interface so students can check what BIM knows about their posts.
  • If you really want to, mark student posts.

    This is done by specifying a set of questions that student posts should respond to, and by providing a marking and moderation interface. Finally, the marks integrate into the Moodle gradebook.
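BIM itself is written in PHP against Moodle's APIs, so the following is purely an illustration of the mirroring idea in Python. The feed fragment is a minimal RSS 2.0 example and the function names are mine, not BIM's:

```python
# Illustrative sketch of the "mirror" step: parse a student's RSS feed
# and keep a local copy of any posts not already stored.
import xml.etree.ElementTree as ET

def parse_rss(feed_xml):
    """Extract title/link/body dicts from an RSS 2.0 feed string."""
    root = ET.fromstring(feed_xml)
    return [{"title": item.findtext("title", default=""),
             "link": item.findtext("link", default=""),
             "body": item.findtext("description", default="")}
            for item in root.iter("item")]

def mirror(feed_xml, store):
    """Add any new posts (keyed by link) to the local store."""
    new = [p for p in parse_rss(feed_xml) if p["link"] not in store]
    for p in new:
        store[p["link"]] = p
    return new

feed = """<rss version="2.0"><channel>
  <item><title>Prompt 1 response</title>
        <link>http://example.com/1</link>
        <description>My reflection...</description></item>
</channel></rss>"""

store = {}
mirror(feed, store)   # first run copies the post into the local store
```

A second mirror run over the same feed finds nothing new, which is what lets the real mirror process run repeatedly without duplicating posts.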

Summary

BIM functionality is focused on managing (and marking) of student blog posts. It aims to reduce the time-consuming nature of reflective journals implemented using blogs.

The functionality BIM currently provides for this task remains essentially what was designed into BAM in 2007. I’m hoping 2014 will see some long overdue evolution in functionality.

Moodle blogs and BIM?

The Moodle blog functionality is all about helping authors produce blogs. BIM is currently all about helping teachers manage and mark the student use of blogs. It is possible to argue that neither does an overly fantastic job.

This means that it should be possible for the two to work together. i.e. a student could register their Moodle blog with BIM, rather than using WordPress or some other external service. Indeed it is. I’ve just successfully registered a Moodle user blog in BIM.

This is of potential interest in situations where what the students are reflecting on might raise privacy concerns (e.g. nursing students – or just about any other profession – reflecting on their placement experiences). In this situation, the students could create their blog within Moodle and register the RSS feed with BIM.

However, the privacy of this approach depends on the blog visibility settings within Moodle and their impact on the generation of the RSS file. There appear to be three relevant settings for “blog visibility” in Moodle

  • “The world can read entries set to be world-accessible”
  • “All site users can see all blog entries”
  • “Users can only see their own blog”

The question is what effect this visibility setting will have on the RSS file required by BIM. i.e. If visibility is set at “Users can only see their own blog” will this stop generation of the RSS file? A quick test seems to suggest that the RSS file is still generated.

This raises another question about privacy. The “security” or “privacy” of the RSS file generated by a Moodle blog is an example of security through obscurity, i.e. if you know the URL for the RSS file, you can view it. The “security” arises because the URL includes a long string of hexadecimal digits that makes it hard to guess.
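To get a rough feel for why that's hard to guess (illustrative only – I haven't checked exactly how Moodle constructs the token):

```python
# Each hex digit carries 4 bits, so a 32-character hex token (a common
# md5-style length; the exact length Moodle uses is an assumption here)
# gives 16**32 = 2**128 possible URLs -- infeasible to guess by brute force.
hex_chars = 32
possibilities = 16 ** hex_chars
assert possibilities == 2 ** 128   # roughly 3.4e38 possible URLs
```

Obscurity-based security still fails instantly if the URL leaks, which is the real caveat for the privacy-sensitive use case above.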

References

Thorpe, K. (2004). Reflective learning journals: From concept to practice. Reflective Practice: International and Multidisciplinary Perspectives, 5(3), 327–343.

BIM for Moodle 2.5

Earlier this week @sthcrft asked
https://twitter.com/sthcrft/status/397143586178756608

Talk about good timing. My shiny new Mac laptop arrived the same day and I’d been waiting on its arrival to explore whether or not BIM was Moodle “2.5ish happy”. It turns out that there are a few tweaks required and some improvements made possible. The following records those tweaks.

Current status

BIM seems to be working on Moodle 2.5.

I have made a minor change so that there is now a branch of BIM specific to Moodle 2.5. It will probably become the master branch in coming days.

I’ve tested the changes with my current course’s use of BIM – about 100 students – but have yet to add this to the Moodle plugin database.

Crashing on tabs

It was looking quite good for BIM on Moodle 2.5. Installed without a problem and appeared to be basically working. Some of the interface tweaks helped the unmodified BIM look a bit nicer.

But then I tried to “Find a student”, at which stage it appears to crash/stall/hang. It sits there, never completing (or at least not for a very long time).

A bit of exploration of what’s happening suggests that the problem is with print_tab which appears to be deprecated from Moodle 2.5 onwards. A quick translate to the new alternative still left the same problem. The tabs work for all of the pages, but not on the submission of “Find Student”.

And back to this on the next day.

After a lot of wasted time – you idiot – it turns out I hadn’t set up the HTTP proxy on my server and that was causing the delay. And again, you idiot.

Other tests

Tests as other users all seemed to work fine.

Layout issues

Some of the more “busy” pages for the coordinator (some overlap with the marker) don’t display very well. Never have really, but the current default theme emphasises those problems. Let’s change to another theme and see.

  • The text editor for comments on MarkPost overlaps a bit

These are minor issues and, after a quick look, I can’t see any quick way to solve them beyond a broader re-working of the interface.

Nested tabs

The move to tab tree apparently gives scope for nested tabs, which could solve one of the (many) uglies in BIM, i.e. the coordinator’s ability under “Your students” to view details and mark posts. Implementing these as nested tabs could be useful. An exploration.

That seems to work surprisingly easily. Now to remove the old kludge.

BIM and broken moodle capabilities

The following is a long overdue attempt to identify and solve an issue with BIM.

The problem

BIM provides three different interfaces depending on the type of user. These are

  1. coordinator;

    The name is a hangover from a past institution, but essentially this is the teacher in charge. Can do anything and everything, including the management of the marking being done by other staff.

  2. marker;

    Another staff role, mostly focused on marking/looking after a specific group of students.

  3. student.

    What each student sees.

The problem is that the code that distinguishes between the different types of users is not working.

For example, BIM thinks a user who should be a coordinator is potentially all three.

The method

The method I use (which was used in BIM 1 and has worked fine) is based on capabilities – essentially a few ifs.
[sourcecode lang="php"]
if ( has_capability( 'mod/bim:marker', $context )) {
    // do marker stuff
}
if ( has_capability( 'mod/bim:student', $context )) {
    // do student stuff
}
if ( has_capability( 'mod/bim:coordinator', $context )) {
    // do coordinator stuff
}
[/sourcecode]

These capabilities are then defined in db/access.php via the publicised means.

What’s happening

To get to the bottom of this, I’m going to create/configure three users who fit each of the BIM user types and see how BIM labels them.

  1. coordinator user – BIM thinks they can be a marker, student or coordinator.
  2. marker user – is a marker.
  3. student user – is a student and a coordinator.

The above was tested within BIM itself. There’s a capability overview report in Moodle that shows “what permissions that capability has in the definition of every role”.

For coordinator, it’s showing “Allow” for “Student” and not set for everything else – not even the manager. This suggests a mismatch between the BIM code and what Moodle knows, and that an upgrade of the BIM module is called for.

So, let’s update the version number, visit the admin page and do an upgrade. Success. Now check the capability overview report.

The capability overview report is reporting no change. This appears to be where the bug is. What’s in the db/access.php file is not being used to update the database.

I seem to have it working.

Clean test

Need to do a test on a clean Moodle instance.

  1. Coordinator – CHECK
  2. Teacher – CHECK
  3. Student – CHECK

Glad that’s out of the way. More work on BIM in the coming weeks.

Identifying and filling some TPACK holes

The following post started over the weekend. I’m adding this little preface as a result of the wasted hours I spent yesterday battling badly designed systems and the subsequent stories I’ve heard from others today. One of those stories revolved around how shrinking available time and poorly designed systems are driving one academic to make a change to her course that she knows is pedagogically inappropriate, but which is necessary due to the constraints of these systems.

And today (after a comment from Mark Brown in his Moodlemoot’AU 2013 keynote last week) I came across this blog post from Larry Cuban titled “Blaming Doctors and Teachers for Underuse of High-tech tools”. It includes the following quote

For many doctors, IT-designed digital record-keeping is a Rube Goldberg designed system.

which sums up nicely my perspective of the systems I’ve just had to deal with.

Cuban’s post finishes with three suggested reasons why he thinks doctors and teachers get blamed for resisting technology. Personally, I think he’s missed the impact of “enterprise” IT projects, including

  • Can’t make the boss look bad.

    Increasingly IT projects around e-learning have become “enterprise”, i.e. big. As big projects, the best practice manual requires that the project be visibly led by someone in the upper echelons of senior management. When large IT projects fail to deliver the goods, you can’t make this senior leader look bad. So someone has to be blamed.

  • The upgrade boat.

    When you implement a large IT project, it has to evolve and change. Most large systems – including open source systems like Moodle – do this by having a vendor-driven upgrade process. So every year or so the system will be upgraded. An organisation can’t fall behind versions of a system, because eventually they are no longer supported. So, significant resources have to be invested in regularly upgrading the system. Those resources contribute to the inertia of change. You can’t change the system to suit local requirements as all the resources are invested in the upgrade boat. Plus, if you did make a change, then you’d miss the boat.

  • The technology dip.

    The upgrade boat creates another problem, the technology dip. Underwood and Dillon (2011) talk about the technology dip as a dip in educational outcomes that arises after the introduction of technological change. As the teachers and students grapple with the changes in technology they have less time and energy to expend on learning and teaching. When you have an upgrade boat coming every 12 months, then the technology dip becomes a regular part of life.

The weekend start to this post

Back from Moodlemoot’AU 2013 and time to finalise results and prepare course sites for next semester. Both are due by Monday. The argument from my presentation at the Moot was that the presence of “TPACK holes” (or misalignment) causes problems. The following is a slide from the talk which illustrates the point.

[Slide 14 from the Moodlemoot presentation]

I’d be surprised if anyone thought this was an earth-shattering insight. It’s kind of obvious. But if it were really taken on board, I wouldn’t expect institutional e-learning to be replete with examples of these holes. The following is an attempt to document some of the TPACK holes I’m experiencing in the tasks I have to complete this weekend. It’s also an example of recording the gap outlined in this post.

Those who haven’t submitted

Of the 300+ students in my course there are some that have had extensions, but haven’t submitted their final assignment. They are most likely failing the course. I’d like to contact them and double check that all is ok. I’m not alone in this; I know most people do it. All of my assignments are submitted via an online submission system, but there is no direct support in this system for this task.

The assignment system will give me a spreadsheet of those who haven’t submitted. But it doesn’t provide an email address for those students, nor does it connect with other information about the students. For example, those who have dropped the course or have failed other core requirements. Focusing on those students with extensions works around that requirement. But I do have to get the email addresses.
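The manual cross-checking here is essentially set arithmetic. A hypothetical sketch (the student numbers and the particular sets are invented for illustration):

```python
# Who to contact = enrolled students who haven't submitted, minus those
# who've withdrawn, intersected with those holding extensions.
enrolled   = {"123", "456", "789"}
submitted  = {"123"}
withdrawn  = {"789"}
extensions = {"456", "789"}

to_contact = (enrolled - submitted - withdrawn) & extensions
```

The point being that the data exists across systems; it's only the lack of integration (and of email addresses in the export) that makes this a chore.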

Warning markers about late submissions

The markers for the course have done a stellar job. But there are still a few late assignments to arrive. In thanking the markers I want to warn them of the assignments still to come, but even with fewer than 10 assignments to come this is more difficult than it sounds, for the following reasons

  • The online assignment submission system treats “not yet submitted” assignments differently from submitted assignments, and submitted assignments are the only place you can allocate students to markers. You can’t allocate before submission.
  • The online assignment submission system doesn’t know about all the different types of students. e.g. overseas students studying with a university partner are listed as “Toowoomba, web” by the system. I have to go check the student records system (or some other system) to determine the answer.
  • The single sign-on for the student records system doesn’t work with the Chrome browser (at least in my context) and I have to open up Safari to get into the student records system.

Contacting students in a course

I’d like to send a welcome message to students in a course prior to the Moodle site being made available.

The institution’s version of Peoplesoft provides such a notify method (working in Chrome, not Safari) but doesn’t allow the attachment of any files to the notification.

I can copy the email addresses of students from that Peoplesoft system, but Peoplesoft uses commas to separate the email addresses meaning I can’t copy and paste the list into the Outlook client (it expects semi-colons as the separator).
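It's a trivial transformation, which makes the manual annoyance all the more pointless. A hypothetical one-liner:

```python
# Convert the comma-separated address list Peoplesoft produces into the
# semicolon-separated form the Outlook client expects.
def to_outlook(addresses: str) -> str:
    return "; ".join(a.strip() for a in addresses.split(","))

to_outlook("a@x.edu, b@x.edu,c@x.edu")   # "a@x.edu; b@x.edu; c@x.edu"
```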

Changing dates in a study schedule

Paint me as old school, but personally, I believe there remains a value to students of having a study schedule that maps out the semester. A Moodle site home page doesn’t cut it. I’ve got a reasonable one set up for the course from last semester, but new semester means new dates. So I’m having to manually change the dates, something that could be automated.
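As a sketch of the sort of automation I mean (the dates and the 19-week offset are invented examples, not my institution's actual calendar):

```python
# Shift every week-start date in a study schedule by a fixed number of
# weeks -- the only thing that changes between semesters.
from datetime import date, timedelta

def shift_schedule(week_starts, offset_weeks):
    return [d + timedelta(weeks=offset_weeks) for d in week_starts]

old = [date(2013, 2, 25), date(2013, 3, 4)]   # weeks 1 and 2, semester 1
new = shift_schedule(old, 19)                  # roll forward to semester 2
```

Given the schedule as structured data, regenerating the dated version each semester becomes a non-event.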

Processing final results

As someone in charge of a course, part of my responsibilities is to check the overall results for students, ensure that it’s all okay as per formal policy and then put them through the formal approval processes. The trouble is that none of the systems provided by the institution support this. I can’t see all student results in a single system in a form that allows me to examine and analyse the results.

All the results will eventually end up in a Peoplesoft gradebook system, in which the results are broken up based on the student’s “mode” of learning, i.e. one category for each of the 3 different campuses and another for online students. But I cannot actually get any information out of it in a usable form; it is only available in a range of different web pages. If the Peoplesoft web interface was halfway decent this wouldn’t be such a problem, but dealing with it is incredibly time consuming, especially in a course with 300+ students.

I need to get all the information into a spreadsheet so that I can examine, compare etc. I think I’m going to need

  • Student name, number and email address (just in case contact is needed), campus/online.

    Traditionally, this will come from Peoplesoft. Might be some of it in EASE (online assignment submission).

  • Mark for each assignment and their Professional Experience.

    The assignment marks are in EASE. The PE mark is in the Moodle gradebook.

    There is a question as to whether or not the Moodle gradebook will have an indication of whether they have an exemption for PE.

EASE provides the following spreadsheets, and you’re not the only one to wonder why these two spreadsheets weren’t combined into one.

  1. name, number, submission details, grades, marker.
  2. name, number, campus, mode, extension date, status.

Moodle gradebook will provide a spreadsheet with

  • firstname, surname, number, …, email address, Professional Experience result

Looks like the process will have to be

  1. Download Moodle gradebook spreadsheet.
  2. Download EASE spreadsheet #1 and #2 (see above) for Assignment 1.
  3. Download EASE spreadsheet #1 and #2 (see above) for Assignment 2.
  4. Download EASE spreadsheet #1 and #2 (see above) for Assignment 3.
  5. Bring these together into a spreadsheet.

    One option would be to use Excel. Another simpler method (for me) might be to use Perl. I know Perl much better than Excel and frankly it will be more automated with Perl than it would be with Excel (I believe).

    Perl script to extract data from the CSV files, stick it in a database for safe keeping and then generate an Excel spreadsheet with all the information? Perhaps.

Final spreadsheet might be

  • Student number, name, email address, campus/mode,
  • marker would be good, but there’ll be different markers for each assignment.
  • a1 mark, a2 mark, a3 mark, PE mark, total, grade

An obvious extension would be to highlight students who are in situations that I need to look more closely at.

A further extension would be to have the Perl script do comparisons of marking between markers, results between campuses, generate statistics etc.

Also, it would probably be better to have the Perl script download the spreadsheets directly, rather than doing it manually. But that’s a process I haven’t tried yet. Actually, over the last week I did try this, but the institution uses a single sign-on method that involves Javascript, which breaks the traditional Perl approaches. There is a potential method involving Selenium, but that’s apparently a little flaky – a task for later.
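For what it's worth, the same merge idea works in Python as well as Perl. A sketch (the column names and sample data are invented for illustration; the real exports differ):

```python
# Merge the Moodle gradebook export and the EASE exports into one row
# per student, keyed on student number.
import csv, io

def rows(csv_text):
    """Parse a CSV string into a list of dicts (one per row)."""
    return list(csv.DictReader(io.StringIO(csv_text)))

def merge_on(key, *sheets):
    """Combine several sheets into {key: merged-row}."""
    merged = {}
    for sheet in sheets:
        for row in sheet:
            merged.setdefault(row[key], {}).update(row)
    return merged

gradebook = rows("number,name,pe_mark\n123,Ann,Pass\n456,Bob,Fail\n")
ease_a1   = rows("number,a1_mark\n123,18\n456,11\n")
students  = merge_on("number", gradebook, ease_a1)
```

From the merged dict it's then straightforward to flag students needing a closer look, compare markers, or write a combined spreadsheet back out.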

Slumming it with Peoplesoft

I got the spreadsheet process working. It helped a lot. But in the end I still had to deal with the Peoplesoft gradebook and the kludged connection between it and the online assignment submission system. Even though the spreadsheet helped reduce a bit of work, it didn’t cover all of the significant cracks. In the absence of better systems, these are cracks that have to be covered over by human beings completing tasks for which evolution has poorly equipped them. Lots of repetitive, manual copying of information from one computer application to another. Not a process destined to be completed without human error.

Documenting the gap between "state of the art" and "state of the actual"

Came across Perrotta et al (2013) in my morning random ramblings through my PLN and was particularly struck by this

a rising awareness of a gap between ‘state of art’ experimental studies on learning and technology and the ‘state of the actual’ (Selwyn, 2011), that is, the messy realities of schooling where compromise, pragmatism and politics take centre stage, and where the technological transformation promised by enthusiasts over the last three decades failed to materialize. (pp. 261-262)

For my own selfish reasons (i.e. I have to work within the “state of the actual”) my research interests are in understanding and figuring out how to improve the “state of the actual”. My Moodlemoot’AU 2013 presentation next week is an attempt to establish the rationale and map out one set of interventions I’m hoping to undertake. This post is about an attempt to make explicit some on-going thinking about this and related work. In particular, I’m trying to come up with a research project to document the “state of the actual” with the aim of trying to figure out how to intervene, but also, hopefully, to inform policy makers.

Some questions I need to think about

  1. What literature do I need to look at that documents the reality of working with current generation university information systems?
  2. What’s a good research method – especially data capture – to get the detail of the state of the actual?

Why this is important

A few observations can and have been made about the quality of institutional learning and teaching, especially university e-learning. These are

  1. It’s not that good.

    This is the core problem. It needs to be better.

  2. The current practices being adopted to remedy these problems aren’t working.

    Doing more of the same isn’t going to fix this problem. It’s time to look elsewhere.

  3. The workload for teaching staff is high and increasing.

    This is my personal problem, but I also think it’s indicative of a broader issue, i.e. much of the current practice aimed at improving quality assumes a “blame the teacher” approach. Sure, there are some pretty poor academics, but most of the teachers I know are trying the best they can.

My proposition

Good TPACK == Good learning and teaching

Good quality learning and teaching requires good TPACK – Technological Pedagogical and Content Knowledge. The quote I use in the abstract for the Moodlemoot presentation offers a good summary (emphasis added)

Quality teaching requires developing a nuanced understanding of the complex relationships between technology, content, and pedagogy, and using this understanding to develop appropriate, context-specific strategies and representations. Productive technology integration in teaching needs to consider all three issues not in isolation, but rather within the complex relationships in the system defined by the three key elements. (Mishra & Koehler, 2006, p. 1029)

For some people the above is obvious. You can’t have quality teaching without a nuanced and context specific understanding of the complex relationships between technology, pedagogy and context. Beyond this simple statement there are a lot of different perspectives on the nature of this understanding, the nature of the three components and their relationships. For now, I’m not getting engaged in those. Instead, I’m simply arguing that

the better the quality of the TPACK, then the better the quality of the learning and teaching

Knowledge is not found (just) in the teacher

The current organisational responses to improving the quality of learning and teaching are almost entirely focused on increasing the level of TPACK held by the teacher. This is done by a variety of means

  1. Require formal teaching qualifications for all teachers.

    Because obviously, if you have a teaching qualification then you have better TPACK and the quality of your teaching will be better. Which is obviously why the online courses taught by folk from the Education disciplines are the best.

  2. Running training sessions introducing new tools.
  3. “Scaffolding” staff by requiring them to follow minimum standards and other policies.

This is where I quote Loveless (2011)

Our theoretical understandings of pedagogy have developed beyond Shulman’s early characteristics of teacher knowledge as static and located in the individual. They now incorporate understandings of the construction of knowledge through distributed cognition, design, interaction, integration, context, complexity, dialogue, conversation, concepts and relationships. (p. 304)

Better tools == Better TPACK == Better quality learning and teaching

TPACK isn’t just found in the head of the academic. It’s found in the tools, the interaction etc they engage in. The problem that interests me is that the quality of the tools etc found in the “state of the actual” within university e-learning is incredibly bad. Especially in terms of helping the generation of TPACK.

Norman (1993) argues “that technology can make us smart” (p. 3) through our ability to create artifacts that expand our capabilities. Due, however, to the “machine-centered view of the design of machines and, for that matter, the understanding of people” (Norman, 1993, p. 9) our artifacts, rather than aiding cognition, “more often interferes and confuses than aids and clarifies” (p. 9). Without appropriately designed artifacts “human beings perform poorly or cannot perform at all” (Dickelman, 1995, p. 24). Norman (1993) identifies the long history of tool/artifact making amongst human beings and suggests that

The technology of artifacts is essential for the growth in human knowledge and mental capabilities (p. 5)

Documenting the “state of the actual”

So, one of the questions I’m interested in is just how well are the current artifacts being used in institutional e-learning helping “the growth in human knowledge and mental capabilities”?

For a long time, I’ve talked with a range of people about a research project that would aim to capture the experiences of those at the coal face to answer this question. The hoops I am having to currently jump through in trying to bring together a raft of disparate information systems to finalise results for 300+ students has really got me thinking about this process.

As a first step, I’m thinking I’ll take the time to document this process. Not to mention my next task which is the creation/modification of three course sites for the courses I’m teaching next semester. The combination of both these tasks at the same time could be quite revealing.

References

Norman, D. A. (1993). Things that make us smart: Defending human attributes in the age of the machine. Reading, MA: Addison-Wesley.

Mishra, P., & Koehler, M. (2006). Technological pedagogical content knowledge: A framework for teacher knowledge. Teachers College Record, 108(6), 1017–1054.

Perrotta, C., & Evans, M. A. (2013). Instructional design or school politics? A discussion of “orchestration” in TEL research. Journal of Computer Assisted Learning, 29(3), 260–269. doi:10.1111/j.1365-2729.2012.00494.x

Comparing Automatically Detected Reflective Texts with Human Judgements

The following is a summary and some thoughts on the following paper

Ullmann, T. D., Wild, F., & Scott, P. (2012). Comparing Automatically Detected Reflective Texts with Human Judgements. 2nd Workshop on Awareness and Reflection in Technology-Enhanced Learning. 7th European Conference on Technology-Enhanced Learning (pp. 101–116). Saarbruecken, Germany.

My interest in this paper is as an addition to BIM to provide scaffolding for students in their reflections and also as part of the preparation for my Moodlemoot’AU 2013 talk next week.

Of course, it also allows me to engage with one of the current fads

The automated detection of reflection is part of the broader field of learning analytics, especially social learning content analysis [13]. (Ullmann et al, 2012, p. 102)

where [13] is

Ferguson, R., Shum, S.B.: Social learning analytics: five approaches. In: Proceedings of the 2nd International Conference on Learning Analytics and Knowledge. p. 23–33. LAK ’12, ACM, New York, NY, USA (2012), http://doi.acm.org/10.1145/2330601.2330616

In working through this I’m pointed to the proceedings of a 2011 workshop on “Awareness and Reflection in Learning networks”. Which is something I’ll need to return to, not to mention the 2nd and 3rd workshops.

It would appear that, just yet, this work isn’t quite ready for BIM. But who knows what has happened in the last year or so. Possibilities exist.

A distributed cognition aside

At the moment, I’m thinking that the “Moving Beyond” part of that presentation – which will show off what I’m thinking of working on with BIM – is going to be scaffolded with the following, which is from Bell and Winn (2000) and is their description of Salomon’s (1995) two broad “classes” of distributed cognition as applied to artefacts in a learning environment

  1. An individual’s cognitive burden can be off-loaded onto the artefact, which may not necessarily help the individual learn about what the artefact is doing.
  2. An artefact can be designed to reciprocally scaffold students in specific cognitive practices; it helps them in a process and, in doing so, can help them learn to perform the task without the artefact.

Ullmann et al’s (2012) work would seem to fit neatly into the second of those classes. I’m hoping it (or other work) will provide the insight necessary to scaffold students in learning how to reflect.

I’m also thinking that there’s another dimension to think about in the design of BIM (and e-learning tools in general): the identity of the individual. I’m thinking there are at least three or four different identities that should be considered. They are:

  1. Student – as above, the person who’s using the tool to learn something.
  2. Teacher – the individual there to help the student. (My thinking is rooted in a formal education environment – it’s where I work – hence there is a need for a distinction here and my context also drives the remaining identities).
  3. Institutional support and management – the folk who are meant to help and ensure that all learning is of good quality.
  4. Artefact developers – the folk that develop the artefacts that are being used by the previous three roles.

I’m thinking that a tool like BIM should be concerned with providing functionality that addresses both “distributed cognition classes” for all roles.

Abstract – Ullmann et al (2012)

  • Paper reports on an experiment to automatically detect reflective and non-reflective texts.
  • 5 elements of reflection were defined and a set of indicators developed “which automatically annotate texts regarding reflection based on the parameterisation with authoritative texts”.
  • A collection of blog posts were then run through the system and an online survey used to gather human judgements for these texts.
  • The two data sets allowed a comparison of the quality of the algorithm versus human judgements.

Introduction

Reflection is important – “at ‘the heart of key competencies’ for a successful life and a well-functioning society” (Ullmann et al, 2012, p. 101).

Methods for assessing reflective writings are recent and not fully established. Some quotes about how most focus has been on theorising reflection and its use, with little on how to assess reflection.

Issues with manual assessment of reflection

  1. Time-consuming.
  2. Feedback comes a long time after the reflection.
  3. Reluctance to share reflective writing given the nature of reflection.

Situating the research

This fits within learning analytics and social learning content analysis

Two prominent approaches for automatically identifying cognitive processes

  1. Connection between cue words and acts of cognition
  2. Probabilistic models and machine learning algorithms.

References to earlier work with both approaches provided.

Elements of reflection

Points out that no model of reflection is currently agreed upon. Presents 5 elements based on the major streams of theoretical discussion

  1. Description of an experience.

    Sets the stage for reflection. A description of the important parts of the event. Might be a description of external events or the internal situation of the person. Common themes include conflict, self-awareness and emotions.

  2. Personal experience.

    Still some debate. But self-awareness, inner dialogue indicators.

  3. Critical analysis

    Critical questions of the content, process and premises of experience, to correct assumptions, beliefs etc.

  4. Taking perspectives into account

    Using a frame of reference based on dialogue with others, general principles, theory.

  5. Outcome of reflective writing

    New understanding/transformative and confirmatory learning. Sums up what was learned, concludes, plans for the future, new insight etc.

Acknowledges overlap between these elements.

A set of indicators were developed for each element.

Reflection detection architecture

A set of annotators – "bits of software" – were developed and combined to do the analysis, analysing the text and identifying certain elements (roughly) based on keywords or other analysis.

  • NLP annotator – highlighting elements of natural language.
  • Premise (assuming that, because, deduced from) and conclusion (as a result, therefore) annotators.
  • Self-reference (I, me, mine) and other-pronoun (he, they, others) annotators.
  • Reflective verb annotator (rethink, reason, mull over).
  • Learning outcome annotator (define, name, outline) and Bloom's taxonomy annotator (remember, understand, apply, analyse).
  • Future tense annotator (will, won't).

An analysis component aggregates the annotations and tries to infer knowledge from them. IF-THEN statements are used to chain lower-level facts together.

This goes on until high-level rules are used to connect with an element of reflection. The authors ended up using 16 such rules allocated to the five elements of reflection.
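The paper doesn't publish its code, but the pipeline of keyword annotators feeding IF-THEN rules can be sketched roughly. Everything below – the keyword lists, the function names, the specific rules – is my own illustrative guess at the shape of such a system, not the authors' implementation:

```python
import re

# Hypothetical keyword lists, loosely based on the examples in the paper.
ANNOTATORS = {
    "self_reference": ["i", "me", "mine", "my"],
    "premise": ["assuming that", "because", "deduced from"],
    "conclusion": ["as a result", "therefore"],
    "reflective_verb": ["rethink", "reason", "mull over"],
    "future_tense": ["will", "won't"],
}

def annotate(text):
    """Count how often each annotator's keywords fire in the text."""
    lower = text.lower()
    return {
        name: sum(len(re.findall(r"\b" + re.escape(kw) + r"\b", lower))
                  for kw in keywords)
        for name, keywords in ANNOTATORS.items()
    }

def infer_elements(counts):
    """Chain low-level annotations into elements of reflection via IF-THEN rules."""
    elements = set()
    # IF the author refers to themselves AND uses a reflective verb,
    # THEN take this as evidence of the 'personal experience' element.
    if counts["self_reference"] > 0 and counts["reflective_verb"] > 0:
        elements.add("personal experience")
    # IF premises and conclusions co-occur, THEN evidence of 'critical analysis'.
    if counts["premise"] > 0 and counts["conclusion"] > 0:
        elements.add("critical analysis")
    # IF future tense fires, THEN evidence of the 'outcome' element (planning).
    if counts["future_tense"] > 0:
        elements.add("outcome")
    return elements
```

In the real system there are apparently 16 such rules across the five elements; the point of the sketch is just the two-stage shape – shallow keyword annotation, then rule-based chaining.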

Method

Indicators were developed iteratively with sample texts.

Big question, what “weight should be given to each indicator to form a reflective text” (p. 108).

Used 10 texts identified in the reflection literature as prototypical reflective writings to parameterise the analytics. This ended up with the following definition of the conditions for a reflective text:

  • The indicators of the “description of experiences” element fire more than four times.
  • At least one self-related question.
  • The indicators of the “critical analysis” element fire more than 3 times.
  • At least one indicator of the “taking perspectives into account” element fires.
  • The indicators of the “outcome” element fire more than three times.

(p. 109)
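The quoted conditions amount to a single conjunctive rule over per-element indicator counts. A minimal sketch, where the dictionary keys are my own labels rather than the paper's:

```python
def is_reflective(fires):
    """Apply the paper's thresholds (p. 109) to per-element indicator fire counts.

    `fires` maps an element label to the number of times its indicators fired.
    The labels are this sketch's own naming, not taken from the paper.
    """
    return (fires.get("description", 0) > 4        # description of experiences
            and fires.get("self_question", 0) >= 1  # at least one self-related question
            and fires.get("critical_analysis", 0) > 3
            and fires.get("perspectives", 0) >= 1   # taking perspectives into account
            and fires.get("outcome", 0) > 3)
```

Note how brittle this is: all five conditions must hold, so the choice of thresholds (the parameterisation) directly determines what counts as reflective – which is exactly the limitation the authors flag in the results.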

Goes on to describe the questionnaire used as a comparison. Blog posts were shown in random order alongside the questions. Human judgements were reduced to 202. Data was gathered via Mechanical Turk, which led to some problems, handled by filtering some responses.

Text corpus

The experiment was done using the “Blog Authorship Corpus”, a collection of 681,288 posts and 140 million words from 19,320 bloggers on blogger.com, gathered in August 2004. Took the first 150 blog files. Short blog posts (less than 10 sentences) and foreign language posts were removed.

5176 blog posts were annotated, producing 4m+ annotations and 170K+ inferences.

Results

Some value. The human judges were more likely to agree with the texts identified as reflective by the system.

One of the limiting factors is the parameterisation – the text used to do this was limited, and there is no large body of quality reflective text available. This is important because the parameterisation influences the quality of detection.

Doing more work on this.

Closes with

One possible application scenario especially useful for an educational setting is to combine the detection with a feedback component. The described reflection detection architecture with its knowledge-based analysis component can be extended to provide an explanation component, which can be used to feedback why the system thinks it is a reflective text, together with text samples as evidences.

References

Bell, P., & Winn, W. (2000). Distributed Cognitions, by Nature and by Design. In D. Jonassen & S. Land (Eds.), Theoretical Foundations of Learning Environments (pp. 123–145). Mahwah, New Jersey: Lawrence Erlbaum Associates.

Ullmann, T. D., Wild, F., & Scott, P. (2012). Comparing Automatically Detected Reflective Texts with Human Judgements. 2nd Workshop on Awareness and Reflection in Technology-Enhanced Learning. 7th European Conference on Technology-Enhanced Learning (pp. 101–116). Saarbruecken, Germany.

The kludge for marking learning journals

The following is a description of the kludge I put in place to mark the learning journals that folk in the EDC3100 course this semester had to complete – see here for a description of the initial thinking behind the journal. It’s meant to record what I did, provide some food for further development, and offer an opportunity for some initial reflection.

Final format

5% of each of the three course assignments contained a component titled the “learning journal”. In this, the students were expected, for the relevant weeks of semester, to:

  • complete all the activities on the course site; and,
  • post a sequence of reflective posts on their personal blog.

As outlined in the ideas post the student’s mark was based on:

  • what percentage of the activities they completed;
  • how many posts per week (on average) they published;
  • the word count of those posts;
  • the number of posts that contained links; and,
  • the number of posts that contained links to posts from other students in the course.

The intent was to encourage connections and serendipity and minimise students having to “reflect to order” in response to specific questions/criteria. Of course, that didn’t stop many from seeking to produce exactly what was required to obtain the mark they wanted to achieve. An average of 100 words per post means exactly that, plus a bit of judicious quoting etc. Something for further reflection.
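The five criteria above boil down to some simple aggregation over each student's posts. A rough sketch of that aggregation – in Python rather than the Perl I actually used, and with an invented data layout rather than BIM's real schema:

```python
def journal_stats(posts, completed, total_activities, weeks, peer_urls):
    """Summarise a student's learning journal against the five criteria.

    `posts` is a list of dicts with 'words' (int) and 'links' (list of URLs);
    `peer_urls` is the set of other students' blog URLs in the course.
    All field names here are illustrative, not BIM's actual schema.
    """
    return {
        # what percentage of the activities they completed
        "activity_pct": 100.0 * completed / total_activities,
        # how many posts per week (on average) they published
        "posts_per_week": len(posts) / weeks,
        # the average word count of those posts
        "avg_words": sum(p["words"] for p in posts) / max(len(posts), 1),
        # the number of posts that contained links
        "posts_with_links": sum(1 for p in posts if p["links"]),
        # the number of posts linking to other students' blogs
        "posts_linking_peers": sum(
            1 for p in posts
            if any(url.startswith(peer)
                   for url in p["links"] for peer in peer_urls)),
    }
```

Mapping these five numbers onto the 5% mark is then a matter of whatever banding the marking scheme specifies, which is where the "write exactly enough" gaming shows up.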

Activity completion

Each week the course site had a number of Moodle activities and resources, all with activity completion turned on. This means that Moodle tracks who has fulfilled the necessary actions to complete each one, e.g. read a page, post to a forum etc.

The reports of activity completion aren’t particularly helpful as I need to get the data into a Perl script, so the process is:

  1. Download the CSV file Moodle can export of activity completion.

    The CSV file for the course I just downloaded was 1.7MB in size.

  2. Delete the columns for the activities that don’t belong to the required period.
  3. Save it locally.
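The column-deleting step is the sort of thing that's easily scripted rather than done by hand in a spreadsheet. A sketch in Python (rather than the Perl of my actual scripts), assuming – possibly wrongly for a given Moodle version – that the first two columns of the export identify the student:

```python
import csv

def filter_completion_csv(in_path, out_path, keep_activities):
    """Keep only the student-identity columns and the activity columns
    for the required period, from a Moodle activity-completion CSV export.

    Assumes the first two columns identify the student; the real export's
    layout may differ, so treat the column handling as illustrative.
    """
    with open(in_path, newline="") as fin, open(out_path, "w", newline="") as fout:
        reader = csv.reader(fin)
        writer = csv.writer(fout)
        header = next(reader)
        # Always keep the identity columns; keep an activity column only
        # if its header matches one of the required activities.
        keep = [i for i, name in enumerate(header)
                if i < 2 or name in keep_activities]
        writer.writerow([header[i] for i in keep])
        for row in reader:
            writer.writerow([row[i] for i in keep])
```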

Blogs

I’ve written some Perl scripts that will parse the BIM database, evaluate the student posts and then combine that with the data from the activity completion CSV and produce the report. This report is circulated to the markers who manually copy and paste the students result into their assignment sheet. I’ve also got a version of the script that will email all the students a copy.

Of course, to get to this stage I’ve had to make sure that all of the students’ blogs are registered correctly with the version of BIM on my laptop. Then I need to

  1. Run the BIM mirror process to ensure that BIM has the most recent student posts.

    Currently 335 students have registered blogs and there are 8550 posts mirrored – an average of about 25 posts per student. In reality, a number of students have withdrawn from the course for a variety of reasons.

  2. Dump the PHP BIM database and create a copy in the Perl database.

    Due to how I’ve got Perl and PHP installed they are using different MySQL database servers.

  3. Run either script.

The end result

Is a report that summarises results. But beyond this there is a lot of extra work in overcoming human error that would have been removed with a decent system. I’ve spent a fair chunk of the last week dealing with these errors, which mostly arise from the absence of a system giving students immediate feedback about problems, including:

  • Telling students they’ve registered a URL that is either not a URL or not a valid RSS feed.

    Earlier problems arose from students making mistakes when registering their blog. BIM detects these, but because BIM isn’t installed on the institutional servers I had to make do with the Moodle database activity and then manually fix errors.

  • Warning students that their RSS feed is set to “summary” and not “full”.

    To encourage visitors to the actual blog, some blog engines have an option to set the feed to “summary” mode, in which only the first couple of sentences of a post are shown in the feed. This is not useful for a system like BIM that assumes it’s getting the full post – especially when “average word count” is part of the marking mechanism.

    I’ve spent a few hours this week, and more this semester, helping recover from this situation. BIM needs to be modified to generate warnings about this so recovery can happen earlier.

  • Students editing posts.

    Currently, once BIM gets a copy of a post it doesn’t change it, even if the author makes a change. This caused problems because some students edited published posts to make last minute changes – a reasonable practice, but one that BIM’s assumptions didn’t accommodate.

    BIM does provide students with a way to view BIM’s copy of their posts. I believe this feature helps authors understand that the copy in BIM can differ from the version on their blog, reducing this error.

  • Allowing students to see their progress.

    This week I’ve sent all students an email with their result. BIM does provide a way for students to see their progress/marks, but without BIM the first the students knew of what the system recorded about them was when their marked assignments were returned. BIM, properly modified for the approach I’ve used here, would allow the students to see their progress and do away with the need for the email. It would allow problems to be nipped in the bud, reducing work for me and uncertainty for the students.
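Most of the registration problems above come down to checks that could run the moment a student registers their feed. A rough sketch of the kind of check BIM could do – the 40-word cut-off is an arbitrary heuristic of mine, not anything BIM actually implements:

```python
import xml.etree.ElementTree as ET

def check_feed(xml_text):
    """Rough registration-time validation of a student's feed: is it
    parseable RSS/Atom, and does it look like a 'summary only' feed?

    The 40-words-per-item threshold for flagging summary mode is an
    invented heuristic for this sketch.
    """
    try:
        root = ET.fromstring(xml_text)
    except ET.ParseError:
        # Not a feed at all - the student registered a bad URL or plain HTML.
        return {"valid": False, "summary_only": None}
    # RSS puts post bodies in <item><description>; Atom uses
    # <entry><summary> or <entry><content>. Strip namespaces crudely.
    bodies = [el.text or "" for el in root.iter()
              if el.tag.split("}")[-1] in ("description", "content", "summary")]
    avg_words = (sum(len(b.split()) for b in bodies) / len(bodies)) if bodies else 0
    return {"valid": True, "summary_only": avg_words < 40}
```

Wired into the registration form, either failure mode could produce an immediate warning to the student instead of a marking-time surprise for me.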

Was it a success?

I’ve been wondering over the recent weeks – especially when I’ve been in the midst of the extra work that arose from having to fix the above problems – whether it was worth it. Did I make a mistake deciding to go with the blog-based assessment for this course in the absence of appropriate tool support? Even if the institution had installed BIM, BIM itself didn’t have all the tools to support the approach I’ve used this semester. BIM would have reduced the workload somewhat, but additional workload would still have been there.

Was it worth it? That was a question I asked myself when it became obvious that at least some (perhaps many) students did “write for the marks”. I need to explore this a bit further, but it is obvious that some students made sure they wrote just enough to meet the criteria. There was also some level of publishing the necessary posts on the day before the assignment was due. At least some of the students weren’t engaging in the true spirit of the assessment. But I don’t blame them; there were lots of issues with the implementation of this assessment.

Starting with the absence of BIM, which created additional workload and in part contributed to less than appropriate scaffolding to help the students engage in the task more meaningfully, especially in terms of better linkages to the weekly activities. I’m particularly interested, longer term, in how the assessment of the course and the work done by the markers can be changed from marking submitted assignments to actively engaging with students’ blog posts.

On the plus side, there was some evidence of serendipity. The requirement for students to link to others worked to create connections, and at least some of them resulted in beneficial serendipity. There’s enough evidence to suggest that this is worth continuing with. There does, of course, need to be some more formal evaluation and reflection about how to do this, including work on BIM to address some of the problems above.

I’ve also learnt that the activity completion report in Moodle is basically useless. With the number of students I had, the number of activities to complete, and apparently the browser I was using, viewing the tabular data in the activity completion report in any meaningful way was almost impossible. Downloading the CSV into Excel was only slightly more helpful. In reality, the data needed to be manipulated into another format to make it useful. Not exactly a report located in “The Performance Zone” talked about at the end of this post. On the plus side, this is informing some further research.

This whole experience really does reinforce Rushkoff’s (2010, p. 128) point about digital technology:

Digital technology is programmed. This makes it biased toward those with the capacity to write the code.

Without my background in programming, developing e-learning/web systems, and writing BIM, none of the above would have been possible. The flip side of this point is that what is possible when it comes to e-learning within Universities is constrained by the ideas of the people who wrote the code within the various systems Universities have adopted. Importantly, this may well be at least as big a constraint on the quality of University e-learning as the intentions of the teaching staff to use the tools and the readiness of the students to adapt to changes.

References

Rushkoff, D. (2010). Program or be programmed: Ten commands for a digital age. New York: OR Books.

Animation over time of links between student posts

After seeing a previous post sharing some of his visualisations of the links between blog posts of EDC3100 students, Nick provided some video showing how the links were made over time.

When I have some time it will be interesting to explore how events within the course (e.g. feedback on assignment 1 etc) impacted the connections between students.

Also interesting to explore why even at day 77 there are a couple of outliers not connected with the others.

How can an "enterprise" e-learning tool be agile?

I have a problem. If I’m really lucky, BIM will get added to my institution’s version of Moodle for Semester 2 and I will be able to use it. Based on my experience this semester – where I’ve used an approach that depends on BIM – there have been limitations and workload issues. Having BIM installed in the “enterprise LMS” will help significantly reduce these problems. It will also severely limit my ability to learn.

That limitation will arise from the nature of being an “enterprise” LMS, i.e. not at all agile – instead a lumbering behemoth that takes a while to turn around. Getting the “enterprise” installation of BIM changed in any way will involve going through a governance process with numerous steps. During these steps the expense of changing BIM will have to compete for the scarce resources available to change the “enterprise” LMS with other requirements. Requirements that are likely to be significantly more important than the couple of hundred students in the 2 or 3 courses I teach.

This causes problems because, while BIM has been used at other institutions, it’s typically been supported just like most “enterprise” LMS tools, i.e. if there are any problems or limitations with the tool, it is learners and teachers who are aware of it first. These folk will either ignore/work around the problem (and blame the &^%%## technology) or they will ask for help. The people they ask for help will be either IT helpdesk folk or L&T staff development/training folk. If they are lucky, these folk will actually know how to use the particular tool that has the problem without having to quickly read the manual. In the worst-case scenario, they’ll have to do a quick read of the manual/Google search (which theoretically the learner/teacher could have done in the first place). Either way, the only options open to the support folk are:

  1. Here’s where you went wrong and how you fix the problem.
  2. You have just discovered one of the known problems with that tool, there’s nothing we can do about it.

Either response often involves the learner/teacher engaging in a large, laborious manual process to work around the limitation of the tool.

A different situation

When I’m using BIM, I’ll be in a slightly different situation. I designed and wrote BIM. When there’s a problem or limitation with BIM, I can generally change BIM to fix it. For example, earlier this week I discovered that one of the main pages wasn’t displaying individual student posts in order of time published. Five minutes later it did.

The fact that BIM has been around for three or four years and this problem still existed in the code is a nice piece of evidence of the limitations of the “enterprise” approach, even if it is based on open source technology.

The trouble is, I was able to make and use this fix because I’m currently running BIM on my laptop. Next semester, when (or if) it is installed on the “enterprise” LMS it is very unlikely that a change like this would ever get installed on the “enterprise” LMS in any reasonable time frame. Perhaps ready for next semester, if I’m lucky.

This is a real problem because next semester I will have a real opportunity to do some really interesting experimentation and development with BIM. Activities that will be somewhat curtailed by the constraints of the enterprise process.

How can I work around this?

Some possibilities

Two short-term possibilities are

  1. The backup/restore shuffle.

    This is where the students interact with the enterprise version of BIM. I then back that data up and restore it on my laptop. This is where I have the agile version of BIM that I can play with. If I make any change to the data, I then have to shuffle the data back the other way. In reality, the round trip of taking data from the agile version to the enterprise version probably isn’t going to work in any consistent and safe way.

    This approach also doesn’t support the ideas where changes to BIM would enable students to do new and interesting things. Perhaps a version of BIM installed on an outside server that the students could interact with might work, but that raises all sorts of other issues.

  2. The client-side scripting workaround.

    This is where I create browser/client based scripts that modify how BIM works. Each student/staff member would need to install the scripts on their browser to get the functionality.

    Perhaps I could make changes to the BIM code to make this sort of workaround more effective and simpler?

The other possibility is to explore how the enterprise approach could be changed to be more agile. At the very least this would involve building a better relationship with the institutional IT folk, but even then there are limitations.

Are there other possibilities?

The grammar of enterprise IT

The grammar of school is an idea used to explain why reforms of education have failed to take root, especially the use of ICTs. The rationale is that any proposed reform is so different from the accepted mindsets of schooling (the grammar) that it is seen as nonsensical, as ungrammatical – i.e. it gets rejected or ignored in much the same way as a nonsensical sentence.

I suggest that there is also a “grammar of enterprise IT”. Ideas such as

  1. Wanting to make rapid, unplanned changes to a piece of software; or,
  2. Trusting a member of the Education Faculty to make those changes.

would simply be seen as nonsensical and rejected. Changing that grammar is going to take a lot longer.

Even in writing this post, I run the risk that someone in enterprise IT will see this as an attempt to break the grammar of enterprise IT – a perception that could lead to additional constraints on the development and use of BIM. Shall be interesting to see how it develops.

How to capture the "full benefits of the creative, original and imaginative efforts of" teaching staff

What’s good for research must surely be good for teaching?

An article on the Australian’s higher education page quotes the following advice from this policy note from the Group of 8 (an obviously non-self-serving document, of course)

If Australia is to capture the full benefits of the creative, original and imaginative efforts of its researchers, it will always need a means to support the ideas and challenges coming from individuals and small groups, even when these ideas fall outside formal priority setting mechanisms

Having engaged a bit in the formal priority setting mechanisms around institutional e-learning over the last month or so, I was struck by how this perspective could be moved across from research to institutional e-learning.

I don’t think anyone could claim that the institutional governance processes around e-learning – especially the LMS – could ever be described as “a means to support the ideas and challenges coming from individuals and small groups”.

This is not to suggest there isn’t some level of need for these processes to ensure the availability of institutional systems. It is to suggest that if you want “creative, original and imaginative” efforts, then the processes need (I would argue) to be able to support the ideas and challenges coming from individuals and small groups.

For example, as mentioned previously, as part of the case for getting BIM installed on the institutional version of Moodle I had to explain why others might use it. It seemed that the governance processes/bodies etc. didn’t know that there were 30-odd courses this year using learning journals of one type or another that might have benefited from BIM. There appears to be a lack of knowledge, within the priority setting mechanisms that “govern” institutional e-learning systems, of the ideas and challenges of the teaching staff and students using them.

The trouble with this type of argument is that it’s a strange one to make. Perhaps because of the lack of knowledge about the issues and challenges, it’s impossible for those responsible to see a problem with the priority setting mechanisms. Or perhaps it’s an example of the following.

From “Status Quo”

Or, of course, it’s not that big of a deal.
