Backup for BIM 2.0

What follows is a journal of the attempt to bring BIM 2.0’s backup functionality into line with the new approach in Moodle 2.x.

Done. Appears to be all working. Will work on restore next and do some testing.

Backup

First up is trying to understand the developer docs on the new backup process. What follows is an attempt to both summarise/understand those docs and explain what changes I’ve made to BIM 2.0. The Backup 2.0 general architecture documents are also used.

What I believe it all boils down to is the ability to convert the database structure for a BIM activity into an XML file/structure. The aim here will be to keep the XML structure produced as close to that produced by 1.9 as possible.

Steps required

  1. Preparation – knowing what to backup
    Much of this is done in the “BIM data” section below.
    1. Draw the DB schema.
    2. Identify where the user information is located in the schema.
    3. Determine correct order of backup.
    4. Identify attributes and elements.
      All “id” fields should be attributes.
    5. Identify elements that are not needed.
      Any field, except those pointing back to the parent element, should be included.
    6. Identify the file areas used.
      Text fields and attachments appear to fit into this category. This appears to be a bit new in Moodle 2.
    7. Annotate important bits
      e.g. the ID fields.
  2. Remove the old backup stuff.
    Basically backuplib.php in the bim directory.
  3. Tell Moodle that BIM 2.0 is supporting backups.
    Add the following to the bim_supports function in mod/bim/lib.php: [code lang="php"]case FEATURE_BACKUP_MOODLE2: return true;[/code]
  4. Set up the directory for the code
    create mod/bim/backup/moodle2
  5. Set up and test the backup process (which won’t work at the moment).
    The backup documentation includes a simple script that speeds up the develop/test cycle for backups (a version of it is sketched after this list). Put that in place and run it. Breaks as expected.
  6. Start putting in the code
    1. create empty mod/bim/backup/moodle2/backup_bim_settingslib.php
    2. backup_bim_stepslib.php – another empty file
    3. backup_bim_activity_task.class.php – the place where the above files are used. For now just some skeleton code with empty methods (a sketch of this skeleton follows the list).
  7. Run the backup again.
    Which runs without error as expected.
  8. Create the bim.xml file – as an empty file
    • Put some empty skeleton code into backup_bim_stepslib.php.
    • Call that step from backup_bim_activity_task.class.php.
  9. Now to define each of the elements – essentially a translation of the provided code using the description of the BIM data below. This produces an empty backup file for bim.
  10. Define the tree of data following the skeleton code.
  11. Connecting it all to the database
    A fairly simple set of method calls building on the above (a consolidated sketch of the resulting structure step follows the BIM data section below). Tested and all seems to be working. Woo hoo!
  12. Annotating IDs
    This appears to be related to signposting user (and other) information, something I missed the first time.
    For BIM, the relevant fields to annotate are user and group.
  13. Annotating files
    Not sure about this section. Need to read some more and update.
  14. Encode references to URLs?
    Done as per example.
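
For what it's worth, here is a minimal sketch of where the skeleton ends up, based on my reading of the backup docs. The method names come from the backup_activity_task base class; the step class name and everything else here are my choices and should be treated as work in progress rather than final code.

[code lang="php"]require_once($CFG->dirroot . '/mod/bim/backup/moodle2/backup_bim_stepslib.php');
require_once($CFG->dirroot . '/mod/bim/backup/moodle2/backup_bim_settingslib.php');

class backup_bim_activity_task extends backup_activity_task {

    // No particular backup settings for bim at this stage.
    protected function define_my_settings() {
    }

    // Attach the structure step that will generate bim.xml.
    protected function define_my_steps() {
        $this->add_step(new backup_bim_activity_structure_step('bim_structure', 'bim.xml'));
    }

    // Encode links to bim scripts (e.g. view.php) so restore can rewrite them.
    // Just a pass-through until step 14 is done properly.
    static public function encode_content_links($content) {
        return $content;
    }
}
[/code]

The test script mentioned in step 5 is along these lines (the course_module id and user id are specific to my test install):

[code lang="php"]require_once('config.php');
require_once($CFG->dirroot . '/backup/util/includes/backup_includes.php');

$cmid = 2;   // course_module id of a bim instance on my test site.
$bc = new backup_controller(backup::TYPE_1ACTIVITY, $cmid, backup::FORMAT_MOODLE,
        backup::INTERACTIVE_NO, backup::MODE_GENERAL, 2);  // 2 = admin user id here.
$bc->execute_plan();
$bc->destroy();
[/code]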

BIM Data

The following is based on this 2010 post documenting the development work on the backup process for BIM 1.0, with some extra work based on the preparation information above.

The bim data hierarchy (bullet points represent table names)

  • bim
    id (attr)
    course (not needed) **** CHECK IF THIS IS INCLUDED ****
    name
    intro (????file area???)
    introformat
    timecreated
    timemodified
    register_feed
    mirror_feed
    change_feed
    grade_feed

    • bim_group_allocation
      id (attr)
      bim (not needed)
      groupid
      userid
    • bim_questions
      id (attr)
      bim (not needed)
      title (???? file area ???? )
      body (??? file area????)
      min_mark
      max_mark
    • bim_student_feeds
      id (attr)
      bim (not needed)
      userid
      numentries
      lastpost
      blogurl
      feedurl
    • bim_marking
      id (attr)
      bim (not needed)
      userid
      marker
      question (this is an id back into bim_questions)
      mark
      status
      timemarked
      timereleased
      link (???file area??)
      timepublished
      title (??file area??)
      post (?? file area?? )
      comments (?? file area??)

In BIM 1.0 the user data includes: student feeds, marking and group allocation.
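
Pulling the above together, the following is a sketch of how the stepslib's define_structure method might translate that hierarchy, covering steps 9-12 from the list above (defining elements, building the tree, connecting to the database and annotating ids/files). The element/tag names are my guesses and the file annotation is limited to intro until the file areas question is resolved.

[code lang="php"]class backup_bim_activity_structure_step extends backup_activity_structure_step {

    protected function define_structure() {

        // Is user data being included in this backup?
        $userinfo = $this->get_setting_value('userinfo');

        // Define each element: id as an attribute, parent pointers omitted.
        $bim = new backup_nested_element('bim', array('id'), array(
            'name', 'intro', 'introformat', 'timecreated', 'timemodified',
            'register_feed', 'mirror_feed', 'change_feed', 'grade_feed'));

        $questions = new backup_nested_element('questions');
        $question = new backup_nested_element('question', array('id'),
            array('title', 'body', 'min_mark', 'max_mark'));

        $allocations = new backup_nested_element('group_allocations');
        $allocation = new backup_nested_element('group_allocation', array('id'),
            array('groupid', 'userid'));

        $feeds = new backup_nested_element('student_feeds');
        $feed = new backup_nested_element('student_feed', array('id'),
            array('userid', 'numentries', 'lastpost', 'blogurl', 'feedurl'));

        $markings = new backup_nested_element('markings');
        $marking = new backup_nested_element('marking', array('id'),
            array('userid', 'marker', 'question', 'mark', 'status',
                  'timemarked', 'timereleased', 'link', 'timepublished',
                  'title', 'post', 'comments'));

        // Build the tree, mirroring the hierarchy above.
        $bim->add_child($questions);
        $questions->add_child($question);
        $bim->add_child($allocations);
        $allocations->add_child($allocation);
        $bim->add_child($feeds);
        $feeds->add_child($feed);
        $bim->add_child($markings);
        $markings->add_child($marking);

        // Connect the elements to their database tables.
        $bim->set_source_table('bim', array('id' => backup::VAR_ACTIVITYID));
        $question->set_source_table('bim_questions',
            array('bim' => backup::VAR_PARENTID));

        // The user data (feeds, marking, group allocation) only when asked for.
        if ($userinfo) {
            $allocation->set_source_table('bim_group_allocation',
                array('bim' => backup::VAR_PARENTID));
            $feed->set_source_table('bim_student_feeds',
                array('bim' => backup::VAR_PARENTID));
            $marking->set_source_table('bim_marking',
                array('bim' => backup::VAR_PARENTID));
        }

        // Annotate ids that point outside bim's own tables (step 12).
        $allocation->annotate_ids('user', 'userid');
        $allocation->annotate_ids('group', 'groupid');
        $feed->annotate_ids('user', 'userid');
        $marking->annotate_ids('user', 'userid');
        $marking->annotate_ids('user', 'marker');

        // Annotate file areas - just the standard intro one for now (step 13).
        $bim->annotate_files('mod_bim', 'intro', null);

        // Wrap the bim element in the standard activity structure and return.
        return $this->prepare_activity_structure($bim);
    }
}
[/code]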

Major (Moodle) requirements for BIM 2.0

The next step in the development of BIM 2.0 is identifying the list of major (Moodle) requirements that need to be implemented. BIM is a Moodle activity module. Moodle has a range of expectations that such modules are meant to meet. The following is an attempt to identify what needs to be done.

It has resulted in a renewed effort to use the github issue list to record and manage what needs to be done. Not only have I started adding issues for BIM 2.0, I’ve also been through the old issues and decided which apply to BIM 2.0.

In short, some major work to be done to get backup/restore migrated. Some minor tweaks (it appears) to get gradebook integration working. Logging is working as is.

Summary of the requirements

A summary of what was found follows, including some compulsory/important requirements and also some that would be nice future additions.

What has changed?

Now to find out what has changed in the requirements that have to be addressed now.

Backup and restore

This has definitely been changed. It’s listed in the migrating CONTRIB code document.

backuplib.php is now replaced with a backup directory. It also appears to be a more OO-based approach. Some major re-work to be done here. Will leave this to another post.

Gradebook

This isn’t working. Any attempt to turn on the BIM gradebook integration generates an error on line 313 of lib.php due to a problem with a database insert:

Debug info: Column ‘grademax’ cannot be null
INSERT INTO mdl_grade_items

The question will be whether this is a problem in BIM or evidence that the Gradebook API has changed significantly.

According to the Gradebook API there should be a mod/bim/grade.php file. Certainly not one in BIM 1.0. But then the forum module doesn’t have one either and yet it does use the gradebook, so it would appear to be optional.

grademax can be changed in the gradebook, but the help text located there suggests it should be set on the activity settings page. i.e. I need to add the ability to set grademax on the BIM config screen.

This has identified that the problem is that the existing BIM code does not provide a value for the grademax field for the gradebook. It appears that the Moodle 2.x code requires that this not be null.

Actually, the BIM 1.0 code doesn’t seem to have this set. A mystery change? Perhaps some boilerplate with a search and replace I put in place when setting up BIM 2.0? Moodle 1.9 doesn’t seem to have required a grademax value. So what does grademax imply?

Common sense would seem to imply the maximum value that can be entered into the gradebook for this component. BIM currently asks for maximums for each question, so a grademax could be calculated. The problem is that BIM only uses the maximums to generate a warning, it doesn’t enforce it. If the gradebook enforces grademax, then this could create some dissonance with BIM’s operation.

As it happens, hard coding grademax to 10 results in gradebook integration. Or at least in the activity being added to the gradebook. When I try to release some results (which includes adding marks to the gradebook) I get a coding error which I’ll need to fix. Have added this to the to do list.
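
For the record, the hard coding happens in something like the following. bim_grade_item_update is the conventional name for this function; the exact parameters here are a sketch of what I have, not gospel:

[code lang="php"]function bim_grade_item_update($bim, $grades = null) {
    global $CFG;
    require_once($CFG->libdir . '/gradelib.php');

    $params = array(
        'itemname'  => $bim->name,
        'gradetype' => GRADE_TYPE_VALUE,
        'grademax'  => 10,  // Hard coded for now - should eventually come
                            // from a grademax setting on the BIM config screen.
        'grademin'  => 0,
    );

    return grade_update('mod/bim', $bim->course, 'mod', 'bim',
        $bim->id, 0, $grades, $params);
}
[/code]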

Will leave working on this until later.

It also suggests that in lib.php the bim_supports function should report that it has FEATURE_GRADE_HAS_GRADE. I’ll add that for now.

There are also a few examples that provide some extra code missing from BIM. Will add that as well.
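
Putting the feature flags together, bim_supports in lib.php ends up looking something like this (other FEATURE_* cases omitted for brevity):

[code lang="php"]function bim_supports($feature) {
    switch ($feature) {
        case FEATURE_BACKUP_MOODLE2:
            return true;
        case FEATURE_GRADE_HAS_GRADE:
            return true;
        default:
            // Moodle treats null as "don't know / not supported".
            return null;
    }
}
[/code]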

Logging

The logging API in Moodle is likely to be replaced in a little while as part of an increasing importance of logging, analytics etc. The new work includes some references which could be used to inform a rethinking about BIM logging. This is one of my areas of interest.

But at the moment, the current BIM logging is working. At least there are BIM entries being added into the dummy course that I’ve been testing with.

Bug fix and to do for BIM

After a short Xmas break it’s time to continue work on getting BIM 2.0 up and going. In this post I’m trying to continue the work from a week or so ago. The main aim is to fix a bug with the manage marking page.

Status: The manage marking bug has been fixed. Mostly related to further migration work from Moodle 1.9 to Moodle 2.x.

The manage marking bug

The bug is summarised nicely by the following screen shot from the last post.

Manage marking has an error

There appears to be a problem with one of the data structures that results in the BIM crashing and burning. There’s some evidence of an earlier attempt to investigate this, so time to revisit prior posts on BIM development. This post identifies the location of the problem.

The problems are all related to the changes in the database API from 1.9 to 2.x.

These are fixed and get_all_marker_stats is working. However, the display of the data also needs to be fixed by replacing the flexible table with an HTML table.

To do

  • Table of unregistered students is showing some number (student id?) that shouldn’t be there.
  • It isn’t showing the left hand column.
    A broken div

Unregistered students

A few of the pages display a table of students who have not registered their blogs. This needs to be updated to html_table.

  1. Find where it is shown.
    Done. It’s shown using bim_setup_details_table with the last parameter being unregistered – once in the marker code and twice in the coordinator code.
  2. Identify the fix
    • replace the keyed data additions with something like (a fuller sketch follows this list)
      [code lang="php"]$table->data[] = array( $row['username'], $row['name'], $row['email'], $row['register'] );[/code]
    • replace the table's print_html() call with
      [code lang="php"]echo html_writer::table( $table );[/code]
  3. Fix each of those.
    Fixed.
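
Putting the two replacements together, the fixed code ends up along these lines (the variable names and column headings are illustrative, not necessarily what bim_setup_details_table actually uses):

[code lang="php"]$table = new html_table();
$table->head = array('Username', 'Name', 'Email', 'Registered');

foreach ($students as $row) {
    // Replaces the old flexible table's keyed data additions.
    $table->data[] = array(
        $row['username'], $row['name'], $row['email'], $row['register']);
}

// Replaces the old print_html() call - html_writer does the rendering now.
echo html_writer::table($table);
[/code]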

Help text for Manage Marking

The problem with manage marking seems to have delayed the provision of the help text. Need to add that in.

Only the one, but there does appear to be some scope to provide more detailed help messages throughout.

To do list

This post has a list of what was working and not with the coordinator interface and a later post updates some of this. Need to revisit these and start a list in basecamp.

Misc to do

  • Manage marking
    • view students with the missing status appears to be showing a student who has 1 question that has been marked. What is the meaning of the MISSING status?
  • Re-visit the use of tables and how implemented.
  • Help messages
    • Check out other help icons in coordinator views.
    • Think about providing more detailed help by sprinkling help icons throughout all of the views.
    • Look into how some of the older help text can be reused.

BIM: another restart?

The following is essentially an activity log/diary of the first steps of getting back into work on BIM. I’m hoping to have it ready to work with some course redesign I’m working on, but timelines may make that difficult.

The aim of this is to get the current version of BIM for Moodle 2.x up and running with Moodle 2.4+. The next step will be to determine what work needs to be completed on BIM and what new features might be useful.

In summary, it’s surprisingly functional as is, much more than I remembered.

Download and install Moodle 2.4

Moodle 2.4+ downloaded from here.

Stick it in an m24 directory under xampp and follow the instructions.

All installed.

Installing bim2

And now to get bim2 off github. Mm, 8 months since I worked on the code. Not good.

[code lang=”bash”]mkdir bim
cd bim
git clone https://github.com/djplaner/BIM.git
mv BIM/* .
mv BIM/.git .
rm -rf BIM
[/code]

Task: I really need to look into the naming of that folder and the use of git so there’s no need to play with the file structure.
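
For future reference, git will clone into the current (empty) directory if given a path of ., which would avoid the mv dance above. Untested here, and the branch still matters:

[code lang="bash"]cd bim
git clone -b bim2 https://github.com/djplaner/BIM.git .
[/code]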

Visit the local Moodle website, which picks up BIM ready to install. Oops, an error:

Plugin “mod_bim” is defective or outdated, can not continue, sorry.
Debug info: Missing mandatory en language pack.
Error code: detectedbrokenplugin

That’s because I didn’t clone the bim2 branch:

[code lang=”bash”]sudo git clone -b bim2 https://github.com/djplaner/BIM.git[/code]

And that has updated successfully. Now does it actually work?

Testing it out

Ohh, pretty new interface for Moodle 2.4. Looks like the BIM icon will need to be updated to work with the slightly bigger and different design for the module icons.

After adding the activity you need to enter the basic configuration details.

Add some questions that the students will blog in response to.

What about allocating markers to mark the influx of posts?

No users allocated to the course, so nothing there. Nice to see I’d thought of this condition. Time to allocate some students and teaching staff. So, with staff enrolled in the course, can I manage marking now?

Not yet. I need to create some groups for the course. Markers aren’t allocated individual students within BIM. They are allocated groups.

So with groups allocated, I can allocate a marker. Can I manage the markers? The coordinating teacher can see a list of all the markers and what they have (or haven’t) marked yet.

Oops, that’s the first error in the code. Will have to revisit that.

Can I see the students I have to mark as a marker? This is the overview. It shows which of my students have registered their blogs (and for which I can mark something) and which haven’t yet.

Now, let’s see if I can do some marking.

Not really because none of the posts from this single student have been allocated to one of the set questions. I’ll need to allocate one of his posts to a question using the “allocate question” screen.

Now I should be able to mark that allocated question.

Student perspective

So, does it work from the student’s perspective? Does the activity show up when they login to the course?

Can they register their blog?

Does it actually work as expected?

What’s next?

Time for a road trip. So no progress for a few days, after that it will be revisiting what outstanding tasks are left to make this truly useful. Gradebook integration is probably the top of the list. Backup/restore may be the next step.

Why Moneyball is the wrong analogy for learning analytics

Learning analytics is one of the research areas I’m interested in. Consequently, I’ve read and listened to a bit about learning analytics over recent times. In that time I’ve often heard Moneyball used as an example or analogy for learning analytics.

I can see the reason for this. It’s a good example of how data can inform decision making in a field many people (especially those in America) are familiar with. Having a best-selling book that’s turned into a Brad Pitt movie doesn’t hurt either. But I think it’s the wrong analogy for learning analytics.

Image: Moneyball by Kei!, on Flickr (Creative Commons Attribution-Share Alike 2.0 Generic License).

Why?

As it happens I’ve been reading Nate Silver’s book The Signal and the Noise: Why most predictions fail but some don’t over recent weeks and I’ll use it to make my case. Silver has had success in applying “analytics” to make predictions in both baseball and US politics, and in the book he talks to experts from a range of fields about predictions. Through this process he concluded

I came to realize that prediction in the era of Big Data was not going very well

One of the reasons he gives is

Baseball, for instance, is an exceptional case. It happens to be an especially rich and revealing exception

Why? Well one reason is given when talking about economics, a discipline with a poor track record when it comes to predictions.

This isn’t baseball, where the game is always played by the same rules.

If you don’t play by the rules set down in baseball you are going to get pulled up. What are the rules for learning? How can you be sure that each of the students is aware of the rules, interprets them the same way, and is playing by them?

A little further on in Silver’s book comes this

The third major challenge for economic forecasters is that their raw data isn’t much good

If the raw data isn’t much good, any predictions you make based on that data are going to have some flaws.

How good is the data in learning? Well, in the face-to-face classroom it’s next to non-existent. At least in the hard, quantitative, consistent form required for most learning analytics. If it’s e-learning, well the data is currently limited to usage logs from the LMS, which are at best a vague indicator of what’s going on.

Intelligent Tutoring Systems tend to solve these problems by having a fixed set of rules (a model) of learning and learners in a particular area. These rules, however, would appear to limit the adoption of the system. How many other contexts can these rules be applied to? Can you actually create such rules for all contexts?

I’m not convinced you can. Especially when broader trends are pushing for an increasingly diverse set of students, but also when learning is seen as a broader, more open and individual happening. Are there “rules” for learning that are broadly applicable?

What the economics analogy suggests for learning analytics

Is learning analytics about prediction? I’d argue that largely it is. You want to understand what is happening and make predictions about what will happen next. If the learner isn’t going to learn, you want to know that and be able to intervene. You want to make predictions that enable intervention.

The lack of success in prediction in economics suggests that the future of learning analytics may not be bright. At least if it relies on the same models and assumptions as economics. So what needs to change?

Beyond the early adopters of online instruction: Motivating the reluctant majority

The following is a summary and some reflection on Hixon et al (2012). I’m particularly interested in this topic due to my belief (based on 20 years of experience and observation) that most institutional approaches to change in learning and teaching have only been successful in moving the same 10% of staff. A 10% that didn’t need a lot of help in the first place.

Thoughts and to do

Fairly disappointed with this paper. Didn’t engage at all with the perspective of Geoghegan (1994), who took the implications from the adopter categories a lot further and questioned some of the fundamental assumptions.

Misc thoughts, questions and to do

  • Is there anyone doing interesting research/thinking around the inherent diversity in academics and the inherent consistency in what passes for institutional e-learning?
    Looking at the work referencing Geoghegan would probably be a good start.
  • How/what does the increasingly universal adoption of e-learning in Oz Universities imply for the adopter categories and from there how e-learning is supported and the subsequent quality of it?
  • Can learning analytics of LMS usage identify/support the adopter categories? Or at the very least some difference between staff?
  • Look at the conceptual paper (Barczyk et al, 2010) that informed the survey

In the following, where I remember, my comments are emphasised. Other text is a summary of Hixon et al (2012).

Abstract

Now that most of the innovators and early adopters of online instruction are comfortably teaching online, many institutions are facing challenges as they prepare the next wave of online instructors. This research examines how faculty in this “next wave” (the majority of adopters) differ from the innovators and early adopters of online instruction. A specific online course development program is described and the experiences of the “majority” in the program are examined in relation to the experiences of previous participants (the innovators and early adopters).

There is probably a refinement to be made here. There are a number of universities in Australia where the majority of, if not all, courses are taught online. These institutions already have the “next wave” online. The problem though is that the quality of the online experience leaves a great deal to be desired.

Introduction

More folk have to move online. This study is designed to help inform best practices “in bringing the ‘majority’ online”.

Based on the Distance Education Mentoring program at a Midwestern university. Cohort-based mentoring program to help faculty develop an online course. Over four years of operation it’s been noted that faculty participants are changing. Looks at Rogers’ Diffusion of Innovation theory to understand the changes. A few paragraphs explaining DOI, getting to the categories of adopters. Moves on to some literature on those using it to explore technological innovations.

Interestingly, they don’t reference Geoghegan (1994) who used Moore’s extension of these to make some points along these lines. Interesting largely because Geoghegan is one of my theoretical/literature “hammers”.

Approaches to development of online courses

Posits two approaches

  1. faculty-driven approach;
    An approach that can work if faculty have the skills.
  2. collaborative approach.
    Seen as the solution to overcome the difficulties (especially pedagogical) of the faculty-driven approach.

I’d argue that the same observation (lack of skills) can be made with the collaborative approach. I’ve been in situations where the “collaborators” haven’t had the necessary skills either. What isn’t explicitly noted in the above is that both normally assume that course development is separate from teaching. A course is redesigned before/after teaching.

Struggles faced by academics from the literature include

  • learning the necessary skills;
    A contributing factor here is the really poor quality of the technical tools. Some of that difficulty arises because the tools are from another context and don’t match the local context.
  • adapting the pedagogic strategies for the online environment;
    At the same time workload formulas, room allocation, legal requirements, policies and student prejudices mitigate against the adoption of those pedagogical strategies.
  • conceptualising their course for the new environment;
  • finding increased time required to develop quality online courses.
    Wonder if the faculty had developed “quality” face-to-face courses? What’s the source of this difficulty online?

The assumption here is that it is course development that must be collaborative. What would it look like, how would it happen and what would be the impacts if the course delivery process was collaborative? i.e. don’t assume that faculty skill-development and course redesign only occur before the course is taught. What happens if, as I’m teaching the course, I make changes and am able to learn more about what works? More importantly, what happens if the organisational e-learning systems/policies/processes can learn more?

Method

The mentoring program “is designed to educate and certify faculty members in the principles of instructional design”. Each faculty participant (protege) has a mentor from outside the discipline. Uses the Quality Matters rubric. More detail given on how it works.

Courses are taught and then evaluated and given a pass/fail based on the rubric.

By the 3rd year of the program, noticed participants “are hesitant, or even resistant, to consider new approaches and technologies, or even to teach online”. Which is argued to fit the idea that they are “the majority”. Would be interesting to dig further into this. Was this “majority” being required to participate?

Program changed in fourth year. More structure. More defined structure in the online course they complete. Submission by specific deadlines. Formal meeting schedule. A contract required to be signed. Mmm, not a great fan of that. I wonder if they actually looked at what Rogers and others have said about the characteristics of the majority and if the changes to the program were based on those insights? e.g. their communication networks tend to be vertically oriented, so wouldn’t having a mentor from outside the discipline be a poor match?

Research questions

  1. In what ways do faculty members participating in the 4th offering differ from prior offerings?
  2. In what ways do the experiences/perceptions of the 4th years differ?

Survey questionnaire developed to ask: skill development, mentoring relationship, its effectiveness, perceptions of teaching as a result, program satisfaction, general beliefs about online education. Wouldn’t connecting this to DOI have been sensible?

47 of the 92 proteges completed the survey: 27% of year 1, 52% of year 2, 57% of year 3, 58% of year 4.

Responses from years 1-3 combined to compare. But they’ve said they noticed differences in year 3?

Results

Those in the 1st three years had been teaching longer than the 4th years. Similarly, the earlier group had higher ranks. Oh dear, it appears the 4th years might be “junior” academics fighting to establish themselves as researchers, forced to engage with the program.

Year 4 less likely to identify as early adopters of technology. Continuing stereotypes would have required age to have been mentioned by now wouldn’t it? No significant difference in age.

Both groups were equally looking forward to the program.

Year 4 group reported more benefit from the online course. Well, this measures the changes in the course, rather than the people. Both groups were similarly satisfied with the program.

References

Barczyk, C., Buckenmeyer, J., & Feldman, L. (2010). Mentoring professors: A model for
developing quality online instructors and courses in higher education. International
Journal on E-Learning, 9(1), 7–26.

Geoghegan, W. (1994). Whatever happened to instructional technology? In S. Bapna, A. Emdad, & J. Zaveri (Eds.), (pp. 438–447). Baltimore, MD: IBM.

Hixon, E., Buckenmeyer, J., Barczyk, C., Feldman, L., & Zamojski, H. (2012). Beyond the early adopters of online instruction: Motivating the reluctant majority. The Internet and Higher Education, 15(2), 102–107. doi:10.1016/j.iheduc.2011.11.005

Developing personal learning networks for open and social learning

The following is a summary and touch of reflection on Couros (2010) and is another step in thinking about the design/implementation of a course I’m working on.

Thoughts and to do

As expected a good overview/rationale for the type of approach I’m interested in exploring with EDC3100. Some interesting departures to think about. For example,

  • Alec’s course had 16 registered students, mine will have 200+ (first semester), perhaps another 80+ (second semester) and possibly have to be taught by someone else in semester 3.
  • Alec’s much more effective and engaged with his PLN than I am.
  • Alec’s course is post-graduate, mine is undergraduate.

To do

  • Look at Tabak’s (2004) concept of distributed scaffolding.
  • Engage in an analysis of the learning environment available for EDC3100. Is Moodle appropriate? Would a self-hosted WordPress be a better fit? Having 200, rather than 20, registered students is an argument for Moodle (perhaps).
  • Think about the question about whether to be overly explicit in terms of what students should post to the blog, or take the more open approach.
  • How many of the student blogs are still active today?
  • Over time it appears there’s been a move away from the Wiki assessment, I wonder why that is?
  • Is it still difficult/different to read social media?

Abstract

Tells the story of EC&I831 an open access, graduate level, educational technology course at the University of Regina in 2008. 8 non-registered participants for every official student. Experience provided insight into the potential for leveraging PLNs in open access and distance education.

Introduction

Course title – “Open, Connected Social”. Fully online course developed using FOSS and freely available services. The design demonstrates “open teaching methodologies”: educational practices inspired by the open source movement, complementary learning theory and emerging theories of knowledge. Students built PLNs to “collaboratively explore, negotiate and develop authentic and sustainable knowledge networks”. Couros (2010, p. 110) writes

It is my hope in writing this chapter that I capture and document relevant reflections and activities to provide starting points for those considering open teaching as educational innovation

That’s what I’m looking for Alec.

Three sections

  1. key theoretical foundations;
  2. the course experience
  3. discoveries related to the role of PLNs, techniques for developing and leveraging PLNs in DE courses and the role of emerging technologies.

Theoretical foundations

  1. The open movement
    Educators participating in FOSS communities had strong tendencies toward: collaboration, sharing, openness in classroom activities and professional collaborations. Technology was a barrier, but Web 2.0 etc addressed this. Now they could easily create, share, collaborate. Added is the greater availability of educational relevant content. So much so that

    The dilemma of the educator shifted quickly from a perceived lack of choice and accessibility to having to acquire the skills necessary to choose wisely from increased options.

  2. Complementary learning theories
    Influences include:

    • social cognitive theory – suggests it is the combination of behavioural, cognitive and environmental factors that influence human behaviour. People learn through observations of others. Self-efficacy is important.
    • social constructivism and
      Related to the above. Sociocultural context and social interaction are important for knowledge construction. Tabak’s (2004) concept of distributed scaffolding – an emerging approach to learning design.
    • adult learning theory.
      Adults learn differently from kids, which results in principles such as: adults be involved in planning/evaluating their instruction; experience (including mistakes) provides the basis for learning activities; interest is generated from subjects that have immediate relevance to their job/life; learning is problem-centred rather than content-oriented.
  3. Connectivism
    Heavily influenced by theories of social constructivism, network theory and chaos theory. Digital technologies become important to learning. Stresses metaskills of evaluating and managing information and the importance of pattern recognition as a learning strategy.
  4. Open teaching
    Defined by Couros (2010, p. 115) as

    Open teaching is described as the facilitation of learning experiences that are open, transparent, collaborative, and social. Open teachers are advocates of a free and open knowledge society, and support their students in the critical consumption, production, connection, and synthesis of knowledge through the shared development of learning networks.

    Typical activities of open teachers include

    • Use of FOSS tools where possible and beneficial.
    • Integration of free/open content into L&T.
    • Promotion of copyleft content licences.
    • Help students understand copyright law.
    • Help students build PLNs for collaborative and sustained learning.
    • Development of learning environments that are reflective, responsive, student-centered and incorporate diverse learning strategies.
    • Modelling openness, transparency, connectedness and responsible copyright etc. use.
    • Advocacy for the participation and development of collaborative gift cultures in education and society.

    That last one is interesting

The course

20 registered students. Mostly practicing teachers or educational administrators. Normally there is a maximum of 16 students (I wish). Development funded by a $30,000 government grant. Important: this funding was not used on the design and development side, but instead on hiring learning assistants who “were hired as social connectors, and their primary responsibilities were to support students in the development of PLNs” (Couros, 2010, p. 117).

In terms of selecting a learning environment, WebCT, Moodle, and Ning were rejected; hosted Wikispaces was adopted. The site ran there from 2007-2010 and has since moved to a WordPress site (by the looks).

Course facilitation model

  • Major assessments (3) guided the activities
    1. Personal blog/digital portfolio
      Student responsible for developing a digital space to document their learning through readings and activities. For many these became showcases and acted as distributed communication portals. Most remain active beyond the end date.
    2. Collaborative development of an educational technology wiki resource
      Wiki with collaborative content.

      I’m wondering how collaborative this process was? A group or a network (a la Downes).

    3. Student-chosen major digital project.
      Range of projects (produce videos, instructional resources, social networking activities, participation in global collaborative projects, development of private social networks etc) developing resource specific to their professional context.

    It’s changed a bit and is described somewhat on the assessments page

  • Tools and interactions
    1. Synchronous events
      Two each week, 1.5-2 hours. The first focused on content knowledge in the form of invited presenters. Connect/Elluminate and ustream.tv/Skype were used, along with the associated recordings. The second was a hands-on session for technical skills and pedagogical possibilities.

      The combination of Skype and ustream.tv became the preferred method for video conferencing. How is explained here.

    2. Asynchronous activities
      • reading, reviewing and critiquing course readings in blogs.
      • sharing resources through social bookmarking.
      • Creation of screencasts, tutorials etc for personal learning and that of others.

      And a bunch of others including reading blogs, participation in open professional development, posting content to open sites, microblogging, collaborative wiki design and collaborative design of lesson plans. Most were unplanned.

PLNs in Distance education

First session in course was closed and explanatory. The author’s PLN became important to support the model. Which does raise the question of how someone without the author’s PLN might go.

Conceptualising PLN

Mentions the absence of a definition and the need to discern PLE from PLN. Offers two images of the old and new style “PLN”. A discussion picked up a bit more online here. In short it appears that the PLE is the tools, processes etc that allow management of learning. The definition used for PLN:

personal learning networks are the sum of all social capital and connections that result in the development and facilitation of a personal learning environment.

Image: TypicalTeacherNetwork by courosa, on Flickr (Creative Commons Attribution-Noncommercial-Share Alike 2.0 Generic License).
Image: Networked Teacher Diagram - Update by courosa, on Flickr (Creative Commons Attribution-Noncommercial-Share Alike 2.0 Generic License).

PLNs for teaching and learning

Strategies deemed effective in the course

  • Immerse yourself.
    i.e. use and understand the social media tools, how they can be used, and how the students can use them.
  • Learn to read social media.
    Social media is read much differently than traditional media. Tools aren’t great. Need to use what’s available.
  • Strengthen your PLN
    Creating content and commenting on the work of others is important.
  • Know your connections
    Being aware of the skills/backgrounds of your PLN allows you to identify who can help students.
  • PLNs central to learning
    Courses/communities hosted in the institutional LMS die. The community in this course lived on.

Final thoughts

Two questions are often asked after conference presentations on this:

  1. How did you get away with this?
    Institutional support for open teaching is essential. Colleagues are “constructively critical of technology, but strongly supportive of innovation in teaching and learning”.
  2. Where did you find the time to teach this way?
    Good teaching always requires more time.

References

Couros, A. (2010). Developing personal learning networks for open and social learning. In G. Veletsianos (Ed.), Emerging technologies in distance education (pp. 109–128). AU Press.

Tabak, I. (2004). Synergy: A complement to emerging patterns of distributed scaffolding. The Journal of the Learning Sciences, 13(3), 305–335.

Understanding management students' reflective practice through blogging

The following is a summary and perhaps some reflection upon Osman and Koh (2013). It’s part of the thinking and reading behind the re-design of the ICTs and pedagogy course I help teach to pre-service teachers.

Abstract

65 business (MBA/Egyptian) students participated in collaborative blogging over 5 weeks. Analysis (content analysis for critical thinking and theory/practice links) supports the potential of blogs as a tool “for reflection and learning in practitioner-oriented courses”. Implications for the design of blogging tasks are discussed.

Thoughts and todo

Provides some empirical evidence for the use of blogs for reflection and connecting theory and practice. Though the findings are generally what people would expect.

The task for these students was somewhat like a forced connection. You must post 1 topic and comment on two others. I wonder if, were this more open/flexible/student-controlled, more contributions would arise? Perhaps only if appropriate support/connections were made.

To do

  • Look at Ho and Richards (1993) for a framework specific to journals of student teachers.
  • Look at framework of Greenlaw and DeLoach (2003) and also Osman & Duffy (2010)
  • Look at Osman & Duffy (2010) for the idea that theories are not actively taken up by students and remain detached areas of knowledge, not integrated into decision-making.
  • Loving et al (2007) – another framework for evaluating reflection.

Introduction

Problems in practitioner courses in combining theory and practice. Need to encourage reflection etc.

Reflection has some plusses. Uses Moon’s (1999, p. 155) definition:

a mental process with purpose and/or outcome in which manipulation of meaning is applied to relatively complicated or unstructured ideas in learning or to problems for which there is no obvious solution

Blogs are a recent tool for this, benefits of blogs from the literature are listed and referenced

  • empowering students by giving a voice and venue for self-expression.
  • increasing sense of ownership, engagement and interest in learning.
  • may facilitate enriched opportunities for communication, challenge, cognitive conflict, deeper thinking and knowledge construction.

But there is a scarcity of studies that investigate “empirically”, with many relying on self-report data or anecdotal evidence. Few studies critically examine the quality of students’ reflection, especially in management education. Few provide explanations and thus limit guidelines that support the design of blogging tasks to facilitate reflection and learning.

Literature review

Starts with references to the problem of MBA programs problem of combining academic rigor and practical application. A problem that teaching programs have had for a long time. Critical reflection is seen as a way to bridge this.

The rest is broken into the following sections

  1. Blogging and reflection.
    Individual journals are a common approach. Privacy provides a sheltered place but limits sharing/collaboration etc. Blogging provides some affordances that address this. Value is accepted by enthusiasts, but there's limited analysis. Some studies mentioned, a few using coding frameworks. One shows blogging has a positive impact on reflection, but peer comments have a negative impact.
  2. Critical thinking through blogging.
    Critical thinking defined as “development of a habit of continuous reflection and questioning”. Few studies of blogging looking at critical thinking.
  3. Fostering theory-practice linkages in management education.
    Explains the use of Kolb’s learning cycle in this study.

Research questions

  1. How critically evaluative were the reflections of graduate business students when they engaged in blogging?
  2. In their reflections, to what extent did these students link theory and practice? What phases of Kolb’s experiential learning cycle did these students focus on?

Methods

Students blogged for the last 5 weeks of a 10 week term. The task was worth 20% of assessment. Guidelines were kept to a minimum. Graded on the quantity and quality of postings. Students were introduced to reflective practice and Kolb’s cycle beforehand.

Blogging groups (max 8) were self-assigned and access to the blogs was limited to the group. During the first week the instructor moderated blog posts. This was discontinued as a threat to student ownership.

Students had the opportunity to opt out of having their blog contributions analysed for the research. 54 provided signed consent.

Blog archives coded by two independent coders.

Results

The assessment task required students to initiate one topic and comment on two posts submitted by other groups per week, so the 65 students were expected to make 325 posts and 650 comments. In the end there were 144 topics and 399 comments. Students only posted 543 times, 44% less than anticipated.

RQ #1 – how critically evaluative were posts

A peak at simplistic alternatives/argument (30%) and basic analysis (26%). About 14% reached theoretical inference, i.e. building arguments around theory. Apparently this was the expected level.

No significant differences between posts and comments.

RQ #2 – To what extent did students link theory and practice

Focused on higher level critical thinking posts. Used a “Kolb-based framework”.

Significant differences between posts in types of reflection. Students seemed more comfortable considering theory with experience, observation or experimentation.

Discussion

Results support use of blogging as a tool to encourage reflection. Mmm, not sure it’s innate to the technology, though the affordance is there.

Few posts were off task – but I think that’s probably a result of asking those questions in other areas. The authors compare this with content-oriented posts in discussion forums only being 40-50% of posts. Again, possibly the design of blogs in this context suggests it’s not the place to raise non-content questions. The authors do point out that this was a blended context, while the discussion forum references were totally online. And they pick up my point.

Surprising level of students reflecting on their learning via blogs. Mostly positive, but a prominent concern was a desire for feedback, especially from the instructor. Suggests some reasons: novelty of reflection requiring reassurance; a by-product of culture.

Some suggestions around student confusion because of reasons from the literature: what to write in a blog post, low self-efficacy re: the worthiness of their contribution; difficulty generating topics.

In this study students wanted instructor to post discussion questions. i.e. the instructor needs to be more active in scaffolding struggling students.

Guidelines for designing blogging tasks

The article closes with the following list of guidelines (p. 30)

  1. Explain the importance of reflection as a vehicle for learning and continued professional development.
  2. Provide different forms of scaffolding. Many students are new to reflection and critical thinking as a more formal activity. In addition to giving them a framework and guidelines to inform their reflections, examples that illustrate quality reflection and critical thinking might be necessary. Students in this context seemed to especially need help with building theory based arguments, evaluating theories, and addressing ethical concerns for business issues.
  3. Give prompts to encourage reflections. Some students are often apprehensive about initiating reflections.
  4. Promote reflection and critical thinking over longer durations. A reflection task that extends for part of the semester might not be sufficient to adequately develop students’ reflective and critical thinking skills.
  5. Relate students’ reflections to class topic so that students see the value of reflection as an integral and legitimate ingredient of learning.
  6. Provide technical orientation at the beginning of the session. Although we assume that our students are tech savvy, they might not be.

Nothing too surprising there, it’s what I’ve done in the past and will aim to do next year.

References

Osman, G., & Koh, J. H. L. (2013). Understanding management students’ reflective practice through blogging. The Internet and Higher Education, 16, 23–31. doi:10.1016/j.iheduc.2012.07.001

Ho, B., & Richards, J. C. (1993). Reflective thinking through teacher journal writing: Myths and realities. Prospect, 8, 7–24.

Greenlaw, S. A., & DeLoach, S. B. (2003). Teaching critical thinking with electronic discussion. The Journal of Economic Education, 34(1), 36–52.

Loving, C. C., Schroeder, C., Kang, R., Shimek, C., & Herbert, B. (2007). Blogs: Enhancing links in a professional learning community of science and mathematics teachers. Contemporary Issues in Technology and Teacher Education, 7(3), 178–198.

Osman, G., & Duffy, T. (2010). Scaffolding critical discourse in online problem-based scenarios: The role of articulation and evaluative feedback. In M. B. Nunes, & M. McPherson (Eds.), IADIS International Conference e-Learning 2010: Vol 1 (pp. 156–160). International Association for Development of the Information Society.

Can/will learning analytics challenge the current QA mentality of university teaching

Ensuring the quality of the student learning experience has become an increasingly important task for Australian universities. Experience over the last 10 years and some recent reading suggests there are some limitations to how this is currently being done. New innovations/fashions like learning analytics appear likely to reinforce these limitations, rather than actually make significant progress. I’m wondering whether the current paradigm/mindset that underpins university quality assurance (QA) processes can be challenged by learning analytics.

The black box approach to QA

In their presentation at ascilite’2012, Melinda Lewis and Jason Lodge included the following slide.

ascilite'2012 Lodge & Lewis

The point I took from this image and the associated discussion was that the Quality Assurance approach used by universities treats the students as a black box. I’d go a step further and suggest that it is the course (or unit, or subject) as the aggregation of student opinion, satisfaction and results that is treated as the black box.

For example, I know of an academic organisational unit (faculty, school, department, not sure what it’s currently called) that provides additional funding to the teaching staff of a course if they achieve a certain minimum response rate on end of term course evaluations and exceed a particular mean level of response on 3 Likert scale questions. The quality of the course, and subsequent reward, is being based on a hugely flawed measure of the quality. A measure of quality that doesn’t care or know what happens within a course, just what students say at the end of the course. Grade distribution (i.e. you don’t have too many fails or too many top results) is the other black box measure.

If you perform particularly badly on these indicators then you and your course will be scheduled for revision. A situation where a bunch of experts work with you to redesign the course curriculum, learning experiences etc. To help you produce the brand new, perfect black course box. These experts will have no knowledge of what went on in prior offerings of the course and they will disappear long before the course is offered again.

Increasingly institutions are expected to be able to demonstrate that they are paying attention to the quality of the student learning experience. This pressure has led to the creation of organisational structures, institutional leaders and experts, policies and processes that all enshrine this black box approach to QA. It creates a paradigm, a certain way of looking at the world that de-values alternatives. It creates inertia.

Learning analytics reinforcing the black box

Lodge and Lewis (2012, p. 561) suggest

The real power and potential of learning analytics is not just to save “at risk” students but also to lead to tangible improvements in the quality of the student learning experience.


The problem is that almost every university in Australia is currently embarking on a Learning Analytics project. Almost without exception, those projects have “at risk” students as their focus. Attrition and retention is the focus. Some of these projects have multi-million dollar budgets. Given changing funding models and the Australian Government’s push to increase the diversity and percentage of Australians with higher education qualifications, this focus is not surprising.

It’s also not surprising that many of these projects appear to be reinforcing the current black box approach to quality assurance. Data warehouses are being built to enable people and divisions not directly involved with actually teaching the courses to identify “at risk” students and implement policies and processes that keep them around.

At its best these projects will not impact the actual learning experience. The interventions will occur outside of the course context. At worst, these projects will negatively impact the learning experience as already overworked teaching staff are made to jump through additional hoops to respond to the insights gained by the “at risk” learning analytics.

How to change this?

The argument we put forward in a recent presentation was that the institutional implementation of learning analytics needs to focus on “doing it with academics/students” rather than on doing it “for” and “to” academics/students. The argument here is that the “for” and “to” paths for learning analytics continues the tradition of treating the course as a black box. On the other hand, the “with” path requires direct engagement with academics within the course context to explore and learn how and with what impacts learning analytics can help improve the quality of the student learning experience.

In the presentation Trigwell’s (2001) model of factors that impact upon the learning of a student was used to illustrate the difference. The following is a representation of that model.

Trigwell's model of teaching

Do it to the academics/students

In terms of learning analytics, this path will involve some people within the institution developing some systems, processes and policies that identify problems and define how those problems are to be addressed. For example, a data warehouse and its dashboards will highlight those students at risk. Another group at the institution will contact the students or perhaps their teachers. i.e. there will be changes at the institutional context level that essentially bypass the thinking and planning of the teacher and go direct to the teaching context. It’s done to them.

Doing it to

The course level is largely ignored and if it is considered then courses are treated as black boxes.

Do it for the academics/students

In this model a group – perhaps the IT division or the central L&T folk – will make changes to the context by selecting some tools for the LMS, some dashboards in the data warehouse etc that are deemed to be useful for the academics and students. They might even run some professional development activities, perhaps even invite a big name in the field to come and give a talk about learning analytics and learning design. i.e. the changes are done for the academics/students in the hope that this will change their thinking and planning.

Doing it for

The trouble is that this approach is typically informed by a rose-coloured view of how teaching/learning occurs in a course (e.g. very, very few academics actively engage in learning design in developing their courses); ignores the diversity of academics, students and learning; and forgets that we don’t really know how learning analytics can be used to understand student learning and how we might intervene.

The course is still treated as a black box.

Do it with the academics/students

Doing it with

In this model, a group of people (including academics/students) work together to explore and learn how learning analytics can be applied. It starts with the situated context and looks for ways in which what we know can be harnessed effectively by academics within that context. It assumes that we don’t currently know how to do this and that by working within the specifics of the course context we can learn how and identify interesting directions.

The course is treated as an open box.

This is the approach which our failed OLT application was trying to engage in. We’re thinking about going around again, if you’re interested then let me know.

The challenge of analytics to strategy

This post was actually sparked today by reading this article titled “Does analytics make us smart or stupid?”, in which someone from an analytics vendor uses McLuhan’s Tetrad to analyse the possible changes that arise from analytics. In particular, it was this proposition:

With access to comprehensive data sets and an ability to leave no stone unturned, execution becomes the most troublesome business uncertainty. Successful adaptation to changing conditions will drive competitive advantage more than superior planning. While not disappearing altogether, strategy is likely to combine with execution to become a single business function.

This seems to resonate with the idea that perhaps the black box approach to the course might be challenged by learning analytics. The “to” and “for” paths are much more closely tied with traditional views of QA, which are in turn largely based on the idea of strategy and top-down management practices. Perhaps learning analytics can be the spark that turns this QA approach away from the black box toward one focused more on execution, on what happens within the course.

I’m not holding my breath.

References

Lodge, J., & Lewis, M. (2012). Pigeon pecks and mouse clicks : Putting the learning back into learning analytics. In M. Brown, M. Hartnett, & T. Stewart (Eds.), Future challenges, sustainable futures. Proceedings ascilite Wellington 2012 (pp. 560–564). Wellington, NZ.

Trigwell, K. (2001). Judging university teaching. The International Journal for Academic Development, 6(1), 65–73.

Tertiary course design is very poor, and we solve it by "blame the teacher"

The following is inspired by this tweet

which links to this newspaper article titled “Tertiary course design ‘very poor’”. An article certain to get a rise out of me because it continues the “blame the teacher” refrain common to certain types of central L&T people:

After 33 years of working in higher education in all parts in NZ, the US and UK, the one thing we’ve become very clear about in curriculum design is that our people in higher education need to actually be educated as educators to work at that level

This seems to imply then that all of the courses taught by those with teaching qualifications should be beacons of quality learning experiences. My observations of courses at a number of universities taught by graduates of higher education teaching certificates and by those in Faculties of Education would seem to indicate otherwise. Not to mention reports of “ticking the box” from colleagues at top universities required to complete graduate certificates in higher education teaching. i.e. they have to complete the certificate to have a job, so they complete it. They are successful products of formal education, they know how to successfully jump through the required hoops.

This is not to suggest there is no value in these courses. But it’s not the solution to the problem. It’s not even the best way to build knowledge of teaching and learning amongst academics.

The following figure is from Richardson (2005)

Integrated model of teachers' approaches to teaching

The findings from this research are that there can be significant differences between the espoused theories of teaching and learning and the theories in use (Leveson, 2004). Teachers can know all the “best” learning theory but not use that in their teaching. While teachers may hold higher-level views of teaching, other contextual factors may prevent the use of those conceptions (Leveson, 2004). Environmental, institutional, or other issues may impel teachers to teach in a way that is against their preferred approach (Samuelowicz & Bain, 2001). Prosser and Trigwell (1997) found that teachers with a student-focused approach were more likely to report that their departments valued teaching, that their class sizes were not too large, and that they had control over what was taught and how it was taught. In examining conceptions of e-learning held by academic staff, Gonzalez (2009) found that institutional factors and the nature of the students were the most relevant contextual factors influencing teaching.

Now, consider the world of Australian (and New Zealand?) universities as we move into 2013. Do you think the environmental factors have gotten any better in terms of enabling teachers to teach in the ways they want? An increasing focus on retention, an increasingly diverse intake of students, decreasing funding, increasing use of e-learning, decreasing quality of institutional e-learning systems, increasing casualisation of the academic work-force, research versus teaching, increasing managerialisation and increasingly rapid rounds of restructuring… are any of these factors destined to encourage quality approaches to teaching and learning?

My argument is that given this environment, even if you could get every academic at a university to have a formal qualification in learning and teaching, there wouldn’t be any significant increase in the quality of student learning because the environment would limit any chance of action and only encourage academics to “tick” the qualifications box.

On the other hand, if the teaching and learning environment at a university wasn’t focused on the efficient performance of a set of plans (which limit learning) and instead focused on encouraging and enabling academics and the system to learn more about teaching and learning within their specific context…….

References

Leveson, L. (2004). Encouraging better learning through better teaching: a study of approaches to teaching in accounting. Accounting Education, 13(4), 529–549.

Prosser, M., & Trigwell, K. (1997). Relations between perceptions of the teaching environment and approaches to teaching. British Journal of Educational Psychology, 67(1), 25–35.

Richardson, J. (2005). Students’ approaches to learning and teachers’ approaches to teaching in higher education. Educational Psychology, 25(6), 673–680.

Samuelowicz, K., & Bain, J. (2001). Revisiting academics’ beliefs about teaching and learning. Higher Education, 41(3), 299–325.