Assembling the heterogeneous elements for (digital) learning

Category: teaching Page 1 of 8

What’s changed in academic staff development?

The following is my initial response to this exercise from the week 3 learning path. It’s an exercise intended to get folk thinking about what practices, if any, have emerged in their disciplinary teaching context from when they were undergraduates until now. It asks them to consider some of the emerging practices mentioned in the Horizon and Next Generation Pedagogy reports. It also asks them to consider whether any of them are visible in “good practice” within the discipline.

As per the exercise instructions, the following is not a formal academic document. It’s a bit of writing to think. The exercise is intended to encourage folk to start framing thoughts that will become the basis for an assessment task.

The following also tends to be specific to my context.

My Discipline?

I’m currently onto my third or fourth discipline. My journey in higher education has gone through computer science/information technology; information systems; teacher education; and what I’ll call academic staff development (i.e. helping other academic staff teach).

I’ll stick with my current “discipline” – academic staff development.

What was it like?

When I first started teaching in higher education (in the Information Technology discipline) back in the early 1990s I was teaching in a dual-mode university, i.e. my students studied via two modes (on-campus and via distance education). In those days, distance education meant the production of slabs of print-based material that was posted out to students before the semester started. A process that in the early 1990s relied on a production-line approach to generating the print material.

My recollections of academic staff development in those days mostly involved the distance education folk running sessions or distributing print-based material designed to help academics develop the knowledge/skills to produce good print-based material. I don’t remember too many workshops or presentations, but I remember huge folders of print material.

There was the occasional presentation on a teaching-related topic and there were even some early forays into what might be characterised as communities of practice. e.g. I was involved with a computer-mediated communications working group in the early 1990s (pre-Internet Service Provider days) that eventually developed some print material to help staff and students use CMC in learning and teaching.

There were also grants to fund innovative developments associated with L&T (I got one of those) and there were also teaching awards (I got one of those).

What’s changed?

To be brutally honest, not much. Perhaps the major change is that there are no longer any big sets of folders of print material. All that is now online. The nature of the online material and how you access it has changed somewhat. There’s been a recent move to more contextual material. But it’s still fairly kludgy and much of it is still in a print format (i.e. PDF documents).

There is still a reliance on presentations and workshops, though these are increasingly available via Zoom, and a couple of weeks ago a remote participant did engage with the institutional L&T orientation using a Kubi telepresence robot.

There are still L&T grants (some announced last week) and awards (announcing real soon).

However, there has been a shift in focus away from “academic staff development” (seen as something done to teaching staff) and towards the idea of professional learning and professional learning opportunities – moving the focus toward designing contexts/environments/opportunities for teaching staff to engage in professional learning.

What about ideas from the Next Generation Pedagogy report?

The Next Generation Pedagogy report offers five signposts on the roadmap to innovative pedagogy

  • Intelligent pedagogy – using technology to enhance learning, including beyond institutional confines.
    Technology use in academic staff development (in my context, but in a lot of others as well) is still somewhat limited. There’s no use of learning analytics to understand the teaching experience. Technology is largely used to supplement existing face-to-face approaches, rather than do something radically different, though aspects of this might be coming. The idea of untethered faculty development is indicative of early moves in this space. On the other hand, the academic staff who are our learners now have access to the abundance of resources on the Internet. Some staff draw heavily on these, but there appear to be many who do not.
  • Distributed pedagogy – ownership of learning is shared amongst different stakeholders allowing students to source learning from competing providers
    There are aspects of this happening in how learning and teaching operates. e.g. Turnitin is external and offers some staff development. This is happening more to support University students in their learning than to support University teaching staff.
  • Engaging pedagogy – encouraging active participation from learners.
    There are early signs of this – e.g. the shift away from academic staff development in the broader field. Locally, the approach used in our L&T orientation has moved away from experts leading sessions to participative co-construction/solving of problems. But more could be done.
  • Agile pedagogy – flexibility/customisation of the student experience.
    There are attempts to do this, but they are not directly supported by systems and processes.
  • Situated pedagogy – contextualisation to maximise real-world relevance.
    There are signs of this (e.g. how workshops are run) and approaches like Teaching@Sydney allow for more contextualisation, as do some moves toward contextualising access to resources. But it’s still fairly limited. Currently much of it relies on someone doing the customising/situating/personalising for the learner.

And the Horizon report

The 2017 Horizon report is the other source examined. It offers the following key trends

  • Advancing cultures of innovation
    Not so much. Innovation is suggested to be a good thing, but a “culture that promotes experimentation” it is not yet.
  • Deeper learning approaches – project-based, inquiry learning
    There are glimmers of this, but there’s also a strong pragmatic need amongst teaching staff: “I need to know how to do X now”.
  • Growing focus on measuring learning
    In terms of external quality indicators (such as QILT) and quantitative measures such as pass/fail rates and results on student evaluation of teaching, this is increasing. Perhaps increasing beyond where it should be. However, there remains little use of learning analytics and other more interesting approaches for measuring the learning and learning needs of teaching staff.
  • Redesigning learning spaces
    Moves around this for students, but not so much for teaching staff.
  • Blended learning designs
    Much of staff development appears to stick with the face-to-face methods. Even when it moves online it is to video-conferencing in an attempt to continue with face-to-face, rather than explore the blend of affordances that both online and face-to-face might offer.
  • Collaborative learning
    One of the Horizon Report “predictions” that Audrey Watters labels as not even wrong. Communities of Practice and Learning Communities have been a feature of academic staff development, more broadly and locally (even back in the early 1990s). However, I’m not sure how truly collaborative those approaches have been.

What’s relevant now?

Many of the above offer interesting possibilities, some are inevitable, and some have always been a feature.

Institutional academic staff development has yet to scratch the surface in terms of how digital technology could be used. It does appear to be increasingly “strategic” in its intent. This may make it more difficult to be agile, situated and engaging – three signposts that could be very relevant.

Situating staff development within the context of the member of teaching staff strikes me as very relevant. So does expanding upon the idea of professional learning opportunities and encouraging active participation from teaching staff, and providing examples and scaffolds around how to do this.


My current context and some initial issues

Semester is about to start and I’m back teaching. This semester I’m part of a team of folk designing and teaching a brand new, never been taught course – EDU8702 – Scholarship in Higher Education: Reflection and Evaluation. The course is part of the Graduate Certificate in Tertiary Teaching.

In the course, we are asking the participants to focus on a specific context in which they are (or will be) teaching. That context will form part of a teacher-led inquiry into learning and teaching that will underpin the whole course. Early on in the course we are asking them to briefly summarise the context they’ll focus on and generate an initial set of issues of interest that might form the basis for their inquiry. The aim is to get them thinking and sharing, and to provide a foundation for refinement over the semester.

The plan is that we’ll model what we ask, hence this blog post is my example.

Context

My current context is within a central learning and teaching unit at a University. My role is charged with helping teaching staff at the institution work toward and be recognised for “educational excellence and innovation”, i.e. we’re part of a team helping teaching staff become better teachers and thus improve the quality of student learning. To that end we, amongst other things

  • Teach into the institution’s Graduate Certificate in Tertiary Teaching.
  • Develop a range of professional learning opportunities (PLO), including L&T orientation, workshops, small group sessions, online resources etc.
  • Develop and support programs of L&T scholarships and awards.

Issues

As a group that’s still forming, there is a range of practical issues.

However, there are also a collection of issues that arise from the “discipline” of professional learning for teaching staff, some of these include:

  • Preaching to the choir.

    A perception that the people who engage with the professional learning opportunities we provide are perhaps not those who might benefit most.

  • Difficulty of demonstrating impact.

    It can be very hard to prove that what is done improves the quality of learning and teaching.

  • Perceived relevance of what we offer

    Often the focus can be on developing well-designed workshops and resources, rather than trying to understand authentic, contextual needs.

  • A tendency to focus on designing a learning intervention when performance support might suit better.
  • How best to modify what we do to respond to an era of information abundance.

    A lot of traditional professional development arose from a time of scarce information. Developing a workshop/resource on topic X specifically for institution Y made sense, because there was no other way to get access. Chances are today you could find a long list of workshop/resources on topic X. Should you still develop yet another resource on topic X?

There are also some issues around the course we’re teaching

  • Limited insight into who the participants are, their backgrounds and reasons for enrolling.
  • The current small number of participants.
  • How to design an effective course within this context and within current constraints.

Tracking task corruption with Moodle activity completion

The following documents a quick kludge required for the assessment for a course I teach. It’s primarily a document to help me think through the task and document what was done and why.

Background

A small percentage of the overall mark in this course is generated by completing a weekly list of activities on the course site. Each student’s completion of these activities is tracked using Moodle’s activity completion feature.

The activities are (in theory) designed to enhance student learning and are aligned with the other course assessment. Obviously they are so well designed and the students so well motivated that they complete the tasks as intended – of course not. There is growing evidence of task corruption and in particular of simulation.

In response, I should be exploring why this is the case and modifying the course, but that will have to wait until later. Right now with the looming end of semester I’ve decided I need to ensure that those engaging in simulation are not rewarded for completion. My rationalisation for doing this is to provide some little reward for those students who engage with the tasks.

Design

Moodle activity completion is one-way (apparently). Once a student completes the activity it is recorded in the database and displayed to the student and the staff. There doesn’t appear to be any way to change the status of this activity completion back to “not finished”. Essentially, there is no reset button.

The marking for this aspect of the assessment is done via a Perl script. That script draws on the fact that I have a local version of Moodle into which I’m importing the activity completion data from the institutional system. The plan here is to modify the marking script so that it draws on data of the form “student X did not complete activity Y” when calculating the mark of individual students.

I don’t want to modify the Moodle database structure. Even though it’s a local copy this is probably too much effort for this type of kludge. So it appears the easiest approach is to create a new data structure in the script of the form

[code lang="perl"]
# hash keyed on student ID; each value is a reference to a hash
# keyed on the names of simulated activities
my %simulation = (
    "studentX" => { "Checking your understanding of some models, frameworks…." => 1 },
    "studentY" => { "Share your posts on the Connect.ed resources" => 1,
                    "Share what you already know about lesson planning" => 1 },
);
[/code]

A hash keyed on student ID, where each value is a reference to a hash keyed on activity names.

As the script is calculating the mark for each student it will check this hash for entries for the student and modify the mark accordingly.
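
A minimal sketch of that check. The %students hash and calculateMark function are hypothetical stand-ins for what the script actually uses, not the real code.

[code lang="perl"]
# for each student, reduce their completion count by the number of
# activities flagged as simulated before calculating the mark
foreach my $student ( keys %students ) {
    my $completed = $students{$student}{completed};

    if ( exists $simulation{$student} ) {
        $completed -= scalar keys %{ $simulation{$student} };
    }

    $students{$student}{mark} = calculateMark( $completed );
}
[/code]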

Implementation

Will those keys work?

The student ID will work, though there will be some manual work identifying it for each student. The question is whether the activity name can be used as an ID. Is it stored in the local database?

Activity completion is kept in mdl_course_modules_completion. It doesn’t store the activity name. Do I store it elsewhere? Nope, not storing that information.

Is there a table for it? For the life of me (or at least this early on a Saturday morning) I can’t find where this information should be stored in the Moodle tables.

The import script I use can easily insert data into a table linking the activity id with the activity name. But for the immediate kludge dumping a Perl data structure might do. In fact there’s a variable already to dump.
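
For example, assuming the variable in question is a hash mapping activity ids to names (I’m calling it %activityMapping here, matching the use below), the core Data::Dumper module does the dumping:

[code lang="perl"]
use Data::Dumper;

# dump the id => activity name mapping in a form that can be
# pasted straight back into a Perl script
print Dumper( \%activityMapping );
[/code]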

Now what about the students and the activities they’ve simulated? What I currently have is their name and the name of any simulated activities. The student ID being used is the Moodle user id. Only a half-dozen students, so some manual SQL will do to implement the above idea

[code lang="perl"]
# same structure as above, but keyed on Moodle user id
my $SIMULATION = {
    283 => { "Share your posts on the Connect.ed resources" => 1,
             "Share what you already know about lesson planning" => 1 },
    149 => { "Checking your understanding of some models, frameworks…." => 1 },
};
[/code]

Re-calculating mark

The completion data is taken from the database via a class. That same class generates a PERCENT completion figure for each student. That seems a sensible place to stick the code to re-calculate the mark. The use case is something like

[code lang="perl"]
$completion->reCalculate( \%activityMapping, $SIMULATION );
[/code]

A reference to the hash from above is passed into reCalculate which then (surprise, surprise) recalculates the grades. This way I don’t have to modify any of the script code. Good plan.

Of course, the module is actually getting SQL to do the calculation rather than Perl. So this implies a rewriting of the module to

  • Get all rows of completion data for the BIM activity.
  • Have Perl generate an array for each user with the number they’ve completed, based on START/STOP, actual completion and the data in SIMULATION (see the sketch below).
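
Something like the following is the intent. A sketch only, assuming made-up field names and that $rows holds the completion rows fetched in the first step:

[code lang="perl"]
# count completions per user, ignoring anything outside the
# START/STOP window or flagged as simulated
my %completedCount;
foreach my $row ( @$rows ) {
    next if $row->{timemodified} < $START || $row->{timemodified} > $STOP;
    next if exists $SIMULATION->{ $row->{userid} } &&
            exists $SIMULATION->{ $row->{userid} }{ $row->{activityname} };
    $completedCount{ $row->{userid} }++;
}
[/code]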

All that’s done and seems to be working.

Modifying the report

With the data changed, I now need to update the report to include mention of this simulation. The function calculateLearningPath seems the best bet.

All done.

Looking for a new "icebreaker" for #edc3100

As mentioned previously the simplistic (lazy) introductory forum for #edc3100 didn’t achieve its ill-defined goals. I need to find a new one.

Given I hate ice-breaker activities, I doubt this is going to be very creative. Plus time is against me.

The context

#edc3100 is a 3rd year course that tries to engage pre-service teachers in the task of using ICTs in their teaching. The students are all required to have created their own blog and engage with other social media.

The goals

The primary goal is to encourage students to make connections with others. To find out who might be good to follow.

A secondary goal could be to see some different ICTs in action and be required to actually use them for a purpose. This experience can provide grist for reflection.

The options

This list of 10 icebreakers includes an idea for students creating a trading card of themselves using a tool from Big Huge Labs. It moves beyond the textual and requires the students to engage with a new service. What goes on the card is an open question. Something about them, something about experiences/perspectives of ICTs?

This from Curtin University provides a bit more in the way of design principles from the literature. Interestingly, I’m not certain that the suggested activities are always a good fit for the design principles. e.g. how does creating a video bio (or a trading card as above) “require the learners to read each others entries”? That requirement is one of my problems.

This page has some background on ice-breakers and a few suggestions. One is to require students to find 3 people with whom they have something in common and comment on those posts. This could work in a Moodle discussion forum with activity completion.

@catspyjamasnz suggested

Actually much of the rest of week 1 is focused on students applying Toolbelt theory/TEST framework to their own study habits. This suggestion might be a bit of duplication, but it may also be a good lead in…mmm.

@katemfd suggested
https://twitter.com/KateMfD/status/435302520076775424
Another case of possible duplication. Later in the semester we do a Flickr/image activity based around the weather borrowed from @courosa.

Pondering

The activity from last year required students to create the introduction on their blog. The post to the discussion forum only included a link to the student’s blog post. This creates the problem of having to click through to the blog post. You can’t see anything interesting in the discussion forum. This was probably a factor in the limited use of the forum.

This and the above suggest some principles

  1. Have the forum post contain something interesting (i.e. actual information about the student).
  2. Use the activity completion to require looking and commenting on others.
  3. Rather than limit to just text, have some form of multimedia involved.
  4. Have some link to their blog tied to the activity (perhaps reflecting on the task of using the specific ICT)

This is leaning back towards the activity we used in 2012 – borrowed from ECMP355 and @courosa again – with the addition of asking the students to find someone they have something in common with and someone they are very different from.

I have added in the suggestion to create a trading card (like this one) using this web-based tool and also suggested Popplet or Padlet as options and linked to my 2012 Popplet.

I’d actually done much of this prior to seeing the suggestion from @catspyjamasnz, now I’m pondering tweaking it a bit. Have them add in “One thing that annoys me about learning at USQ” – sounds like a plan.

How are they feeling – Semester 2 – Part 1

The following is a repeat of this post for a different offering of the same course. It’s also a quick how-to primarily intended for the students in the course. Summary and comparison first, then the how-to.

Doing this now mainly in preparation for a session with the students tonight.

Summary

I’ll focus here on answers to “How do you feel about the course?”, a question for which the students can select from a provided list of words (interested, relaxed, worried, successful, confused, clever, happy, bored, rushed) and add their own.

A word cloud based on the students responses at the end of the first week of the Semester 1 offering of the course looked like this.

How are you feeling?

The word cloud for the semester 2 responses near the end of week 1 (a much smaller sample: semester 1 = 121 students, semester 2 = 17, so far) looked like this.

Feeling - EDC3100 Semester 2, Week 1

Looks like there’s some improvement. Stressed and overwhelmed aren’t present and these were optional words folk could add. Confusion and worry – provided choices – are still present. As is a feeling of being rushed. So, still a challenge, but perhaps a bit better?

Of course, these conclusions are based on a much smaller sample and there are some significant sources of bias. I’ll mention just two of those. First, this represents the 17 early starters, those who are keen and got started quickly and potentially had fewer problems. It’s possible that the 90 or so students still to complete the tasks may be feeling very different. It’s probably a biased sample. Second, my conclusion is based very much on my own beliefs about the course. I’ve redesigned the week’s activities to be more manageable, so I’m looking for that to be reflected in the data.

For these reasons, I will be interested in hearing what others – especially the students – perceive from the above (if anything).

How to

I won’t go into a huge amount of detail. A quick search online will reveal a good collection of tutorials on most of the following.

The process for this was

  1. Set up the Google form.

    Google forms provide the interface students use to respond to the “survey”. It helps gather the data.

  2. Extract the responses from the Google spreadsheet.

    A simple copy and paste into a text document.

  3. Feed them into Tagxedo.

    In this case, a simple copy and paste into Tagxedo. Choose a circle shape, horizontal orientation. All good.

  4. Export the Word cloud and upload to Flickr.

More evidence of the limits of student technical knowledge

The following is just a “diary entry” recording a bit more evidence for the story that our students are neither digital natives nor digitally literate. It may or may not become useful in future research/writing. It’s not meant to be insightful, just a record of an experience.

The context is marking of assignment 1 for EDC3100. 300-odd students have created online artefacts via their choice of online tool. YouTube videos, Wix/Weebly/WordPress websites, SlideRocket and Prezi are the most common I’ve seen so far. There have been some really good ones and some not so good ones. But there’s also been some evidence to suggest limits on the students’ technical knowledge.

Most of the problems appear to revolve around the idea of providing a URL to a post on the student’s blog that includes a URL to the online artefact. The double link caused some problems, but so did the basic task of providing a URL. Some examples from tonight

  1. Rather than provide a URL for the post, students are providing the URL for their blog.
  2. A small number of students are providing a URL to their blog, which doesn’t have any posts with links to their online artefact.
  3. Prezi URLs.

    There’s been a small trend of Prezi URLs not working. It appears that the students are providing a “long URL” generated from something they see, I’m assuming copied from the browser. This URL doesn’t work for anyone but them. If we cut away some extraneous material, we get to a URL that works.

  4. Spectacularly wrong URLs.

    For example, we’ve seen URLs like this

    davidjones@edublog.org.com

    for blogs that are actually located at

    http://davidjones.edublogs.org

As mentioned previously

  • These are 3rd year students the majority of whom have some significant online learning experience beyond their own typical use of social media.
  • This perhaps says more about the technology and its design and use than the students themselves.
  • It raises questions about some of the assumptions underpinning common institutional e-learning practice within universities.
  • It raises questions about whether encouraging exploration, creativity and student choice can be viable in a course with 300+ students and limited time and support resources.

    i.e. the time I’ve spent diagnosing and fixing these mistakes has taken time away from engaging with student queries about the course content and assessment.

Meaningless freedom and auto-marking the learning journals

The course I’m teaching requires each student to create and use an individual blog. The blog should be created on an external blogging platform of their choice and used to reflect on their learning in whatever way they see fit. There are a couple of constraints around regularity (at least 2-3 posts a week), length (average of 100 words), links to resources (60% with links to online resources), and links to other student blogs (2 of all posts over a 3 week period). All this is meant to be automatically marked.

The following is the story of putting in place the code to check, track and mark the student blogs. Much of it has been written over the last few weeks. I’m adding the last extra touch today as the assignment has been submitted and the automated assessment needs to be completed.

As it happens, I’ve also just read Lisa M Lane’s “The illusion of the LMS/cloud-based/self-hosted solution” and am finding that it resonates strongly. If I didn’t have the technical background I have, none of the following would have been possible. I’d be constrained by the tools available in the LMS and any manual workarounds I could come up with. As it is, I could have done without the additional work required by the following.

At the moment, the message I’m taking from both Lisa’s and my own experience is that the use of technology in learning and teaching is messy. Especially when you’re trying to do something different. Being an explorer is always going to be difficult. The institutional systems and support processes are not set up for exploration, they are set up for exploitation. This is why they are constraining. If you want to be an explorer, it’s going to be hard, but it can also bring benefit. Alan Levine’s comment on Lisa’s post perhaps contains the main solution to this problem

the way to do this on the open/free/public end is to leverage the connections of others. I rely on this all the time. The “solving” is in our human networking.

No BIM

I haven’t completed BIM2 in time for the organisational processes to consider installing it into the institutional version of Moodle (this post details one step in the process). So the plan is

  1. Students register their blog via a Moodle database activity.
  2. That is exported, checked and stuck into a local version of Moodle (with BIM) on my laptop.
  3. Marking of the blogs will be done via some additional code, either in BIM or in Perl.

    At this stage Perl has been used because I have a large collection of infrastructure and experience with a Perl code base that was developed as part of my PhD work, i.e. I’m a native Perl speaker; PHP and the Moodle code remain a second language to me. Eventually this work will need to be brought into BIM in some way.

Registration, reassurance and the perils of meaningless freedom

Way back in the late 1990s experience with the design and use of online assignment submission systems led to this observation (Jones, 1999)

An important lesson from the on-going development of online assignment submission is to reduce the amount of “meaningless freedom” available to students. Early systems relied on students submitting assignments via email attachments. The freedom to choose file formats, mail programs and types of attachments significantly increased the amount of work required to mark assignments. Moving to a Web-based system where student freedom is reduced to choosing which file to upload was a significant improvement.

Having the students register their blogs with a Moodle database activity meant that the students had to correctly

  • Enter their student number.
    USQ has two types of student number that it uses interchangeably.
  • Copy the URL of their blog.

Here’s a list of what I’ve found in the registered data tonight, out of 275 registered blogs

  • 105 students used one form of number, 170 the other sort (roughly).
  • 10 student numbers were incorrect.
    Some were just minor typos, but others were more major.
  • 37 URLs were incorrect
    Missing the http://, typos (edublog.org not edublogs.org etc.), copying the dashboard URL not the home page.

This is not to suggest that the students are stupid. It’s to show how badly designed systems (i.e. the stuff I’ve cobbled together) allow mistakes to happen. If BIM had been available none of these errors would have been possible and I would have saved quite a few hours of work.
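
For what it’s worth, most of the URL errors can be caught with a few lines of defensive Perl. A rough sketch of the sort of normalisation applied, not the actual code:

[code lang="perl"]
# clean up a registered blog URL: trim whitespace, add a missing
# scheme, fix the common edublogs typo, strip dashboard paths
sub cleanBlogUrl {
    my ($url) = @_;
    $url =~ s/^\s+|\s+$//g;
    $url = "http://$url" unless $url =~ m{^https?://};
    $url =~ s/edublog\.org/edublogs.org/;
    $url =~ s{/wp-admin.*$}{/};
    return $url;
}
[/code]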

Not only would BIM have provided immediate feedback on registration, it would have allowed the students to be reassured about what was known about their blog. With BIM they just visit the activity and it’s there. This semester, I’ve had to send bulk emails out letting students know what the system knows about their blog.

Statistics

Time now to finish off the script that will generate statistics about the students’ blogs and generate their mark. As shown in this prior post I’m also using this facility to generate some visualisations of the interconnections, but that’s another post.

The statistics being used for marking include

  1. Number of blog posts per week.

    Currently being calculated by dividing the number of existing posts on the student blog by the number of weeks.

  2. Average length of blog posts.
  3. % of posts that contain links to external resources.
  4. number of posts that link to other student blogs.
  5. % of the learning path activities completed.

    This isn’t a blog statistic. It’s from the activity completion report on Moodle. Each week has a collection of activities/resources (the learning path) and students are expected to complete them.

Each of these is currently being generated. But I need to

  1. Double check the links to other student blogs; not sure it’s counting blog posts correctly.
  2. Exclude links to their own blog (a sketch of the combined check is below).
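
A sketch of the check, with made-up structures: @posts holds each post’s content plus the hostname of its author’s blog, and %blogHosts maps known student blog hostnames to user ids.

[code lang="perl"]
use URI;

# count posts containing at least one link to a *different*
# student's blog
my $count = 0;
foreach my $post ( @posts ) {
    my @links = $post->{content} =~ m/href="([^"]+)"/g;

    my $linksToOther = grep {
        my $host = eval { URI->new($_)->host } || '';
        exists $blogHosts{$host} && $host ne $post->{ownHost};
    } @links;

    $count++ if $linksToOther;
}
[/code]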

This is all done. So some statistics. With 330 students mostly still enrolled in the course

  • Average word count per post – 184.9
  • Average posts per student – 11.5
  • Average posts with links – 7.7
  • Average posts with links to another student blog – 1.7
  • Average completion of Moodle activities – 89.8%

The last one is a bit disappointing. Need to explore it more.

Missing students

I have a script that automatically “marks” the students blogs and also their completion of activities on the Moodle study desk. Trouble is it appears that at least one student is missing from that list. Why?

Some possibilities

  • The student has dropped the course? – NO, still there
  • The student didn’t register their blog? – YES, that’s the problem

If there’s one, I wonder how many others there are? Even after we did a dry run a couple of weeks ago to identify folks in this situation there appear to be a few. In theory, there are 327 students still enrolled in the course. Of those, 20 students haven’t successfully registered their blog.
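
Finding them is a one-liner once you have two hashes keyed on Moodle user id, one from the enrolment list and one from the blog registrations (the names here are made up):

[code lang="perl"]
# enrolled students with no registered blog
my @missing = grep { ! exists $registered{$_} } keys %enrolled;

print scalar(@missing), " students without a registered blog\n";
print "$enrolled{$_}{name}\n" foreach @missing;
[/code]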

Question is whether this is a problem with my kludges, or the students haven’t registered their blog. I’ll let them figure that out.

A visualisation

The following is the latest Gephi visualisation of the links between student blogs. A bit more complex than the last one, but obviously connections aren’t a priority.

Blog connections - EDC3100 1 April

Visualising the blog network of #edc3100 students

The following describes the process and results of using Gephi to generate some visualisations of the inter-connections between the blogs of students in the course I’m teaching. The process is heavily informed by the work of Tony Hirst.

The result

The following represents the student blogs that have connected with each other. Size of the node is weighted towards the number of connections coming in. You can see a couple in the bottom right hand corner who have linked to themselves. The figure also suggests that there are 6 or 7 communities within these.

Network

There are actually 300+ blogs in the data set. However, a significant number of those are not yet connected to another blog. Hence the small number in the above image. Two possible explanations for this

  1. Many of the students haven’t yet taken seriously the need to connect with each other.
  2. There’s a bug in the code producing the file.

Will need to follow up on this. Will also need to spend a bit more time exploring what Gephi is able to do. Not to mention exploring why 0.8.2 of Gephi wouldn’t run for me.

The process

The process essentially seems to be

  1. Generate a data file summarising the network you want to visualise.
  2. Manipulate that data file in Gephi.

The rest contains a bit of a working diary of implementing the above two steps.

Generating the file

The format of the “GDF” file used in Tony’s post appears to be

  • A text file.
  • Two main sections
    1. Define some user/node information.

      The format is shown below. The key seems to be the “name”, which is a unique identifier used in the next section.

      [code lang="bash"]
      nodedef> name VARCHAR,label VARCHAR, totFriends INT,totFollowers INT, location VARCHAR, description VARCHAR
      67332054,jimhillwrites,105,282,"Herne Hill, London","WIRED UK Product Editor."
      [/code]

    2. Define the connections

      Essentially a long list of id pairs representing a user and their friends. I’m assuming this means the user connects to the friend.

      [code lang="bash"]
      edgedef> user VARCHAR,friend VARCHAR
      67332054,137703483
      [/code]

More on the GDF format is available here. It mentions a minimal GDF file and also mentions that the edge thickness can be specified. This seems useful for this experiment, i.e. edge thickness == number of links from one student blog to another.

So the file format I’ll use will come straight from the minimal spec, i.e.
[code lang="bash"]
nodedef>name VARCHAR,label VARCHAR
s1,Site number 1
s2,Site number 2
s3,Site number 3
edgedef>node1 VARCHAR,node2 VARCHAR, weight DOUBLE
s1,s2,1.2341
s2,s3,0.453
s3,s2, 2.34
s3,s1, 0.871
[/code]

Thinking I’ll use the “hostname” for the student’s blog as the “site number”. Maybe just the first four letters of it. Just to keep student anonymity.

Questions for later

  1. Can I modify the file format to include with each “friend” connection a date?

    The idea is that the date will represent when the post was made. Using this I might be able to generate a visualisation of the connections over time.

  2. Is there value in also mapping the connections to the other links within the students’ posts?

    Could provide some idea of what they are linking to and help identify any interesting clusters.

The data

The database I’m maintaining contains

  • All the URLs for the students’ blogs.
  • All the posts to the students’ blogs.

I already have a script that is extracting links from each of the student blogs, I just need to modify this to count the number of connections between student blogs… a bit easier than I thought it might be.

Now a quick script to generate the GDF file. Done.
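
The generation itself is straightforward. A sketch, assuming %label maps each anonymised site id to its label and %links counts the links between sites (both names are mine, not the script’s):

[code lang="perl"]
# write out a minimal GDF file: node definitions then weighted edges
open my $fh, '>', 'blogs.gdf' or die "Can't write blogs.gdf: $!";

print $fh "nodedef>name VARCHAR,label VARCHAR\n";
print $fh "$_,$label{$_}\n" foreach sort keys %label;

print $fh "edgedef>node1 VARCHAR,node2 VARCHAR,weight DOUBLE\n";
foreach my $from ( sort keys %links ) {
    foreach my $to ( sort keys %{ $links{$from} } ) {
        print $fh "$from,$to,$links{$from}{$to}\n";
    }
}
close $fh;
[/code]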

Using Gephi

This is where I’m hoping Tony Hirst’s instructions work with a minimum of hassle.

That’s a bugger. Empty menus on Gephi. It’s not working. Is it wrong of me to suspect Java issues?

Going back to an older version. That’s worked, but I haven’t installed it yet into Applications. 0.8.2 seemed to have worked previously as well. Get this done and figure it out later.

File opened. We have a network.

001 - First Graph

Tony Hirst then removes the unconnected nodes. I’d like to leave them in as it will illustrate the point that the students need to connect with others.

The Modularity algorithm doesn’t seem to be working as I’d expect (on my very quick read). It’s finding 200+ communities. Perhaps that is to be expected given that most blogs are connected by one or two links and that I haven’t removed the unconnected nodes. Yes, it works much better if you do that.

A bit more playing produces the final result above.

Many of our students are neither digital natives nor digitally literate

Yesterday I attended a session with three different presentations focused around “student voices and their current use of technologies at USQ”. There was some very interesting information presented. However, I have a few reservations with aspects of the research and especially with some of the conclusions that have been drawn. I’m hoping to reflect upon and post more about this when I have some time. But an experience just now reinforced one of my reservations and is worth a short sidetrack.

A largish survey of students found that all of the students had some form of access to the Internet. Given that it was an online survey, this is perhaps not surprising. But that’s not the problem I particularly want to address here.

The big reservation I have is that one of the conclusions drawn from this work was that our students are “Digitally literate and agile”. My experience suggests that this is not the case.

My experience

The course I’m currently teaching has 300+ students, mostly 3rd year Bachelor of Education students, spread throughout Australia and the world. 200+ of these students are online students. i.e. they generally don’t set foot on a campus. The course itself is titled “ICTs and Pedagogy” and as you might expect we’re pushing some boundaries with the use of ICTs. Attempting to model what we espouse. Some examples of that include, amongst others

  • Students are required to set up their own blog and use it as a reflective journal.
  • They are required/asked to sign up for Diigo and join a course Diigo group
  • We’re using Diigo’s annotation facility to mark up online readings and the assignment pages.
  • They are encouraged (but not required) to join Twitter.
  • We use Google docs for shared content creation.
  • The course Moodle site is heavily used including the discussion forums leading to a lot of email traffic (this is picked up below).
  • We use a range of ad hoc activities to demonstrate different ICTs.
    e.g. a version of the “weather Flickr” activity of @courosa

Many of the students, especially the online students, have been studying online for 3+ years. Some of the earlier courses these students have completed encourage them to engage with different ICTs e.g. developing a webquest or creating digital stories.

So, obviously these students are “digitally literate and agile”?

Some gaps in digital literacy

Here’s a quick list of some of the questions/problems students have asked/had in the first three weeks of semester

  • Not knowing what their university provided email address is.
  • Not knowing to look in the junk folder for a confirmation email.
  • Not knowing how to add a link using one of the WYSIWYG editors provided on web-based services such as Moodle, WordPress etc.
  • Not knowing that you have to “right/ctrl” click on a link to download a file rather than display it in the web browser.
  • Not knowing about email filters.

The last point is a big one. It’s fairly common for students taking four online courses to get a lot of email from the discussion forums for those courses. This flood of email messages takes over the Inbox and leads to confusion and missed information. Many of these students have been experiencing this for 3+ years. Yet, almost none of them knew about email filters.

Perhaps one of the most successful learning activities in the course was a Google doc that was created for the students to list the problems they were having with the course and any suggestions they might have for tools or practices that might help solve those problems. The following image is a screen shot of a section of that document about email filters.

email filters

This particular activity was combined with reading about Toolbelt theory and encouraging the students to start building their toolbelt. To have them start taking control of their problems and identifying how they can solve them.

Not digital natives

Perhaps the major mistake (one of many) I made in the design of the first few weeks of this course was that I assumed that the students were far more digitally literate than they appear to be. Not only that, as someone who is fairly “digitally literate”, I assumed that they would have the experience/knowledge to be able to implement the “Tech support cheat sheet” without much prompting.

Lessons

When we are designing our learning experiences we (i.e. I) cannot assume that the students are “digitally literate and agile”. I need to give more thought to scaffolding these experiences and perhaps exploring ways to better help them develop what @irasocol describes as an essential survival skill

knowing how to pick the right tool for the job and moment, how to use that tool well, and how to find new tools

I also think there’s a lesson here about research methodologies. Methodologies – e.g. surveys and focus groups – that capture reports from people divorced from the actual activity (e.g. the “current use of technologies at USQ”) are going to overlook important insights. That limitation has to be kept in mind when drawing conclusions and recommendations for action.

The absence of a search function – my current big problem with a Moodle installation

Consider this a plea for suggestions. In particular, consider it a plea for workarounds that I can implement quickly (and painlessly).

The problem

I have a Moodle course site. It has a range of activities, many with a page or two of text that sets the context and explains the task. The image below shows what the activities for one week look like.

Week 1 learning path

Now this works fine if a student works sequentially through the activities. It tracks what they’ve completed etc.

It fails miserably when they want to revisit the page about “X”. They have to remember in which week “X” was talked about, under which activity “X” was addressed.

I have problems doing this and I wrote the stuff.

The “web way” solution

If this was any other website, we’d follow the advice of Jakob Nielsen

Search is one of the most important user interface elements in any large website. As a rule of thumb, sites with more than about 200 pages should offer search.

The “web way” solution would be to have a search engine. But the Moodle installation of the University I teach the course for doesn’t appear to provide this functionality. I believe the only way this can occur is to allow Google to have access to all courses on the site. While there may be reasons for this, it’s not a solution I’m pushing just to solve my problem.

How can I provide my students with a search function? How can I make my course site “of the web” and not “on the web”?

I have heard mention made of repositories as the answer, i.e. Moodle is not a content hosting platform and doesn’t try to be; if you want searchable content, place it in a repository. The trouble is we’re not talking here about large documents, just a lot of small pages that are closely wrapped around specific learning activities in Moodle. I’m yet to see a repository integration that works as seamlessly as I’d expect.

My interim solution

In the absence of any brilliant ideas, it appears that the only way to do this is to create a duplicate website that is actually “of the web”. i.e. one that is indexed by Google. I’m thinking probably a blog with pages set up to match the weeks and other components.

Some have suggested providing the pages as a PDF document (or three). The problem with this is that there is web content (videos, animations etc) embedded throughout. Producing a print document would allow folk to search, but then they wouldn’t have access to the web content (unless they clicked on a link etc).

Producing a second website is by no means a perfect solution, some of its limitations include

  • Extra workload for me.
  • Large potential to create confusion amongst the students
    e.g. which website do I visit? Which website has the correct content? Do I need to check both websites?
  • Loss of some Moodle functionality.
    The course currently uses the Moodle activity completion functionality to allow students to track their completion, but also as part of the assessment. If students start working through the blog version of the website it will lead to “But I already did that activity!” problems.

Surely there has to be a better solution?

How much of a cage should I build?

Just how much of a cage should I make my course into? How far should I take the constraints? The following sets the scene and asks the questions. Would love to hear alternate views.

The course

The course I teach has 300+ students spread throughout much of Australia, parts of Asia and perhaps other parts of the world, studying both on-campus and online. It has folk who will be teaching everything from Early Childhood through to TAFE/VET, and everything in-between.

The course website is a Moodle site. Each week the students have a list (perhaps too long a list) of activities to complete (see the image). To help them keep track of what they have and haven’t done the Moodle activity completion functionality was used. That’s what produces the nice ticked boxes indicating activity completion.

Week 1 learning path

Building on this, part of the assessment of the course is tied to how many of these activities they complete. Activity completion is actually linked to keeping a learning journal and contributes 15% of the total course mark (5% for each of the 3 assignments).

Task corruption

Given the pragmatic nature of students today, and especially given the perceived large amount of work in the first week, it was not surprising that a bit of “task corruption” is creeping in. I assumed that some students would figure out that for the activities that are “web pages” with exercises, simply visiting the page would tick the box.

But there are other activities that require students to post to a discussion forum with some form of artefact. For example, a description of a resource found on Scootle and a description of how it links to their curriculum. These are posts that I see and try to keep track of. So a bit of a deterrent?

Turns out perhaps not. I’m starting to see the odd post that either purposely (or not) doesn’t address the task, but is sufficient to be recorded as an activity completion.

The question is whether or not I should be actively policing this?

The trade-off

I don’t want to set myself the task of being a policeman. But perhaps I need to implement some penalties here, some options might include:

  • A gentle warning, at least initially.
  • Warn the student and delete their activity completion for the given activity (i.e. do it again).
  • Deduct marks for task corruption.

    Of course, there will always be the “but I misunderstood the task sir” excuse.


A purely pragmatic reason against doing this is that it will take a lot of work to police. I’ve also already expressed some reservations about what it means to impose new learning strategies on a group of learners, and that’s certainly something the course is currently doing.

We’re also talking about 3rd year University students, shouldn’t they live with their choices? If they don’t engage in these activities I do believe they will learn less and perform worse on the other assessment.

Then there’s the question of the students who are engaging with the activities and may potentially be receiving the same marks as those who have engaged in task corruption. I’m sure both sets of students would have a view on that.

Perhaps I should just mention this to the students to discourage (though not prevent) this practice?

Thoughts? Suggestions?

And it starts again, edc3100 in 2013

It’s that time of year again – week 1, semester 1 – and after almost three-quarters of a year there are face-to-face students to tutor and lecture. Have to love the pedagogical assumptions built into the fabric of the technology that is a University education (a good example of technology becoming mythic). The following captures a few thoughts from the first lecture/tutorial.

The need to define ICTs by example?

The first is the need to define what ICTs actually are. The tutorial reinforced that this was a good idea and that perhaps there needs to be a bit more of it.

Apparently, the Google doc into which we’ve been asked to contribute the ICTs we’ve seen and used includes a laminator. At least that’s the report from some of the students. I think this is perhaps one of the flaws of a few of the activities this week. The students are being asked to contribute, but they aren’t necessarily getting feedback (good or otherwise) on those suggestions.

Wondering if there’s an online tool we could use to have different folk sort a list of technologies into ICTs and not-ICTs? Do it collaboratively so you can see what others have said, see what the expert said and perhaps raise a challenge, i.e. give the argument why you think X does/doesn’t belong to a certain category. A crowd-sourced answer perhaps, to save having the expert give an answer?

This also suggests the potential need for more work around the students discussing their understanding of pedagogy and the combination/integration of ICTs and pedagogy.

Time, repetition and something unique

The lecture went longer than I thought, a standard worry. More importantly it has me wondering about how to distinguish between the online and the on-campus cohorts. The course site has the collection of activities and resources that I want the students to engage with. I don’t want to create something brand new for the on-campus students (workload, and the online students would miss out) but there’s the need to make appropriate use of the f-t-f medium.

Blended learning as a concept doesn’t seem to fit too well. The activities for the solely online students are designed for them. “Blending” those activities into the on-campus lectures/tutes is difficult because of variability amongst the online students. Some have worked through all or most of the online activities leading to repetition and boredom. Other students haven’t looked at it yet. Blending appears to require a more fixed specification of what is done online and offline. This static, fixed approach doesn’t cater well to student variability.

Will need to think more about how to better “blend” the online and the offline when the online is designed to be stand alone.

There’s more to a PLN than technologies

So far the students are starting to use blogs, Diigo and even Twitter, but I’m not sure most of them are really building a

  • personal – one that is unique to them and is designed to respond to their needs.
  • learning – with a focus on learning about ICTs in teaching.
  • network – with an appropriate size and variety in the network connections.

Next week will need to build on these.

1000 blog posts – a time to look back

According to the WordPress dashboard for this blog this is the 1000th published post (I have 100-odd drafts that I never finished or thought better of posting). Given I’m about to mark my first year in a new job in a new institution in a new region, undergo my annual performance review and commence a new academic year, it would seem time to reflect and think about the future.

But first, thanks to those folk who have read and contributed to the blog over the years. Much of the good from blogging has arisen from those connections.

Reflections on this process

A few reflections from writing the summary below.

The staying power of bad ideas

Many of the problems I see with institutional attempts to support quality learning and teaching remain. Mainly, I would propose, because it’s easier to accept the simple practice everyone else uses than try to address the known problems. Examples of this include strategic management of universities (perhaps the fundamental cause of the rest), attempts to make teaching conform to a standard (quality through consistency), and the reliance on end-of-semester student evaluation to determine the quality of teaching and the teachers.

Purgatory was the best time to blog

2008 through 2010 was the best time for blogging. There was more engagement with and reflection upon the literature. Need to get back to that.

Perhaps it’s the workload associated with life as a contemporary university academic: lots of centralised hoops, poor systems etc.

I still think the quality of my blogging – in terms of the writing, the referencing, the depth, the contribution, the connections – could stand some improvement.

Workload implications

My blogging last year wasn’t great. Certainly less (quantity and quality) than in 2008-2010. One reason for that is workload. The workload of a contemporary Australian academic in a regional university is high. In terms of the workload and the nature of the work there are certainly much worse jobs (but to some extent that evaluation is fairly subjective). But the problem with academia in this context is that the workload is increasingly driven up by management mandates at the expense of research/thinking time. Consequently academics are getting in trouble for not producing research outputs. To make it worse, some of that workload is being created by the poor quality of the institutional systems being put in place to help meet those mandates.

What to do

I’m not going to bother with grand plans. To some extent it’s wasted effort. You can’t predict what is going to happen, at best you can only create the capability to respond.

I need to blog more and to think about different approaches to improving the blogging. I need to focus more and stop retreating to some of the same topics. I need to start producing some work that moves some of these forward a bit. I need to connect and comment more, get out of my shell.

Perhaps I need to do more work to address the workload implications. Give myself some space by exploring solutions to the workload problem, especially around learning and teaching.

Origins

The first blog post was written in March 2006 when the blog was on a self-hosted version of WordPress (most of the links to blog posts within those early blog posts won’t work, they still point to the old host). So it’s taken almost seven years to clock up 1000 published posts.

At that stage I was thinking about using blogs in the teaching of C++ to first year Information Technology students. The first few posts are little more than experiments trying to find out how WordPress worked and what it felt like as an author/commenter.

It’s not until the 4th and 5th posts (in August 2006 suggesting the initial plan died) that I start storing away a bit of knowledge. The 5th post has a couple of Drucker quotes which still resonate strongly with me and perhaps should inform the “thinking about the future” part of this post. e.g.

“Planning” as the term is commonly understood is actually incompatible with an entrepreneurial society and economy… innovation, almost by definition, has to be decentralized, ad hoc, autonomous, specific and microeconomic.

Not long after that is the first post about BAM (which I’ve just edited to point to the new home for BAM/BIM). It was actually being used at this stage by a Masters-level course in Information Systems, some of that story is told in this paper.

That experience led to the first mention of Web 2.0 course sites in September 2006. Given this 2013 post, this appears to be an issue that remains open. Somewhat surprisingly a similar post from that time shows I was reading Steve Wheeler’s (@timbuckteeth) blog years ago.

It would seem that late 2006 saw me on quite the kick around this idea.

It’s also when I started work on “The missing Ps”, an attempt to develop a framework for thinking about technology adoption (and what was missing) in universities. This ended up with my first presentation of the idea, which is also my first Slideshare presentation. That eventually morphed into the Ps Framework and became the lens for the analysis done in the literature review of my thesis.

The move to middle management

In February 2007 I started a new job. For various reasons this was not a long term role, nor was it from a number of perspectives a successful one. But it did commence my move away from being an Information Systems/Technology academic, which given shrinkage in those disciplines over recent years is probably a positive.

The group did some interesting things. I got my chance to help implement a Web 2.0 course site and I blogged a bit more. Time in that job reinforced the silliness of much of what central L&T organisations/management do to encourage quality learning and teaching. I also participated in the ASCILITE mentoring scheme in 2007.

I did propose the idea of extreme learning during 2007. We also got involved in PLEs. Neither idea really went far at the institution.

We also engaged in a bit of an exploration of what students find useful.

Purgatory and the PhD

By the end of 2008 the writing was on the wall. It appeared likely I’d be made redundant and, for other reasons, I decided it was time to move my website into a WordPress blog. The self-hosted website I’d started 14 years previously had to go away.

What perhaps irked me most about that move was the Google ranking. In 2008, the website was 7th in a Google search for “david jones”. Given the commonality of the name, this was quite nice. I wonder where it sits now? (I realise that Google’s search algorithm etc has changed considerably over time.) After 16 pages of Google results I won’t go any further. The “David Jones” chain of stores consumes most of those. “david jones -store” removes most of those, but still no me. Of course, the blog is hosted in the US and its hostname is davidtjones etc. A Google search for “david jones blog” returns my blog as the 3rd result.

Late 2008 saw us purchase a new bull.

Wandilla Zanzibar - Big Z

By the end of 2008 I did get to visit Paris with my wife. She presented a paper and I got an award which had more to do with my esteemed PhD supervisor (as it happens I’ll meet up with her again today) than me.

Eiffel from Trocadero

During the Paris trip I found out that I was getting a redundancy, but it wasn’t entirely clear what I would be employed as.

It was during this time that I first posted about the silliness of L&T evaluations, academic staff development, and minimum standards for course websites. Things which, four or five years later, have changed little.

During this time I did finally start working fairly consistently on the PhD, which did eventually get finished. I also worked and thought more about BAM and BIM. Must get back to the idea of cooked feeds for BIM. Other vague ideas and interests from that time include: reflective alignment, task corruption, the myth of rationality, the fad cycle and management fashions, the grammar of school, nudging, and the Chasm.

Early 2009 saw us become breeders of race horses!!

Malina - the new money burner

July of 2009 also saw the commencement of work on BIM.

More importantly, it was this period that really saw the growth of my PLN, e.g. this post which mentions Mark Smithers (@marksmithers) and Claire Brookes (@clairebrooks).

It also saw the start of the Indicators project, our little foray into “learning analytics”.

By the end of July 2010 I was finally made redundant.

Student life

Initially I became a full-time PhD student and a Dad.

Kronosaurus corner

By the end of 2010 I’d just about finished the PhD and enrolled to become a high school teacher. By January 2011 the thesis was finished, by May it was accepted and graduation was July.

Dr Jones

While I kept blogging about educational technology stuff, most of the blog for 2011 was spent reflecting on what I was doing in the Graduate Diploma of Learning and Teaching. The experience as a student of institutional e-learning was interesting.

And then in November I had the opportunity to return to “life” as a University academic.

Academia redux

So almost a year ago today I started life as an education academic. Still feel like an immigrant, which is probably a good thing. A year of teaching other people’s courses will be redressed somewhat this year.

The blog posts last year focused on understanding the courses I was teaching, grappling with the current state of institutional e-learning, a bit of work on BIM, and sharing a bit of research thinking and writing.

Some statistics

  • 2012 – Visits: ~83,000; 92 posts
  • 2011 – Visits: ~70,000; 142 posts
  • 2010 – Visits: 59,880
  • 2009 – Visits: 48,482
  • 2008 – Visits: 3,181
    Moved to WordPress.com in October 2008.

Taking a look at the "Decoding Learning" report

Late last year Nesta – a UK-based charity – released the report Decoding learning: The proof, promise and potential of digital education. Nesta commissioned the London Knowledge Lab and the Learning Sciences Research Institute at the University of Nottingham to “analyse how technology has been used in the UK education systems and lessons from around the world. Uniquely, we wanted this to be set within a clear framework for better understanding the impact on learning experiences”.

The following is a summary and some reflections on my reading of the report. I’m thinking of using it as a resource for the course I’ll be teaching soon.

If you’re after a shorter summary, the Nesta press release might provide what you’re looking for.

Reflections

While there appears to be some value in the learning themes, I thought there were some definite grey areas in terms of which innovations were allocated to particular themes.

That said, the collection of examples of technology use divided into these themes provides what I see as a very useful resource for pre-service teachers. It gives them a taste of what is possible and what good uses of technology look like. This is something I think might be valuable in the early days of the course. Start with concrete examples, before getting into the theories and the planning.

There’s also the idea of using this list and the themes as the foundation for the co-construction of some sort of database or site of examples: a list students could add to through their explorations, perhaps later expanding on each of the examples by suggesting what learning theories, curriculum elements, year levels, etc might be relevant to each.

There are also a few other points apparently useful for a pre-service teacher thinking about ICTs (i.e. they reflect some of the limitations of thinking about ICTs that I saw last year)

  • Starting with the learning theme, rather than the technology.
  • The point about linking learning activities across themes and experiences to reinforce learning and other plusses.
  • The importance placed on context. The ecology of resources model may be useful in scaffolding some thinking.

Chapter 1 – Introduction and scene setting

Key questions for education

  • Has the range of technologies helped improve learners’ experiences and the standards they achieve?
  • Is this investment just languishing as kit in the cupboard?
  • What more can decision makers, schools, teachers, parents and the technology industry do to ensure the full potential of innovative technology is exploited?

Digital technologies have a profound impact on management of learning but “evidence of digital technologies producing real transformation in learning and teaching remains elusive” (p. 8)

“Our starting point is that digital technologies do offer opportunities for innovation that can transform teaching and learning, and that our challenge is to identify the shape that these innovations take.” (p. 8)

There has been much research. “synthesising reviews do find some evidence of positive impact”, but there are two complicating factors that limit these findings

  1. the evidence is drawn from “a huge variety of learning contexts” (p. 9).
  2. “findings are invariably drawn from evidence about how technology supports existing teaching and learning practices, rather than transforming those practices” (p. 9)

Learning themes

Based on the learner’s actions and the way they are resourced and structured, the report is organised around eight learning themes

  1. Learning from experts.
  2. Learning through inquiry.
  3. Learning with others.
  4. Learning through practising.
  5. Learning through making.
  6. Learning from assessment.
  7. Learning through exploring.
  8. Learning in and across settings.

Research process

Used both research and “grey” (blogs etc) literature

  • Review of the last 3 years of academic sources – 1000 publications – from which 124 research-led example innovations were chosen. Relevant reviews and meta-reviews were included.
  • Informal literature identified 86 teacher-led innovations from a pool of 300.
  • The combined 210 cases form the basis for the report.
  • Used a comparative judgement method/tool (described in an appendix) to have 150 innovations ranked/compared by a group of experts (a sketch of the general idea follows this list).
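
As an aside, comparative judgement is conceptually simple: experts repeatedly pick the “better” of a pair of innovations, and those pairwise wins are folded into a ranking. Below is a minimal sketch of that general idea using a simple Elo-style update. The report describes its actual method in an appendix, which may well differ; the innovation names and the K value here are invented for illustration.

```python
# Generic comparative judgement sketch: turn expert pairwise judgements
# ("A was better than B") into a ranking via Elo-style rating updates.
K = 32  # update step size (assumed, as in chess Elo)

def update(ratings, winner, loser):
    """Shift ratings after one judgement in which `winner` beat `loser`."""
    expected = 1 / (1 + 10 ** ((ratings[loser] - ratings[winner]) / 400))
    ratings[winner] += K * (1 - expected)
    ratings[loser] -= K * (1 - expected)

# Hypothetical innovations, all starting from the same rating.
ratings = {"Innovation A": 1000.0, "Innovation B": 1000.0, "Innovation C": 1000.0}

judgements = [("Innovation A", "Innovation B"),
              ("Innovation A", "Innovation C"),
              ("Innovation B", "Innovation C")]
for winner, loser in judgements:
    update(ratings, winner, loser)

# Highest rating = most preferred by the expert panel.
print(sorted(ratings, key=ratings.get, reverse=True))
```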

There is an Excel spreadsheet with the top 150 innovations, including URLs.

Chapter summary

  • Chapter 2 – discusses evidence of innovation in each of the learning themes.
  • Chapter 3 – how the eight themes are related and how they can be linked by technology to produce a rich learning experience.
  • Chapter 4 – looks at how learning context shapes the impact of new technologies on learning.
  • Chapter 5 – identifies what needs to be done if innovative and effective uses of technology in education are to happen.

Chapter 2 – Learning with technology

Using the learning themes, this chapter explains each type of learning and then presents examples from the 210 innovations with the greatest potential.

  1. Learning from experts.
    Highlights are

    • The increasing wealth of online resources offers great potential for both teachers and learners; but places great demands on both to evaluate and filter the information on offer.
      So YouTube videos, e-books etc fit here.
    • Innovations in Learning from Experts have tended to focus on the exposition of information rather than fostering dialogue between teachers and learners.
    • Digital technologies offer new ways of presenting information and ideas in a dynamic and interactive way. However learners may need the support of teachers to interpret those ideas and to convert that information into knowledge.
    • New forms of representation (e.g. augmented objects) offer the potential to enrich the dialogue about information between teachers and learners.

    The Mathematics Image Trainer (described in this paper, which offers some theoretical/pedagogical rationale for why this type of approach is important for learning mathematics) is an example. Luckin et al describe how it allows the teacher to focus on asking the student to explain what they think is happening. Hence the innovation is framed as “a powerful tool to enhance discussion between the teacher and the learner”.

    Aside: I wonder how hard this would be to implement using ARSpot. That might make it more widely available, since it would only require a Windows computer with a camera and the ARSpot print-outs, rather than a Wii or similar.

    Tutorial and Exposition are the two kinds of interaction between learner and teacher. Mentions Bloom’s suggestion that one-to-one teaching is the most effective way to learn. These methods represent traditional approaches to teaching and are what many examples of technology build upon (e.g. Khan Academy).

    Mentions lots of different examples.

  2. Learning with others.
    Considerable enthusiasm. But academic research is not filtering into the classroom. Teachers’ awareness of tools needs to be raised. “Priority should be given to developing tools that allow teachers to organise and manage episodes of joint learning.”

    Identify four social dimensions

    1. collaborative – help learners develop mutual understanding.
    2. networked – help learners interact.
    3. participative – help learners develop a strong community of knowledge.
    4. performative – allow the outcomes of collaborative learning to be shared.

    Three promising areas for development:

    1. representational tools that enable activities taking place to be presented to other learners;
      e.g. technology-enhanced spaces for acting; tools for capturing and sharing on-going achievements…
    2. scaffolding tools that provide a structure for learning with others.
    3. communication tools that support learners working at a distance to collaborate.
  3. Learning through making.
    Making and sharing is “one of the best ways people can learn”. One example is the construction of an environmental sensor and linkage with a mobile phone app.

    Highlights of the section are:

    • Success rests on two principles: learners must construct their own understanding; and create something they can share with others.
    • Digital technology can bring it alive by making it possible to construct just about anything and share, discuss, reflect and learn.
    • The motivational aspect and benefits of producing real-world outcomes of learning through making are frequently cited in teacher-led innovations.
    • Depends on the appropriate use of digital tools in suitable environments.

    Mentions Papert and constructionism. Links to Logo and computer programming. Mentions Maker Faires etc.

    Most of the innovations are teacher-led, apparently little research.

    • Examples help learners construct notes and other material to improve their learning: electronic outlining tools, learners developing presentations based on the information they collected during visits.
    • Scratch is mentioned, also blogging and storytelling through Web 2.0 applications. ZooBurst creates 3D pop-up books with augmented reality features, a bit like ARSpot. But also a bit more than that.
    • 3D printing gets a mention.
    • The need for teacher support: they help students learn how to use technology critically, link multiple representations, and make the connections between individual learners’ constructions and whole-class understanding.
  4. Learning through exploring.
    Learners have always browsed, but information is now abundant. New skills/strategies are needed. Technology can help: 3D simulations, visualisations, technology-augmented spaces. Found few examples of innovations in this theme. Gives electronic blocks as one example.

    Includes work where learners search or browse information or engage in playful, game-like interactions. It can be opportunistic or more structured.

    • two principles: learners are given the freedom to act; they need to regulate their own actions (which is itself an important skill for learning).
    • Digital tools provide new ways to explore information and structure the environment to explore.
    • Limited research studies suggest it is underused and undervalued.
    • The few examples were of high quality suggesting potential.
  5. Learning through inquiry.
    Exploring the natural or material world by asking questions, making discoveries, and rigorously testing them. Technology may help organise inquiry or connect learners’ inquiries to real world scenarios.

    Enables learners to think critically and participate in evidence-based debates. More structured towards an end than learning through exploring. Seen to include: simulation, case-based learning, problem-focussed learning and scripted inquiry. The degree of structure varies.

  6. Learning through practising.
    Perhaps the most contentious application in some areas, but probably the most used. Helping learners practise skills. Most effective when a variety of representations and interactions are used and it doesn’t “simply sugar-coat uninspiring or unchallenging activities”.

    Practice builds foundational knowledge to be used in other contexts. Use of tech in this sphere is rarely seen as innovative, but plusses include rich multimodal environments used to create challenging problems and appropriate feedback.

    Zombie Division is given as an example, which leads me to the “Serious Game Classification” site. Another example has kindergarten students using a digital dance mat to practise/compare number magnitude. Light-bot is nice; it appeals to the inner programmer in me.

    Hello programmed instruction.

  7. Learning from assessment.
    Being aware of what a learner understands is fundamental to increasing their understanding and knowledge. Technology can help: compile learning activities and enable both teachers and learners to reflect upon them; track the progress of learning and present that information in rich and interactive ways. There is little innovation in technology-supported assessment. Research innovation is modest. Most innovation focuses on self-assessment through reflection, rather than being teacher-led. Most innovation is based on summative assessment of traditional subjects. More work on formative assessment and assessment of other skills is required. Suggests learning analytics holds promise. Also e-assessment using social networks and other technologies that facilitate peer, collaborative and self-guided learning.

    The Subtle Stone is used as a way to gain insight into students’ emotions.

  8. Learning in and across settings.
    Context of learning plays an important role in the quality of learning. Knowledge is deepened when applied across different locations, representations and activities. Technology provides a variety of devices to capture, store, compare and integrate material from a variety of settings.

    Key success factors

    • Understanding what parents really need in order to get them involved;
    • Recognising that activities designed for school are not necessarily transferable to the home (and vice versa);
    • Providing on-going support and ensuring use of technology at home is purposeful.

    Purple Mash is used as an example of transferring learning between home and school. Augmented reality for field trips gets a mention, as do uses of mobile devices to support field trips etc.

Chapter 3 – Bringing learning together

“To achieve a more rich, cohesive, and productive learning experience, we must consider the links that exist between different learning activities within and between themes.” Provides learners with a coherent episode. Reinforces learning and strengthens future learning.

Suggests the following

  • Learning themes are made up of
    • Learning activities (e.g. creating an animation) which are connected/embedded across different themes into
    • Learning episodes (e.g. lessons, projects, units) that are linked/sequenced to create a
    • broader Learning Experience at class, school etc. levels.

Linking learning activities

57% of examples encompassed two or more forms of learning. Some had different learning activities within the one theme, others had learning activities across multiple themes. Often there would be a primary theme with others used in a supporting role.

Learning through making, learning with others and learning through exploring were most often used in a supporting role.

Chapter 4 – Context is important

Context is crucial for success with technology. Realising the potential of digital tools is contingent on how we use them and the context of learning.

Uses one of the authors’ models – Luckin’s Ecology of Resources – which essentially

  • Has the learner surrounded by
    • Environment;
      Most examples from formal schooling – primary and secondary. The classroom may have specialist equipment/expertise that makes it easier. Digital tools tend to be usable in many environments.

      All learning environments have formal/informal rules for behaviour of teacher and student. This can limit technology use. Existing infrastructure may also limit it.

    • Knowledge and Skills;
      The way knowledge is organised shapes learning. e.g. separation in disciplines. Certain learning activities better suit some subjects. The whole question of what is knowledge is also a factor.
    • People;
      Teachers have a role to play in having innovations succeed. PD is an issue. Peer learners also have an impact on learners. Technology can help. Not to mention other people within the school – technical staff, teaching assistants, leadership etc. – and the broader community.
    • Tools.
      Breaks digital tools into hardware, applications, networks and platforms. Mentions infrastructure. Cloud computing. Thin clients (dead already). BYOD not mentioned. Lists three factors that can constrain wider adoption: cost, complexity, safety.
  • Between those and the learner are a set of filters.
  • Understanding these helps predict likely impact of technology and help roll them out.

Chapter 5 – Bringing research into reality

Understanding how technology can be used to improve learning is only part of the answer. Systemic challenges need to be addressed.

Learning from the evidence

Repeats the adage that technology alone won’t improve education, “we need to make better and more creative use of them” (p. 59)

The most compelling opportunities to improve learning through technology are

  • Improve assessment.
    “too little innovative technology-supported practice in the critical area of Learning from assessment“. Don’t restrict it to the end of a learning episode and don’t make it “dull or dispiriting”. Learning analytics, adaptive assessment and the potential for instant statistics, knowledge maps, class data and badges. Also, how to assess knowledge and skills such as collaboration and leadership.
  • Learn by making.
    Lots of digital tools being used in making. Coding, robotics kits etc. But “careful consideration needs to be given to how the process of making leads to the desired learning outcome”.
  • Upgrade practising.
    Probably the longest-standing and most popular use of technology. “But not all types of practice are equally beneficial”. It is most effective when it involves rich, challenging problems with appropriate feedback, rather than easy activities. The challenge here is in determining which types are most effective, for whom and in what context.
  • Turn the world into a learning place.
    Most learning happens in school, and escaping the constraint of location is not simple. But digital tools enable this. They can “link learners with other learners, experiences and settings”. We need to stop thinking of learning as taking place in isolation, in schools.
  • Make learning more social.
    Promote better teacher/student discussion and learner/learner discussion. Use technologies to create audiences for participatory or performance activities.
Key priorities for technology in learning

  • Link industry, research and practice.
    The gap between these groups is problematic. There are advantages to all three if connected. Role of government and other stakeholders. Informal connections help, but formal connections are required.

    The role of context in research also needs to be documented to enable comparisons.

  • Make better use of what we’ve got.
    While access to technology is important, an emphasis on hardware limits examination of other opportunities.

    Teachers need to move to a “think and link” approach where tools are used in conjunction with other resources and a variety of learning activities. Teachers need to be able to digitally “stick and glue”. Teachers need ways to share ways of using new technologies.

  • Connect learning technologies and activities.
    “Linking learning activities and using a variety of technologies and approaches” can lead to a richer experience. “Focusing on individual learning activities with single use technologies will not achieve the maximum impact”.

    But the tools aren’t there yet.

Making some "3100" thinking explicit

In around two months a couple of hundred pre-service teachers will be wanting/required to start engaging with the course EDC3100, ICTs and Pedagogy. For the last 3 or 4 months I’ve been reading a bit and generally mulling over what I’ll do and how far to go. Back in July I started off with this initial post. It’s now (past) time to make some of this explicit, make some design decisions and implement it. This is the start. Littered through the following will be questions and reminders to myself for further consideration.

If you have thoughts and criticisms, now would be a good time to contribute. Any and all feedback is very much appreciated. Feel free to make suggestions in response to the questions I’ve left for myself.

This post has been an on-going development process over a week or so. It’s a somewhat organised collection of thoughts, but it is time I stopped following leads, posted it and started seriously thinking about the implementation specifics. That will be next week’s task.

The rationale

Lisa Lane writes in this post about what she sees as the purpose of MOOCs

the opportunity to exploit the opportunities of the web, to form learning communities, to blow apart top-down teaching models, and to create something meaningful and valuable to participants.

While EDC3100 will not be a MOOC, Lisa’s description resonates strongly with what I’m trying to do and what’s influencing my current thinking. Here’s a bit more of the rationale.

Transformation

EDC3100 aims to help pre-service teachers figure out how to design learning and teaching in “ICT enriched environments”. Larkin et al (2012) suggest that how academics model the use of ICTs in learning and teaching is an important factor in developing the ICT competence of students. Our experiences shape us. The use of ICT in the courses taken by pre-service teachers is shaping their knowledge of ICTs and Pedagogy. This raises the question of what types of experiences with ICTs for learning students gain from EDC3100, and whether they could be better.

How do you evaluate “better”? Well, one approach is to use something like the SAMR model (see the following image). There are similar frameworks/models, including the RAT Framework and the Computer Practice Framework, but the story is just about the same. Using SAMR, most of the use of ICTs in prior offerings of EDC3100 sits within the enhancement stage.

SAMR model image from Jenny Luca’s TPACK and SAMR blog post

Is it any wonder then that much of the use of ICTs we see when our students head out to teach also appears at that same level? Certainly what they see in 3100 is not the only, or even the major, reason the students struggle to be transformative (3 weeks in a new school in someone else’s class working to their plans isn’t a great context for transformation) in their use of ICT and pedagogy. However, it would seem important for a course like 3100 to provide them with an opportunity to observe and participate in some examples of transformation.

Question: How do we provide students with the preparation and opportunity to demonstrate/experience the design of transformation with ICTs?

Question: A model like SAMR might solve a problem observed last year, where pre-service teachers weren’t aware that there was more to ICTs and pedagogy than using an Interactive White Board to show a YouTube video. Or that having the students use Popplet to create a mindmap perhaps wasn’t a great advance over butchers’ paper and pens. Are there better alternatives to SAMR? The Computer Practice Framework adds a few extra considerations linked to common mistakes.

Building confidence and experience

Hammond et al (2009) identify personal experience of using ICT as an important contextual factor explaining why a pre-service teacher will use ICTs. This experience (Hammond et al, 2009, pp. 70-71)

gave student teachers the confidence to use ICT, allowed them to develop effective strategies for learning new skills and gave them an awareness of the value of ICT based on its application in their own learning

Given this, it would appear important for EDC3100 to provide a space for the pre-service teachers to expand their experience with ICTs and develop their confidence with them. To provide them with a foundation.

Hopefully we can help them develop their tech support skills and perhaps build on the advice from xkcd.

xkcd comic strip “Tech support cheat sheet”

Barton and Haydn (2006) found that the sheer volume of information about using ICT and pedagogy can overwhelm students. The volume of information, or its rate of increase, is not likely to have significantly reduced since 2006. Consequently the course has to help students develop the confidence and experience to deal with this volume. We have to start their journey towards becoming, as @palbion describes it, expert learners.

Question: Where are the good insights, models, theories etc around designing and scaffolding people’s development of these skills? Information literacy?

Reflection and feedback

Hammond et al (2009) quote a range of authors to identify structured reflection on the use of ICT as a powerful technique in developing pedagogic understanding. Similarly, “assignments which relate ICT to developing practice can be influential” (Hammond et al, 2009, p. 60). In the version of EDC3100 that I taught last year, there wasn’t a lot of opportunity for students to engage in reflection. Even less opportunity to receive feedback on those reflections. Hopefully in 2013 we can change this. Increasing the levels of reflection and feedback will hopefully improve learning. Feedback must be good since Hattie identifies it as what works best for improving learning.

Question: What are the good insights/models/theories etc around designing activities that encourage reflection?

Learning to be

EDC3100 is a professional course. It’s in the 3rd year of a four year degree program that is hopefully producing effective teaching professionals by the end. This is why I’m interested in how EDC3100 can help the pre-service teachers taking the course with “learning to be” a teacher. Seeley Brown (2008, p. xii)

perhaps even more significant, aspect of social learning, involves not only “learning about” the subject matter but also “learning to be” a full participant in the field. This involves acquiring the practices and norms of established practitioners in that field or acculturating into a community of practice.

I’m hoping EDC3100 can encourage/enable the students to engage in social learning with other students and join in broader teacher networks. To really engage with the task of “learning to be” a teacher.

Diversity, personalisation and pedagogies

During 2013 there are likely to be 300+ students taking the course. They will be a very diverse cohort on a number of criteria. For example, in terms of technical skill last year’s cohort included everything from an ex-IT development professional through to technophobes. As the graph below shows, students’ ages ranged from 18 through 60, though over 50% of the students were traditional 3rd-year students straight from high school – around 20 years old.

Age distribution

There was also significant diversity in the type of teacher they were preparing to be. Last year EDC3100 included “pre-service teachers” preparing to teach everywhere from pre-school, primary school and high school through to vocational education. Not to mention the different disciplines and knowledge areas these pre-service teachers will cover. Barton and Haydn (2006, p. 267) suggest that

Training needs to be differentiated to take into account the differing ways in which ICT helps teachers of different subjects to improve teaching and learning

EDC3100 Sector breakdown

As it happens the university that employs me to teach EDC3100 has adopted “Personalised learning” as one of its four overarching themes for its 2022 Vision. Some of the institutional words around this theme include

We promise to partner with learners in the pursuit of their study objectives regardless of their background, location or stage in life.

Through innovation USQ harnesses emerging technologies to enable collaborative teaching and individualised learning for its students regardless of their location or lifestyle.

We understand our remarkably diverse and global student population. USQ seeks to accommodate individual learning styles and to provide students with personalised adaptive learning support. We are known for our capacity to research and anticipate technological advances and to capitalise on these.

Question: How does this type of sentiment match some of the broader ideas of personalised learning (e.g. this one)?
Question: How far can you take personalised learning within an institutional context with a significant focus on consistency and quality assurance (where consistency and quality are often equated)?

While perhaps not going quite as far as suggested by this high school student, there’s a lot to be said for aiming EDC3100 toward this goal

Let’s bring learning back to the learners. Why are we disregarding the brilliant work of progressive educators like John Dewey, Maria Montessori, Jean Piaget, and Paulo Freire? We need to allow students to craft their own learning experiences through projects, apprenticeships, and hands-on engagement. ….. My advice: Let’s get over the fads and understand that learning is best done through doing, creating, and exploring.

The dead viola player?

In critiquing xMOOCs and Learning Analytics, Michael Feldstein identifies another reason driving the changes in EDC3100. Feldstein breaks the class experience into three parts and talks about how well each scales with technology. The parts are

  1. Content transmission;
    Which scales well. Feldstein gives Khan Academy as an example. Given that content is free and abundant, I’m hoping the teaching staff in EDC3100 can avoid duplicative content creation. e.g. we won’t be giving lectures in the traditional sense.

    Questions: What (if any) content required for EDC3100 is not free and abundant?
    How do we scaffold students’ engagement with the abundant content?

  2. Assessment;
    Which xMOOCs are scaling with MCQs and perhaps somewhat with peer assessment. But which doesn’t necessarily scale well if you’re trying to assess in areas that don’t come with “cut-and-dry answers”.
  3. Remediation.
    i.e. responding helpfully to students when they are stuck. The ability to identify “not only what a student is getting wrong on an assessment but why she is getting it wrong”.

It’s remediation that Feldstein identifies as the part both learning analytics and xMOOCs don’t do well. He also suggests it was dead long before xMOOCs

My guess is that college professor productivity has risen in the last decade, if all you mean by “productivity” is number of butts in classroom seats per teacher. The cost has been less time to respond to individual student needs.

I’m hoping EDC3100 can revive the viola player a touch. There will be a limit to how much the EDC3100 teaching staff can do this, due to institutional workload and funding models and simply the sheer number of students. This is where Stephen Downes’ comment on Feldstein’s post comes in, talking about how the cMOOCs addressed the problem of the dead viola player

You don’t need an expert for this – you just need someone who knows the answer to the problem. So we have attempted to scale by connecting people with many other students. Instructors are still there, for the tough and difficult problems. But students can help each other out, and are expected to do so.

Perhaps the crux of the problem for me with EDC3100 is framed by Stephen as

we need to structure learning in such a way as to make asking questions easier, and as necessary, to provide more incentives to people to answer them.

There are numerous barriers to this. Cousin and Deepwell (2005) identify the learner problem, in that they “arrive in the networked setting with ‘congealed practices’ from more didactic educational contexts”. Shifting the students out of their congealment can be hard and unsuccessful work. Not the least because the institutional context has its own problems with congealment that can create a fair bit of cognitive dissonance. Not to mention that the teaching staff bring a level of congealment. When I am thinking about what I will be doing as this course is happening, I find myself slipping back into my own set of congealed practices. Finally, there is just the question of discovering and evaluating what are the practices that will effectively replace the congealed practices.

This is where I’m hoping to benefit from the work of others such as Kop, Fournier and Sui Fai Mak (2011) and Weller (2011).

Question: What are some of the other useful sources of design insight?
Question: When am I going to stop navel gazing and start actually designing?

In arguing that MOOCs fundamentally misperceive how teaching works, Mark Guzdial suggests that “the main activity of a teacher is to orchestrate learning opportunities, to get students to do and think. A teacher does this most effectively by responding to the individuals”. How can we do this effectively in the context of EDC3100?

Synchronicity

As it happens there is currently a great deal of activity in this area. Other folk are putting together MOOCs and open courses and sharing the whys, wherefores and whats. These offer some possibilities for learning/borrowing. Here’s an initial summary.

#etmooc

Alec Couros – one of the inspirations for much of what’s happening with EDC3100 – is involved with #etmooc – Educational Technology & Media. Given this connection and the topic of #etmooc being very closely related to the topic of EDC3100 I’m aiming to engage with it in a number of ways.

Alec, as is his wont, appears to have been very open in gathering input for the design of #etmooc. The evidence of this can be seen in posts from Lisa Lane, Alan Levine and others. Lisa’s post summarises some of the resources (including a Google community from which I’ve drawn a couple of ideas in one quick skim and the planning Google doc) and questions considered by the #etmooc designers. It’s interesting in terms of my lack of connections in certain areas to note that I’m only now becoming aware of some of Alec’s much earlier posts about #etmooc.

Unlike the typical weekly course (and cMOOC) schedule, Lisa and Alan suggest something more open: topics that have a launch date, a basic introduction and then a community development process where the aims and resources for the topic are developed by the community. There’s much to like in this idea, but it probably falls too far outside the constraints of EDC3100. Not sure how the institution might take this approach, and I think the students may have some queries about exactly what my role is in the course. EDC3100 will need to keep a bit more of the traditional schedule, but how much can we open it up from there?

Question: How much can I push the content of the course out of the institutional Moodle instance and into more open technologies?

Some insights/ideas from all of the above #etmooc related resources

  • If a course is successful in getting students engaged with networks, then the course doesn’t have an end date.
  • Large synchronous presentations have value for “introducing/advancing ideas and for tool demonstrations”.
  • With a distributed approach, the effective use of tagging is necessary for aggregating networked conversations (a minimal sketch of the idea follows this list).
  • Designing interactions is key or, as one of the comments put it, clustering opportunities. Helping people orient themselves by providing tasks/opportunities to find “people who you want to learn with”.
  • Entry to Twitter and this type of approach in general is difficult.
  • The idea of using git projects as a platform.
  • The influence of 23 things as a design aid or something like 100 ideas.
  • The importance of a central course space for orientation and perhaps community?
    It would also match students expectations.
    Question: If I create a central course space in both the institutional Moodle instance and in a WordPress (or some other open site) blog with the same information, which would students use the most?
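
On the tagging point above: here’s a minimal sketch of what tag-based aggregation of a distributed course conversation might look like – poll each participant’s blog feed and keep only the posts carrying an agreed course tag. The feed URLs and the tag are invented for illustration; this is roughly the territory BAM/BIM works in, though BIM lives inside Moodle.

```python
# Minimal tag-based feed aggregation sketch using the feedparser library.
import feedparser

COURSE_TAG = "edc3100"  # hypothetical agreed course tag
FEEDS = [               # hypothetical participant blog feeds
    "https://student-one.example.com/feed",
    "https://student-two.example.com/feed",
]

def tagged_posts(feed_url, tag):
    """Yield (title, link) for entries in feed_url that carry the tag."""
    feed = feedparser.parse(feed_url)
    for entry in feed.entries:
        terms = {t.get("term", "").lower() for t in entry.get("tags", [])}
        if tag in terms:
            yield entry.get("title"), entry.get("link")

# Aggregate the distributed conversation into one stream.
for url in FEEDS:
    for title, link in tagged_posts(url, COURSE_TAG):
        print(f"{title}: {link}")
```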

OLDS-MOOC

Will be interesting to compare the Learning Design for a 21st Century Curriculum MOOC with #etmooc. Both are starting at about the same time.

Emerging Learning Technologies

Curtis Bonk has released the syllabus for his “R685, Emerging Learning Technologies” course, which is also closely related to the purpose of EDC3100. (It is interesting to see the evolution of the course from the 2012 version, especially in the objectives. One of the assessment options was to evaluate changes in syllabi from the course since 1990.) But one difference is that EDC3100 tends to spend more time explicitly on pedagogy. Neither #etmooc nor R685 does, at least as indicated by the course topics.

Some insights/ideas from R685

  • The grading scheme.
    Interesting in the flexibility that it gives the students. Six set tasks which each give points, up to a total of 360 points. Your grade is based on the number of points you achieve. Perhaps a step too far here. (A sketch of this sort of scheme follows this list.)
  • Tidbit reflections.
    A large collection of small online articles is provided. Students are asked to read 3-4 a week. The assessment is to submit a list of the top 20 (and the top 2-3 videos) and a reflection on what was learned from them. The list of resources might be useful.
  • Discussion moderator.
    Students sign up to take on the role of discussion starter based on the week’s readings.
  • Participation is marked both for synchronous sessions and posts etc.
  • The qualitative criteria for posts.
    1. Diversity (some variety in ideas posted, and some breadth to exploration);
    2. Perspective taking (values other perspectives, ideas, cultures, etc.);
    3. Creativity (original, unique, and novel ideas);
    4. Insightful (makes interesting, astute, and sagacious observations);
    5. Relevancy (topics selected are connected to course content); and
    6. Learning Depth/Growth (shows some depth to thinking and elaboration of ideas).
  • Criteria for the reflective paper grading are also interesting
    1. Relevancy to class: meaningful examples, relationships drawn, interlinkages, connecting weekly ideas.
    2. Insightful, Interesting, Reflective, Emotional: honest, self-awareness, interesting observations
    3. Learning Depth/Growth: takes thoughts along to new heights, exploration, breadth & depth, growth.
    4. Completeness: thorough comments, detailed reflection, fulfills assignment, informative.
  • The collection of weekly resources.
    Useful because of the list of resources to pick over, but also as an example of the “MOOC-like” approach I’m thinking of for EDC3100. Provide a collection of useful resources and have the students read and reflect. Perhaps search for some more. Interesting division into Readings and Tidbits.
    Question: How much and what learning activities should be set each week?
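
To make the points-based idea above concrete, here’s a minimal sketch of how such a scheme might work. The task names, maxima and grade cut-offs are invented; R685’s actual tasks and bands aren’t reproduced here.

```python
# Sketch of a flexible points-based grading scheme: students accumulate
# points across set tasks and the final grade depends only on the total.
TASK_MAX = {            # six hypothetical tasks, maxima summing to 360
    "tidbit reflections": 60,
    "discussion moderator": 60,
    "participation": 60,
    "reflective paper": 60,
    "project": 60,
    "final reflection": 60,
}

GRADE_CUTOFFS = [(324, "A"), (288, "B"), (252, "C"), (216, "D")]  # assumed bands

def grade(points_earned):
    """Map per-task points (capped at each task's maximum) to a grade."""
    total = sum(min(points_earned.get(task, 0), cap)
                for task, cap in TASK_MAX.items())
    for cutoff, letter in GRADE_CUTOFFS:
        if total >= cutoff:
            return letter
    return "F"

# A student can skip tasks entirely and still do well on points.
print(grade({"tidbit reflections": 55, "project": 60, "participation": 50,
             "reflective paper": 58, "discussion moderator": 45}))  # -> "C"
```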

Should remember that R685 is a graduate course; 3100 is undergraduate. There is a large written component (as in a formal report) in R685, which I will aim to avoid.

Educators as Social Networked Learners

In Educators as Social Networked Learners Jackie Gerstein describes her graduate course – Social Networked Learning. Some thoughts/inspirations from this post include

  • Use of alternate media (e.g. Glog) for assessment/to capture learning.
    Certainly something I wish to explore in 3100. Part of the idea that students learn what they do: they need to use the ICT tools we’re talking about to show and share their learning.
  • The Twitter assignment offers some suggestions both for how 3100 might use Twitter and for some of the other tools.
  • The PLE assignment’s use of Mindmap might be another way to balance broadening awareness of different tools.

Social media literacies syllabus

Via Jackie Gerstein’s post I came to Howard Rheingold’s Syllabus for Social Media Literacies. I like this

Literate populations are becoming the driving force that shape new media, just as they were in the eras following the invention of the alphabet and printing press. What broad populations know now, and the ways they put that knowledge into action, will shape the ways people use and misuse social media for decades to come.

and the five social media literacies

  1. attention;
  2. crap detection;
  3. participation;
  4. collaboration; and,
  5. network awareness.

Reminder: revisit this work to explore its applicability to 3100 and observe what comes out of #etmooc

Importantly, the following succinctly captures important aspects of the “design principles” I’d like to use

chose texts that can offer analytic tools, explanatory frameworks, and competing perspectives — the basic building blocks for teachers and learners to use.

and he makes an important point about the likely problem with this type of approach, i.e.

College students have been strongly socialized to do the homework for each class the night before it is due — a method that doesn’t work when discourse, not a discrete product like a term paper, is the goal. The necessity for more frequent informal discourse through forums, blogs, comments, usually needs to be repeated and reinforced.

Some other important points/ideas etc.

  • The criteria and resources on this introduction to forums could be useful (though I’ve seen similar elsewhere)

    4 Points – The posting(s) integrates multiple viewpoints and weaves both class readings and other participants’ postings into their discussion of the subject.
    3 Points – The posting(s) builds upon the ideas of another participant or two, and digs deeper into the question(s) posed by the instructor.
    2 Points – A single posting that does not interact with or incorporate the ideas of other participants’ comments.
    1 Point – A simple “me too” comment that neither expands the conversation nor demonstrates any degree of reflection by the student.
    0 Points – No comment.

  • The idea of students pre-submitting a substantial question that they are prepared to address in a f-t-f session.
  • Teams of two students creating mindmaps from key readings for the week.
  • I wonder how well the instructor’s introduction would match current requirements of my institution?

Introduction to openness

And then there is David Wiley’s course “Introduction to Openness in Education”, being run on Canvas. Interesting insights and ideas include

  • The use of badges rather than grades.
    It’s not a graded course, but it is interesting to see the badge implementation. It’s one of the considerations for 3100, along with the use of blog posts to “submit” the evidence. The course itself has a module dedicated to open assessment and open badges which could prove useful, perhaps as an example of transformation for the 3100 students.

    Question: How would the awarding of badges work with BIM and by extension Moodle? (A speculative sketch follows this list.)

  • Explicitly makes the point that the learning artifacts are the students’ and are stored outside the LMS.
    Links this to a constructionist type quote from Terry Anderson.
  • The Anderson quote is from a post, “Connectivying” your course, which draws on the work by Anderson and Dron in two papers, including this one.
    Describes two defining characteristics of connectivism

    • Construction, annotation and maintenance of learning artifacts that are open and contribute to knowledge beyond the course.
    • Students being given the opportunity, incentive and support to form networks with others, including outside the course.
  • The technology requirements are likely to be very similar to 3100, though perhaps not the YouTube account? But perhaps that’s my textual prejudice showing through.
  • The structure of the course in Canvas looks very much like something you could do in Moodle.
    Though much of it seems to resemble the “ramble” approach from last year, i.e. it appears Wiley has provided much of the content. Not something I’d like to replicate.
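
On the badges question above: a purely speculative sketch of how it might work. BIM already aggregates each student’s blog posts, so a criteria check over those posts could decide when a badge is earned. The data structure and criteria below are invented, and this is not the Moodle badges API.

```python
# Hypothetical badge criteria over BIM-style aggregated blog posts.
from dataclasses import dataclass

@dataclass
class Post:
    title: str
    word_count: int
    comments_received: int

def earns_reflective_blogger_badge(posts):
    """Assumed criteria: 10+ substantial posts, 3+ of which drew discussion."""
    substantial = [p for p in posts if p.word_count >= 200]
    discussed = [p for p in substantial if p.comments_received >= 2]
    return len(substantial) >= 10 and len(discussed) >= 3

# In a real integration BIM would supply the posts from its database and a
# positive result would trigger whatever badge-issuing hook Moodle provides.
```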

A blog post on new pedagogy that includes pointers to lots of examples
http://www.contactnorth.ca/trends-directions/evolving-pedagogy-0/new-pedagogy-emergingand-online-learning-key-contributing

So what shape should this course take?

The aim should be to keep it open and flexible, engage the students in innovative applications of technology and with the world outside, and be reflective…??

Finding the pedagogy

From Rheingold comes a link to the Instructor’s guide to process-oriented guided-inquiry learning. Much of this is known stuff, but I’m repeating it here to remind myself.

Near the start it quotes 5 key ideas about learning from current research in the cognitive sciences: that people learn by

  • constructing their own understanding based on their prior knowledge, experiences, skills, attitudes, and beliefs.
  • following a learning cycle of exploration, concept formation, and application.
  • connecting and visualizing concepts and multiple representations.
  • discussing and interacting with others.
  • reflecting on progress and assessing performance.

Ahh, that comes from Bransford et al (2000). That resonates with earlier thoughts. The POGIL document talks about a three stage learning cycle

  1. Exploration;
    Provide students a model (very broadly defined) to examine or a set of tasks to follow that embody what is to be learned. “The intent is to have the students encounter questions or complexities that they cannot resolve with their accustomed way of thinking” Guided by critical-thinking or key questions.

    Talks about three types of questions

    1. Directed questions – point to obvious discoveries about the model.
    2. Convergent questions – require synthesis of relationships from new discoveries and prior knowledge to develop new concepts or deeper conceptual understanding.
    3. Divergent questions – open-ended, without unique answers.
  2. Concept invention/formation;
    Also called introduction, the idea is that the learners develop insights into a construct that helps understanding develop.
  3. Application.
    i.e. do something with that new understanding.

Then come some implications for teaching

  • Organising knowledge in long-term memory
    Pattern recognition is enhanced by asking for comparisons and contrasts with problems in different contexts; identifying patterns in concepts/problems/solutions; and classifying problems in terms of concepts.

    Have students identify relevant issues and concepts, discuss why relevant and plan solutions. Brainstorming.

    Give time to develop deep understanding.

  • Overcome limitations of working memory.
    Help chunk and develop knowledge schemata. Encourage them to draw diagrams.
  • Analysing problems and planning solutions.
    Use an explicit problem-solving methodology. Instruct them in how to use it. Ask them to explain what was done. Compare their approaches with that of the expert.
  • Benefiting from meta-cognition.
    Assessing the approaches of others and identifying strengths, areas for improvement and insights is useful.
  • Transfer knowledge for use in new contexts.
    Have students talk about the relevance and usefulness of what they have learned. Figure out when and where it can be used.

Four roles for the teacher

  1. Leader – creating the environment, explaining the lesson etc.
  2. Monitor/assessor – touches on the dead viola player/remediation section above.
  3. Facilitator – interventions, asking questions etc.
  4. Evaluator

And a table (each entry gives the step in the process, its 7E equivalent, and the component of the activity):

  • Identify a need to learn (Engage) – An issue that excites and interests is presented. An answer to the question “Why?” is given. Learning objectives and success criteria are defined.
  • Connect to prior understandings (Elicit) – A question or issue is raised, and student explanations or predictions are sought. Pre-requisite material is identified.
  • Explore (Explore) – A model or task is provided and resource material identified. Students explore the model or task in response to critical-thinking questions.
  • Concept invention, introduction and formation (Explain) – Critical-thinking questions lead to the identification of concepts and understanding is developed.
  • Practice applying knowledge – Skill exercises involve straightforward application of the knowledge.
  • Apply knowledge in new contexts (Elaborate and extend) – Problems and extended problems require synthesis and transference of concepts.
  • Reflect on the process (Evaluate) – Problem solutions and answers to questions are validated and integrated with concepts. Learning and performance are assessed.

Question: What changes would be required to this table to better encourage the formation of the community and culture aspects mentioned by Gardner and the other features I’ve talked about above? Not to mention some other differences.

Casey and Evans (2011) cite/describe Nuthall’s (2007, p. 36) four premises for learning

  1. Students learn what they do, and what they are learning is what you see them doing: writing notes, coping with the boredom without complaining, and later, memorizing headings and details they only partially understand. What they do in the classroom, day after day, is what they become experts at.
  2. Social relationships determine learning. It’s very important to remember that much of what students do in the classroom is determined by their social relationships. Even in the teacher’s own territory, the classroom, the student’s primary audience is his or her peers. More communication goes on within the peer culture than within the school and classroom culture.
  3. Effective activities are built around big questions. If we want to design effective learning activities, we must carefully monitor what students are gaining as they engage in focused learning. We have to spend a considerable amount of time and resources monitoring what they are understanding and learning as well as designing and carrying out these activities. Taking the time and providing the resources needed to design effective learning activities means covering much less of the formal curriculum. To justify doing this, we must make sure that the outcomes of these learning activities are significant not only in the official curriculum but also in the lives and interests of the students.
  4. Effective activities are managed by the students themselves. The ideal learning activity, in line with the previous three premises, has the following characteristics:
    • It focuses on the solution of a major question or problem that is significant in both the discipline and the lives and culture of the students;
    • It engages the students continuously in intellectual work that is appropriate in the discipline;
    • It provides teachers with opportunities, as the class engages in solving the smaller problems, to monitor individual students’ evolving understanding of the content and procedures.

There’s some useful insights there.

Gardner Campbell’s Narrate, Curate, Share piece also captures some of the aim. The aim is for students to be engaging in telling and creating their stories of exploration and learning, not to be blithely fulfilling requirements. This blog post from Gardner further explores his perspective on this, which breaks down into the following (though he does express his fear of the dangers of being too analytical about this)

  1. Distinguish between a requirement and an assignment.
    Blogging is required, but not assigned. Specify how many blog posts/comments are required (he talks about the dangers of, and need for, a number) but don’t talk about why or about what. This warns against some of my ideas about being quite specific about marking criteria and purpose, which has the danger of becoming “new wine in old bottles”.
  2. Encourage/make visible the community-culture continuum, make it accessible to thought.
    In Gardner’s words

    So when I talk to my students about blogging, I try very hard to emphasize how they’re likely to experience both community (tighter bonds with their fellow learners in the course of study) and culture (participation in the greater blogosphere, with unpredictable and often lovely results).

    When I read this, I’m thinking about how I can encourage recognition of and the opportunity to engage in this for these students.

    The culture aspect connects with the “learning-to-be” point above.

  3. Be the change you wish/relate that change.
    Use your own personal experience and example of blogging.

Oh I like this from Gardner’s post

Blogs are hydroponic farms for heuristics, hypothesis-generation, metacognition that continually moves out to other metacognizers and back to one’s own reflection.

which is part of a response to the fear that student blog posts will show their ignorance.

To do

read

Kirschner, P., Strijbos, J., Kreijns, K., & Beers, P. J. (2004). Designing electronic collaborative learning environments. Educational Technology Research and Development, 52(3), 47-66.

Doering, A., Miller, C., & Scharber, C. (2012). No such thing as failure, only feedback: Designing innovative opportunities for e-assessment and technology-mediated feedback. Journal of Interactive Learning Research, 21(1), 65–92.


When will we no longer teach ICTs to pre-service teachers?

Earlier today I tweeted a quote from Barton and Haydn (2006).

It resonated with a few people, so I thought I’d share the reference and ramble a bit about the local implications.

The origin

The broader context of the quote from Barton and Haydn (2006, p. 258) is

Kenneth Baker (1988) saw the development of a technologically enabled teaching force as straightforward, explaining to a conference of Education Officers in 1988 that from henceforth, such skills would be built into initial training courses: ‘the problem of getting teachers aware of IT will soon be phased out as all new entrants will soon have IT expertise’

It appears that Kenneth Baker is in fact Baron Baker of Dorking, a British Conservative MP who was the Secretary of State for Education and Science from 1986 to 1989. Obviously, Baron Baker’s prognostications were a little wayward.

Especially given the Australian Government’s funding last year of the Teaching Teachers for the Future project with the aim of

enabling all pre-service teachers at early, middle and senior levels to become proficient in the use of ICT in education

Not to mention the fact that I’m currently largely employed to teach a course that is meant to help achieve the same end.

The difference between IT and ICT in education

Though, without knowing exactly the broader context of Baron Baker’s talk, it’s easy to draw too broad a conclusion and make a false comparison. @BenjaminHarwood responded to my original tweet with

I wonder if Baron Baker was using IT to mean the ability to turn a computer on and other fundamental skills. Windows 2.10 was only released in May 1988, so most people were still using MS-DOS. The TTF project is focused on the broader “ICT in education”, i.e. @BenjaminHarwood’s “pedagogical integration expertise”.

Will it ever go away?

I have to admit to making a claim somewhat similar to Baron Baker’s over the last year, generally wondering how much longer I’ll be employed to teach “ICT and Pedagogy” as a stand-alone course. The thought is that we don’t teach “Video and pedagogy” or “Written word and pedagogy” courses, so why are ICTs any different? Won’t the need for a separate course disappear once all the other courses are doing a wonderful job of integrating innovative examples of ICTs and pedagogy?

@palbion had a suggestion, which I think is one of the factors.

The on-going change of ICTs does appear to have created some illusion of having to continually re-learn, even though many of the fundamentals have stayed the same. But perhaps a large part of that is simply because much use of ICTs in pedagogy has never gotten beyond the substitution/augmentation levels of the SAMR model.

SAMR model image from Jenny Luca’s TPACK and SAMR blog post

While there are many reasons for this lack of movement “up the scale”, much of it would seem to come back to the formal education system and the nature of the organisations that underpin it: a nature that does not really enable or encourage transformation of how they operate, especially not transformation driven by teaching staff. That inertia is playing its part within both school systems and the institutions of higher education responsible for teaching the next generation of teachers.

@s_palm pointed to the broader “digital native” myth.

So maybe the need will never go away, or perhaps at least not until I reach retirement age or decide to move on to greener pastures.

References

Barton, R., & Haydn, T. (2006). Trainee teachers’ views on what helps them to use information and communication technology effectively in their subject teaching. Journal of Computer Assisted Learning, 22(4), 257–272. doi:10.1111/j.1365-2729.2006.00175.x
