Assembling the heterogeneous elements for (digital) learning

Month: August 2015

“All models are wrong, but some are useful” and its application to e-learning

In a section with the heading “ALL MODELS ARE WRONG BUT SOME ARE USEFUL”, Box (1979) wrote

Now it would be very remarkable if any system existing in the real world could be exactly represented by any simple model. (p. 202)

Over recent weeks I’ve been increasingly interested in the application of this aphorism to the practice of institutional e-learning and why it is so bad.

Everything in e-learning is a model

For definition’s sake, the OECD (2005) defines e-learning as the use of information and communications technology (ICT) to support and enhance learning and teaching.

As the heading suggests, I’d like to propose that everything in institutional e-learning is a model. Borrowing from the Wikipedia page on this aphorism, you get the definition of a model as “a simplification or approximation of reality and hence will not reflect all of reality” (Burnham & Anderson, 2002).

The software that enables e-learning is a model. At some stage, some smart people sat down, generated and analysed a set of requirements, and then developed a model (in the form of the software) that aims to fulfill those requirements.

Instructional design and teaching are essentially the creation of models intended to enable learning. At some stage, some smart people sat down, generated and analysed a set of requirements, and then developed a model to achieve some learning outcome.

Organisational structures are models. At some stage, some smart people sat down, generated and analysed a set of requirements, and then developed a model to achieve some operational and strategic requirements. That same set of smart people probably also worked on developing a range of models in the form of organisational policies and processes, some of which may have been influenced by the software models that are available.

The theories, tools, and schema used in the generation of the above models are in turn models.

And following Box, all models are wrong.

But it gets worse.

In e-learning, everyone is an expert model builder

E-learning within an institution – by its nature – must bring together a range of different disciplines, including (but not limited to): senior leadership, middle management, quality assurance (boo) and related; researchers; librarians; instructional designers, staff developers and related learning and teaching experts; various forms of technology experts (software developers, network and systems administrators, user support, etc.); various forms of content development experts (editors, illustrators, video and various multimedia developers); and, of course, the teachers/subject matter experts. I’ll make special mention of the folk from marketing, who are the experts on the institutional brand.

All of these people are – or at least should be – expert model builders. Experts at building and maintaining the types of models mentioned above. Even the institutional brand is a type of model.

This brings problems.

Each of these expert model builders suffers from expertise bias.

What do you mean you can’t traverse the byzantine mess of links from the staff intranet and find the support documentation? Here, you just click here, here, here, here, here, here, here, and here. See, obvious…

And each of these experts thinks that the key to improving the quality of e-learning at the institution can be found in the institution doing a much better job at their model. Can you guess which group of experts is most likely to suggest the following?

The quality of learning and teaching at our institution can be improved by:

  • requiring every academic to have a teaching qualification.
  • ensuring we only employ quality researchers who are leaders in their field.
  • adopting the latest version of ITIL, i.e. ITIL (the full straight-jacket).
  • requiring all courses to meet the 30 page checklist of quality criteria.
  • redesigning all courses using constructive alignment.
  • re-writing all our systems using an API-centric architecture.
  • adopting my latest theory on situated cognitive, self-regulated learning and maturation.

What’s common about most of these suggestions is the claim that it will all be better if we just adopt this new, better model. All of the problems we’ve faced previously are due to the fact that we’ve used the wrong model. This model is better. It will solve it.

Some recent examples

I’ve seen a few examples of this recently.

Ben Werdmuller had an article on Medium titled “What would it take to save #EdTech?” Ben’s suggested model solution was an open startup.

Mark Smithers blogged recently reflecting on 20 years in e-learning. In it Mark suggests a new model for course development teams as one solution.

Then there is this post on Medium titled “Is Slack the new LMS?”. As the title suggests, the new model here is that embodied by Slack.

Tomorrow I’ll be attending a panel session titled “The role of Openness in Creating New Futures in higher education” (being streamed live). Indicative of how the “open” model is seen as yet another solution to the problem of institutional e-learning.

And going back a bit further Holt et al (2011) report on the strategic contributions of teaching and learning centres in Australian higher education and observe that

These centres remain in a state of flux, with seemingly endless reconfiguration. The drivers for such change appear to lie in decision makers’ search for their centres to add more strategic value to organisational teaching, learning and the student experience (p. 5)

i.e. every senior manager worth their salt does the same stupid thing that senior managers have always done: change the model that underpins the structure of the organisation.

Changing the model like this is seen as suggesting you know what you are doing and it can sometimes be made to appear logical.

And of course in the complex adaptive system that is institutional e-learning it is also completely and utterly wrong and destined to fail.

A new model is not a solution

This is because any model is “a simplification or approximation of reality and hence will not reflect all of reality” (Burnham & Anderson, 2002) and “it would be very remarkable if any system existing in the real world could be exactly represented by any simple model” (Box, 1979, p. 202).

As Box suggested, this is not to say you should ignore all models. After all, all models are wrong, but some are useful. You can achieve some benefits from moving to a new model.

But a new model can never be “the” solution. Especially as the size of the impact of the model grows. A new organisational structure for the entire university is never going to be the solution, it will only be really, really costly.

There are always problems

This is my 25th year working in Universities. I’ve spent my entire 25 years identifying and fixing the problems that exist with whatever model the institution has used. Almost my entire research career has been built around this. A selection of the titles from my publications illustrates the point:

  1. Computing by Distance Education: Problems and Solutions
  2. Solving some problems of University Education: A Case Study
  3. Solving some problems with University Education: Part II
  4. How to live with ERP systems and thrive.
  5. The rise and fall of a shadow system: Lessons for Enterprise System Implementation
  6. Limits in developing innovative pedagogy with Moodle: The story of BIM
  7. The life and death of Webfuse: principles for learning and learning into the future
  8. Breaking BAD to bridge the reality/rhetoric chasm.

And I’m not alone. Scratch the surface at any University and you will find numerous examples of individuals or small groups of academics identifying and fixing problems with whatever models the institution has adopted. For example, a workshop at CSU earlier this year included academics from CSU presenting a raft of systems they’ve had to develop to solve problems with the institutional models.

The problem is knowing how to combine the multitude of models

The TPACK (Technological Pedagogical Content Knowledge) framework provides one way to conceptualise what is required for quality learning and teaching with technology. In proposing the TPACK framework, Mishra and Koehler (2006) argue that

Quality teaching requires developing a nuanced understanding of the complex relationships between technology, content, and pedagogy, and using this understanding to develop appropriate, context-specific strategies and representations. Productive technology integration in teaching needs to consider all three issues not in isolation, but rather within the complex relationships in the system defined by the three key elements (p. 1029).

i.e. good quality teaching requires the development of “appropriate, context-specific” combinations of all of the models involved with e-learning.

The reason why “all models are wrong” is because when you get down to the individual course (remember, I’m focusing on university e-learning) you are getting much closer to the reality of learning. That reality is hidden from the senior manager developing policy, the QA person deciding on standards for the entire institution, the software developer working on a system (open source or not), and so on. They are all removed from the context. They are all removed from the reality.

The task of the teacher (or the course design team depending on your model) is captured somewhat by Shulman (1987)

to transform the content knowledge he or she possesses into forms that are pedagogically powerful and yet adaptive to the variations in ability and background presented by the students (p. 15)

The task is to mix all those models together and produce the most effective learning experience for these particular students in this particular context. The better you can do that, the more pedagogical value. The better the learning.

All of the work outlined in my publications listed above has been attempts to mix the various models available into a form that has greater pedagogical value within the context which I was teaching.

A new model means a need to create a new mix

When a new LMS, a new organisational structure, a new QA process, or some other new model replaces the old model, it doesn’t automatically bring an enhancement in the overall experience of e-learning. That enhancement is really only maximised by each of the teachers/course design teams going back and re-doing all the work they’d previously done to get the mix of models right for their context.

This is where (I think) the “technology dip” comes from. Underwood and Dillon (2011) observe

Introducing new technologies into the classroom does not automatically bring about new forms of teaching and learning. There is a significant discontinuity between the introduction of ICT into any educational setting and the emergence of measurable impacts on pedagogy and learning outcomes (p. 320)

Instead the quality of learning and teaching dips after the introduction of new technologies (new models) as teachers struggle to work out the new mix of models that are most appropriate for their context.

It’s not how bad you start, it’s how quickly you get better

In reply to my comment on his post, Mark asks the obvious question

What other model is there?

Given the argument that “all models are wrong”, how do I propose a model that is correct?

I’m not going to expand on this very much, but I will point you to Dave Snowden’s recent series of posts, including this one titled “Towards a new theory of change” and his general argument

that we need to stop talking about how things should be, and start changing things in the here and now

For me this means: stop focusing on your new model of the ideal future (e.g. if only we used Slack for the LMS). Instead:

  • develop an on-going capacity to know in detail what is going on now (learner experience design is one enabler here);
  • enable anyone and everyone in the organisation to be able to remix all of the models (the horrendously poor way most universities don’t use network technology to promote connections between people currently prevents this);
  • make it easy for people to know about and re-use the mixtures developed by others (too much of the re-mixing that is currently done is manual);
  • find out what works and promote it (this relies on doing a really good job on the first point, not course evaluation questionnaires); and,
  • find out what doesn’t work and kill it off.

This doesn’t mean doing away with strategic projects, it just means scaling them back a bit and focusing more on helping all the members of the organisation learn more about the unique collection of model mixtures that work best in the multitude of contexts that make up the organisation.

My suggestion is that there needs to be a more fruitful combination of the BAD and SET frameworks and a particular focus on developing the organisation’s distributed capacity to develop its TPACK.

References

Box, G. E. P. (1979). Robustness in the strategy of scientific model building. In R. Launer & G. Wilkinson (Eds.), Robustness in Statistics (pp. 201–236). Academic Press.

Holt, D., Palmer, S., & Challis, D. (2011). Changing perspectives: Teaching and Learning Centres’ strategic contributions to academic development in Australian higher education. International Journal for Academic Development, 16(1), 5–17. Retrieved from http://www.tandfonline.com/doi/abs/10.1080/1360144X.2011.546211

OECD. (2005). E-Learning in Tertiary Education: Where do we stand? Paris, France: Centre for Educational Research and Innovation, Organisation for Economic Co-operation and Development. Retrieved from http://www.oecd-ilibrary.org/education/e-learning-in-tertiary-education_9789264009219-en

Underwood, J., & Dillon, G. (2011). Chasing dreams and recognising realities: teachers’ responses to ICT. Technology, Pedagogy and Education, 20(3), 317–330. doi:10.1080/1475939X.2011.610932

Refining a visualisation

Time to refine the visualisation of students by postcodes started earlier this week. Have another set of data to work with.

  1. Remove the identifying data.
  2. Clean the data.
    I had to remind myself of the options for the vim substitute command – losing it. The following gives some idea of the mess.
    [code lang="sh"]
    :1,$s/"* Sport,Health&PE+Secondry.*"/HPE_Secondary/
    :1,$s/"* Sport, Health & PE+Secondry.*"/HPE_Secondary/
    :1,$s/Health & PE Secondary/HPE_Secondary/
    :1,$s/* Secondary.*/Secondary/
    :1,$s/* Secondry.*/Secondary/
    :1,$s/* Secondy.*/Secondary/
    :1,$s/Secondary.*/Secondary/
    :1,$s/* Secdary.*/Secondary/
    :1,$s/* TechVocEdu.*/TechVocEdu/
    [/code]
  3. Check columns
    Relying on a visual check in Excel – also to get a better feel for the data.

  4. Check other countries
    Unlike the previous visualisation, the plan here is to recognise that we actually have students in other countries. The problem is that the data I’ve been given doesn’t include country information. Hence I have to manually enter that data, giving, for one of the programs, the following.

    4506 Australia
    8 United Kingdom
    3 Vietnam
    3 South Africa
    3 China
    2 Singapore
    2 Qatar
    2 Japan
    2 Hong Kong
    2 Fiji
    2 Canada
    1 United States of America
    1 Taiwan
    1 Sweden
    1 Sri Lanka
    1 Philippines
    1 Papua New Guinea
    1 New Zealand
    1 Kenya
    1 Ireland

And all good.

github and the Moodle book – Step 3

Time to follow up step 2 in connecting github and the Moodle book module.

Current status

  1. Initial Book tool set up and a github repo created.
  2. Identified a PHP client for the github api that looks useful.
  3. Explored how to complete various required tasks with that API from command line php.

To do here

  1. Consider how the status or relationship between github and book are displayed/tracked.
  2. Refine the design of how the book tool will work with the github api.
  3. Some initial implementation.

How might the status be tracked

I haven’t explored the github API enough. What are the ways you might keep track of the relationship between the github and book versions of the file?

  • Create a github repo on the Moodle server and use git. No
    This isn’t a good idea for a few reasons. Can’t see too many Moodle instances wanting random local repos set up for each book. Plus the current model here is that the book is linked to one file in a repo, meaning you might have to create the whole repo locally just to get one file.
  • Compare sha.
    Git creates a checksum. I guess in theory a checksum of the local book could be produced and compared. However, it appears you can’t get a file’s sha from github without also getting the content. Also, calculating the local sha might be heavyweight (if it is possible to do in an equivalent way). Don’t want to be doing this each time an author views a book chapter.
  • Commits?
    Keep track locally of the version/commit that was last imported into the book. Then do a test for later commits. Again, this would have to be done each time someone viewed a book chapter. (A sketch of this idea follows.)
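
To make that last idea concrete, the sketch below classifies the book/github relationship from three timestamps. Everything here is hypothetical – the function name, its inputs, and the status labels anticipate the design described later in this post rather than any existing code:

[code lang="php"]
// Hypothetical sketch: classify the book/github relationship using the
// latest commit date for the linked file (from the API), the commit date
// stored when the file was last imported, and the book's own
// timemodified field.
function booktool_github_status( $latest_commit, $last_import, $time_modified ) {
    $repo_changed = ( $latest_commit > $last_import );
    $book_changed = ( $time_modified > $last_import );

    if ( $repo_changed && $book_changed ) {
        return 'both';        // ahead and out of date - merge needed
    } else if ( $repo_changed ) {
        return 'out of date'; // the github version has been modified
    } else if ( $book_changed ) {
        return 'ahead';       // the book version has been modified
    }
    return 'clean';           // book and github versions are the same
}
[/code]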

Testing commits

This code
[code lang="php"]
$commits = $client->repos->commits->listCommitsOnRepository( $owner, $repo );
[/code]

Returns an object of about 50Kb on a fairly small and inactive repository. But it is returning commits on the whole repository. You can refine the query by path.

Specify the file’s path (information the book tool would have) and it’s down to 16.47Kb.
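
For the record, a hedged sketch of that path-refined call. Whether this version of the client passes the API’s optional sha/path filters through as extra arguments is an assumption to verify; if it doesn’t, the same filter can be applied via the client’s raw request() helper:

[code lang="php"]
// Assumption: the extra arguments mirror the GitHub API's optional
// sha and path filters when listing commits.
$commits = $client->repos->commits->listCommitsOnRepository(
    $owner, $repo, null, 'Who_are_you.html' );

foreach ( $commits as $commit ) {
    var_dump( $commit ); // sha, date, message and author for each commit
}
[/code]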

That information includes the sha(s) for all the commits and also the dates when the commits were done (and by whom). The Book module maintains a “timemodified” field locally that could be compared against those dates.

Point: This information would be useful to display on the status page.

Clarifying the design

Assume that the author has just created a book resource on a Moodle install that has the github book tool installed.

  1. Empty book – no github connection.
    Beyond the normal empty book interface, the author also sees something like a “GitHub (off)” link in the Administration block.
  2. Turn the github link on
    Clicking on the GitHub link opens a new page that will show

    • Basic information about the tool and how it works (and pointers to more detailed information).
    • Space to enter the required details, including
      • the author’s github username

        Will need to explore oAuth

      • name of the repo
      • name of the file to link
        Note: this will need to be able to handle specifying an existing file in the repo (which in a perfect world would have a nice gui interface to do – but time won’t allow that – even the OERPub editor didn’t do that) or choosing to create a new file based on the book.

        There’ll be a different workflow from here depending on which of these is chosen. I’ll focus here on connecting to an existing file.

  3. Github link configured and turned on
    Details about the link have been entered correctly and checked, and now the tool displays details about the status of the file. The details will need to be stored in a book tool database.
    At this stage I don’t think it will have imported anything – it will just display a list of details about the file. The author then has the option to import the file after checking.
    The author should have the following choices

    1. Which file to import
      In most cases there will be multiple versions of files in the repo. The display should show details of all of them and allow the author to choose which to import.
    2. How to import
      There’s the question of how to import. i.e. add the contents of the file to the end of the book, to the start of the book, or to overwrite.
      Of course, this complicates coding. Especially in terms of committing changes back to the repo. Does the whole book (including the stuff that used to be there) get committed, or just the most recent?
      Initially, there may not be any choice how to import. All or nothing.
      What about merging/updating? Purely updating could be done by overwrite, but merging is different. i.e. I’ve made changes in the book and someone else has made changes in github and I’d like to merge those changes into the one file.
      At this stage, I’m leaning towards putting the onus back on github and keeping the book tool dumb. Makes it easier to implement and maintain at the cost of making it harder for the author – they need to know github to handle this case.
  4. File being imported.
    Clicking the “import” button starts the overwriting process (or a choice of import strategy if provided). The following screen will show the outcome of that process. What it shows might include

    • Whether or not the file was in a format that could be imported.
    • If there were any errors in the format. (these first two are related)
    • The number of chapters/sub-chapters etc that were found.

    The book tool table should be updated to store the date associated with the commit that was imported. Perhaps the SHA should also be stored to allow working on old versions of the file.

  5. Return to the normal book view
    Time to check out the imported book. The Book administration block should now display a link to the file on github that was imported and some indication of the relationship of the contents of the book. Options are

    • clean – i.e. the book and github version are the same.
    • ahead – i.e. the book version has been modified.

      This would include a link to push the changes back to the repo

      If the author has chosen to use an old version of the file for the book and has then changed the book, this is going to create issues for github (I believe). The tool may have to detect this and suggest that the author handle this via github. Will need to explore more.

    • out of date – i.e. the github version has been modified.

      This would include a link to pull the changes from the repo and update the book.

    • both ahead and out of date

      The initial design image had this situation having links to both pull and push. Instead, this might need to be a link to “merge”. Where that would be some advice on how to use github to do the merge.

    This would be calculated by using

    • the commit dates for the file from github
      Would need to include the sha of the file to work with a particular version. This will need to be retrieved every time the book is viewed, just in case it’s been changed.
    • The “timemodified” field in mdl_book
      Which I assume is kept up to date.
  6. Initial implementation

    The main aim here is to test some of my assumptions around how the github communication will work. For now, I’m going to ignore broader questions such as the github tools database requirements (I’ll hard code specific information for now) and the actual import/export process.

    The focus will be on

    1. Implementing an initial status page.
    2. Getting the link in the Book administration to change

    Initial status page

    Aim here is that a click on “GitHub” in the Book administration block will take the author to a page that shows the status and details of the file that’s currently linked to the book. Test out the use of the github api and performance etc.

    And with a bit of kludging a connection is made and the content is displayed.

    Time now to look at the commits, start thinking about the structure of the code, and the HTML.

    Starting to put the github API calls into the lib.php file. Abstract that away hopefully.

    A lot of the data via the API is returned via JSON that is converted into hash arrays in PHP. Wondering if there’s some neat way of transforming those arrays into tables in Moodle? In PHP? Mustache templates are coming, but perhaps a bit too new to use?

    Let’s check out the Output API and renderers – but I can’t figure out how to get the render call to work. And it may not be able to, as the Book module itself doesn’t use a renderer.
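
    In the absence of a renderer, Moodle’s html_table/html_writer helpers look like one workable approach. A minimal sketch – the column headings and array keys are invented for illustration:

    [code lang="php"]
    // Turn the hash arrays from the github client into an HTML table
    // using Moodle's html_table/html_writer helpers (no renderer
    // required). The array keys here are invented for illustration.
    $table = new html_table();
    $table->head = array( 'Date', 'Message', 'Committer' );

    foreach ( $commits as $commit ) {
        $table->data[] = array( $commit['date'], $commit['message'],
                                $commit['committer'] );
    }

    echo html_writer::table( $table );
    [/code]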

    Back to more primitive approaches. After a bit of tinkering and exploring with both Moodle development and the github client, we have a version of the github book tool that is talking to github – but only getting some initial information from github, not yet importing anything useful into the book. It looks like this:

    [image: github_v0]

    The “History” section is all information retrieved from github for a specific file in a specific repository. It shows a list of all the commits on that file: when the commit was made, what the commit message was, a link to the HTML page on GitHub that shows more information on the commit, and the details of the person who made the commit.

    The idea is that the github book tool will eventually

    • If the book resource is linked to one of these commits
      • Highlight which of the commits (if any) is the current link to the book resource.
      • Indicate whether the book is up to date, ahead, or behind the version in github.
      • Provide links to the github book tool to take appropriate action (push, pull etc.) based on the status.
    • If the book resource is not yet officially linked to one of these commits
      • Provide a link to the github book tool to make the connection.
    • If the github book tool couldn’t access information from github
      • Attempt to diagnose the problem and display information about why
      • Ask for github credentials if required

        Raising a whole range of issues to consider (how to store passwords, oAuth?).

    Current focus is to have a largely working prototype to show people, get feedback, and test out major sticking points. Thus, next steps to do include

    • oAuth.
      Will be required if we want people to be able to use private repositories and should probably be required for commits back to github
    • Working with any commit.
      Should (yes, I think) and can the github book tool allow the user to work with any commit?
    • Status.
      Have the github book tool be able to discover whether the current book is up to date, ahead, or behind the repo.
    • Responsive administration link
      Have the github book tool link in the Book administration block show the status.
    • Commit.
      Github book tool can commit to the repo. (Initially working with a simple import/export from the Moodle tables)
    • Pull.
      Book tool can pull from the repo. (Initially working with a simple import/export from the Moodle tables)
    • Identify format.
      Need to identify a format for the single file that is stored on github. A single HTML file. Will need to identify chapters and sub-chapters.
    • Implement import/export.
      Allow data to actually flow between github and Moodle.
    • Consider additional modifications.
      A straight dump from Moodle isn’t going to produce a file useful outside of Moodle, e.g. a link from one chapter to another chapter in Moodle isn’t going to work as expected in the single file. Import/export will need to do some form of translation.

      There are other translations that may also be useful, e.g. the single HTML file might use a standard CSS link to allow display out of Moodle: going into Moodle, remove this; going out of Moodle, include it. Other Moodle-specific links might be identified as well, perhaps with a specific CSS class that will make them obvious out of Moodle.

Visualising locations of students etc

I’ve been set a task (asked nicely, really) by my Head of School: is it possible to produce a map that will allow all and sundry to see the geographic spread of our students?

I vaguely remember doing something like this previously with Google maps, but I didn’t think it “visual” enough. @palbion identified a couple of GIS experts in another school who could probably do it. I still don’t know whether I can do it, but I’m using this as an opportunity to test the adage from Connectivism that

The capacity to know is more critical than what is actually known (Siemens, 2008)

Can I use my “capacity to know” to solve this problem?

Making a connection

Just over an hour ago I tweeted out a plea

https://twitter.com/djplaner/status/633502273172144128

Within minutes @katemfd tweeted and introduced me to “the Fresh Prince of Visualisation of Things on Maps”

https://twitter.com/KateMfD/status/633503201526808576

Who replied very quickly with this advice

https://twitter.com/cbhorley/status/633504046351908864

Making many more connections

Now all I have to do is to grok @cartodb and produce a map.

But first, perhaps check pricing and functionality. Looks like the free version will work. The small wrinkle is the absence of “private datasets”. In the last week we’ve had a couple of serious emails make the rounds about student privacy. Will have to keep that in mind.

I should filter the data a bit more, but let’s give it a go.

  1. Drag and drop data onto the page
  2. Nice interface to manipulate the data once uploaded.
  3. First problem is geo-referencing the data.
    Postcodes are in the data, but not sure if this is sufficient. Need to look at the support. Looks like I might need to add country details. That’s it.

First version done. Time to filter. At this stage, I’m not going to show the visualisations given the worry about privacy.

Oh nice, the platform automatically creates different visualisations including a heat map and has wizards to modify further.

That’s produced a reasonable first go. Will need to refine it more, but enough to send off to the HoS.

That took no more than 20 minutes.

So which is more important?

The original quote is

The capacity to know is more critical than what is actually known

The above experience is actually a combination of both. The network I’ve built on Twitter – especially the brilliant @katemfd (performing as what Barabasi would call a network hub) – has provided the “capacity to know”. It helped me access someone for whom @cartodb was “actually known”.

But wouldn’t Google have worked just as well?

A couple of weeks ago I had performed a quick Google search and didn’t find @cartodb. I didn’t “actually know” about it and so I had to spend too much time figuring out how “to know”.

But even making the connection with @cbhorley wasn’t sufficient. In order to use @cartodb effectively I used a range of stuff that I already “know”.

Why should a teacher know how to code?

The idea that everyone should know how to code is increasingly dominant and increasingly questioned. In terms of a required skill that everyone should know, I remain sitting on the fence. But if you are currently teaching in a contemporary university where e-learning (technology enhanced learning, digital learning, online learning, choose your phrase) forms a significant part of what you do, then I think you should seriously consider developing the skill.

If you don’t have the skill, then I don’t know how you are surviving the supreme silliness that is the institutionally selected and mandated e-learning environment. And, at the very least, I’ve been able to convince Kate

https://twitter.com/Kate_Ames/status/601200435957923841

Which means I think it’s a good step that Alex and Lisa have decided to learn a bit of “coding” as the “as learner” task for netgl. I might disagree a little about whether “HTML” counts as coding (you have to at least bring in Javascript to get there, I think), but as a first step it’s okay.

Why should a (e-)teacher know how to code

(Sorry for using “e-teacher”, but I needed a short way to make clear that I don’t think all teachers should learn to code. Just someone who’s having to deal with an “institutionally selected and mandated e-learning environment” and perhaps those using broader tools. I won’t use it again)

What reasons can I give for this? I’ll start with these

  1. Avoid the starvation problem.
  2. Avoid the reusability paradox.
  3. Actually understand that digital technologies were meant to be protean.
  4. Develop what Shulman (1987) saw as the distinguishing knowledge of a teacher.

The starvation problem

Alex’s reasons for learning how to code touch on what I’ve called the starvation problem with e-learning projects. Alex’s description was

our developers work with the code. This is fine, but sometime…..no…often, when clients request changes to modules they have paid tens-of-thousands of dollars for, I feel the developers’ time is wasted fixing simple things when they could be figuring out how to program one of the cool new interactions I’ve suggested. So, if I could learn some basic coding their time could be saved and our processes more efficient.

The developers – the folk who can actually change the technology – are the bottleneck. If anything needs to change you have to involve the developers and typically most institutions have too few developers for the amount of reliance they now place on digital technologies.

In the original starvation problem post I identified five types of e-learning projects and suggested that the reliance on limited developer resources meant that institutions were flat out completing all of the necessary projects of the first two types. Projects of types 3, 4, and 5 are destined to be (almost) always starved of developer resources. i.e. the changes to technology will never happen.

  1. Externally mandated changes.
  2. Changes arising from institutional strategic projects.
  3. Likely (strategic) projects that haven’t registered on some senior manager’s radar.
  4. Projects that only a sub-set of institutional courses (e.g. all of the Bachelor of Education courses) will require. (“How can we be one university if you have different requirements?”)
  5. Changes specific to a course or pedagogical design.

For a teacher, it’s type 4 and 5 projects that are going to be of the immediate interest. But these are also the projects least likely to be resourced. Especially if the institution is on a consistency/”One University” kick where the inherent diversity of learning and teaching is seen as a cost to be minimised, rather than an inherent characteristic.

Avoid the reusability paradox

[image: Choice1]

The question of diversity and its importance to effective learning (and teaching) brings in the notion of the reusability paradox. The Reusability Paradox arises from the idea that the pedagogical value of a learning object (something to learn with/from) arises from how well it has been contextualised. i.e. how well it has been customised for the unique requirements of the individual learner. The problem is that there is an inverse relationship between the pedagogical value of a learning object and the potential for it to be reused in other contexts.

The further problem is that most of the e-learning tools (e.g. an LMS) are designed to maximise reuse. They are designed to be used in many different contexts (the image to the right).

The problem is that in order to be able to maximise the pedagogical value of this learning object I need to be able to change it. I need to be able to modify it so that it suits the specifics of my learner(s). But as we’ve established above, the only way most existing tools can be changed is by involving the developers. i.e. the scarce resource.

[image: Choice2]

Unless of course you can code. If you can code, then you can: write a module for Moodle that will allow students to use blogs outside of Moodle for learning; write a script that will allow you to contact students who haven’t submitted an assignment; develop a collection of tools to better understand who and how learners are using your course site; or mutate that collection of tools into something that will allow you to have some idea what each of the 300+ students in your course are doing.

Understand the protean nature of digital technologies

And once you can code, you can start to understand that digital technologies aren’t meant to be a Procrustean tool “designed to produce conformity by violent or ruthless methods”, but instead to appreciate the important points made by people such as Doug Engelbart and Alan Kay. For example, Kay (1984) described software as the “most protean of media” and suggested that it was obvious that

Users must be able to tailor a system to their wants (p. 57)

The knowledge base for teaching

Shulman (1987) suggested that

the key to distinguishing the knowledge base of teaching lies at the intersection of content and pedagogy, in the capacity of a teacher to transform the content knowledge he or she possesses into forms that are pedagogically powerful and yet adaptive to the variations in ability and background presented by the students (p. 15)

If the majority of the teaching you do is mediated by digital technologies, then doesn’t the ability to transform the digital technologies count as part of the “knowledge base of teaching”? Isn’t coding an important part of the ability to perform those transformations? Shouldn’t every teacher have some ability to code?

I’m not ready to answer those questions yet, still some more work to do. But I have to admit that it’s easier (and I believe more effective) for me to teach with the ability to code than it would be without that ability.

References

Kay, A. (1984). Computer Software. Scientific American, 251(3), 53–59.

Shulman, L. (1987). Knowledge and teaching: Foundations of the new reform. Harvard Educational Review, 57(1), 1–21. Retrieved from http://her.hepg.org/index/J463W79R56455411.pdf

Understanding learning as network formation

This is a follow on from yesterday’s post weaving in a few posts from netgl participants.

Learning as a (common) journey

Rebecca uses emojis to illustrate a fairly typical journey through netgl (and a few of the other courses I teach), as is confirmed by the comment from another Rebecca (there are 8 participants in the course and 3 of them are Rebeccas).

One of the turning points for Rebecca (who wrote the post) was a fairly old-fashioned synchronous session held in a virtual space

But then I attended the online chat session, clarified where I was supposed to be heading

Rebecca links this to

when things are deemed too difficult, people tend to revert to coping strategies. In this case, it was good ol’ face to face talking (OK…admittedly online and not in the ‘true’ sense…) to achieve direction out of the online maze.

Aside: I’m wondering if the journey metaphor is just a bit too sequential. Perhaps it’s illustrative of our familiarity and comfort with the sequential, rather than the complexity and inter-connectedness that arise from a network view.

The problem of being disconnected

I think there’s some connection between Rebecca’s struggles with something new and the experience of Lisa’s 11 year-old during a blackout

I found myself with a crazy bored eleven-year-old on my hands who was pacing the house saying ‘when’s the power coming back on, when’s the power coming back on’. His level of anxiety at being disconnected was incredibly sobering.

I also wonder whether the relief Rebecca got from “good ol’ face to face talking” is related to Lisa’s experience of the blackout

It was lovely, not just to switch off from the noise and chaos, but from the words as well – as you say, time for the diffuse mode to kick in and allow moments of quiet reflection

Learning as network formation

As mentioned in yesterday’s post, at some level networked learning is about the idea that what we know is actually (or at least fruitfully represented as) a network. Yesterday’s post pointed to brain research that is based on the brain being a network. It also drew on Downes’ writing on connectivism which has the view

learning is the formation of connections in a network

From this perspective, you might suggest that Rebecca and Lisa’s 11 year-old have already formed the networks (learned how) to cope with certain situations, like a face-to-face session or spending a Saturday with electricity. But they haven’t yet formed networks to deal with the new and unexpected situation, meaning that they have to start forming that network. Starting with their existing networks, they need to start making new connections to different ideas and practices; figure out if any existing connections may need to be questioned as not necessarily the only option (e.g. spending all day on the computer, learning via traditional modes); and test out some of the nascent connections and see if they work as expected.

This type of network formation is hard, especially when the number and diversity of the new connections you have to make increase. Learning how to learn online in an xMOOC – which consists of lots of small video-taped lectures, with a set, sequential syllabus that is stored in one place – is a lot easier than learning how to learn online in a cMOOC that isn’t taking place in one place and expects you to figure out where you want to go.

How do I know? How do I keep up?

In the midst of getting their head around the different approach to learning taken in netgl quite a few folk have raised the question of “how do I keep up”? I saw it first in another of Rebecca’s posts in the form of this question

how, once I graduate from being a formal student and progress into the world of teaching (in whichever form that may take), on Earth do I keep up with all the new programs, networked learning, social media hookups that seem to pop up hourly that I need to contend with?

Charm has shared via the netgl Diigo group a link to and some comments on Kop and Hill (2008), which includes this on connectivism

Connectivism stresses that two important skills that contribute to learning are the ability to seek out current information, and the ability to filter secondary and extraneous information. Simply put, “The capacity to know is more critical than what is actually known” (Siemens, 2008, para. 6).

Rebecca, I think this quote gives you a “network” answer to your question. Your ability to “keep up” (to know, to learn) is what is important, not what you know.

I should also mention this next point from Kop and Hill (2008) which I think is often overlooked

The learning process is cyclical, in that learners will connect to a network to share and find new information, will modify their beliefs on the basis of new learning, and will then connect to a network to share these realizations and find new information once more. Learning is considered a “. . . knowledge creation process . . . not only knowledge consumption.”

“To know” versus “actually known”

In responding to Rebecca’s post, Alex asks

So, given the population is ceasing its reliance on fundamental knowledge and increasing its dependence on immediate information, do you think field-specific academics will remain a valuable entity, as they hold deep information on specific areas?

Touching on the debate between those who believe “to know” is more important and those who believe that “actually known” is still important. A debate that is on-going (for some) and for which I have to admit to not having any links. My inability to provide links to the “actually known” folk is perhaps indicative of my own networks and prejudices.

Implications for teachers?

It is a debate that raises questions about the role of the teacher. Lisa’s search for a metaphor for the teacher role had her pondering: sage, guide, or grandmother. Grandmother being a link to the work of Sugata Mitra (in a comment I pointed Lisa to a critique of Mitra’s work).

In terms of “guide on the side”, Lisa writes

the role of the guide on the side becomes less about “being the facilitator who orchestrates the context”, as Alison King described in the nineties, and more about helping students to develop the tools and skills needed to hear and decipher a coherent message from the cacophony of information available to them.

Personally, I have an affinity for McWilliam’s (2009) concept of the “meddler in the middle” which points toward a more

interventionist pedagogy in which teachers are mutually involved with students in assembling and/or dis-assembling knowledge and cultural products

Which could perhaps be re-phrased as “mutually involved with students in the formation of their networks”.

I’ll end with Downes’ slogan that describes what he sees as the teacher and learner roles which seems to align somewhat with that idea

To ‘teach’ is to model and demonstrate. To ‘learn’ is to practice and reflect.

Testing the Lucimoo epub export book tool

There’s movement afoot. The Lucimoo epub export tool for the Moodle book module is going through the process of being tested (and perhaps installed) on my institution’s main Moodle instance. What follows is a bit of testing of that tool in the institution’s test environment.

Verdict: all works, a few changes to practice to leverage it properly.

Import a few books

First step is to import a few books into the bare course site within the test environment. Just a few random books from my main course. Something that’s much easier now that @jonof helped identify some lost knowledge (and my oversight/mistake).

Of course it is never perfect. The default setting on the test environment is to use the gui editor. Which removes links to CSS files. Which is a real pain.

Doing an export

Once in the book select the administration/settings block and hey presto, there’s the option to “Download as ebook”

[image: export]

Select that option and I get the option to download the ePub file or view it in iBooks.

As reported earlier the ePub contains a few errors because apparently the original HTML content in my Book resource doesn’t always meet ePub’s stricter requirements. The bugs I had to fix included

  • Missing ending tag for an image (produced by ImageCodr)
    Of course it appears that the over-reaching default HTML editor in Moodle is automatically removing the /> I’m putting at the end of the <img> tag. I’ve had to change my preference to the plain text area to get that fixed.

    God I hate tools that assume they know better than I what I want to do and won't let me override their assumptions.

  • It appears that it doesn’t like the nbsp entity either.
    There appears to be some blather about this online, but I don’t have the time to fix it. For now I’ll remove the nbsp entities.
  • “Opening and ending tag mismatch: br line 0 and div”
    Replace <br> with <br /> (a sketch of automating these clean-ups follows this list).

    And all this so far is largely in code auto-generated by ImageCodr

  • “Opening and ending tag mismatch”
    An issue with the relationship between P and BLOCKQUOTE tags about which I am somewhat lazy. Yay, that’s the first page.
  • The spacing around the image isn’t great.
  • “Specification mandate value for attribute allowfullscreen”
    A YouTube embed that doesn’t meet expectations.
  • The videos don’t show.
    There is a space for the embedded YouTube video, but it is empty. Will need to figure out a way to fix this. Especially in this test book, which has a lot of videos in it
  • Missing styling.
    In this book I use a bit of CSS to style elements such as activities. The ePub version is currently not showing that styling, though the “Print this book” version does. Ahh, that’s caused by the magical CSS-chomping GUI editor. Fixed.
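
Since these are mechanical fixes, they lend themselves to automation. A minimal sketch (not part of the Lucimoo tool, just an illustration of the kind of pre-export clean-up described above):

[code lang="php"]
// Illustrative clean-up of Book HTML before ePub export: self-close
// bare <br> and <img> tags and drop the nbsp entities that ePub's
// stricter XHTML parsing rejects.
$content = '<p>Example&nbsp;chapter<br><img src="pic.png"></p>';

$content = preg_replace( '/<br\s*>/i', '<br />', $content );
$content = preg_replace( '/<img([^>]*[^\/])>/i', '<img$1 />', $content );
$content = str_replace( '&nbsp;', ' ', $content );

print $content; // <p>Example chapter<br /><img src="pic.png" /></p>
[/code]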

You can view the final ePub file and also a PDF produced by “printing the book”.

The layout of the PDF isn’t great. It does at least show some visual evidence of the videos. Though it’s not very useful.

Test the Assessment book

Assessment is of course what is uppermost in the minds of students, so I should test that book. I don’t have that in my nice offline format, so will have to explore the Moodle backup and restore process.

Again, a slightly different collection of HTML “strictness” problems. Given the size of the assessment book, there are surprisingly few of them.

The major problem here is that my “macro” approach that relies on jQuery to update due dates and related information obviously won’t work with ePub. Wonder if the filter approach will work with the ePub export tool?

View the results on a mobile device

One of the main benefits of the ePub format is that it is supposed to play nicely with mobile devices. Hence testing the files on a mobile device (my phone) would be sensible. Observation and problems include

  • The Flickr image on the front page isn’t showing up. There is a link I can click on, but not embedded in the book. Wonder if that’s a config option in ibooks?
  • The CSS styling on tables for Assessment doesn’t appear to work.
    It does in iBooks on the laptop, but not on the phone. In much the same way that the images work on laptop, but not phone.
  • Neither does the table of contents. Actually, that appears to be an issue with internal links being added into the ToC and some of these being incorrect.

Problems to be explored at a later date, not show stoppers, just part of learning the details of a new ecosystem.

There's more to it than the Internet and social software

The following is a bit of reflection and curation of various posts from participants in the netgl course. There’ll be a few of these coming. The aim for this post is to suggest that there might be more to the “networked” part of Networked and Global Learning than just the Internet and social media. This is an important point to make because the interventions designed by the folk from last year’s offering of the course were a little too limited in their focus on the Internet and various forms of social media.

At some level, the argument here is similar to the one from this post titled “Why everything is a network” i.e. not that everything is a network, or that a network is the only metaphor by which to understand a whole range of situations. It is to suggest, however, that a network is a useful model/metaphor through which to understand and guide interventions in a range of situations.

And this is a view that can trace its origins beyond just learning, teaching and education. Barabasi (2014) writes

Networks are present everywhere. All we need is an eye for them.

and then goes on to show how a network perspective provides ways to understand topics as diverse as: the success of Paul in spreading Christianity; how to cure a disease; and the rise of terrorism. Leading to the

important message of this book: The construction and structure of graphs or networks is the key to understanding the complex world around us. Small changes in the topology, affecting only a few of the nodes or links, can open up hidden doors, allowing new possibilities to emerge (p. 12)

Changes in purchasing books

For example, in thinking about the future of Tertiary education Lisa talks about changes in the publishing industry, including her own behaviour around purchasing books

as this industry seems to be floundering and I only have to look at my own behaviour as a consumer to see why. As a book consumer, I can say I do still read, but I get my books from the places that are cheapest and easiest for me – Amazon and Audible (owned by Amazon). Why would I spend $45.00 on a hard-copy book from a shop when I can listen to it on the way to work by paying an audible credit that costs less than $13.00? Why would I order a book from a retailer that may take months to arrive that I can download to my Kindle app instantly – and cheaply?

Changes that I observe in my own practice. But also more than that. The Barabasi quote from above is from the Kindle version of the book I purchased. I read that book mostly while traveling to and from Wagga Wagga using my phone. Highlighting bits that were relevant to me and making annotations as I went. In writing this post, I’ve started up the Kindle app on my Mac, synced with Amazon, and was able to view all my annotations and highlights. Not only that, I was able to also see the popular highlights from other people.

The experiences of both Lisa and me illustrate how digital books are making it easier to create links or connections between nodes. Both Lisa and I find it much easier to “connect” (i.e. buy) a book via the combination of Amazon and the Kindle apps. Not only in terms of price, but also in terms of speed. Having the content in a digital form that can be manipulated also helps make links to specific parts of the book.

Barabasi (2014) writes

Nodes always compete for connections because links represent survival in an interconnected world.

Amazon is currently winning a large part of the publishing “war” because it is making the ability to “link” to a book or other publication much easier. The more links it is able to create, the more likely it will be able to survive.

What if there isn’t a network?

Angela ponders “The challenge of networked learning when there is no Network…” as she enjoys a weekend away from Internet connectivity and apparently no ability to engage in netgl. Of course, Angela has forgotten that she had taken along one of the most complex networks we currently know, her brain.

The Connected Brains website makes prominent use of this quote from Tim Berners-Lee

There are billions of neurons in our brains, but what are neurons? Just cells. The brain has no knowledge until connections are made between neurons. All that we know, all that we are, comes from the way our neurons are connected.

The website then goes on to trace some of the history and research going on that seeks to understand the brain as a complex network.

In a post titled “Connectivism as a learning theory”, Stephen Downes makes the connection between the view of the brain as a network, a weakness in other theories of learning, and how connectivism addresses this by viewing learning as “the formation of connections in a network”.

In closing

…much more to come, it’s been a fruitful week for netgl blogging.

But the point here is that the “network” part of netgl is much more than just social software and the Internet. These are perhaps the most visible parts of netgl to the participants, but they aren’t the only examples of, nor are they required for, netgl.

github and the Moodle book – Step 2

The continuing story of linking github and the Moodle book module. Following on from step 1, the main aim here is to grok the PHP client for the github api I’ve currently chosen.

Some additional work to be done includes

  1. Consider use of branches etc
  2. Ponder whether to work only with releases – or work more openly, as listed below.
    Releases are more directly supported by the PHP client, but working directly with content may be a little more flexible. But releases are perhaps more in line with expectations? Perhaps this is a question to answer by looking at some of the ways other similar projects are working.

    At this stage, I sort of see using the book to modify the repo as something that is happening prior to a release.

  3. Looks like storing the sha of the file in a local Moodle database will be necessary to help with checking statuses etc. (a sketch of what that record might look like follows).
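
A rough sketch of what that local record might look like. The table name and fields are invented – the actual schema is still to be designed – though $DB->insert_record is the standard Moodle data API call:

[code lang="php"]
// Hypothetical record for a 'booktool_github' table linking a book to
// one file in a repo (table and field names invented for illustration).
$record = new stdClass();
$record->bookid       = $book->id;
$record->repo         = 'djplaner/edc3100';
$record->path         = 'Who_are_you.html';
$record->sha          = $sha;    // sha of the version last imported
$record->timeimported = time();

$DB->insert_record( 'booktool_github', $record );
[/code]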

How to (if)?

I’ve got it installed and working from command line php scripts. Need to figure out how to use it

  1. Does the file exist in the repo?
    Getting the content should return a 200 status code and “type: file” if it is a file, but it will also return the content of the file. (A sketch of this check follows the list.)
  2. Create a new file
    API: PUT /repos/:owner/:repo/contents/:path
    Initial implementation in PHP working.
  3. (fetch) Get the content for the file.
    API – GET /repos/:owner/:repo/contents/:path
    Initial implementation in PHP working.
  4. (push) Update the file with new content.
    API: PUT /repos/:owner/:repo/contents/:path
    Initial implementation in PHP working
  5. What is the status of the file in the repo?
    What do I actually mean by status? The full history? Still need to find out what, if anything, in github/git/the API provides this.
  6. What is the relationship between the content/status of the file in the repo and the content in the book.
    Looks like it’s available via the same call.
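
To make item 1 concrete before digging into the client, here’s a hedged sketch of the existence check. It leans on the request() helper explored below, and assumes the client throws an exception when the expected status code doesn’t come back:

[code lang="php"]
// Sketch of "does the file exist in the repo?" - assumes the client
// throws when the response isn't the expected 200 (e.g. a 404).
try {
    $response = $client->request( "/repos/$owner/$repo/contents/$path",
                                  'GET', array(), 200, 'GitHubReadmeContent' );
    $exists = ( $response->getType() === 'file' );
} catch ( Exception $e ) {
    $exists = false;
}
[/code]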

How does it work

Would help if I understood the model that it uses. Some of the example code includes something like:

[code lang="php"]
$commits = $client->repos->commits->listCommitsOnRepository($owner, $repo);
[/code]

The question is whether or not there is any pattern in common between this and the github API. I assume there is and grokking that pattern should lead to understanding how to use the API.

The assumption is that the client provides a method to access the API and hence the pattern of methods etc should match.

In the GitHub api is there an equivalent to listCommitsOnRepository? And is it found in something within a hierarchy of repos/commits?

There does appear to be a match. The heading “List commits on a repository” seems to match, and it’s found within repos/commits.

Can I apply this to get the contents of a file?

The GitHub API defines it here

  1. Title – Get contents meaning method getContents??
  2. Structure is Repositories/Contents
  3. parameters – owner, repo, path

Leading to something like
[code lang="php"]$commits = $client->repos->contents->getContents($owner, $repo, $path);[/code]

Let’s see if I can write code to retrieve the contents of this file from GitHub.

Mmm, getting undefined method for getContent(s).

Let’s dig into the code. GitHubClient class creates the various objects.

What does GitHubRepos contain? There is a link “contents” (GitHubReposContents) as expected. But it only apparently gets the readme!!!

Which does work. But that begs the question: where’s the rest?

One fallback would be to call the API directly – getReadMe is implemented via
[code lang="php"]return $this->client->request("/repos/$owner/$repo/readme", 'GET', $data, 200, 'GitHubReadmeContent');[/code]

That appears to work. Now the question is whether I can get the content. Yep, there is a method that will do that. But it’s still encoded in base64 – PHP’s base64_decode() will fix that. The rough code that’s working follows.

Code for get a file

The following only works if the repository is public. The major kludge here is the use of GitHubReadmeContent as the last parameter in the request. This appears to define the type of object returned by request, and it works (for now) because the readme is just another file. Hence the various members etc. are directly applicable.

A final version should use getType to check that the content returned is a file and not a symlink or directory (see the sketch after the code).

[code lang="php"]
// Fetch the content of a single file from a public GitHub repository.
$owner = 'djplaner';
$repo = 'edc3100';
$path = 'Who_are_you.html';

$client = new GitHubClient();

// Kludge: the last parameter defines the type of object request() returns.
$data = array();
$response = $client->request( "/repos/$owner/$repo/contents/$path", 'GET', $data, 200, 'GitHubReadmeContent' );

// The contents API returns the file content base64 encoded.
print "content is " . base64_decode( $response->getContent() );
[/code]
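
As flagged above, a hedged sketch of that getType guard – assuming the returned content object exposes getType() in the same way it exposes getContent():

[code lang="php"]
// Hypothetical guard: only decode if the contents API returned a file,
// not a directory listing or a symlink.
$response = $client->request( "/repos/$owner/$repo/contents/$path", 'GET', array(), 200, 'GitHubReadmeContent' );
if ( $response->getType() === 'file' ) {
    print "content is " . base64_decode( $response->getContent() );
} else {
    print "not a file, got: " . $response->getType();
}
[/code]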

Creating a new file?

At this stage, I’m thinking I’ll stick with the approach of using request directly. Mainly because the GitHub API for this indicates it’s part of Contents, and it already appears that the client’s contents object doesn’t include support for this method. Yep, not there.

PUT /repos/:owner/:repo/contents/:path will do it. But it also lists other required parameters: message (the commit message) and content. Plus committer and branch as optional. And this is likely going to require credentials.

Yep, 404 error. Credentials required. Put in what I think is the required code and get a 422, which means an invalid field. The API documentation says content is required. Best provide some.

And that appears to work. At least the file was created on GitHub. But it got a 201 back, rather than a 200 – which is actually what the documentation says should happen. Another quick test.

That’s better and the 2nd file is created. This code is listed below.

An example of the PHP client appears to be using releases as a way to upload (or create) a new file.

Code to create a file

Much the same limitation as above – i.e. is GitHubReadmeContent really the best value for the last parameter?

Will also need to look at handling exceptions (e.g. when the response code is different).

[code lang="php"]
// Create a new file via PUT /repos/:owner/:repo/contents/:path
$owner = 'djplaner';
$repo = 'edc3100';
$path = 'A_2nd_new_file.html';
$username = 'djplaner';
$password = 'some password';

$client = new GitHubClient();
$client->setDebug( true ); # this is a nice little view
$client->setCredentials( $username, $password );

$content = "This will be the content in the second file. The 1st time";

// The contents API requires a commit message and base64-encoded content.
$data = array();
$data['message'] = 'First time creating a file';
$data['content'] = base64_encode( $content );

// A successful create returns 201, not 200.
$response = $client->request( "/repos/$owner/$repo/contents/$path", 'PUT', $data, 201, 'GitHubReadmeContent' );
[/code]

Update a file

Going to stick with the same method. In essence, this should be an almost direct copy of the code above. Ahh, one difference. There is an additional required parameter – sha – “The blob SHA of the file being replaced”. This is something that needs to be retrieved from GitHub first – it’s returned when getting the content. Wonder if there’s a get status?

That appears to be working.

Code to update a file

[code lang="php"]
// Update an existing file: the same PUT request as creating a file, but the
// file's current blob SHA is an additional required parameter.
$owner = 'djplaner';
$repo = 'edc3100';
$path = 'A_2nd_new_file.html'; # an existing file
$username = 'djplaner';
$password = 'some password';

$content = "This will be the content in the second file. The 4th time";

$client = new GitHubClient();
#$client->setDebug( true );
$client->setCredentials( $username, $password );

$sha = getSha( $client, $owner, $repo, $path ); # get the content to get the sha

$data = array();
$data['message'] = 'First time creating a file - Update 4';
$data['content'] = base64_encode( $content );
$data['sha'] = $sha;
$data['committer'] = array( 'name' => 'David Jones',
                            'email' => 'some email' );

// A successful update returns 200.
$response = $client->request( "/repos/$owner/$repo/contents/$path", 'PUT', $data, 200, 'GitHubReadmeContent' );

print_r( $response );
[/code]
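
The getSha() helper used above isn’t part of the client – a minimal sketch, assuming the content object returned by the same GET request exposes the sha via a getSha() accessor (an assumption, matching the getContent()/getType() style):

[code lang="php"]
// Hypothetical helper: a file's current blob SHA comes back as part of
// getting its content via the contents API.
function getSha( $client, $owner, $repo, $path ) {
    $response = $client->request( "/repos/$owner/$repo/contents/$path", 'GET', array(), 200, 'GitHubReadmeContent' );
    return $response->getSha();   // assumed accessor on the content object
}
[/code]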

Statuses

A separate part of the API seems to deal with these. It works on the sha.

Seems the PHP client has a method under repos to access this: listStatusesForSpecificRef.

Mmmm, this doesn’t look like it will do what I want at all. More searching required.

Bringing github and the Moodle book module together – step 1

The following is the first step in actually implementing some of the ideas outlined in an earlier post about bringing github and the Moodle Book module together. The major steps covered here are

  1. Explore the requirements of a book tool.
  2. Name and set up an initial book tool.
  3. Figure out how to integrate github.

A book tool

The Moodle book module is part of core Moodle. Changing core Moodle is (understandably) hard. Recently, I discovered that there is a notion of a Book tool. This appears to be a simple “plugin” architecture for the Book module. People can add functionality to the Book module without it being part of core. The current plan is that the github work here will be implemented as a Book tool.

What does that mean? My very quick search doesn’t reveal any specific information. The book tool page within the list of plugin types in the Developer documentation is missing. Suggesting that perhaps what follows should be added to that page.

The plugin types page describes book tools as

Small information-displays or tools that can be moved around pages

Which is perhaps not the best description given the nature of the available Book tools.

The tool directory

The book tools appear to reside in ~/mod/book/tool. Each tool has its own directory, apparently with all the fairly common basic requirements in terms of files etc.

The Book module’s lib.php calls get_plugin_list('booktool') in various places

  • book_get_view_actions
  • book_get_post_actions
  • book_extend_settings_navigation

The first two look for matching functions (e.g. book_plugin_get_post_actions) in the book tool’s lib.php, which get called and whose results are used to modify operations.

The settings navigation is where the changes to the settings/administration block get made and from there that’s how the author gets access to the booktool’s functionality.
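
To make that concrete, a hedged sketch of what the github tool’s hook might look like – the booktool_<plugin>_extend_settings_navigation naming follows the pattern above, but the URL and strings are placeholders for a plugin that doesn’t exist yet:

[code lang="php"]
// Hypothetical mod/book/tool/github/lib.php: add a GitHub link to the
// book's settings/administration block.
function booktool_github_extend_settings_navigation(settings_navigation $settings,
                                                    navigation_node $node) {
    global $PAGE;

    // Placeholder page that would show the repo/book status.
    $url = new moodle_url('/mod/book/tool/github/index.php',
                          array('id' => $PAGE->cm->id));

    $node->add(get_string('github', 'booktool_github'), $url,
               navigation_node::TYPE_SETTING);
}
[/code]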

Naming and getting it started

The plan seems to be to

  1. Create a new github repository for the new book tool
  2. Copy and edit an existing book tool to get started.
  3. Figure out how to slowly add github functionality.

Creating the booktool github repository

The repository will need to be called moodle-booktool_pluginname. What should the plugin name be?

I’ll start with github. Existing tools tend to include a verb e.g. print, exportepub, importepub, exportimscp. So this may be breaking a trend, but that can always be fixed later.

And then there was a repository.

Clone a local copy.

Copy the contents from another book tool and start editing

And take a note of work to do on the issues section of the github repository.

Updated the icon. Wonder if that will work as is?

Log in to the local Moodle. It has picked up the new module and is asking to install. That appeared to work. Now what happens when I view a book resource? Woohoo that works.

Doesn’t do anything useful beyond displaying the availability of GitHub (with the nice icon).

Early success

Push that code back to the repository.

How to integrate github

Time to actually see if it can start talking to GitHub and how that might be achieved.

Initial plan for this is

  1. Hard code details of github repository and credentials for a single Book module.
  2. Implement the code necessary to update the link in the settings block based on whether the book is up-to-date with the repository.
  3. Implement index.php function to display various status information about current repository and book.
  4. Implement the fetch and push functions.

    From here on a lot more thought will need to be given to the workflow.

  5. Implement the interface to configure the repository/credentials

Which all beg the question.

How to talk to the GitHub API

The assumption underpinning all of this is that the tool will use the GitHub API to access its services. Moodle is written in PHP, so I’m looking for a PHP-based method for talking to the GitHub API.

There’s no clear winner, so time to do a comparison

  • Scion: Wrapper – initial impressions good. Does use cURL. But requires other “scion” based code
  • KnpLabs API – requires another library for the HTTP requests. Not a plus.
  • tan-tan-kanarek version – looks ok. No mention of other requirements.

Let’s try the latter. Installation done and it’s all working. Now only need to grok the API and how to use it from PHP.
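
As a quick smoke test, something along the lines of the client’s own examples (a sketch – the require path depends on where the client was unpacked, and getSha() on the returned commit objects is an assumption):

[code lang="php"]
// Hypothetical smoke test: list the commits on a repository.
require_once 'client/GitHubClient.php';   // adjust to wherever the client lives

$client = new GitHubClient();
$commits = $client->repos->commits->listCommitsOnRepository('djplaner', 'edc3100');

foreach ( $commits as $commit ) {
    print $commit->getSha() . "\n";   // assumed accessor on the commit objects
}
[/code]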

The focus here is on an individual file: the book will be connected to a single file.

Most of these requests seem linked to the Contents part of the API – part of Repositories.

Actions required

  1. Does the file exist in the repo?
    Getting the content should return a 200 status code and “type: file” if it is a file, but it will also return the content of the file.
  2. Create a new file
    API: PUT /repos/:owner/:repo/contents/:path
  3. (fetch) Get the content for the file.
    API – GET /repos/:owner/:repo/contents/:path
  4. (push) Update the file with new content.
    API: PUT /repos/:owner/:repo/contents/:path
  5. What is the status of the file in the repo?
  6. What is the relationship between the content/status of the file in the repo and the content in the book.

Running out of time. Will have to come back to this another day for Step 2.

Homogeneity: the inevitable result of a strategic approach?

Is homogeneity an inevitable end result of a strategic approach to deciding what gets done?

The following presents some evidence to suggest a potentially strong correlation.

What is the strategic approach?

In Jones and Clark (2014) we suggested that contemporary universities (along with most other organisations) increasingly use a strategic approach to decide what work gets done. We described strategy as

following a global plan intended to achieve a pre-identified desired future state.

It’s where a bunch of really smart people get together. They analyse the current situation, identify the requirements and the challenges, and then decide that the entire institution should do X. Where X might include: a particular strategic vision; a single set of graduate attributes for the entire organisation; a particular approach to branding and marketing; the selection of a particular information system etc.

Once the strategic decision is made, the entire organisation becomes focused on moving toward the various institutionally approved strategic goals. Doing anything else is seen as inefficient, inappropriate, and is to be rooted out.

The underlying aim of the strategic approach is differentiation. To set the institution apart from the other institutions. To give various stakeholders/customers/clients a reason to go to this institution first.

How does that work out for them?

It’s Hard to Differentiate One Higher-Ed Brand From Another

This page reports on a study of 50 US-based higher education institutions and includes quotes such as (emphasis added)

found that the mission, purpose or vision statements of more than 50 higher education institutions share striking similarities, regardless of institution size, public or private status, land-grant status or religious affiliation, or for-profit or not-for-profit status….
statements may accurately represent the broad views and aspirations of education leaders and their institutions. And they probably differentiate the institutions from financial service and retail companies

Interestingly, the suggested solution to this problem is that forging “a strong organizational identity only starts with establishing and committing to a clear and differentiated purpose, brand and culture”. i.e. yet another strategic approach.

The sameness of graduate attributes

For a few years now there’s been a fetish that has required each Australian university to develop its own set of graduate attributes. These are meant to indicate what are the unique attributes of a graduate of that institution. To demonstrate the unique value that the educational experiences offered by the institution add to the development of their customer student. Surely this must be the most obvious place of differentiation and distinction. Something that truly captures what is unique about each university.

Oliver (2011) scans the literature and practice around graduate attributes and identifies that

Universities’ most common generic attributes, apart from knowledge outcomes, appear to cluster in seven broad areas:

  1. Written and oral communication
  2. Critical and analytical (and sometimes creative and reflective) thinking
  3. Problem-solving (including generating ideas and innovative solutions)
  4. Information literacy, often associated with technology
  5. Learning and working independently
  6. Learning and working collaboratively
  7. Ethical and inclusive engagement with communities, cultures and nations.

(p. 2)

Strategic Information Systems

And the other fad over recent years has been the adoption of Strategic Information Systems such as ERPs and LMSs. If the institution adopts the same system and everyone works effectively together to leverage its capabilities, we will be able to gain a competitive advantage over the opposition. Well, no.

Over 20 years ago, Ciborra (1992) argued

Tapping standard models of strategy analysis and data sources for industry analysis will lead to similar systems and enhance, rather than decrease, imitation (p. 297)

Which is why e-learning within universities is increasingly infected by LMS-based courses using institutional standard course site designs, a digital repository, a lecture capture system, an e-portfolio, and a couple of other standard systems offering the same broken experience. Whether your LMS is open source or not typically doesn’t make a difference.

The solution

Ciborra (1992) suggested

How then should “true” SISs be developed? In order to avoid easy imitation, they should emerge from the grass roots of the organization, out of end-user hacking, computing, and tinkering. In this way the innovative SIS is going to be highly entrenched with the specific culture of the firm. Top management needs to appreciate local fluctuations in practices as a repository of unique innovations and commit adequate resources to their development, even if they fly in the face of traditional approaches. Rather than looking for standard models in the business strategy literature, SISs should be looked for in the theory and practice of organizational learning and innovation, both incremental and radical. (p. 297)

Or as we argued in Jones and Clark (2014)

Perhaps universities need to break a little BAD?

Instead, universities, like most organisations, are attempting to solve the problems of the strategic approach by doing the strategic approach again (but we’ll do it better this time, promise).

[Image: “Insanity” by Albert Einstein, by Mimsen on Flickr, CC BY-SA 2.0]

References

Ciborra, C. (1992). From thinking to tinkering: The grassroots of strategic information systems. The Information Society, 8(4), 297–309.

How might github and the Moodle book module work together

The Moodle open book project is attempting (not surprisingly) to modify the Moodle book module to enable it to produce open resources (educational or otherwise). The main focus is on making the content of the books open in a way that enables modification and reuse. The plan is to do this by enabling a Moodle book resource to be linked to github.

The following is an exploration of and an attempt to describe how this might work at a fairly high level.

What do you think? Might this work? Are there better options?

The next step will be to try some realistic technical explorations to see if this can be implemented.

Why?

The idea is that once in github, different people (or courses) can use github to modify and collaborate around the same document. e.g. a book I created for my course might be useful for another course looking at ICT and Pedagogy. Rather than play around with Moodle backups, I could create a github repository and push the content of the book to that repository. The author of the other course can then fork that repo and import the content of the book from their repo into Moodle.

Any changes that either of us make to the book are stored in github. We can then use github’s features to share and manage changes.

Beyond this, I could make all of the books in my course available via github. Who knows, some of my students might find them useful or may wish to make changes that might enhance the work.

Implementation

At this stage, the idea is to implement the github ‘connection’ as a book tool. This means it can be installed by each Moodle site that wants it. When installed there will be a new link in the Book administration block through which you access the github functionality.

The intent is that an individual Book resource will be linked to a single file hosted in a github repository. The file would be a single HTML file (at least initially) with the different chapters and sub-chapters indicated in some yet to be defined way. The final format will aim to allow the HTML file to be edited by as many different editors as possible, but still allow simple importation into the Book module.

As a future feature, it might be possible and useful to allow the import/export of that single file between github and the Book to be done using the user’s choice of another import/export tool. i.e. if I want the file in github to be an epub file, I would configure the github tool to use the Lucimoo EPUB export/import tools to produce the file that is sent to/from github.

What it might look like

Initially, it might look like the following. The (off) is meant to be an indication that the connection to github is currently off. i.e. not being used.

[Image: 001_off]

Clicking on the GitHub link would open up a form that would be used to configure the necessary information including:

  1. github repository – that contains the file.
  2. file – the actual file being linked to.
  3. github credentials – of the author (with the option that this might be left empty for repositories configured to allow that).
  4. behaviour spec – i.e. how to import the file (replace existing content, append?), how to handle changes made in the book

    Initially, this would probably be left to some default combination. It would also be dependent on the settings of the repository and the permissions of the github credentials.

    More work required here.

Once this is configured, the administration link would change to indicate that a connection had been made. It would now have a link to the file on github and also some indication of the relationship between the book and the github file. In the following image “clean” implies the book and github file are a match.

[Image: 002_on]

If changes are made in the Moodle book this would mean that the book is “ahead” of the github file. The github link would change appropriately. It would also add an additional link “push”. Clicking on that link should probably display a page that provides some details of the changes to be pushed and allows the author to make the choice whether to push or not.

[Image: 003_push]

If the version of the file on the repository had been changed, then the link and status indication would change in a similar way.

[Image: 004_out_of_date]

Leaving the question of what happens when both local and remote changes have been made? Both? Some thought to be given here.

[Image: 005_both]

Assumptions

This is all based on

  1. The Book author has the details and credentials for a GitHub repository that contains a file of the correct format;
    This might be a challenge for some authors.
  2. There is no local git repository.
    Asking folk supporting a Moodle instance to install git on the server is a bit much. Instead, the content for the book will be stored in the Moodle database. No problems for Moodle, but raises questions about how to determine whether there have been changes. At least two current possibilities

    1. Store the date of last change on the repo in the Moodle database and compare dates for changes to the book.
    2. Generate/store a version of the HTML file locally and do a compare (sounding very heavyweight).
  3. That different books in a single course could be linked to entirely different github repositories.
  4. That the idea of adding additional links and status information about github into the administration block doesn’t break some Moodle style guide.
Outstanding questions

Lots.

More specifically

    • How to handle links between chapters?
      A book is made up of chapters (each a single HTML page). When displayed in Moodle the Book module provides simple next/previous page navigation. It’s also fairly common for authors to hard-code links between chapters (and even into chapters in other books). If the github version of a book stores all the chapters in a single file, what about these links?

      How do the existing export/import plugins handle this?

    • How to handle embedded resources (images, movies etc)?
      Books also contain embedded images, movies etc. The issue of how these are provided is a common one. I tend to use external services, but others place them into Moodle, how to handle these? How do the existing export/import plugins handle them?
    • Is all of the above technically possible with github, the github API, PHP, Moodle etc.?
    • Does all this need to be github specific? Is there a way (and a need) for this to be git specific, but not github specific?
    • What might be the process for creating a new file in a repository based on an existing Moodle book?

ICT knowledge and quizzes

Do you know more about computers than an 11-year-old? is the title of an article from a UK newspaper. It contains an 11 question quiz that is apparently based on the primary school IT and computer science curriculum.

I’ve come across it via some folk currently taking one of the courses I teach. In that course they are learning about how to use ICT to enhance/transform student learning. For many starting out in the course their perceived level of ICT knowledge and competence is not high. I imagine a challenge phrased as the above struck a chord. Results included: 8 out of 11 and 7 out of 11.

What did you get?

I’ve been programming since 1983, have university degrees in Computer Science and Information Systems, and have spent my professional life teaching and developing applications of ICT. I should do ok. What did I get?

10 out of 11.

What does this tell us?

Yes, I’ve probably picked up a bit more knowledge about ICT over the years playing with computers. But that’s no great surprise.

Is the quiz a good judge of ICT knowledge, or more importantly of the capability to do useful things with ICT? No!

The article makes claims like

The scary thing for older people is that things you were never taught about are now common knowledge for young children. Even if you’re in your 20s quite a lot of what you learnt in school ICT lessons is probably obsolete now.


One of the multiple choice questions was along the lines of

In what year did Tim Berners-Lee invent the World-Wide Web?

The answer is 1989. This is not some new fangled ICT knowledge that someone in their 20s wouldn’t have learned.

In addition, this is the question I got wrong. One of the distractors was 1988. I couldn’t remember exactly when the WWW was invented, so went for 88.

There’s nothing in this question that tests my capability to do something creative with ICT. Nor with many of the other questions (e.g. which of the following list of words is not a programming language). Most are not basic knowledge required to be creative with ICT and most could be answered after a Google search.

None get at the fundamental knowledge or capabilities that have helped me maintain some level of knowledge about ICTs as they evolve. None talk about the fact that while the syntax and specifics of ICT change rapidly, there are basic principles of how they are designed and how you can learn and work with new ICT.

The quiz is based on the assumption that it is what you know that is important. It’s not. Arguably, it’s about “how you know things are connected”. That particular quote is from this post by @gsiemens. The original is from this newspaper article, from a former editor-in-chief of a dictionary, and is about the English language

English is a network, being a literate person is not so much about what you know, but about how you know things are connected.

Which perhaps says something about Jocelyn’s concerns

It really gets me wondering how we can actually keep up with the speed in which technology and terminology around technology is constantly updating and changing. Makes me feel like I need to source an ITC dictionary or something to try and keep up with what it all means!

Possible sources of an institution's e-learning content problems

My current institution has a content problem when it comes to e-learning (insert digital learning, online learning, technology enhanced learning, or just learning if you prefer). The following is an attempt to use my experience teaching at the institution to understand what are some of the factors contributing to the problem.

In order to appear solutions-focused, I’ll start by re-framing the contributing factors I’ve identified below as suggested partial solutions to the content problem, including:

  1. Implement a search engine.
  2. Implement content authoring tools that fulfill authoring and learning requirements.
  3. Focus on authoring tools that help produce content that is “of” the web, not just “on” the web.
  4. Focus on authoring tools that support both design and bricolage.
  5. Identify, query, and replace conceptions and metaphors from prior modes (e.g. print-based and perhaps face-to-face) of learning.
  6. Develop and provide support for a number of higher-level models of “Course Activity”.
  7. Move away from an information transmission focus toward one based on learner activity.
  8. Develop a range of contextual services that can enhance content and student learning.

Disclaimer: I think improving any aspect of learning and teaching within a university is a wicked problem. i.e. there is no one silver bullet solution. Just better and worse solutions. From my perspective, the solutions that the institution appears to be exploring are not necessarily leaning towards the “better” end of the spectrum. The above may be a little better.

In addition, I don’t think this is a problem restricted to my current institution. I also don’t think that the solutions attempted so far are all that much different from what’s been attempted at other institutions. I’ve observed both the problems and the solutions elsewhere.

Does your institution have a “content problem”? Has it any solutions? Any of them worked?

PS. if we’re having problems with “content”, imagine the problems there must be with creating effective learning activities (IMHO, a much harder and more important problem).

Evidence of the problem

Evidence of the problems comes from two different sources.

Institutional

First is the institutional level. For some time there has been concern expressed at senior levels within the institution that students can’t find information on the course sites. This has led to a number of institutional projects and strategies.

The first was the development of a standard look and feel for course sites. Publicised as a makeover of the StudyDesk (the institutional brand for Moodle, which potentially causes its own problems) that promises the ability to find “all course information” and “assessment submission in one location”. There is apparently on-going work around this

https://twitter.com/usqedu/status/630513932176756736

Personal observation

From the evidence I see (personally and via my better half, who is currently a student at the institution) there remains some distance until this promise is fulfilled. The Moodle sites at the institution that I see are still largely problematic and still mirror what I found when I took over the course I currently teach. i.e. a hodge podge of PowerPoint and other files interspersed with various bits of HTML (Moodle labels) with headings or explanatory text. The HTML often illustrates complete ignorance of simple design (e.g. of the CRAP design principles) and is often an attempt to explain how everything fits together. This is required because, due to a couple of institution-specific approaches, not all the content can be effectively integrated into appropriate places within the Moodle site.

The ad hoc intermingling of all this content ends up in “the inevitable scroll of death” and the problem that students (and staff) can’t find information.

Even when Moodle courses are well designed, there are times when you can’t find information. I’ll claim that the course site for EDC3100, ICT and Pedagogy (one of the courses I teach) is amongst the most structured of course sites. As far away from the ad hoc upload approach to site design as you can get. In addition, largely I have been the sole designer and maintainer of the course site. A task that I’ve been doing over the last 3+ years and 6+ offerings of the course. I’m also very technically proficient.

And there are times when I still can’t find information quickly on the EDC3100 site!!!

Contributing factors

What is contributing to this problem? What follows are some of the factors that arise from my perspective.

No search engine

The number one way you find information on the web is via search and yet there is no search engine that works within Moodle at this institution.

Content tools solving institutional requirements, not authoring/learning requirements

The most recent major investment in content tools at the institution has been the implementation and mandated use of an institutional repository. This quite significant investment of funds was not driven by a desire to help improve the authoring or learning processes. It was driven by two separate institutional requirements, which were:

  1. being able to manage and report use of copyrighted materials; and,
  2. address the disk storage problems created by Moodle course sites containing duplicate copies of large content files.

From what I’ve observed, it would be very hard to claim that the implementation of the learning repository has helped address the ability of people to create and find information for Moodle.

“on” the web, not “of” the web

Alan Levine writes (about the open course ds106)

You will hear people talk about their organizations or projects being on the web, but there is more than a shade of difference of ds106 being of the web.

Much of the thinking behind the tools and approaches of the institution are focused on producing content that is placed “on” the web, but is not “of” the web. In fact, some of the tools provided previously had enough trouble being “of” Moodle, let alone “of” the web.

The prime example here is the ICE environment. An environment developed within the institution to enable it to leverage quite significant print-based distance education material (such as Study Guides) by converting them into a Web format. The existing material (typically created using Word) would be run through ICE to produce a collection of HTML files. That collection of HTML files could then be linked to from the course site – via a link labelled “Course Content”.

The very first web browser was also an editor. If you wanted to edit a page, you could do so within the same tool you were using to view it. The ICE approach doesn’t (I believe) work that way, to make a change you have to go back to the Word version, make the change, and then run it through ICE again. Not “of” the web.

A common way to organise a Moodle course site is by topic or week. Each section of the course site is meant to include everything you should do as part of that topic or week. But the ICE “Course Content” link contains all of the content in one place. It’s more difficult to distribute the content into the appropriate weeks or topics. Meaning that you can’t look in the one place for all the relevant information.

There’s some value in enabling the reuse of existing materials, but they have to be leveraged in a way that encourages them to become part of the new medium. Not always held back to the ways of the old.

A focus on design, rather than bricolage

The ICE model and the model used by print-based distance education were based on design. i.e. the process was to spend a lot of time on the design and production of a perfect, final artefact (print-based materials) that was distributed to students. This is because once the materials were sent out, they couldn’t be changed. This created problems, e.g. this from Jones (1996)

inability to respond to errors in study material or the requirements of individual students

Yesterday, one of my students reported some difficulties understanding the requirements for submitting the first assignment. I decided that an example was the best explanation and that I should incorporate that example into the Assignment 1 specification so that other students wouldn’t have the same problem. I can do this because the Assignment 1 specification is a web page on the Study Desk that I can edit.

So I found an example and went to the Assignment 1 page to make the change. Only to discover that I’d already previously modified the page to include (the same) examples. Hence the quick reply back to the student pointing out the examples.

An experience that suggests you can put in all the effort you want around making content findable and understandable, but it may not be enough.

Old metaphors lingering around

It’s not only materials that need to be brought into the new medium. There are other conceptions or metaphors that need to be updated. For example, the makeover of the StudyDesk just undertaken includes a specific page for “Study Schedule”. This was a standard component of print-based distance education packages. But it’s not clear that it belongs in the new Moodle age within which we live.

As mentioned above, a common method for organising Moodle course sites is by week or by topic. The image below is part of the course site for EDC3100. The site is organised by week. The top of the site has skip navigation links (see the next image below) that you can use to take you directly to the week you need to work on. All the activities and resources you need for that week are in that section. As you complete each activity you will get a nice behaviouralist tick indicating that you have completed the activity.

[Image: s2 2015]

With this structure in place, I question the value of a Study Schedule. Especially when I see the type of information that is contained in many of the Study Schedules on other courses.

My course does include a Study Schedule. It would be interesting to see how often it is used by students.

No higher-level models of “Course Activity”

The makeover of the Study Desk was “sold” to academics (in part) using a line like “we won’t touch ‘Course Activity'”. i.e. the normal Moodle list of activities and resources would remain the sole purview of the academic. The new look and feel was just adding some additional structure (see the left hand menu in the image below) to help students find information.

[Image: tooltip]

It was left to academics to organise the “scroll of death” that is a Moodle site. A task that is not straightforward. There have (as yet) been no attempts to develop and share higher-level models of how the “course activity” section could be structured. I’m assuming that at some stage soon there will be a project at the institution to develop the “one higher level model” for all courses at the institution, because consistency is good.

I’d argue that there’s value in developing multiple contextually appropriate “higher level” models. The approach I use is one “higher level” model. UNE uses a different model that provides enough eye candy to excite some, and there would be other possibilities.

Resource centric understanding of learning

Lastly, and perhaps most scarily, is the apparent on-going resource-centric understanding of learning suggested by the on-going interest in the “Resources” tab in the standard look and feel captured by the tweet above. It is even more troubling when you combine this significant investment of resources in the “Resources” tab with the apparent lack of focus on “Course Activity”.

At least for me (and a few others I know) this combination speaks of a conception of learning that is focused on the transmission of information, rather than learner activity.

No value added, contextual services

When the screenshot above was taken my mouse was hovering over the 3 in the “Jump to: Week” skip navigation. As a result a tool tip was being shown by the browser with the words Building your TPACK – 16-20 Mar. This is the title I’ve given to the week’s activities and also the dates of the semester that was week 3.

If you look at the earlier screen shot you will see the titles and dates for two more weekly sets of activities: Orientation and getting ready (Before 2 Mar) and ICT, PLNs and You – 2-6 Mar (Week 1). If you were able to mouse over the 0 and 1 in the skip navigation at the top of the page, the tooltip would display the same title and date information. If you were able to look at the provided Study Schedule, you would see the same title and date information in the Study Schedule.

The same course is being offered this semester. The dates listed above no longer apply in the new semester. Under the current institutional model I would be expected to manually search and replace all of the date information every time the course site is rolled over to a new semester. The same applies to assignment due dates and other contextual information. For example, if I decide that the title for week 3 should change, I’ll need to manually search and replace all occurrences of the old title.

Since doing this manually would be silly, most people don’t do it. Instead of providing context specific information (e.g. dates), generic information is given. It’s just week 1 or theme 1. The problem with this is that it makes it more difficult for the teacher and student. Rather than information (like due dates) being available in the space needed, they have to expend energy and time looking elsewhere for that information.

I’ve implemented a kludge macro system, but Moodle has functionality called filters that could be used to achieve the same end with some advantages.
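
For illustration, a minimal sketch of what such a filter might look like – filter_coursedates is a made-up plugin name and the placeholder syntax is arbitrary, but the class and method signature are the standard moodle_text_filter API:

[code lang="php"]
// Hypothetical filter/coursedates/filter.php: replace placeholders such as
// {{WEEK_3_DATES}} with the dates for the current offering, so a semester
// rollover only means updating the mapping, not every page.
class filter_coursedates extends moodle_text_filter {

    public function filter($text, array $options = array()) {
        // In a real filter the mapping would come from per-offering config.
        $dates = array(
            '{{WEEK_3_DATES}}' => '16-20 Mar',
        );
        return str_replace(array_keys($dates), array_values($dates), $text);
    }
}
[/code]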

However, this particular problem doesn’t appear to be on the radar. Arguably because all of the other “content” problems means that few people are producing content that could work with filters or require this approach.

Changing "as learner" focus – analytics to "chamber music"

A much delayed blog post that I’m getting out in a hurry now.

A few weeks ago I started yet another MOOC with the intent of it being the demonstration of “as learner” for the Network & Global Learning course. As with all other attempts to start a MOOC, it was a failure. Mostly due to my own time constraints and unexpected time sinks. But also because the content and the approach used in the MOOC didn’t fit, and I wasn’t motivated enough, nor had enough time, to bridge the gap.

Time to change focus and approach. Rather than a formal course, the next attempt “as learner” will be to engage with the network and the communities it contains around a particular topic. Walking my own path through the network(s) associated with the topic, rather than following the path laid out by someone else. An approach that will have its own challenges.

The topic (purpose is perhaps a better descriptor) this time will be “chamber music”. Actually, that’s just a highfalutin way of saying that Mr 10 and I are going to try to play some duets. He’s learning oboe and clarinet, while I’m trying to get back into the alto saxophone (not the most traditional of combinations, but you make do with what you have). Playing together seems a good way to motivate both him and me to play more, and also to provide an activity we can undertake together. Plus, if it all comes off, Mr 8 is going to pick up an instrument next year. The Jones trio may not be too far off.

How to go about it?

The purpose of the “as learner” task as part of netgl is to provide participants with a practical experience to which to apply the literature they are reading. In theory, the literature around netgl should help them reflect and perhaps plan how they go about their “as learner” task. I’ll try to demonstrate one particular approach to doing this.

The readings for next week have a focus on community. The CLEM framework (adopted from another context) talks about looking for

  • Community – folk getting together to share ideas and experience of a practice
  • Literature – ideas and experience around the practice formalised into published forms
  • Examples – examples of others engaging in the practice
  • Models – the terminology and schema associated with the practice

Models

Let’s start with models and in particular terminology. You can’t search effectively unless you know the commonly accepted terminology.

Chamber music, duets, trios etc are some of the terms I believe apply, but as I’m not really a member of the music community/set, I can’t be sure.

Community music is a new term found whilst searching. Defined as

Community music is music played in communities. It can be recreational, cultural or religious and can embrace any genre, from classical to popular to traditional music from diverse cultures. Community music is generally practiced on an amateur and non-profit basis, although there are professional musicians who work in communities.

Communities

A search for “music community” reveals an interesting collection of sites

  • Creative Commons – Music communities;

    A list of “exemplary music communities” put together by the Creative Commons folk. Includes a range of sites for finding CC licensed music and platforms for sharing music. Most of the sites appear aimed at more advanced musicians, but I assume many can be used to advantage by us novices. Most do seem aimed at sharing performance, rather than actual sheet music and aiding learning.

  • Music in community from the Music Australia site.
    Which links off to Music in Communities network. These appear to be more “portals” to existing music communities etc, rather than network-based communities to get playing. Including a directory of Australian music groups to join.

Literature

Examples

Have decided that both sheet music and performance can be classed as examples for my purposes.

Using 8notes

I ended up paying to join the 8notes community. This granted access to some sheet music that Mr 10 and I have started playing. It’s probably too complex. I need to explore a bit further and find something simpler for our earlier forays.

The rest of this post is a collection of summaries and thoughts from the netgl literature used in the course. It’s an attempt to use this literature to frame what I’m doing in this task. It’s something that I haven’t finished, but it points to further exploration

  1. What type of community is 8Notes? What other types of sources of learning/networks do I need to engage with?
  2. How do Mr 10 and I progress through our relationship with these networks? What impact does it have on our learning?

I have to admit that part of the reason for cutting this post short is that I found myself pondering the theoretical side too much (trying to understand what I was doing through the netgl literature). As a result I wasn’t spending enough time actually engaged in playing with Mr 10. A feeling made worse by some additional workload and other factors.

Types of community

The other reading for next week – Riel and Polin (2004) – identifies three types of learning community

  1. Task-based learning communities – come together for a certain time to produce a specific product.
  2. Practice-based learning communities – larger groups with shared goals that provide support. Apparently, where a CoP fits.
  3. Knowledge-based learning communities – much like a CoP but focused explicitly on the formal production of external knowledge about a practice.

I’m not convinced that Riel and Polin’s three learning communities capture the full breadth of possibilities. But then that may simply be my on-going distrust of the CoP approach. But it’s also indicative of the perceived misfit between this type of conceptualisation and what I experience when engaging in learning on the Internet.

Perhaps that’s because when engaged in learning via the Internet it’s about traversing a huge network that consists of many different communities. Perhaps so much so that the desire to identify, classify, and enumerate what communities are out there says more about our desire to put stuff in boxes and not wanting to admit it’s way more complex. Perhaps so complex that any attempt to put in boxes loses more than it gains?

This is where I think Dron and Anderson’s (2014) identification of groups, networks, sets, and collectives does a better job of capturing the full spectrum of what happens in terms of learning on the network.

The communities Riel and Polin (2004) identify perhaps largely fit within Dron and Anderson’s (2014) notion of groups. The distinguishing factor is that the membership of these communities/groups is listable. For example, the Research Supervisors CoP at USQ fits within Riel and Polin’s (2004) practice-based learning communities category and its membership is listable through the attendance records at meetings. Dron and Anderson (2014) actually identify CoPs as an intersection of Group and Net, and I think this perhaps highlights the source of my bias against CoPs. The theoretical form of CoPs as discussed by proponents is perhaps what fits at the intersection of Group and Net. However, the implementation of the CoPs that I’ve observed tends to lean much more toward the Group than the Net aspect. Perhaps this is because of how I’ve engaged, or perhaps due to the technologies they’ve used (almost entirely synchronous meetings).

**** I need to read and write more about collectives, maybe later ****

Identity transformation

Riel and Polin (2004) also talk about the focus of CoP and Activity Theory on learning being “a process of identity transformation – a socially constructed and socially managed experience” (p. 19). A transformation that evolves along with the individual’s journey through the community… this unfinished thought and idea is something to be picked up later.

References

Dron, J., & Anderson, T. (2014). Teaching crowds: Learning and Social Media. Edmonton: AU Press. Retrieved from http://www.aupress.ca/index.php/books/120235

Riel, M., & Polin, L. (2004). Online learning communities: Common ground and critical differences in designing technical environments. In S. A. Barab, R. Kling, & J. Gray (Eds.), Designing for Virtual Communities in the Service of Learning (pp. 16–50). Cambridge: Cambridge University Press.

Does learning about teaching in formal education match this?

Riel and Polin (2004) talk about a view of learning that sees learning occurring

through engagement in authentic experiences involving the active manipulation and experimentation with ideas and artifacts – rather than through an accumulation of static knowledge (p. 17)

They cite people such as Bruner and Dewey supporting that observation.

When I read that, I can’t help but reflect on what passes for “learning about teaching” within universities.

Authentic experience

Does such learning about teaching occur “through engagement in authentic experiences”?

No.

Based on my experiences at two institutions, it largely involves

  • Accessing face-to-face and online instructions on how-to use a specific technology.
  • Attending sessions talking about different teaching methods or practices.
  • Being told about the new institutionally mandated technology or practice.
  • For a very lucky few, engaging with an expert in instructional design or instructional technology about the design of the next offering of a course.

Little learning actually takes place in the midst of teaching – the ultimate authentic experience.

Active manipulation

Does such learning allow and enable the “active manipulation and experimentation with ideas and artifacts”?

No.

Based on my experience, the processes, policies, and tools used to teach within universities are increasingly set in stone. Clever folk have identified the correct solution and you shall use them as intended.

Active manipulation and experimentation is frowned upon as inefficient and likely to impact equity and equality.

Most of the technological environments (whether they be open source or proprietary) are fixed. Any notion of using some technology that is not officially approved, or modifying an existing technology is frowned upon.

Does this contribute to the limitations of university e-learning?

If learning occurs through authentic experience and active manipulation, and the university approach to learning about teaching (especially with e-learning) doesn’t effectively support either of these requirements, then is it any wonder that the quality of university e-learning is seen as having a few limitations?

References

Riel, M., & Polin, L. (2004). Online learning communities: Common ground and critical differences in designing technical environments. In S. A. Barab, R. Kling, & J. Gray (Eds.), Designing for Virtual Communities in the Service of Learning (pp. 16–50). Cambridge: Cambridge University Press.
