Assembling the heterogeneous elements for (digital) learning

Month: August 2010

Situated shared practice, curriculum design and academic development

Am currently reading Faegri et al (2010) as part of developing the justificatory knowledge for the final ISDT for e-learning that is meant to be the contribution of the thesis. The principle from the ISDT that this paper connects with is the idea of a “Multi-skilled, integrated development and support team” (the name is a work in progress). The following is simply a placeholder for a quote from the paper and a brief connection with the ISDT and what I think it means for curriculum design and academic development.

The quote

The paper itself is talking about an action research project where job rotation was introduced into a software development firm with the aim of increasing the quality of the knowledge held by software developers. The basic finding was that in this case there were some benefits; however, the problems outweighed them. I haven't read all the way through; I'm currently working through the literature review. The following quote is from the review.

Key enabling factors for knowledge creation is knowledge sharing and integration [36,54]. Research in organizational learning has emphasized the value of practice; people acquire and share knowledge in socially situated work. Learning in the organization occurs in the interplay between tacit and explicit knowledge while it crosses boundaries of groups, departments, and organizations as people participate in work [17,54]. The process should be situated in shared practice with a joint, collective purpose [12,14,15].

Another related quote

The following is from a bit more related reading, in particular Seely Brown & Duguid (1991) – emphasis added

The source of the oppositions perceived between working, learning, and innovating lies primarily in the gulf between precepts and practice. Formal descriptions of work (e.g., “office procedures”) and of learning (e.g., “subject matter”) are abstracted from actual practice. They inevitably and intentionally omit the details. In a society that attaches particular value to “abstract knowledge,” the details of practice have come to be seen as nonessential, unimportant, and easily developed once the relevant abstractions have been grasped. Thus education, training, and technology design generally focus on abstract representations to the detriment, if not exclusion of actual practice. We, by contrast, suggest that practice is central to understanding work. Abstractions detached from practice distort or obscure intricacies of that practice. Without a clear understanding of those intricacies and the role they play, the practice itself cannot be well understood, engendered (through training), or enhanced (through innovation).

Relevance?

I see this as highly relevant to the question of how to improve learning and teaching in universities, especially in terms of the practice of e-learning, curriculum design and academic development. It’s my suggestion that the common approaches to these tasks in most universities ignore the key enabling factors mentioned in the above quote.

For example, the e-learning designers/developers, curriculum designers and academic developers are generally not directly involved with the everyday practice of learning and teaching within the institution. As a result the teaching academics and these other support staff don’t get the benefit of shared practice.

A further impediment to shared practice is the divisions between e-learning support staff, curriculum designers and academic developers that are introduced by organisational hierarchies. At one stage, I worked at a university where the e-learning support people reported to the IT division, the academic staff developers reported to the HR division, the curriculum designers reported to the library, and teaching academics were organised into faculties. There wasn’t a common shared practice amongst these folk.

Instead, any sharing that did occur was either at high-level project or management boards and committees, or in design projects prior to implementation. The separation reduced the ability to combine, share and create new knowledge about what was possible.

The resulting problem

The following quote is from Seely Brown and Duguid (1991)

Because this corporation’s training programs follow a similar downskilling approach, the reps regard them as generally unhelpful. As a result, a wedge is driven between the corporation and its reps: the corporation assumes the reps are untrainable, uncooperative, and unskilled; whereas the reps view the overly simplistic training programs as a reflection of the corporation’s low estimation of their worth and skills. In fact, their valuation is a testament to the depth of the rep’s insight. They recognize the superficiality of the training because they are conscious of the full complexity of the technology and what it takes to keep it running. The corporation, on the other hand, blinkered by its implicit faith in formal training and canonical practice and its misinterpretation of the rep’s behavior, is unable to appreciate either aspect of their insight.

It resonates strongly with some recent experience of mine at an institution rolling out a new LMS. The training programs around the new LMS, the view of management, and the subsequent response from the academics showed some very strong resemblances to the situation described above.

An alternative

One alternative is what I'm proposing in the ISDT for e-learning. The following is an initial description of the roles/purpose of the “Multi-skilled, integrated development and support team” (the name is a work in progress). Without too much effort you could probably translate this into broader learning and teaching, not just e-learning. Heaven forbid, you could even use it for “blended learning”.

An emergent university e-learning information system should have a team of people that:

  • is responsible for performing the necessary training, development, helpdesk, and other support tasks required by system use within the institution;
  • contains an appropriate combination of technical, training, media design and production, institutional, and learning and teaching skills and knowledge;
  • through the performance of its allocated tasks the team is integrated into the everyday practice of learning and teaching within the institution and cultivates relationships with system users, especially teaching staff;
  • is integrated into the one organisational unit, and as much as possible, co-located;
  • can perform small scale changes to the system in response to problems, observations, and lessons learned during system support and training tasks rapidly without needing formal governance approval;
  • actively examines and reflects on system use and non-use – with a particular emphasis on identifying and examining what early innovators are doing – to identify areas for system improvement and extension;
  • is able to identify and to raise the need for large scale changes to the system with an appropriate governance process; and
  • is trusted by organisational leadership to translate organisational goals into changes within the system, its support and use.

References

Faegri, T. E., Dyba, T., & Dingsoyr, T. (2010). Introducing knowledge redundancy practice in software development: Experiences with job rotation in support work. Information and Software Technology, 52(10), 1118-1132.

Seely Brown, J., & Duguid, P. (1991). Organizational learning and communities-of-practice: Toward a unified view of working, learning, and innovation. Organization Science, 2(1), 40-57.

The rider, elephant, and shaping the path

Listened to this interview with Chip Heath, a Stanford professor in organizational behaviour, about his co-authored book Switch: How to change things when change is hard. My particular interest in this arises from figuring out how to improve learning and teaching in universities. From the interview, this seems to be another in a line of “popular science” books aimed at making clear what science/research knows about the topic.

The basic summary of the findings seems to be this: if you wish to make change more likely, then your approach has to (metaphorically):

  • direct the rider;
    The rider represents the rational/analytical decision making capability of an individual. This capability needs to be appropriately directed.
  • engage the elephant; and
    The elephant represents the individual’s emotional/instinctive decision making approach. From the interview, the elephant/rider metaphor has the express purpose of showing that the elephant is far stronger than the rider. In typical situations, the elephant is going to win, unless there’s some engagement.
  • shape the path.
    This represents the physical and related environment in which the change is going to take place. My recollection is that the shaping has to support the first two components, but also be designed to make it easier to traverse the path and get to the goal.

There are two parts of the discussion that stuck with me as I think they connect with the task of improving learning and teaching within universities.

  1. The over-rationalisation of experts.
  2. Small scale wins.

Over-rationalisation of experts

The connection between organisational change and losing weight seems increasingly common; it's one I've used and it's mentioned in the interview. One example used in the interview shows how a major problem with change is that it is driven by experts: people who have significantly larger “riders” (i.e. rational/analytical knowledge) of the problem area/target of change than the people they are trying to change. This overly large rider leads to change mechanisms that over-complicate things.

The example they use is the recently modified food pyramid from the United States, which makes suggestions something like, “For a balanced diet you should consume X tablespoons of Y a day”. While this makes sense to the experts, a normal person has no idea of how many tablespoons of Y are in their daily diet. In order to achieve the desired change, the individual needs to develop all sorts of additional knowledge and expertise, which is just not likely.

They compare this with a US-based populariser of weight loss who proposes much simpler suggestions, e.g. “Don't eat anything that comes through your car window”. That suggestion is more evocative, appears to be easier for the rider to understand, and helps engage the elephant somewhat.

I can see the equivalent of this within learning and teaching in higher education. Change processes are typically conceived and managed by experts, and experts over-rationalise.

Small scale wins

Related to the above is the idea that change always involves barriers or steps that have to be overcome. Change is difficult. The suggestion is that when shaping the path you want to design it in such a way that the elephant can almost just walk over the barrier. The interviewer gives the example of never being able to get her teenage sons to stop taking towels out of the bathroom and into their bedroom. Eventually what worked was “shaping the path” by storing the sons' underwear in the bathroom, not their bedroom.

When it comes to improving learning and teaching in universities, I don’t think enough attention is paid to “shaping the path” like this. I think this is in part due to the process being driven by the experts, so they simply don’t see the need. But it is also, increasingly, due to the fact that the people involved can’t shape the path. Some of the reasons the path can’t be shaped include:

  • Changing the “research is what gets me promoted” culture in higher education is very, very difficult and not likely to happen effectively if just one institution does it.
  • When it comes to the L&T path (e.g. the LMS product model or the physical infrastructure of a campus), it is not exactly set up to enable “shaping”.
  • The people involved at a university, especially in e-learning, don’t have the skills or the organisational structure to enable “shaping”.

Possible uses of academic analytics

The following is a slightly edited copy of a message I've just sent off to the Learning Analytics Google Group set up by George Siemens. I'm into reuse. Essentially it tries to highlight a small subset of the uses of learning analytics that I see as most interesting.

Some colleagues and I have been taking some baby steps in this area. In terms of trying to understand where we might end up, we've started talking about three aspects. All very tentative, but they can highlight a small subset of potential uses.

I have perhaps used a broad definition of analytics.

1. What?

This is the visualisation of what has happened in learning.

Lots of work to be done here in terms of finding out what patterns to look for and how to represent them.

For example, we took factors identified by Fresen (2007) as promoting quality as a guide to look for particular patterns.

Use #1 – Analytics can be used to test theories/propositions around learning and teaching, perhaps by supplementing existing methods.

Fresen, J. (2007). A taxonomy of factors to promote quality web-supported learning. International Journal on E-Learning, 6(3), 351-362.

2. Why?

Once we’ve seen certain patterns, the obvious question is why did that pattern arise?

e.g. A colleague found a pattern where, on average, the older a distance education student was, the more they used the LMS.
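As a very rough sketch of what surfacing such a pattern can involve with Moodle as the LMS (mdl_log is Moodle's standard activity log; the student_profile table and its student_age column are hypothetical, since stock Moodle doesn't record age):

[sourcecode lang="php"]
# Hypothetical sketch only: average LMS activity per age for one course.
# mdl_log is Moodle's standard log table; "student_profile" and its
# "student_age" column are invented here for illustration.
$sql = "SELECT p.student_age,
               COUNT(l.id) / COUNT(DISTINCT l.userid) AS avg_hits
          FROM mdl_log l
          JOIN student_profile p ON p.userid = l.userid
         WHERE l.course = 1234
      GROUP BY p.student_age
      ORDER BY p.student_age";
$rows = get_records_sql( $sql );  # Moodle 1.9-style DB call
[/sourcecode]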

The obvious question is why? Various theories are possible, but which apply? The second person to comment on the above post is a psychology research masters student who is just completing some research attempting to identify an answer to that why question.

Use #2 – Analytics can be used to identify areas for future research.

3. How?

How can you harness analytics to improve learning and teaching?

Rowan made the point about using analytics to encourage changes in policy. I’ve seen this happen. Some early analysis at one institution showed that very few course websites had a discussion forum and even fewer used it effectively. Policy changed.

Use #3 – Analytics can inform policy change.

As we work/worked in areas supporting university academics in their teaching, we were most interested in this question. In particular, we were interested in how analytics could be used to improve academics' teaching.

Use #4 – Analytics can be used to encourage academics to reflect on their teaching.

e.g. Another colleague used analytics to reflect on how his conceptions of teaching matched what analytics showed about what happened within his courses.

Use #5 – Analytics can be used to encourage students to think differently about their learning.

Presenting students with different visualisations of what they are doing (or not doing) around learning can also encourage them to change practice.

I've heard reports that use of the SNAPP tool has achieved this. I've heard similar reports about the use of the Progress Bar block for Moodle.

It's possible to see a common trend in the last few uses. To some extent analytics is being used to improve “distributed cognition” by putting more smarts into the environment, making what is happening more likely to be seen and acted upon by policy makers, students or teachers.

However, what these people do in response to the improved knowledge they have, is still fairly open. I have a particular interest in how to encourage and enable these folk to use their improved knowledge in useful and interesting ways.

Which in turn will generate changed behaviour and, hopefully, changed use of the system. This takes us back to the what question at the start.

Use #6 – Analytics can be an important component of on-going learning about what works and what doesn't in learning and teaching.

BIM, blog posts and special characters

The following is a summary/explanation of a common problem with BIM and its mirroring of blog posts, and a common solution. The problem is generally caused by folk creating their blog posts in Word and then copying and pasting them into the blog. For various reasons this process brings along some “special” characters which, while they work fine in Word, screw up royally within more constrained textual representations, like those of Web browsers and XML/RSS parsing libraries.

Reported problem

A student has made a post to their blog and the teacher can see it on the student's blog, but it's simply not present within BIM. BIM isn't picking it up.

Diagnosis of the problem

Steps to diagnosing the source of the problem were:

  • Login to the Moodle course site and confirm the problem.
    Yes, the student has posted to his blog, but BIM is not picking it up.
  • Register the student blog with a local copy of BIM.
    Ahh, the blog post shows up on my local copy, but only the first dozen or so characters.
  • Look at the feed for the student blog.
    Find the tell-tale signs of special characters exactly where my local copy of BIM cuts off the post.

Okay, so BIM currently attempts to handle special characters; obviously it is missing something.

Common solution

This appears likely to be an on-going problem, so I am going to leave in place a bit of commented-out code that I use to implement this “solution”. The “solution” is basically to get BIM to print out each individual character in a blog post along with its byte value (strictly speaking not an ASCII value, since ASCII only covers 0–127 and the offenders are larger). This value is then used to modify the bim_clean_content function to remove the offending special character.

The code that implements this character-by-character display looks like this:

[sourcecode lang="php"]
# KLUDGE: simple test to find out which special characters are
# causing problems. Dumps every character of the post content
# alongside its byte value.
$contenta = str_split( $content );
print "<h1> $title </h1>";
foreach ( $contenta as $char ) {
    echo "$char .. " . ord( $char ) . "<br />";
}
[/sourcecode]

For this particular problem the offending character is 189, which appears to be some sort of dash. So add the following to the function bim_clean_content.

[sourcecode lang="php"]
# NOTE: ereg_replace() is deprecated from PHP 5.3;
# str_replace( chr(189), "-", $post ) would do the same job.
$post = ereg_replace( chr(189), "-", $post );
[/sourcecode]

Re-register a student with the same blog and character 189 has been replaced. Remove the kludge and the post appears to be mirrored correctly.
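Worth noting as a possible longer-term fix (untested, and assuming the stray bytes really are Word's Windows-1252 “smart” characters): transcode the whole post in one pass instead of removing offending characters one byte value at a time.

[sourcecode lang="php"]
# Sketch only: transcode suspect Windows-1252 bytes to UTF-8 in one pass.
# //TRANSLIT asks iconv to approximate characters with no direct mapping.
$post = iconv( 'WINDOWS-1252', 'UTF-8//TRANSLIT', $post );
[/sourcecode]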

Nobody likes a do-gooder – another reason for e-learning not mainstreaming?

Came across the article, “Nobody likes a do-gooder: Study confirms selfless behaviour is alienating”, from the Daily Mail via Morgaine's Amplify. I'm wondering if there's a connection between this and the chasm in the adoption of instructional technology identified by Geoghegan (1994).

The chasm

Back in 1994, Geoghegan drew on Moore's Crossing the Chasm to explain why instructional technology wasn't being adopted by the majority of university academics. The suggestion is that there is a significant difference between the early adopters of instructional technology and the early majority; what works for one group doesn't work for the other. There is a chasm. Geoghegan (1994) also suggested that the “technologists alliance” – vendors of instructional technology and the university folk charged with supporting instructional technology – adopts approaches that work for the early adopters, not the early majority.

Nobody likes do-gooders

The Daily Mail article reports on some psychological research that draws some conclusions about how “do-gooders” are seen by the majority

Researchers say do-gooders come to be resented because they ‘raise the bar’ for what is expected of everyone.

This resonates with my experience as an early adopter and more broadly with observations of higher education. The early adopters, those really keen on learning and teaching are seen a bit differently by those that aren’t keen. I wonder if the “raise the bar” issue applies? Would imagine this could be quite common in a higher education environment where research retains its primacy, but universities are under increasing pressure to improve their learning and teaching. And more importantly show to everyone that they have improved.

The complete study is outlined in a journal article.

References

Geoghegan, W. (1994). Whatever happened to instructional technology? Paper presented at the 22nd Annual Conferences of the International Business Schools Computing Association, Baltimore, MD.

The end of management – lessons for universities?

Yet another “death of X” article is the spark for this post. This one comes from the Wall Street Journal and is titled The end of management. There’s been a wave of these articles recently, but this one I like because it caters to my prejudice that most of the problems in organisations, especially in universities around learning and teaching, arise from an inappropriate management paradigm. The following has some connections to the oil sheiks thread.

Some choice quotes

Corporations are bureaucracies and managers are bureaucrats. Their fundamental tendency is toward self-perpetuation. They are, almost by definition, resistant to change. They were designed and tasked, not with reinforcing market forces, but with supplanting and even resisting the market.

and

The weakness of managed corporations in dealing with accelerating change is only half the double-flanked attack on traditional notions of corporate management. The other half comes from the erosion of the fundamental justification for corporations in the first place.

And a quote from Gary Hamel which summarises much of the problem with real innovation, including innovation around management

That thing that limits us is that we are extraordinarily familiar with the old model, but the new model, we haven’t even seen yet.

Moving on to the question of resources

In corporations, decisions about allocating resources are made by people with a vested interest in the status quo. “The single biggest reason companies fail,” says Mr. Hamel, “is that they overinvest in what is, as opposed to what might be.”

The challenge that strikes at the heart of improving learning and teaching within universities is captured in this quote

there’s the even bigger challenge of creating structures that motivate and inspire workers. There’s plenty of evidence that most workers in today’s complex organizations are simply not engaged in their work.

Does your university have large numbers of academic staff that are actively engaged in teaching? How does it do it?

I’d like to work for a university that gets this, or at least is trying to.

University e-learning systems: the need for new product and process models and some examples

I’m in the midst of the horrible task of trying to abstract what I think I know about implementing e-learning information systems within universities into the formal “language” required of an information systems design theory and a PhD thesis. This post is a welcome break from that, but is still connected in that it builds on what is perhaps fundamentally different between what most universities are currently doing, and what I think is a more effective approach. In particular, it highlights some more recent developments which are arguably a step towards what I’m thinking.

As it turns out, this post is also an attempt to crystallise some early thinking about what goes into the ISDT. So some of the following is a bit rough. Actually, writing this has identified one perspective that I hadn't thought of, which is potentially important.

Edu 2.0

The post arises from having listened to this interview with Graham Glass the guy behind Edu 2.0, which is essentially a cloud-based LMS. It’s probably one of a growing number out there. What I found interesting was his description of the product and the process behind Edu 2.0.

In terms of product (i.e. the technology used to provide the e-learning services), the suggestion was that because Edu 2.0 is based in the cloud – in this case Amazon's S3 service – it could be updated much more quickly than more traditional institutionally hosted LMSs. There's some connection here with Google's approach to on-going modifications to live software.

Coupled with this product flexibility was a process (i.e. the process through which users were supported and the system evolved) that very much focused on the Edu 2.0 developers interacting with the users of the product: for example, releasing proposals and screenshots of new features within discussion forums populated with users and getting feedback, and also responding quickly to requests for fixes or extensions from users. Glass reports that users of Edu 2.0 feel like it is “their Edu 2.0” because it responds so quickly to them and their needs.

The traditional Uni/LMS approach is broken

In the thesis I argue that when you look at how universities are currently implementing e-learning information systems (i.e. selecting and implementing an LMS), the product (the enterprise LMS, the one ring to rule them all) and the process they use are not a very good match for the requirements of effectively supporting learning and teaching. In a nutshell, the product and the process are aimed at reducing diversity and the ability to learn, while diversity is a key characteristic of learning and teaching at a university. Not to mention that when it comes to e-learning within universities, it's still very early days and it is essential that any systemic approach to e-learning have the ability to learn from its implementation and make changes.

I attempted to expand on this argument in the presentation I gave at the EDUCAUSE’2009 conference in Denver last year.

What is needed

The alternative I'm trying to propose within the formal language of the ISDT is that e-learning within universities should seek to use a product (i.e. a specific collection of technologies) that is incredibly flexible. The product must, as much as possible, enable rapid, on-going, and sometimes quite significant changes.

To harness this flexibility, the support and development process for e-learning should, rather than be focused on top-down, quality assurance type processes, be focused on closely observing what is being done with the system and using those lessons to modify the product to better suit the diversity of local needs. In particular, the process needs to be adopter focused, which is described by Surry and Farquhar (1997) as seeing the individual choosing to adopt the innovation as the primary force for change.

To some extent, this ability to respond to the local social context can be hard with a software product that has to be used in multiple different contexts. e.g. an LMS used in different institutions.

Slow evolution but not there yet

Not all university e-learning implementation is the same. There has been a gentle evolution away from less flexible products towards more flexible ones, e.g.

  1. Commercial LMS, hosted on institutional servers.
    Incredibly inflexible. You have to wait for the commercial vendor to see the cost/benefit argument to implement a change in the code base, and then you have to wait until your local IT department can schedule the upgrade to the product.
  2. Open source LMS, hosted on institutional servers.
    Less inflexible. You still have to wait for a developer to see your change as an interesting itch to scratch. This can be quite quick, but it can also be slow. It can be especially quick if your institution has good developers, but good developers cost big money. Even if a developer scratches your itch, the change has to be accepted into the open source code base, which can take some time if it's a major change. Then, finally, after the code base is changed, you have to wait for your local IT shop to schedule the upgrade.
  3. Open source LMS, with hosting outsourced.
    This can be a bit quicker than the institutional hosted version. Mainly because the hosting company may well have some decent developers and significant knowledge of upgrading the LMS. However, it’s still going to cost a bit, and it’s not going to be real quick.

The cloud-based approach used by EDU 2.0 does offer a product that is potentially more flexible than existing LMS models. However, while it may address the general slowness of updating, a change that is very specific to an individual institution is still going to cause significant problems, regardless of the product model.

Some alternative product models

The EDU 2.0 model doesn't help the customisation problem. In fact, it probably makes it a bit worse as the same code base is being used by hundreds of institutions from across the globe. The model being adopted by Moodle (and probably others), having plugins you can add, is a step in the right direction in that institutions can choose to have different plugins installed. However, this model typically assumes that all the plugins have to use the same API, language, or framework. If they don't, they can't be installed on the local server and integrated into the LMS.

This requirement is necessary because there is an assumption for many (but not all) plugins that they provide the entire functionality and must run on the local server. So there is a need for a tighter coupling between the plugin and the LMS, and consequently less local flexibility.

A plugin like BIM is a little different. There is a wrapper that is tightly integrated into Moodle to provide some features. However, the majority of the functionality is provided by software – in this case blogging engines – chosen by the individual students. Here the flexibility is provided by the loose coupling between blog engine and Moodle.
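To illustrate that loose coupling (a sketch of the idea only, not BIM's actual code): the only contract between Moodle and the student's blog, whatever engine the student chose, is the blog's public feed.

[sourcecode lang="php"]
# Sketch: the "coupling" between Moodle and a student's blog is just the
# blog's public RSS feed. The URL is illustrative.
$feed_url = 'http://student.example.com/feed/';
$feed = simplexml_load_file( $feed_url );
if ( $feed !== false ) {
    foreach ( $feed->channel->item as $item ) {
        # a BIM-style mirror would store these in local Moodle tables
        echo $item->title . ' (' . $item->pubDate . ")\n";
    }
}
[/sourcecode]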

Mm, still need some more work on this.

References

Surry, D., & Farquhar, J. (1997). Diffusion Theory and Instructional Technology. e-Journal of Instructional Science and Technology, 2(1), 269-278.

Oil sheiks, Lucifer and university learning and teaching

The following arises from a combination of factors, including a recent post from Mark (quoted below) and a radio interview with Philip Zimbardo.

Old wine in new bottles

Perhaps the key quote from Mark’s post is

This post is simply to try and say what many people don’t want to say and that is, that most universities really don’t care about educational technology or elearning.

My related perspective is that the vast majority of university learning and teaching is, at best (trying to be very positive), just ok. There’s a small bit that is really, really bad; and a small bit that is really, really good. In addition, most interventions to improve learning and teaching are not doing anything to change this distribution. At best, they might change the media, but the overall distribution is the same.

There’s a quote from Dutton and Loader (2002) that goes something like

without new learning paradigms educators are likely to use technology to do things the way they have always done; but with new and more expensive technology.

I am currently of the opinion that without new management/leadership paradigms to inform how universities improve learning and teaching, the distribution is going to remain the same just with new and more expensive organisational structures. This article from the Goldwater Institute about administrative bloat at American universities might be an indicator of that.

Don’t blame the academics

The “When good people turn bad” radio program is an interview with Philip Zimbardo. He's the guy responsible for the Stanford prison experiment, an example of where good people turned really bad because of the situation in which they were placed. The interview includes the following from Prof Zimbardo

You no longer can focus only on individual freedom of will, individual rationality. People are always behaving in a context, in a situation, and those situations are always created and maintained by powerful systems, political systems, cultural, religious ones. And so we have to take a more complex view of human nature because human beings are complex.

This resonates somewhat with a point that Mark makes

the problem of adoption is primarily not a technical one but one of organisational culture

I agree. It's the culture, the systems, the processes and the policies within universities that are encouraging/enshrining this distribution where most university learning and teaching is, at best, just ok.

The culture/system doesn't encourage or enable this to change. When management do seek to do something about it, their existing “management paradigm” encourages an emphasis on requiring change without doing anything effective to change the culture/system.

The proposition and the interest

This is where my interest lies, and it leads me to propose the following:

If you really wish to improve the majority of learning and teaching within a university, then you have to focus on changing the culture/system so that academic staff are encouraged and enabled to engage in learning about how to teach.

In addition, I would suggest that requiring learning (e.g. by requiring all new academic staff to obtain a formal qualification in learning) without aligning the entire culture/system to enable academic staff to learn and experiment (some of these characteristics are summarised here) is doomed to failure.

I’d also suggest that there is no way you can “align” the culture/system of a university to enable and encourage academic staff learning about teaching. At best you can engage in a continual process of “aligning” the culture/system as that process of “aligning” is itself a learning process.

Easy to say

I can imagine some university leaders saying “No shit Sherlock, what do you think we're doing?”. My response is that you aren't really doing this. Your paradigm is fundamentally inappropriate, regardless of what you claim.

However, actually achieving this is not simple and I don't claim to have all the answers. This is why it is phrased as a proposition; it's an area requiring more work.

I am hoping that within a few days, I might have a small subset of an answer in the next, and hopefully final, iteration of the design theory for e-learning that is meant to be the contribution of my thesis.

References

Dutton, W. and B. Loader (2002). Introduction. Digital Academe: The New Media and Institutions of Higher Education and Learning. W. Dutton and B. Loader. London, Routledge: 1-32.

Extracting case study research and multiple contexts?

The following is an attempt to consider and respond to a suggestion on my thesis from my esteemed supervisor. It’s essentially some thinking and questions arising from further refinement of the research method being used in my thesis.

The suggestion was that perhaps my work is an example of van Aken’s (2004) extracting/ive case study research. Is this the case?

What’s the method?

I'm essentially using an action research method within a single case study to develop/abstract a design theory for e-learning. The action research cycle is somewhat disjointed in that the case – the development and evolution of Webfuse from 1997 through 2009 – involved an on-going process of action research as different aspects of the system were developed and changed (evidence of this is found in my publications, where this process played out).

In addition to this more traditional example of action research there was another round of action research which occurred after most of the initial Webfuse development was completed. This was the cycle that led to the development and re-development of the design theories that were abstracted from Webfuse. It is almost an example of a meta action research cycle. The existence of this on-going cycle is also evident in the publications from about 2002 onwards, as there are three different representations of the ISDT, with another being formulated now.

Extracting and developing case study research

This type of method does seem to be fairly evident in the broader business research methods with van Aken (2004) identifying two relevant types of case study:

  • extracting case study research; and
    Also labeled “best-practice” research. The aim here is to extract technological rules (theory) that are already being used in practice.
  • developing case study research.
    Researchers collaborate with people within specific contexts to solve a local problem. However, there is a reflective cycle after each case in which the researcher seeks to develop knowledge that can be transferred to similar contexts. Also labeled as clinical research.

It appears that van Aken (2004) makes another distinction between these two in that extracting case study research generally involves multiple case studies in which the same best practices have been implemented in multiple contexts.

However, van Aken (2004) does suggest that developing case study research isn't necessarily multiple case study research. Instead, development can go through an “alpha test” in which the “rule” is evaluated within a single context. Significant value arises, however, when there is “beta testing”: the “rule” is translated into other contexts and “third parties use it, assess its effectiveness and make final improvements”. van Aken (2004) makes the connection between beta-testing and replication research.

However, this bit is interesting

An essential element of beta-testing is that testing is conducted by a third party to counteract the ‘unrecognized defences’ of the originator of the rule, which may blind him or her to possible flaws in its use (Argyris, 1996).

This somewhat later quote is also of interest to my thesis

Management Theory research as described here may have most in common with the approach to Action Research discussed by Eden and Huxham (1996), who stress among other things the need for generality of research results (p. 530) and write about ‘action research aimed at the study of organization and organizations . . . where it is likely that the researcher accepts the dominant managerial ideology’ (p. 529).

So which “case study research” is my thesis?

From this description, I think that my thesis research leans more towards developing case study research. There have, however, been two different reflective cycles. The first was the immediate reflection on the intervention that contributed to changes in the intervention. Then there was some additional reflection afterwards to move to the design theory. It is this additional reflection that aims to generate the more general contribution to knowledge.

So when does a context change?

An interesting aside here is exactly how you define a different context. I would argue that CQU in 1996/1997 – when Webfuse started development – is almost an entirely different context by 2001/2002, and certainly by 2005. It's still the same institution; however, the internal structure, people and processes have changed quite significantly. Significantly enough to be a new context? Not sure. But answering that question might be interesting.

References

van Aken, J. (2004). “Management research based on the paradigm of the design sciences: The quest for field-tested and grounded technological rules.” Journal of Management Studies 41(2): 219-246.

PhD Update #28 – minimal work, feedback arrived

Am attempting to keep this weekly update thing going within the new context. Not much to report this week as I was out of town for three days, babysitting and recovering for another, and thinking about jobs and family-based travel on the others.

Am going to try a more evolutionary, to-do-list-oriented approach to this post as well, i.e. the finer-grained tasks I need to complete will be added and hopefully checked off here during the week.

What I did last week

Two fairly small bits of work done this week:

  • Completed a draft of the “lessons learned” section of chapter 5.
    This is still fairly early and will be revised, but wanted to get something down on paper, so to speak.
  • Received and started processing some feedback from my esteemed supervisor.

This left at least three of the tasks I set myself undone.

What I need to do next week

A growing list of tasks to do:

  • Read comments on chapter 6, modify what I’ve done, send it off to supervisor.
    • Main point is to have a good overview of findings, i.e. summarise the Ps Framework, New view on ISDT and the ISDT for e-learning.
    • Look at including the Gregor and Jones table for theory summary.
    • Implications for IS/Design researchers needs to mention contribution of Gregor and Jones
    • See what insights Eisenhardt (1989) can add.
    • Look into van Aken’s “extracting” and “developing” case study concepts and see how it can be worked into this chapter and also perhaps chapter 3
    • The conclusions section needs to be more of a “declaration of victory”.
  • Read comments on chapter 4, especially on the ISDT
    • Think about how chapter 4 ISDT may change in response;
      • Table 4.3 – the principles read more like requirements. This opens up the question of what form should a principle take? Perhaps Bunge/van Aken’s technological rules?
        van Aken (2004) “If you want to achieve Y in situation Z, then something like action X will help”
      • Is section 4.3 about high-level guiding requirements and Section 4.4 more about design principles? YES
      • Need to think about the names of the principles/guidelines, especially the important/innovative ones.
      • Add the Gregor and Jones “theory table” as a summary/overview
    • Start thinking about form of ISDT for chapter 5.
    • Make more general changes to chapter 4 and the rest of the thesis
      • Rework the Chapter 4 introduction, and perhaps other parts of the chapter, to better orient the reader to the purpose and the structure of the chapter.
      • Look at how much of the design work in the thesis was done prior to commencing the thesis and how this is outlined within the thesis, with some connection to the extractive case study concept. NO.
  • Complete evaluation section of chapter 5.
    • What can be evidenced about the trend in feature and broader user adoption of Webfuse systems post-2004/2005?
  • Revisit chapter 1 and get to final draft, start sending around.

Minimum course presence and the tension between centralisation and de-centralisation

Am finding this HBR article to have an interesting take on the centralisation versus de-centralisation argument. However, I'm still reading through it.

It particularly resonates with me at the moment because of the discussions I've had/seen this week around individual universities implementing minimum course presence policies. Essentially, ensuring that every course has a standard minimum course website seems to be the next big fad for universities.

This seems to me to be a prime example of a move to centralisation, and thus it suffers from all of the problems associated with centralisation. In particular, it removes the ability of the “person on the spot” (the teaching academic) to respond to the local context. I believe this to be an important factor in university learning and teaching because of the diversity of learning.

The tension here is that there are at least three separate requirements around a minimum course presence:

  1. An institutional quality assurance requirement;
    i.e. to ensure that all courses meet some minimum standard, so that all students can be assured of at least this minimal level of service. Perhaps the most important requirement is that senior management need to be certain that this is the case.
  2. The learning within the course; and
    A key characteristic of each course, its teachers, and students is diversity. They are all different. They need to learn in a different way. An online course presence has to be able to engage with this diversity.
  3. The learning of the teaching staff and the organisation.
    It’s my argument that in order for an institution to improve the quality of its learning and teaching, the delivery of teaching must include a focus on learning about that delivery. i.e. when I use a minimum course site in a course for the first time, I am going to gain insights into what works and what doesn’t. I am going to learn. The minimum course presence needs to be able to change based on that learning. Not to mention that because of the diversity of teaching staff, courses and students, that learning is going to be very different.

Am wondering if the minimum course presence movement has been ruined by an over-focus on the first requirement and too little focus on the other two. Is there any evidence of approaches to a minimum course presence that recognise the other two requirements?

Amplify'd from hbr.org:

Therefore, individuals who have on-the-spot knowledge must be allowed to figure out what to do.

Discerning the appropriate balance between top-down command and control, on the one hand, and individual initiative and judgment, on the other, will always be a challenge for our society and our organizations.

Off to see the dinosaurs

And now for something completely different. Leaving the dinosaurs of academic development within universities behind, it's time to move on to some real dinosaurs, or at least their fossilised remains. In a few weeks, the two boys and I are off out west. In particular, we're off to “ride” the Dinosaur Trail of Western Queensland. Both boys are going through the dinosaur phase, with the 5-year-old certain he wants to be a palaeontologist when he grows up.

This and any following posts are really intended to outline some planning for the trip and gather feedback from my family, but feel free to add insight if you have any.

The basic route

The plan is something like:

  • Rockhampton to Longreach – 696 Km;
  • Longreach to Winton – 180 Km;
  • Winton to Richmond – 233 Km;
  • Richmond to Hughenden – 120 Km;
  • Hughenden to Proserpine – 600 Km; and
  • Proserpine to home – 495 Km.

Potential Attractions

Now to try and gather what there is to do at the three main Dinosaur destinations: Winton, Richmond and Hughenden.

Winton

  • Australian Age of Dinosaurs
    10 minutes on the Longreach side of Winton. A working museum; looks good for at least a couple of hours.
  • The Lark Quarry Trackways
    100+ Km south-west. Umm, quite a long way. Maybe a tour; the ADT brochure suggests you can do both of these in a day.

Richmond

  • Kronosaurus Korner
    This is the one that has the boys most excited. It comes with a space where they can dig for their own fossils.

Hughenden

This might be just a brief drive by.

The curse of simple diagrams

There is a lot I like within this discussion of a “systems approach to e-learning”. However, there is also much that I dislike.

I think the source of my dislike is the typical engineer's (or business analyst's/project manager's) assumption that you can develop complete, in-depth knowledge of a complex organisation/activity/set of processes like e-learning within a university, i.e. that the Gantt chart or e-learning strategic plan captures everything you need to know about the activity.

A part of this is the “simple diagram” below that is meant to represent “How technology fits into an organisation”. It's just too tidy, not to mention linear. The influence is just one way, from culture “down” the line; it's much more complicated and multi-faceted than that. Different tools can enable radically different or previously unthought-of processes that can lead to changes in culture.

Amplify'd from newsweaver.co.uk (the diagram itself is not reproduced here).

Lessons learned from Webfuse: 2000 onwards

The following is an early draft of the “lessons learned” section of chapter 5 of the thesis, the third-last section that needs to be completed (to first-draft stage). It still needs some work, and completing the last two sections will probably lead to some changes, but it's a start.

The basic aim of this section is to draw out reasons why the intervention (in this case the Webfuse system I designed) succeeded or failed. As I write this, I'm pretty sure I haven't finished.

Lessons learned

Before attempting to describe the final ISDT arising from this work, this section reflects on the outcomes of the intervention in an attempt to understand how well the intervention achieved the changes sought and to explain the observed successes and failures.

Relative unimportance of the technical product

From the perspective of data structures, algorithms, and bleeding-edge technology, Webfuse was not at all innovative. Use of scripting languages, relational databases, and open source applications to construct websites was fairly common and widespread. Nor would much of its implementation be considered theoretically correct by researchers focusing on relational databases, software engineering or computer science. For example, the schema used by Webfuse databases could not be described as being appropriately normalised. In addition, a common complaint about Webfuse was that it used technology that would not scale and that was not “enterprise-ready” (even though it could and did scale and support the enterprise). The questions of technical novelty, technical purity, or fulfilling arbitrary scalability guidelines had little or no effect on the success of Webfuse. The success of Webfuse arose from becoming, and being able to stay, an integral and useful part of the everyday life of the students and staff of the institution.

Webfuse was not a product

This emphasis on the characteristics of the technical product was also evident in the continual queries from colleagues asking when Webfuse would be sold or made available to other institutions. The adoption of Webfuse as a product by other institutions was seen as a way of proving its success. This was based on the assumption that Webfuse, like all software, was a product that could be reused regardless of the organisational context. This was also the assumption that underpinned the development of Webfuse during the first phase of its development (1996 through 1999). One example of this is that Webfuse was made available as an open source project for people to download in 1997.

A key characteristic of the second phase of Webfuse development was the recognition that the product and its features were not as important as how well those features matched the needs of the local context and continued to evolve in response to those needs. The most important part of Webfuse was the process, not the product. It was through this process of contextual adaptation that Webfuse became part of the way things were done at CQU; it became part of the culture. This tight connection with the institution meant that while the principles behind Webfuse and some of the applications might be useful at other institutions, it was impossible to distribute Webfuse as a software product. An understanding of this distinction improved the implementation of Webfuse. However, an inability to explain the importance of this distinction to various stakeholders contributed to the eventual demise of Webfuse and especially its ateleological process.

The importance of the pedagogue

Coates et al (2005) suggest that a recurrent message from educational technology research is that “it is not the provision of features but their uptake and use that really determines their educational value”. This message matches well with the experience of Webfuse. During the initial phase of Webfuse development described in Chapter 4, the provision of features in terms of various page types was not sufficient to generate use by academic staff and, consequently, any impact in terms of educational value. If the teaching staff responsible for a course did not use the provided features, or did not integrate them effectively into a course, there was no educational value. The pedagogue was of central importance in terms of any educational value arising from e-learning. It was through understanding and using this principle that Webfuse was able to become an everyday part of the practice of teaching academics.

Change takes time, familiarity, need, support, and adaptation

Many, if not all, teaching staff did not make decisions to adopt new educational practices and technologies immediately upon their release. Such adoption decisions occurred over varying periods of time as a result of a combination of individual contextual factors. Effective use of novel practices and technologies often lagged adoption by a number of years. The introduction of novel practices into the organisation generated a need for changes in organisational practices and support before those practices became widely adopted and appropriately used. For example, use of the course barometer feature was highest and most appropriate in 2002 and 2008 (see Figure 5.12), when use was encouraged and supported by organisational resources. In addition, as novel features became more widely used, there was a need to adapt those features in response to lessons learned and changing requirements.

Helping people increases trust and knowledge

From 2000 onwards the Webfuse development staff also fulfilled the roles of system trainers and frontline helpdesk staff. Each of these roles is inherently challenging, and attempting to balance the competing demands of each role adds further to the complexity. However, there were also a number of significant benefits that arose from this multi-skilling. These benefits included:

  • Helpdesk staff with increased knowledge of the systems;
    The helpdesk staff handling user problems had deep understandings of how the systems worked, what they could do, and how they could be manipulated. This deep level of knowledge enabled quicker and more flexible responses to problems.
  • Increased ability for rapid changes; and
    In some cases, those flexible responses involved quick modification of the Webfuse code to correct a problem or add a new feature. Such minor problems did not have to rise through a helpdesk escalation process before being remedied.
  • Developers with increased knowledge of the needs and capabilities of the users.
    The offering of helpdesk support and training sessions provided a deeper understanding of the capabilities and needs of both staff and student users that could drive the on-going design and development of Webfuse.

Each of these benefits combined to increase trust in the system and its direction, as evidenced by the increased use shown above and the following quote from a member of academic staff

my positive experience with other Infocom systems gives me confidence that OASIS would be no different. The systems team have a very good track record that inspires confidence

You can’t keep all the people happy

The experience with Webfuse from 2000 through 2009 has highlighted just how difficult answering the question – “Was Webfuse a success?” – actually is, and how dependent the answer is upon the experiences and position of the person answering. In terms of success, it is possible to point to the statistics showing much higher levels of usage by staff and students. It is also possible to point to qualitative comments from staff around trust and confidence, and to formal management reports describing Webfuse as “[t]he best thing about teaching and learning in this faculty in 2003”. At the same time, it is possible to point to consistent arguments from central IT staff that Webfuse was a shadow system that duplicated existing systems and was consequently inefficient and wasteful (Jones, Behrens et al. 2004). There were also comments from one senior member of staff in 2004 suggesting that Webfuse had made no significant difference to learning and teaching at CQU.

Ateleological processes don’t fit in a teleological environment

Webfuse experienced its greatest levels of support and improvement during the period from 2000 through 2004. During these years the Faculty of Informatics and Communication (Infocom) – which supported Webfuse – was undergoing significant growth in student numbers, complexity, and available resources. At the same time, Infocom had a Dean who had publicly expressed support (Marshall 2001) for a more ateleological approach to organisational and systems development, and was comfortable with that approach within Infocom.

From 2004 onwards there were a number of changes within CQU, including: (1) changes in faculty and institutional leadership; (2) changes in student enrolment profile raising concerns about faculty and institutional funding; and, (3) an organisational restructure resulting in increased centralisation and/or out-sourcing of services. These changes led the institution toward a much more teleological approach to systems development and support. Under these conditions the ateleological Webfuse process was seen as wasteful of resources and, to some extent, nonsensical.

References

Coates, H., R. James, et al. (2005). "A Critical Examination of the Effects of Learning Management Systems on University Teaching and Learning." Tertiary Education and Management 11(1): 19-36.

Jones, D., S. Behrens, et al. (2004). The rise and fall of a shadow system: Lessons for enterprise system implementation. Managing New Wave Information Systems: Enterprise, Government and Society, Proceedings of the 15th Australasian Conference on Information Systems, Hobart, Tasmania.

Marshall, S. (2001). Faculty level strategies in response to globalisation. 12th Annual International Conference of the Australian Association for Institutional Research. Rockhampton, QLD, Australia.

Learning with an open course – a case study?

It seems open educational resources and open courses are one of the next big fads. I realise that they’ve been around for a long time. There is the formalisation of open courses arising out of the MIT OpenCourseWare and similar projects, leading to the OCW Consortium, and more recently there’s the idea of the MOOC arising from work by Siemens, Cormier, Downes and others. Even before all that, there were various ad hoc examples of open courses (of different types) made available through the work of various “lone rangers”. The fad cycle around OER/OCW has been in an upswing for a while: lots of formal institutional interest and, increasingly, grants being awarded. In fact, including OER/OCW in a grant title seems to be a good thing at the moment. A sure sign of a fad?

Personal aside

I was one of those lone rangers way back in the mid-1990s with the courses in systems administration and operating systems I designed and taught at CQU. The websites for these courses were open to everyone. As part of that openness, I made archives of the Systems Administration course available for download and was happy for folk to put up mirrors of the site. An interesting side effect of this practice is that while I can no longer find any record of those course sites on institutional servers, I can find the mirrors of the courses elsewhere. Sadly, though, most of the links in that mirror (a copy of the course site from 2000) seem to be broken and point to the wrong place. In trying to find a better mirror I came across this nice comment. You can still find the study guide/book from the course online.

Eat your own dog food

I’m always skeptical of “movements” once the fad cycle has kicked in and institutions start making them part of their “strategic” plans. So, I’ve been wondering how good some of the institutional open courses are and whether there might be some insight to be gained from using one of them as a basis for learning. In my case, it’s a bit of eating your own dog food: as someone who pushes for courses to be open, perhaps I should try them more as a student.

Now, I have started (but not completed) some of the Downes/Siemens MOOCs and will probably try to connect with their next one around PLEs/PLNs. In this case, though, I’m more interested in the open courses that are probably better termed open content, e.g. the MIT courses where the content is available but there is no instructor or cohort doing the course with you. What’s it like working through one of those courses on your own?

I’d been pondering this for a while; then, last weekend, I came across this course on “Empirical Research Methods” from CMU. It just so happens that empirical research methods are one of my weaknesses, so working through this course seems a good way to kill two birds with one stone.

Once the thesis is more complete, I plan to work through this course and use the blog to reflect on the experience and on what I learn.

How people learn and implications for academic development

While I’m traveling this week I am reading How people learn, a fairly well known book that arose out of a US National Research Council project to look at recent research insights into how people learn and to draw out the implications for teaching. I’ll be reading it through the lens of my thesis and some broader thinking about “academic development” (one of the terms applied to trying to help improve teaching and learning at universities).

Increasingly, I’ve been thinking that “academic development” is essentially “teaching the teachers”, though it would be better phrased as creating an environment in which academics can learn how to be better at enabling student learning. Hand in hand with this thought is the observation, and increasing worry, that much of what passes for academic development and for management action around improving learning and teaching is not conducive to creating such a learning environment. The aim of reading this book is to think about ways in which this situation might be improved.

The last part of this summary of the first chapter connects with the point I’m trying to make about academic development within universities.

(As it turns out, I only read the first chapter while traveling; the remaining chapters come now.)

Key findings for learning

The first chapter of the book provides three key (but not exhaustive) findings about learning:

  1. Learners arrive with their own preconceptions about how the world works.
    As part of this, if the early stages of learning do not engage with the learner’s existing understanding of the world, then the learner will either not get it, or will get it just enough to pass the test and then revert to their existing understanding.
  2. Competence in a field of inquiry arises from three building blocks:
    1. a deep foundation of factual knowledge;
    2. an understanding of these facts and ideas within a conceptual framework; and
    3. an organisation of knowledge in ways that enable retrieval and application.

    A primary idea here is that experts aren’t simply “smart” people; rather, they have conceptual frameworks that help them apply and understand knowledge much more quickly than others.

  3. An approach to teaching that enables students to implement meta-cognitive strategies can help them take control of their learning and monitor their progress.
    Meta-cognitive strategies aren’t context or subject independent.

Implications for teaching

The suggestion is that the above findings about learning have significant implications for teaching. These are:

  1. Teachers have to draw out and work with pre-existing student understandings.
    This implies lots more formative assessment that focuses on demonstrating understanding.
  2. In teaching a subject area, important concepts must be taught in depth.
    The superficial coverage of concepts (to fit it all in) needs to be avoided, with more of a focus on those concepts that are important to the subject.
  3. The teaching of meta-cognitive skills needs to be integrated into the curriculum of a variety of subjects.

Four attributes of learning environments

A later chapter expands on a framework for designing and evaluating learning environments; it includes four interrelated attributes of these environments:

  1. They must be learner centered;
    i.e. a focus on the understandings and progress of individual students.
  2. The environment should be knowledge centered, with attention given to what is taught, why it is taught, and what competence or mastery looks like.
    The book suggests that too many curricula fail to support learning because the knowledge is disconnected and the assessment encourages memorisation rather than learning. A knowledge-centered environment “provides the necessary depth of study, assessing student understanding rather than factual memory and incorporates the teaching of meta-cognitive strategies”.

    There’s an interesting point here about engagement, that I’ll save for another time.

  3. Formative assessments
    The aim is for assessments that help both students and teachers monitor progress.
  4. Develop norms within the course, and connections with the outside world, that support core learning values.
    i.e. pay attention to activities, assessments, etc. within the course that promote collaboration and camaraderie.

Application to professional learning

In the final section of the chapter, the authors state that these principles apply as well to adults as they do to children. They explain that

This point is particularly important because incorporating the principles in this volume into educational practice will require a good deal of adult learning.

i.e. if you want to improve learning and teaching within a university based on these principles, then the teaching staff will have to undergo a fair bit of learning. This is very troubling because the authors argue that “approaches to teaching adults consistently violate principles for optimizing learning”. In particular, they suggest that professional development programs for teachers frequently:

  • Are not learner centered.
    Rather than being asked what help they require, teachers are expected to attend pre-arranged workshops.
  • Are not knowledge centered.
    i.e. these workshops introduce the principles of a new technique with little time spent on the more complex integration of the new technique with the other “knowledge” (e.g. the TPACK framework) associated with the course.
  • Are not assessment centered.
    i.e. when learning these new techniques, the “learners” (teaching staff) aren’t given opportunities to try them out, get feedback, and develop the skills to judge whether or not they’ve implemented the new technique effectively.
  • Are not community centered.
    Professional development consists more of ad hoc, separate events, with little opportunity for a community of teachers to develop connections for ongoing support.

Here’s a challenge. Is there any university out there where academic development doesn’t suffer from these flaws? How has that been judged?
