Assembling the heterogeneous elements for (digital) learning

Month: December 2011

Ateleological travels in a teleological world: Past and future journeys around ICTs in education

In my previous academic life, I never really saw the point of book chapters as a publication form. For a variety of reasons, however, my next phase in academia appears likely to involve an increasing number of book chapters. The need for the first such chapter has arisen this week, and the first draft is due by February next year, a timeline that gives me just a little pause for thought. (There is a chance that this book might end up as a special issue of a journal.)

What’s your perception of book chapters as a form of academic publication? Am particularly interested in the view from the education field.

What follows is a first stab at an abstract for the book chapter. The title for the book/special issue is “Meanings for in and of education research”. The current working title for my contribution is the title of this post: “Ateleological travels in a teleological world: Past and future journeys around ICTs in education”.

Abstract

The Australian Federal Government is just one of a gaggle of global stakeholders suggesting that Information and Communication Technologies are contributing to the creation of a brave, new, digital world. Such a digital world is seen as being radically different to what has gone before and consequently demanding a radically different education system to prepare the next generation of learners, a task that is easier said than done. This chapter argues that the difficulties associated with this task arise because the meanings underpinning the design of education systems for the digital world are ill-suited to the nature of that world. The chapter draws upon 15+ years of research formulating an Information Systems Design Theory for emergent e-learning systems for universities to critically examine these commonly accepted meanings, suggest alternate and more appropriate meanings, and discuss the potential implications that these alternate meanings hold for the practice of education and education research.

The plan

The plan is that this chapter/paper will reflect on the primary focus of my research over recent years and encourage me to think of future research directions and approaches. Obviously it will draw on the PhD research and in particular the Ps Framework and the presentation I gave at EdMedia a couple of years ago. It will also draw on the presentation I gave analysing the Digital Education Revolution as part of my GDLT studies this year.

"Scaling" educational innovations

This is an attempt to briefly (and possibly badly) express a disquiet I have with the idea of scaling educational innovations. It’s sparked by a post by Rick Hess titled “Why education innovation tends to crash and burn”.

The post suggests that there are two sets of obstacles that prevent educational innovations from scaling:

  1. Reliance on tough-to-replicate elements.
  2. Structural conditions that impede the growth of replicable models.

Structural conditions

This second set of obstacles resonates with me. It suggests that innovations are impeded by obstacles such as:

  • True innovation challenges existing institutions and ways of doing things and hence likely fails within those existing institutions.
    This is Christensen’s disruptive innovation stuff.
  • The lack of price competition (e.g. many/most? parents don’t pay for education, at least not beyond taxes) leads to institutions not having to spend time building cost-effective models.
    There are some questions here, but it’s a factor.
  • The difficulty of effectively comparing outcomes means you can’t objectively decide between alternatives.
  • A discomfort with for-profit companies – which are, the argument goes, better at innovation – being involved with education reduces risk-taking etc.
    Not sure I agree entirely with some of the assumptions of this point, but there is an aspect of it which may reduce diversity. I’m not sure there is a lack of for-profits, and I think some of the prior obstacles – not just a discomfort – may contribute to any lack there is.

Don’t think this list is complete, but it’s a start. It’s the first set of obstacles that troubles me.

The need for pilots – wrong set of “tough-to-replicate” elements

The underpinning assumption in the post is that the problem is how to scale a successful pilot, i.e. some senior manager has identified a good innovation from somewhere, set up a pilot, it has worked, and now there is a need to scale it throughout the entire organisation. Based on this assumption, the “tough-to-replicate elements” include: funding for the pilot, expertise of pilot staff, enthusiasm from leadership, and accommodating policies.

The idea seems to be that the nature of a pilot makes it more likely to succeed, but that this nature is missing when you attempt to scale it more broadly.

I agree that these can be a problem, but I think the problems come from the assumptions that underpin this view of organisations – a view that results in the following solutions.

Hess’ solutions

The solutions Hess suggests essentially seek to minimise/negate some of the above obstacles and include:

  • Pay more attention to innovations that scale easily.
    This is because you don’t have to pony up the resources to make it succeed.
  • Don’t innovate within existing institutions.
    More of the Christensen flavour. It’s too difficult to innovate in existing organisations with established cultures, so do it elsewhere. This means leadership don’t have to be involved and, even if they are, you don’t have to worry about reinvention.
  • Focus on cost and outcomes in allocating public dollars.
    Change public policy to avoid formula funding and limited measures of quality. Of course there’s no mention of how to do this, but it is a short article.

Change the type of system

All of the above seems heavily based on the assumption that an education institution is an ordered system: that the best way to scale an innovation is to pilot it, test whether it succeeds and then scale it. It treats pilots as something separate, which is in part what the above argues against. But it goes further than this if you assume an educational institution is a complex system.

Some of the implications of this view include:

  • Limited knowledge of the existing system is a big problem.
    Pilots identified and supported by senior management are problematic because senior management – by the nature of their position – have little or no idea of the reality of organisational life. The “coal-face” is a mystery to them, so pilots often suffer from unexpected problems generated by clashes with the reality of organisational life, and this is why innovations are often worked around by coal-face workers. Pilots work around these clashes by having resources, expertise and leadership buy-in.
  • Organisational culture != the hierarchy.
    What senior management usually know about an organisation is based on the existing management hierarchy and the information flows it provides to management. The description of the value of a new innovative IT system given by the head of the IT division is going to markedly differ from the description offered by the people using it every day. Guess which one senior management are more likely to hear regularly?
  • The constraints of efficiency and purpose-driven design.
    I don’t think it’s the established culture within the organisation (i.e. the recalcitrant workers) that provides the largest barrier to innovation. Instead it’s the “rational” organisational demand for efficiency, which results in an increase in top-down, hierarchical policies and practices to ensure that resources aren’t being wasted. This is what constrains change and actively works against innovation arising from within.

    Within schools this can be seen in national curriculum, standardised computer equipment etc.

    This is part of the problem of organisations and their components being overly constrained by a particular purpose. “You are charged with teaching the Year 10 Core mathematics curriculum, any deviation from that curriculum is bad.” A great way to encourage innovation.

  • Simplified measurements.
    In order to measure effectiveness and/or learn more about the functioning of the institution, simplified, standardised measures are adopted, for example standardised testing, checklists, quality assurance processes, minimum defaults etc. All of these fail miserably at capturing the full diversity of the institution and instead end up driving the members of the institution toward the visible performance of some minimal default, i.e. they drive out diversity (or at least its visibility) and hence a source of innovation.
  • Decomposition.
    Schools, as with most modern organisations, are decomposed into smaller and smaller blocks. This decomposition tends to make the establishment of lines of communication between people in different blocks much harder. Driving out cross-silo communication limits the capacity to innovate.
  • Innovation requires radical change and is predictable.
    An assumption behind “scaling an innovation” is that you are changing (possibly radically) the institution and that you can predict the outcome. If you view an educational institution as a complex system, you know you can’t predict the outcome because there will be unpredictable, non-linear effects.

    This means not only that you can’t predict what will happen, but also that you don’t need radical change projects. Small changes within a complex system can have radical outcomes.

The failures of intuition in education

For some reason I’m in a fairly contrarian frame of mind tonight, so starting a series of blog posts listing and criticising widely held positions in education seems like the thing to do. I’m sure you have your favourite examples; feel free to add them to the comments. Also feel free to add pointers to resources both for and against the examples.

The spark for the idea comes from this article, titled “Why Is the Research on Learning Styles Still Being Dismissed by Some Learning Leaders and Practitioners?”, from the ACM eLearn Magazine. The article argues that (one of) the reason(s) why the idea of learning styles is still widely given credence within education is that it appeals to intuition and common sense. The idea resonates with people and hence it is hard to convince them of the alternate view.

I’m pretty sure learning styles aren’t the only example of this problem within the field of education. Given that I’m taking up a role as a teacher educator in the new year, it seems a good time to start adding to my list. The following starts with my initial list and I plan to expand on these over the coming weeks. (There’s a connection between this idea and the list of cognitive biases I included in the initial Ps Framework presentation – start on slide 154.)

The initial list

I haven’t bothered to define what is meant by “failure of intuition”; that would only prematurely close off discussion. I’m happy to live with messiness.

So, my initial list:

  1. Learning styles.
    Steve Wheeler calls learning styles “a convenient untruth” and points to a range of additional resources arguing against learning styles, including a 2010 article from Change that offers three reasons why learning styles continue to be accepted:
    1. Broader claims (e.g. all learners are different) with which learning styles connect are true.
    2. Learning styles suggest that everyone has strengths; it’s egalitarian, which must be good.
    3. Learning styles have become common knowledge.
  2. The learning pyramid.
    The most visited post on my blog is this one, which argues that the learning pyramid has no support whatsoever from research. The comments on this post are symptomatic of the intuition/common sense problem. The commenters – including some apparent “gurus” on the education publication/conference circuit – argue for the learning pyramid because it just makes sense to them. This is especially problematic because – as argued by this post (I’ve just read the comments on that post; I recommend them, but that could just be my own confirmation bias) – the learning pyramid resonates with constructivist theories of learning, which, as everyone knows, must be good.

    Will Thalheimer has a blog post critical of the learning pyramid and associated ideas.

  3. People are rational.
    This isn’t specific to education; most other professions assume that people are rational, especially when they are in a management role (or from the IT division). This list is based on the idea that people are not rational decision makers. We do not actively search through all the evidence, weigh that evidence and make objective decisions. We are pattern-matching intelligences: when evidence matches our established patterns, we select for it.
  4. Leadership.
    Earlier this week Dean Groom tweeted a link to “What we know about successful school leadership” from the American Educational Research Association (AERA). It starts with the following:

    Scratch the surface of an excellent school and you are likely to find an excellent principal. Peer into a failing school and you will find weak leadership.

    20 years’ experience in universities (yes, it is only anecdotal evidence) suggests that such a causal link between “good” leaders and organisational performance (good or otherwise) is questionable. But this belief in the importance of leadership to outcomes seems to be a key part of the teleological myth underlying much of modern organisational practice.

    Managing without leadership: Towards a theory of organizational function argues against the causal link and attempts to develop “a causal, bottom-up account of organizational practice, in place of top-down theories of leadership”.

  5. Digital native and immigrants.
    This one is fairly obvious.

So, what other examples of intuition failure exist in the discipline of education?

Some of the learning analytics literature

Am trying to slowly get back into the learning analytics literature as part of writing a paper. The following is an ad hoc collection of comments/reflections on a few learning analytics papers.

Definitions, processes and potentials

This paper was one of the contributions to the LAK’11 MOOC (which was yet another MOOC I engaged with briefly). Here’s the summary of this paper I did earlier this year.

Social learning analytics

Social Learning Analytics (Buckingham Shum and Ferguson, 2011) is something I’ve been meaning to read for a while.

Three challenges and opportunities for the design of social learning analytics

  1. The challenge of implementing analytics with pedagogical and ethical integrity, given questions of power and control over data.
    Draws on existing disciplinary and ethical critiques of “new forms of measurement and classification”, e.g. Bowker and Star (1999): if it ain’t measured, it doesn’t exist.

    Expands this to suggest that much of analytics focuses on data generated as a by-product of online learning, not as an intentional form of evidence of learning (see the sketch after this list). Gives 5 “variants on longstanding debates” that apply to analytics.

  2. The challenge given by an increasingly turbulent educational landscape.
    Identifies 5 phenomena that create a new context for learning and consequently suggest the need for a rethink of analytics. They are:
    1. Technological drivers
    2. The shift to “free” and “open”
    3. Cultural shifts in social values
    4. Innovation requires social learning
    5. Challenges to educational institutions

    Some of this I might argue against, but the section on “innovation requires social learning” is much more interesting.

    Uses The Power of Pull to argue the point. This includes the idea that much of the knowledge in the new context is tacit, which means it can’t be extracted and written down, i.e. analytics can’t measure it.

    Question/thought: this raises the idea of analytics designed to help the construction/sharing of tacit/shared knowledge.

  3. The challenge of understanding the different types of social learning analytic.
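
To make the “by-product” point from the first challenge concrete, here’s a minimal, entirely hypothetical sketch: counting LMS clickstream events per student. The event records and student names are invented for illustration; the point is that an activity count is a by-product of using the system, not intentional evidence of learning.

```python
# A hypothetical sketch of "by-product" analytics: tallying LMS
# clickstream events per student. The events and names are invented.
from collections import Counter

events = [
    {"student": "alice", "action": "view_page"},
    {"student": "alice", "action": "post_forum"},
    {"student": "bob", "action": "view_page"},
    {"student": "alice", "action": "view_page"},
]

# Count how often each student did *something* in the LMS. Note what
# this actually measures: clicks, not learning.
activity = Counter(event["student"] for event in events)
print(activity.most_common())  # [('alice', 3), ('bob', 1)]
```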

The core proposition is that with the unprecedented amounts of digital data now becoming available about learners’ activities and interests, from educational institutions and elsewhere online, there is significant potential to make better use of this data to improve learning outcomes.

I like this quote because it suggests to me assumptions that can be challenged, e.g. while there may be a lot of data generated by an LMS (quantity), the overall quality of that data, and the insight about learning that can be drawn from it, is questionable.

A major part of the paper is spent outlining these challenges and opportunities.

Then the initial taxonomy of five types of social learning analytic is introduced:

  1. Social learning network analysis
  2. Social learning discourse analysis
  3. Social learning content analysis
  4. Social learning dispositions analysis
  5. Social learning context analysis
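
As a concrete (and hypothetical) illustration of the first type, here is a minimal sketch of social learning network analysis using the networkx library. The forum reply data and student names are invented; the paper itself doesn’t prescribe any particular implementation.

```python
# A minimal, hypothetical sketch of social learning network analysis:
# who replies to whom in a course discussion forum. Data is invented.
import networkx as nx

# Each tuple is (author_of_reply, author_replied_to).
replies = [
    ("alice", "bob"), ("bob", "alice"), ("carol", "alice"),
    ("dave", "alice"), ("carol", "bob"), ("dave", "carol"),
]

graph = nx.DiGraph()
graph.add_edges_from(replies)

# Degree centrality as a crude indicator of how connected each learner
# is within the discussion network.
for student, centrality in sorted(
        nx.degree_centrality(graph).items(), key=lambda item: -item[1]):
    print(f"{student}: {centrality:.2f}")
```

Even this toy example hints at the earlier concern: the numbers describe connection, not necessarily learning.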

Finally, an interesting list of potential futures for learning analytics is provided.

Learning analytics

Another overview of the origins of learning analytics.

The evidence is that a growing number of universities are implementing data warehouse infrastructures in readiness for a future in which they see analytics as a key strategic asset (Stiles et al 2011)

Question: the quote is followed by a brief description of a project at the OU that illustrates this organisational trend. It would be interesting to do research that looked at these institutions and found out how they are going, how they are being implemented, their impacts and, more importantly, how they are being worked around by members of the institution.
