Between the idea and the reality … falls the shadow

T. S. Eliot’s poem The Hollow Men has a section that goes

Between the idea
And the reality
Between the motion
And the act
Falls the Shadow

I find some connection with this section due to my interest in shadow systems: applications of information technology that arise within organisations without the knowledge of, and often in spite of, the official, centralised information technology division.

The idea

The typical response of organisational officialdom is that shadow systems are horrible, beastly things that must be stamped out. These horribly inefficient, unscalable, incorrect toys cobbled together by amateurs create problems for the organisation with no corresponding benefit. All IT development must be done by the heroic, rigorous and knowledgeable folk within the central IT division. This is the idea that underpins much of the existing IT development practice within organisations.

The content of the current Wikipedia page on shadow systems gives some insight into this view. The page has three sections:

  1. Overview – general summary of shadow systems and what they are.
  2. Cause – some general points about what leads to the development of shadow systems.
  3. Problems – a longer list of the problems shadow systems exhibit, including: poorly designed, not scalable, poorly documented, untested, liable to allow unauthorised access to information, easy to introduce errors, one hard disk failure away from disaster, and several versions of the truth.

There is no mention of any potential benefits that shadow systems may provide. The one benefit that does get a mention is twisted. Often shadow systems enable the real work of the organisation to go ahead in spite of unresponsive IT systems. The page mentions this, but negatively, as a cause of shadow systems rather than as a benefit. The description of this “benefit” on the page places shadow systems in a negative light and uncritically accentuates the positives of centralised IT divisions. For example (emphasis added):

Quite properly, when a reporting system is put together by IT professionals, they need to consider all aspects of how the system will be used.

The various skills that are required to achieve all of this means that inevitably a number of different people will all be involved in the task of creating the new report. This increases the amount of time and effort it takes to put a rigorously engineered solution in place. Shadow Systems typically ignore this kind of rigor, making them much faster to implement, but less reliable and more difficult to maintain.

I find it particularly interesting that the current Wikipedia page on shadow systems seems to contain much of the same content that appears on a commercial website linked to from the Wikipedia page. I find it even more interesting that the purpose of the commercial website is to promote the idea of business intelligence applications and that, if you dig a bit further, it is associated with a company selling “ERP” solutions.

The reality

Behrens and Sedera (2004) developed a model (shown below) in an attempt to explain why shadow systems exist.

Sandy's Shadow System Model

The model suggests that shadow systems are created by a gap between the needs of the organisation, given its use of information technology, and the features provided by the official organisational IT systems. This mismatch can arise due to characteristics of the people, business processes, organisation and technology. How the gap is overcome is mediated by the available resources and support.
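
To make the model’s central idea concrete, here is a toy sketch in Python. Everything in it (the example needs, the set-difference framing) is my own illustration, not part of Behrens and Sedera’s model.

```python
# A purely illustrative encoding of the "gap" idea: shadow systems are
# candidates wherever organisational needs are not met by official systems.
needed = {"ad hoc reporting", "student tracking", "bulk email"}
provided = {"student tracking", "timetabling"}

gap = needed - provided  # unmet needs the official systems leave behind
print(gap)               # the spaces where shadow systems tend to grow
```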

The reality is that the organisational information technology resources are not providing the features required. They are failing.

The shadow

The shadow system becomes the method by which the people within the organisation overcome this gap: how they attempt to address the failure of the organisational information technology division to provide what is required by the organisation.

Overcoming the negative perception of shadow systems

Members of the central IT divisions really hate shadow systems because, at least subconsciously, they are aware that shadow systems point out where they have failed. No-one likes having their failures pointed out, especially folk within central IT divisions, who generally have a pretty tough time of it and who mostly hold an image of themselves as the supreme, rigorous rationalists. They believe they can, and do, spend sufficient time and intelligence analysing the situation and coming up with the best solution for the organisation.

What they don’t realise is that there is never going to be one best solution. The implementation and use of information technology within any sufficiently large and complex organisation is a wicked problem, and one of the defining characteristics of a wicked problem is that there is no one best solution.

There appears to be value in IT and broader organisational management overcoming this automatic, negative reaction to shadow systems. By valuing and seeking to understand shadow systems, and the gap between idea and reality that they represent, management can identify, and perhaps solve, organisational failures.

Perhaps rather than seeking to squash shadow systems, IT divisions should encourage their development and put in place practices and support which mitigate some of their flaws. Such a practice might leverage the increasing capability of the current generation of end-user tools that make “shadow system” development so easy, and might also help with the cost reductions most IT divisions seek.
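
As one concrete example of the kind of mitigating support I mean (the paths and file names below are invented): a trivial scheduled job that snapshots a shadow spreadsheet onto managed, versioned storage largely removes the “one hard disk failure away from disaster” problem without killing the system.

```python
import shutil
import time
from pathlib import Path

# Snapshot an end-user file onto managed storage with a timestamped name.
# Run nightly from a scheduler; paths here are placeholders only.
source = Path("H:/shadow/enrolments.xls")
backups = Path("//fileserver/backups/shadow")

backups.mkdir(parents=True, exist_ok=True)
stamp = time.strftime("%Y%m%d-%H%M%S")
shutil.copy2(source, backups / f"{source.stem}-{stamp}{source.suffix}")
```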

Shadow systems are almost certainly going to be the source of more innovation than central IT divisions, because there will be more diversity of perspective around the development of shadow systems than exists within the development arms of central IT divisions. Diversity, and the perspective shift it brings, is a necessary condition for innovation.

References

Behrens, S., & Sedera, W. (2004). Why do shadow systems exist after an ERP implementation? Lessons from a case study. Proceedings of PACIS 2004, Shanghai, China.

Readings on Design-based research

My thesis is an example of information systems design research. However, since the aim of the thesis is to develop a design theory for the implementation of e-learning within a university, I do have some interest in the design-based research (DBR) literature from the education discipline.

I’m currently following a discussion about DBR within education on the ITForum mailing list, on which a number of folk are sharing resources that I’m not going to be able to read now. The purpose of this post is to save those references so I can perhaps (yeah, right) go back and read them at some later stage.

Special issues of journals

Andrew Gibbons provided this list of journals

  • Educational Researcher, 32(1) (Jan-Feb, 2003)
  • Journal of the Learning Sciences, 13(1) (Jan 2004)
  • Educational Psychologist, 39(4) (Feb 2004)
  • Educational Technology, 45(1) (Jan-Feb 2005)

A book

Andrew Gibbons also pointed to a book – Handbook of design research methods in education

How do we transform institutions? Learning 2.0 and PLEs@CQUni

Graham Attwell raises some questions around the topics of Learning 2.0, PLEs, Web 2.0, and informal and formal learning in this blog post, apparently based on a workshop which appears to focus on the harnessing of these technologies/approaches in existing educational organisations.

Aside: I find it somewhat interesting, given the topic, that I’ve found it difficult (I admit, with only a quick google) to find anything about this workshop on the web. The closest I got was the following blurb on a calendar of events page.

This Validation and Policy Options Workshop is being organised by the Institute for Prospective Technological Studies (IPTS), which is part of the European Commission’s Joint Research Centre. The rapid growth of social computing or web 2.0 applications and supporting technologies (blogs, podcasts, wikis, social networking sites, sharing of bookmarks, VoIP and P2P services) has become an important driver of innovation in learning. IPTS is carrying out a study with the objective to assess the impact of web 2.0 trends on the field of learning and education in Europe. Christian Wilk, from the European Commission, unit ‘Cultural Heritage and Technology Enhanced Learning’, is among the invited participants.

Given my current (but potentially somewhat limited) involvement in the PLEs@CQUni project, which is attempting to answer these sorts of questions, I thought it would be worthwhile to engage with some of Graham’s questions.

Does this focus miss the main issues?

Graham suggests

I feel that in focusing on the use of technology for learning within the existing educational organisations they miss the main issues.

Based on the blurb about the workshop from above, I’m guessing Graham’s comment is specifically in the context of this workshop, and perhaps that this EU research group is focusing too much on learning in universities etc. and thus not asking the really interesting questions which he raises, such as:

  • How do people not enrolled on courses use technologies for learning?
  • How can we empower learners to structure their own learning?

I can certainly see how these questions, particularly the first one, can probably be more fruitfully answered outside of existing formal institutions of learning and teaching.

However, the second question and a number of the other questions Graham asks are different. I think they can be quite effectively answered, at least in large part, within existing institutions of learning and teaching. In fact, some of them must involve those institutions. For example:

  • How do we transform institutions?

    Institutional transformation (as opposed to substitution and/or replacement) would appear to require engaging the institution and its members in projects looking to radically transform how they conduct themselves.

  • How do we bring together informal learning and learning from formal sources?
  • How can we open up educational resources – materials but not just resources – to the wider community?

    These last two questions seem to require some level of engagement from the existing institutions.

I can certainly see the danger, and in many cases the strong likelihood, that any move by existing institutions to adopt learning 2.0/web 2.0 will end up missing the point. It will end up being the use of “blogs within an LMS” or other practices which fail to understand the fundamental principles of learning 2.0/web 2.0 and just how much of a paradigm shift they pose for the organisational practices and assumptions of existing institutions of learning and teaching.

Assuming that you have to engage these organisations in some way, and that transforming them has some value, what will work?

A teleological process won’t work

A traditional “project-management” project approach to using learning 2.0/web 2.0 within an existing organisation will not work. It will almost certainly result in the “blogs and wikis within an LMS” approach that demonstrates a complete misunderstanding of the implications of learning 2.0/web 2.0.

This type of approach goes by a number of names

  • teleological design (Introna, 1996)
  • push systems (Seely Brown and Hagel, 2005)
  • idealistic (Kurtz and Snowden, 2007)

The features of this approach can be summarised as

  • Some senior folk decide on the purpose.
  • They use “experts” to analyse the situation and design a solution.
  • Once decided the organisation and its members must all align and adopt the identified solution.
  • Those that don’t, get “change managed” or “culturally re-aligned” so they do.

My colleagues and I have expanded on the problems with this type of approach a couple of times before. First in 2005 and again in 2007.

The main problem in this context is that this type of approach is almost 100% certain to ensure that the fundamental assumptions underpinning existing organisational practice will remain: you’ll get the “blogs and wikis in an LMS” approach to learning 2.0/web 2.0.

IT project management will fail

Often, the experts called in to do the analysis and design in this type of project will be IT experts. This is based on the misguided assumption that the harnessing of learning 2.0/web 2.0 within an existing institution of learning and teaching is an IT implementation project. After all, isn’t Web 2.0 information technology?

This is a sure sign of folk who just don’t get it.

It’s a sure sign of a project that is destined to have “blogs and wikis in an LMS”.

The level of transformation, the questioning of fundamental practices and assumptions (both organisational and about learning and teaching) necessary to effectively make use of the key aspects of learning 2.0/web 2.0, means that the technology questions associated with such projects are just about the simplest thing you have to handle. The broader pedagogical and organisational questions are going to be significantly more difficult and require much greater engagement and consideration.

The technical selection, installation and/or support of a blog or wiki system or incorporating RSS feeds into existing technologies is dead simple. Any IT folk worth their salt could do it. There are hosting companies that, for a very cheap price, allow you to select from a menu of such systems and can have you up and going very quickly.
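
To back up the “dead simple” claim, here is a minimal sketch of pulling items from an RSS feed, assuming the third-party Python feedparser library (the feed URL is a made-up placeholder):

```python
# pip install feedparser
import feedparser

# Fetch and parse a feed; feedparser copes with RSS and Atom alike.
feed = feedparser.parse("https://example.edu/course-blog/feed/")

for entry in feed.entries[:5]:
    # Each entry carries the basics needed to surface it in another system.
    print(entry.title, "->", entry.link)
```

That is roughly the whole technical trick; everything hard about such a project lies elsewhere.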

Then there’s the whole question of just how well you understand learning 2.0/web 2.0 if the first thing you recommend to your organisation is the selection and installation of local technology. SaaS, the “computing cloud”, anyone?

Helping an academic with 20 years’ experience of learning 1.0 understand and make moves towards learning 2.0, especially if you’re in an organisation that is only just barely (if you’re generous) web 1.0 literate, is incredibly more difficult than installing and maintaining a software package (locally hosted or not). The critical success factor in transforming an organisation from learning 1.0 to learning 2.0 is not how well you manage your blog engine; it’s how effective you are in engaging and changing the perspectives of the teaching staff and the students.

If the IT function of an organisation is the major driving force behind an attempt to harness learning 2.0/web 2.0, then it will fail. The tail is wagging the dog. The organisation is focusing on the simple question and ignoring the hard ones.

A purely research based approach won’t work

There is a lot of literature published around PLEs and associated topics in which small research groups develop principles, embody them in prototypes and trial them with small groups of folk. Whether such groups are primarily computer science folk or learning science folk, they tend to do the same thing. This practice is all well and good and will generate some very useful insights.

However, it tells us nothing about what might happen, or how to proceed, when you attempt to harness learning 2.0/web 2.0 within a real organisation. When you’re dealing with real people (students and academics) who have a lot on their plate and don’t really see the point of learning 2.0 (especially when they’re being told to focus on research), a lot of unexpected and very difficult problems arise.

The assumptions made by the research groups don’t always apply. Working out how to enable this transformation within a specific context is a lot harder than figuring out the 5 principles of a PLE prototype.

An engaged, design research approach with a focus on staff learning, might work

Transforming the learning and teaching practice of a university will fail unless the teaching staff and the students engage in the process. If they don’t change their conceptualisations of how learning and teaching should work, then the transformation will not occur.

On the surface it may appear to be working: all the staff may be using blogs, the students are posting. But chances are, unless they really do engage, they are simply “gaming” the system: being seen to do the right thing because it is expected, not because they have actually taken on board the “new way” of doing things.

For this reason, I suggest that the transformation of an institution through application of learning 2.0/web 2.0 would probably require the organisation to take the principles of learning 2.0/web 2.0, use a collaborative, emergent process to engage with the local organisational context, and actively help the students and staff improve their experience of that context through the appropriate application of those principles.

Such a process would focus on developing knowledge and positive experience amongst the students and staff, and would do so through a largish number of small-scale, separate trials attempting very different things. Those that work continue, those that don’t get killed off, and lessons are learned.
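
As a toy sketch of that trial-and-selection process (the success scores and the cut-off are invented purely for illustration):

```python
import random

# Run many cheap, small-scale trials; continue the promising ones,
# stop the rest and record what was learned.
trials = {f"trial-{i}": random.random() for i in range(20)}  # observed benefit, 0..1
CUTOFF = 0.7  # in practice an arbitrary, context-dependent judgement

keep = sorted(name for name, score in trials.items() if score >= CUTOFF)
stop = sorted(name for name, score in trials.items() if score < CUTOFF)

print("continue:", keep)
print("stop, and harvest the lessons:", stop)
```

In reality “observed benefit” would be a messy, contested human judgement, which is rather the point.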

The formation of these ideas is outlined and expanded upon in two recent publications.

Won’t that just result in “blogs and wikis in an LMS”?

Underpinning the above suggestion is the idea that you can only make small changes to the existing practice of experienced teachers. This is, to some extent, based on the findings that people will either discount or simply not understand/see any perspective or practice that is significantly different from their existing conceptualisations. Anyone who forces radical change will encourage, at best, compliance, not engagement, and at worst, complete disengagement.

The idea of small changes raises the question, “Well, won’t this just end up with ‘blogs and wikis in an LMS’?” That is, you will get the old horseless-carriage approach to educational technology.

Perhaps. But there are two responses which I believe suggest otherwise. These are:

  1. “The journey of a thousand miles begins with a single step.”
    This type of project doesn’t end after the first step. The project and how it works have to continue to be embedded into the organisation’s fundamental operation. It has to continue to encourage and enable the staff and students to keep taking those single steps. A significant limitation of teleological design projects is that once they finish the initial implementation, they aim to maintain the status quo for long periods of time. They stop the journey.
  2. Small changes in a complex system can have large and unexpected outcomes.
    I believe that most largish institutions of learning and teaching, especially universities, are examples of a complex adaptive system. Such systems are non-linear (a toy illustration follows this list).
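
The classic toy demonstration of non-linearity is the logistic map in its chaotic regime: two starting points that differ by one part in a million end up in completely different places after a few dozen iterations. A minimal sketch:

```python
# Logistic map with r = 4 (chaotic regime): x' = r * x * (1 - x)
def logistic(x, r=4.0, steps=40):
    for _ in range(steps):
        x = r * x * (1 - x)
    return x

# A difference of 0.000001 in the starting point...
print(logistic(0.300000))
print(logistic(0.300001))
# ...produces two final values that bear no resemblance to each other.
```

The analogy is loose, of course, but it suggests why small, well-chosen interventions in a university can matter far more than their size implies.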

The importance of diversity to improving learning and teaching

For some time I have thought that one of the major barriers to improvement and innovation in learning and teaching has been the consistency of practice and mindset within discipline-based groups. Now I’ve got some suggestion of a research basis for this view. This post attempts to explain my view, outline the research basis and draw implications for the practice of learning and teaching at universities.

The problems with discipline-based groups

Almost without exception, academic staff at universities are organised into discipline-based groups. All the computer scientists are in one unit, the management folk in another, and yet another houses the historians. These discipline groups generally share a fairly large common perspective on research and on learning and teaching. They tend to teach based on methods they’ve experienced, and all the members within a discipline group tend to have experienced the same methods.

Anything outside of that experience is seen as strange and in the absence of outside knowledge they aren’t even aware that there are alternatives.

For example, way back when, I was a member of an information technology group that was thrown together into an organisational unit with journalists, cultural studies and other decidedly non-technical disciplines. At some later stage I was responsible for supporting the learning and teaching of these different groups. Those staff from a more “human communications” based discipline, almost without exception, placed a great deal of emphasis on face-to-face tutorials with a heavy emphasis on student/student and student/teacher discussion, which made it very difficult to come up with approaches for distance education students. The IT folk, without that history, didn’t have the same problem. Neither group, without interaction with the other, would normally have thought of the other’s approach to teaching.

Discipline-based groups tend to exclude awareness of alternatives; they tend to emphasise the importance of shared experience, and so make it difficult to be aware of alternatives.

This is particularly problematic because, almost without exception, the major projects that attempt to improve and/or innovate around learning and teaching are discipline-based. This fundamental assumption has, in my mind, always limited the chances of true improvement or innovation. It limits the chance of such projects escaping the past.

Related to this practice is the suggestion that instructional/learning/curriculum designers should be physically located within faculties or departments, i.e. that they should work predominantly with folk from within a particular discipline (or set of disciplines). Over time, because of the nature of the work (e.g. these instructional designers will start to publish more within the literature of a particular discipline), I believe this practice is likely to further constrain innovation.

What others have said

In a recent blog post Dave Snowden has suggested that perspective shift is one of the necessary (but not sufficient) conditions for innovation. Discipline-based attempts improving learning and teaching make this very difficult as they generally only involve people who have very similar perspectives.

Dave’s blog post mentioned above includes a link to an MP3 file of a talk he gave in Melbourne. I fully recommend people listen to it, even though it is disappointing to miss out on the end of the talk due to flat batteries. This blog post, which gives one summary of Dave’s talk, offers some related insights.

This morning’s post from the Tomorrow’s Professor Mailing List was titled “Do Faculty Interactions Among Diverse Students Enhance Intellectual Development?”. It looks at the practice in the USA of having racially mixed classes and its effect on intellectual development. While it may be a leap (a leap too far?) to apply some of the findings to improving teaching, I certainly see some connection and value in doing so.

The post was an excerpt from

Chapter 4, “Accounting for Diversity Within the Teaching and Learning Paradigm”, in the book Driving Change through Diversity and Globalization: Transformative Leadership in the Academy, by James A. Anderson, professor of psychology and Vice President for Student Success and Vice Provost for Institutional Assessment and Diversity at the University of Albany.

One of the foundation pieces of evidence the post is based upon is from the following paper

Anthony Lising Antonio, Mitchell J. Chang, Kenji Hakuta, David A. Kenny, Shana Levin, and Jeffrey F. Milem, Effects of Racial Diversity on Complex Thinking in College Students, Psychological Science, 15(8): 507-510.

The paper aims to examine the effects of diversity on integrative complexity (IC): the degree to which a person’s cognitive style involves differentiation and integration of multiple perspectives. The idea is that the level of IC has the following effects:

  • Low integrative complexity – take a less complicated approach to reasoning, decision making and evaluating information.
  • High integrative complexity – evaluation is reflective, involves various perspectives, solutions and discussions.

The paper’s findings included

  • Racial diversity in a group of white students led to a greater level of cognitive complexity.
  • Racial diversity of a student’s friends had a greater impact on integrative complexity than the diversity of the group.

Some of the other points in the paper

  • Groupthink.
    The cohesiveness and solidarity that arises from a common group is a foundation for unanimity of opinion which results in poor decision making.
  • Minority influence.
    The presence of group members who hold divergent opinions leads to increased divergent thinking and perspective taking. Interaction with the minority enhances the integrative complexity of the members of the group who hold the majority opinion.

Implications for learning and teaching

In summary, at a high level

  • Homogeneous groups considered harmful.
    Any approach to improving learning and teaching which uses homogeneous groups will limit, possibly even prevent, innovation and improvement, as the group will get bogged down in groupthink. Given that the majority of such projects within universities involve homogeneous groups, this questions some of the fundamental operations of universities.
  • Actively design projects to encourage positive interactions amongst people with diverse backgrounds.
    The positive flip side is that projects should actively seek diversity in their membership and engage in processes that enable positive interactions between these diverse members, i.e. not an attempt to encourage groupthink or cohesion amongst the diverse members, but instead to leverage the diversity for something truly innovative.

    Hopefully those that are familiar with the unit I currently work with can see why I value and encourage the diversity of the unit and think any attempt to encourage uniformity of background and thinking is a hugely negative thing.

From these two observations a number of potential implications can be drawn

  • Discipline-based innovation in L&T will be less than successful.
  • Top-down innovations in L&T will be less than successful (as they embed an assumption that a very small, generally similar, group can make decisions and get everyone to buy into the group think.)
  • Any committee or group that contains members that have the same discipline or organisational experience (e.g. everyone has been at University X for 10+ years) will generate sub-optimal outcomes.
  • The best and most innovative teachers will have the most diverse set of teaching influences and experiences. (The diffusion of innovations literature backs this up).
  • Organisational units (e.g. teaching and learning or academic staff development units) which all have pretty much the same background (e.g. all graduated with Masters in Instructional Design) or same experience (all publish in the same set of conferences or journals) will be less innovative than they could be.
  • An L&T support unit that doesn’t regularly, actively and deeply engage with the L&T context of its organisation is destined to do things that are less innovative and appropriate.

Of course I believe this; over the last 5 years I’ve occasionally attempted to get the REACT process off the ground as an approach to improving learning and teaching. A key aim of that project was

opening up the design of teaching to enable collaboration with and input from a diverse set of peers;

The gulf between users and IT departments

Apparently Accenture have discovered “user-determined computing” and associated issues.

The definition goes something like this

Today, home technology has outpaced enterprise technology, leaving employees frustrated by the inadequacy of the technology they use at work. As a result, employees are demanding more because of their ever-increasing familiarity and comfort level with technology. It’s an emerging phenomenon Accenture has called “user-determined computing.”

This is something I’ve been observing for a number of years and am currently struggling with, in a couple of different ways, in terms of my new job. In particular, I’m trying to figure out a way to move forward. Below I try to think/comment about the following:

  • Even though “Web 2.0 stuff” seems to be bringing this problem to the fore, it’s not new.
  • The gulf that exists between the different ends of this argument and the tension between them.
  • Question whether or not this is really a technology problem.
  • Ponder whether this is a problem that’s limited only to IT departments.

It’s not new

This problem, or aspects of it, have been discussed in a number of places. For example, CIO magazine has a collection of articles it aligns with this issue (Though having re-read them, I’m not sure how well some of them connect).

The third one seems the most complete in its coverage of this topic. I highly recommend a read.

The gulf

Other earlier work has suggested that the fundamental problem is that there is a gap or gulf, in some cases a yawning chasm, between the users’ needs and what’s provided by the IT department.

One of the CIO articles above puts it this way

And that disconnect is fundamental. Users want IT to be responsive to their individual needs and to make them more productive. CIOs want IT to be reliable, secure, scalable and compliant with an ever increasing number of government regulations. Consequently, when corporate IT designs and provides an IT system, manageability usually comes first, the user’s experience second. But the shadow IT department doesn’t give a hoot about manageability and provides its users with ways to end-run corporate IT when the interests of the two groups do not coincide.

One of the key points here is that the disconnect is fundamental. The solution is not a minor improvement to how the IT department works. To some extent the problem is so fundamental that people’s mindsets need to change.

Is this a technology problem?

Can this change? I’m not sure it can, at least in organisations where everything IT is to be solved by the IT department. Such a department, especially at the management level, is manned (and it’s usually men, at least for now) by people who have lived within IT departments and succeeded, so that they now reside at the top. In most organisations the IT folk retain final say on “technical” questions (which really aren’t technical questions) because of the ignorance and fear of senior management about such questions. It’s too easy for IT folk to say “you can’t do that” and for senior management not to have a clue that it is a load of bollocks.

Of course I should take my own advice: look for incompetence before you go paranoid. Senior IT folk, as with most people, will see the problem in the same way they have always seen the problem. They will always seek to solve it with solutions they’ve used before, because that’s the nature of the problem they see. One of the “technical” terms for this is inattentional blindness.

A fundamental change in approach is not likely. Dave Snowden suggests that the necessary, but not sufficient, conditions for innovation are starvation, pressure and perspective shift. Without that perspective shift, the gulf will continue to exist.

It’s not limited to IT

You can see evidence of this gulf in any relationship between “users” and a service group within an organisation (e.g. finance, human resources, quality assurance, curriculum design etc.), especially when the service group is a profession. The service group becomes so enamoured of its own problems, due to pressure from the organisation, the troubles created by the “users” and the distance (physical, temporal, social, mental etc.) between the service group and the “users”, that it develops its own language, its own processes and tasks, and starts to lose sight of the organisation’s core business.

The most obvious end result of the gulf is when the service department starts to think it knows best. Rather than respond to the needs, perceived and otherwise, of the “users”, the service department works on what it considers best, generally something that emphasises the service division and increases its funding and importance within the organisation. You can see this sort of thing all the time with people who are meant to advise academics about how to improve their learning and teaching.

IT is just the easiest and most obvious target for this because IT is now a core part of life for most professions, because most organisations continue to see it as overhead to be minimised rather than an investment to be maximised, and because the ongoing development of IT is changing the paradigm for IT departments.

A Paradigmatic Analysis of Information Systems As a Design Science

The following is a summary of and reflection upon

Juhani Iivari (2007). A Paradigmatic Analysis of Information Systems As a Design Science. Scandinavian Journal of Information Systems, 19(2): 39-64.

Reflection

This paper is somewhat similar, at a very abstract level, to one I’ve been thinking about. However, it’s told from a different perspective, with a different intent and different outcomes (and probably much better than I could). There is enough difference that I think I can still contribute something.

One aspect of that difference would come from the fact that the foundation of my thoughts will be Shirley’s types of theories which Juhani identifies as being more complete than the framework he developed.

Questions and need for further thinking

In the epistemology of design science section the author outlines a framework to structure IS research, somewhat equivalent to Shirley’s types of theories. Does this structure belong in the “epistemology” section or the “ontology” section?

The question of truth value and truthlikeness is something I need to read up on further.

The 12 theses

The author summarises his view in 12 theses. I’ve listed them below with, where they exist, some early indications of my problems and/or thoughts.

  1. Information Systems is ultimately an applied discipline.
    I agree. Juhani mentions the problems with the term “applied science” in the first footnote.
  2. Prescriptive research is an essential part of Information Systems as an applied discipline.
    Agreed. I would add that there has been significantly too much focus on the other forms of research – descriptive and explanatory – at the expense of prescriptive research. A flaw that has negatively impacted on the IS discipline.
  3. The design science activity of building IT artifacts is an important part of prescriptive research in Information Systems.
    I agree, however, I don’t see it as the main output or purpose of prescriptive research in information systems. At least not any more than building a quantitative survey is the main contribution/output of descriptive/explanatory research. For me, building an IT artifact is a method to test the theory being developed.
  4. The primary interest of Information Systems lies in IT applications and therefore Information Systems as a design science should be based on a sound ontology of IT artifacts and especially of IT applications.
    There’s a glimmer of agreement here, though I’m not sure how far it goes. I see IS as having its main interest in how IT applications are used by and impact organisations/groups/people. For me, a focus on just IT applications is computer science.
  5. Information Systems as a design science builds IT meta-artifacts that support the development of concrete IT applications.
    Agree, but with meta-artifacts expressed as information systems design theories.
  6. The resulting IT meta-artifacts essentially entail design product and design process knowledge.
    Yes.
  7. Design product and design process knowledge, as prescriptive knowledge, forms a knowledge area of its own and cannot be reduced to the descriptive knowledge of theories and empirical regularities.
    Not certain about this one. Mention a bit more below.
  8. Constructive research methods should make the process of building IT meta-artifacts disciplined, rigorous and transparent.
    Agree.
  9. Explication of the practical problems to be solved, the existing artifacts to be improved, the analogies and metaphors to be used, and/or the kernel theories to be applied is significant in making the building process disciplined, rigorous and transparent.
    Agree, but need more time to think about whether this is complete.
  10. The term ‘design theory’ should be used only when it is based on a sound kernel theory.
    Probably disagree, see more discussion below. Need more thought.
  11. Information Systems as a design science cannot be value-free, but it may reflect means-end, interpretive or critical orientation.
    Yes agree. I wonder if there are any other additional ethical perspectives.
  12. The values of design science research should be made as explicit as possible.
    Yes.

What distinguishes design science from IT development practice

Juhani suggests the use of rigorous constructive research methods as what distinguishes design science from practice, which leads him to admit that if a practitioner uses a constructive research method, then they are doing research.

I find this vaguely troubling.

I would suggest that we need to look to the output. My view assumes that an artifact is not a sufficient output for design science. If you accept that the expected output of research is the generation or testing of theory (knowledge), then the output of design science should be design theory (though I don’t like the phrase design science). An artifact can be part of the design theory, but not the sole output.

An IT practitioner will not (typically) generate design theory. They generate artifacts. A researcher aims to go the next step and generate design theory.

Does DSR have a positivistic epistemology?

Juhani argues that action research and design science research are very different in terms of history, practice, ontology and epistemology. As part of this he suggests that DSR (especially that from engineering and medicine) is based on a positivistic epistemology, and argues against Cole et al’s suggestion that some applications of DSR around IS within organisations might have a different epistemology.

This argument is based on his work on the paradigmatic assumptions underlying systems development approaches, which found that all 7 IS development approaches shared a fairly realist ontology and positivistic epistemology.

However, earlier in the paper he argues that systems development approaches are not a good match for use as constructive research methods. Hence how can an analysis of systems development approaches be used to argue anything about DSR? Yes, there is likely to be some strong overlap, but it doesn’t seem to be strong evidence.

Also, simply because these systems development approaches (and, one assumes, IS developers/researchers) have historically held this particular view does not mean that some practice of DSR with a different epistemology is excluded.

Test artifacts in laboratory and experimental situations as far as possible

It is suggested that action research can be used to evaluate artifacts and provide information on how to improve these artifacts. However, Juhani also suggests that design science artifacts should be tested in laboratory studies as far as possible.

I believe this closes off a major fruitful way of developing design theory, an approach that ties very much into Juhani’s first major source of ideas for design science research: practical problems and opportunities. DSR that uses action research as a methodology not only to evaluate but also to inform the design of an artifact/ISDT can lead to very fruitful ideas.

Does a design theory need a kernel theory?

Juhani says yes. If we do without there is a “danger that the idea of a ‘design theory’ will be (mis)used just to make our field sound more scientific without any serious attempt to strengthen the scientific foundation of the meta-artifacts proposed”.

There is something to this, but I also have some qualms/queries which I need to work through. The queries are

  • Situations where descriptive theory has to catch up with prescriptive theory.
    i.e. physics of powered flight being figured out after the Wright brothers flew.
  • Situations where descriptive theory is closing off awareness or insight.
    Someone deeply aware of descriptive theories will have a set of patterns established in their head which may limit their ability to be aware of the situation or to envision different courses of action (i.e. inattentional blindness, aka perceptual blindness).

    Awareness of a situation, or the ability to step outside established descriptive theories, may highlight new and interesting solutions (yes, I think this occurrence might be rare).

There is an argument to be had about the difference between the final version of the ISDT and its formulation. It may be that a complete/formal ISDT does need to have a kernel theory or two. However, it may not have been there at the beginning.

For example, the work that forms the basis of my design theory for e-learning started without clearly stated and understood kernel theories based on formal descriptive research. However, a very early paper (Jones and Buchanan, 1996) on that work included the following:

It is hoped that the design guidelines emphasising ease of use and of providing the tools and not the rules will decrease the learning curve and increase the sense of ownership felt by academic staff.

It’s not difficult to see in that statement a connection with diffusion theory and TAM. Descriptive knowledge that has informed later iterations of this work and diffusion theory certainly gets a specific inclusion as a kernel theory in the final ISDT.

What’s the kernel theory for the IS development life cycle?

In footnote 7 the author writes that Walls et al (1992) “suggest that the information systems development life-cycle is a design theory, although I am not aware of any kernel theory on which it is based.”

I agree, insomuch as I’m not aware of a clear statement of the kernel theories that underpin the SDLC. I also think that the absence of such a clear statement is a potential shortcoming.

There is a world view embodied in the SDLC. For example, I believe that the SDLC assumes that the world fits into the simple or complicated domains of the Cynefin framework and is completely inappropriate when used in other types of systems; even in the complicated domain it can be difficult. Agile/emergent development methodologies appear to be a better fit for the complex domain of Cynefin.

Which raises the question: is there value in going back and developing an ISDT for the SDLC which makes clear the assumptions that underpin it by providing kernel theories?

Irreducibility of prescriptive knowledge to descriptive knowledge

Juhani states that, since most IT artifacts aren’t strongly based on descriptive knowledge:

This makes one wonder whether the IS research community tends to exaggerate the significance of descriptive theoretical knowledge for prescriptive knowledge of how to design successful IT artifacts. In conclusion, in line with Layton (1974) I am inclined to suggest that prescriptive knowledge forms a knowledge realm of its own and is not reducible to descriptive knowledge.

That seems to be a rather large leap to me. The questions it brings to mind include

  • Does the absence of strong links mean it’s irreducible?
    I don’t understand how Juhani has gotten from “most IT artifacts have weak links to descriptive knowledge” to “prescriptive knowledge is not reducible to descriptive knowledge”.

    Not to suggest it’s wrong. It’s just that I’m not smart enough to make the connection, yet.

  • Is there more to this statement than meets the eye?

    Despite this weak reliance on descriptive theories people design reasonably successful IT artifacts.

    • What types of artifacts are reasonably successful? Who says? Why are they successful?
      There’s a large amount of literature about the failure of large scale information systems. Is that failure due to the weak reliance?

      We can all point to systems that are being used by people to perform tasks. But does use mean success? Does it mean that the need of the folk is strong enough that they will adapt and work around the system enough to do the task they wish to achieve? Is success generating the best possible system? How do you evaluate that?

      Perhaps the success of some systems, even with weak reliance on descriptive knowledge, simply proves how adaptable people are.

    • Does weak reliance mean none?
      The example I give above shows a situation where, without knowledge of a specific type of descriptive knowledge (diffusion theory), a practitioner was already aware of something very similar and of a need to go that way. An example of the relevance/rigor gap?

If you haven’t noticed, I lost my way in the above. Need to come back to it. I feel there is more to unpack there.

Summary

Abstract

Discusses the following aspects of design science:

  • ontology – suggests ontology of IT artifacts, draws on Popper’s three worlds as a starting point
  • epistemology – emphasizes the irreducibility of the prescriptive knowledge of IT artifacts to theoretical descriptive knowledge, suggests a 3 level epistemology for IS – conceptual knowledge, descriptive knowledge and prescriptive knowledge
  • methodology – expresses a need for constructive research methods for disciplined, rigorous, transparent building of IT artifacts as outcomes of design science research (so as to distinguish design research from simply developing IT artifacts), also discusses connections between action research and design science research.
  • ethics – points out IS as a design science cannot be value free, distinguishes three ethical positions: means-end oriented, interpretive and critical


Introduction

Computer science has always been doing design science research. Much of the early IS research focused on systems development approaches and methods, i.e. design science research.

But the last 25 years of mainstream IS research has lost sight of these origins – due to the “hegemony of the North-American business-school-oriented IS research” over leading IS publication outlets.

The dominant research philosophy has been to develop cumulative, theory-based research to be able to make prescriptions.

A pilot analysis of the practical recommendations in MISQ articles between 1996 and 2000 showed that they were weak (Iivari et al. 2004).

Current upsurge in interest in design science may change this. Also important that these papers have turned attention onto how to do design science research more rigorously.

IS is increasingly being seen as an applied science; a quote from Benbasat and Zmud (2003):

our focus should be on how to best design IT artifacts and IS systems to increase their compatibility, usefulness, and ease of use or on how to best manage and support IT or IT-enabled business initiatives.

Iivari’s (1991) previous work on applying paradigms to IS development approaches or schools of thought used the Burrell and Morgan (1979) framework but expanded it in two ways to encapsulate his design science background.

  1. Added ethics as an explicit dimension
  2. Incorporated constructive research to complement nomothetic and idiographic research

This essay revisits that work and applies it directly to design science research.

Ontology

States that design research should be based on a sound ontology. However, he does not state explicitly (at least at this stage) why this is the case. I’m not suggesting that it shouldn’t be based on a sound ontology, but I want to know why Juhani thinks it should be.

Talks about Popper’s (1978) three worlds as the basis for this ontology (from a lecture delivered by Popper):

  • World 1 – physical objects and events, including biological entities
  • World 2 – mental objects and events
  • World 3 – products of the human mind, includes human artifacts and also covers institutions and theories

Popper describes World 3 as including “also aeroplanes and airports and other feats of engineering”.

Iivari argues

  • institutions are social constructions that have been objectified (Berger and Luckmann, 1967)
  • truth and ‘truthlikeness’ (Niiniluoto 1999) can be used in the case of theories, but not artifacts
  • Artifacts are only more or less useful for human purposes

Disciplines of computing are interested in IT artifacts. Dahlbom (1996) adopts a broad and possibly confusing interpretation of the concept of the artifact, including people and their lives. Coming back to just IT, he says:

When we say we study artifacts, it is not computers or computer systems we mean, but information technology use, conceived as a complex and changing combine of people and technology. To think of this combine as an artifact means to approach it with a design attitude, asking questions like: Could this be different? What is wrong with it? How could it be improved? (p. 43).

Dahlbom also claims the discipline should be thought of as “using information technology” instead of “developing information systems” (p.34). Need to look at this more to see if there is much more to this claim than the surface interpretation.

Starts thinking about developing a sound ontology for design science. Identifies the need to answer the question of what sort of IT artifacts IS should build, especially if we wish to distinguish ourselves from computer science. In terms of an ontology of artifacts he mentions:

  • Orlikowski and Iacono (2001) – names for the IT artifact
    And their list of views of technology: computational, tool, proxy and ensemble.
  • March & Smith (1995)/Hevner et al (2004) from design research
    And their constructs, models, methods and instantiations. Iivari suggests this is a very general classification; its application is not always straightforward.
  • diffusion of innovations – Lyytinen and Rose (2003), refining Swanson (1994), identify:
    • base innovations
    • systems development innovations
    • services – administrative process innovations (e.g. accounting systems), technological process innovations (e.g. MRP), technological service innovations (e.g. remote customer order entries), and technological integration innovations (e.g. EDI).

In my view the primary interest of Information Systems lies in IT applications.

Defines 7 archetypes of IT applications. As archetypes they may not occur in practice in their pure forms.

The seven archetypes (role/function, metaphor, examples, and the connection with Orlikowski & Iacono’s views of technology):

  • To automate – metaphor: processor; examples: many embedded or transaction processing systems; O&I: technology as labour substitution tool
  • To augment – metaphor: tool (proper); examples: many personal productivity systems, computer aided design; O&I: technology as productivity tool
  • To mediate – metaphor: medium; examples: email, instant messaging, chat rooms, blogs, electronic storage systems (e.g. CDs and DVDs); O&I: technology as social relations tool
  • To informate – metaphor: information source; examples: information systems proper; O&I: technology as information processing tool
  • To entertain – metaphor: game; examples: computer games
  • To artisticize – metaphor: piece of art; examples: computer art
  • To accompany – metaphor: pet; examples: digital (virtual and robotic) pets

The above table builds on an interpretation of an information system which:

  • sees it as a system whose purpose “is to supply its group of users with information about a set of topics to support their activities” (Gustafsson et al, 1982, p100)
  • implies that an IS is specific to the organisational/inter-organisational context in which it is implemented
  • treats information content as a central aspect

Differences between IT artifacts include

  • In design – different design approaches used for different purposes
  • In their diffusion – Swanson (1994) and Lyytinen and Rose (2003)
  • In their acceptance – Iivari’s conjecture

Proposes that IT artifacts have invaded all of Popper’s worlds

  1. IT artifacts are embedded in natural objects, e.g. to measure physical states, and nanocomputing may open up new opportunities. How IT artifacts affect natural phenomena is likely to become a significant research problem.
  2. IT artifacts are influencing our consciousness and mental states, our perceptions.
  3. Significant constituents of organisations and societies – make it feasible to develop more complex theories.

Research phenomena such as those below influence epistemology and methodology:

  1. How does the use of a mobile phone affect one’s brain temperature?
  2. How does the use of a mobile phone affect one’s perception of time and space?
  3. How do mobile phones affect the nature of work in organisations?

An ontology for design science

  • World 1 – explanation: nature; research phenomena: IT artifacts + World 1; examples: evaluation of IT artifacts against natural phenomena
  • World 2 – explanation: consciousness and mental states; research phenomena: IT artifacts + World 2; examples: evaluation of IT artifacts against perceptions, consciousness and mental states
  • World 3 – explanation: institutions, theories and artifacts (IT artifacts, IT applications, meta IT artifacts); research phenomena: IT artifacts + World 3 institutions, theories and artifacts; examples: evaluation of organizational information systems, new types of theories made possible by IT artifacts, evaluation of the performance of artifacts comprising embedded computing

Epistemology of design science

Truth, utility and pragmatism. Argues against adopting the pragmatist idea that truth is practical utility. Artifacts, if theories are excluded, do not have any truth value. Practical action informed by theory may develop some level of truth if it consistently proves to be successful.

Draws on his earlier work in adopting a framework from economics to structure research within IS. It’s again based on the type of knowledge being produced; in his case there are three types:

  1. Conceptual knowledge – which has no truth value
    Includes concepts, constructs, classifications, taxonomies, typologies and conceptual frameworks.
  2. Descriptive knowledge – has truth value
    Includes observational facts, empirical regularities and theories/hypotheses which group under causal laws.
  3. Prescriptive knowledge – which has no truth value
    Design product knowledge, design process knowledge and technical norms.

The author suggests the following mapping between his framework and Shirley’s types of theory

  1. Conceptual – “Theories for analysing and predicting”
  2. Descriptive – theories for explaining and predicting and theories for explaining (as empirical regularities)
    Can include

    • observational facts – who invented what, when.
    • descriptive knowledge – TAM, Moore’s law
    • empirical regularities and explanatory theories identify causal laws that are either deterministic or probabilistic
  3. Prescriptive – theories for design and action
    Relatively speaking, prescriptive knowledge is the least well understood form of knowledge in Table 3.

Suggests that theories of explaining, in the form of grand theories such as actor-network theory, do not fit into his framework. But they do in Shirley’s.

On the question of truth value or truthlikeness:

  • Conceptual – the goal is essentialist, to identify the essence of the research territory and the relationships. May be more or less useful in developing theories at the descriptive level (quotes Bunge 1967a here).
  • Prescriptive – artifacts and recommendations do not have a truth value. Only statements about their efficiency and effectiveness have such a value

Beckman (2002) identifies four criteria of artifacts:

  1. Intentional – the knife is a knife because it is used as a knife
  2. Operational – it is a knife because it works like a knife
  3. Structural – is a knife because it is shaped and has the fabric of a knife
  4. Conventional – is a knife because it fits the reference of the common concept of a ‘knife’

Juhani does not include the conventional criterion in his notion of the artifact, as an artifact may not achieve community acceptance until years after its invention and construction.

Prescriptive knowledge is irreducible to descriptive knowledge

Suggests that most IT systems are built divorced from descriptive knowledge. There is only a weak link between IT artifacts and descriptive knowledge. And yet IT systems are still reasonably successful.

This makes one wonder whether the IS research community tends to exaggerate the significance of descriptive theoretical knowledge for prescriptive knowledge of how to design successful IT artifacts. In conclusion, in line with Layton (1974) I am inclined to suggest that prescriptive knowledge forms a knowledge realm of its own and is not reducible to descriptive knowledge.

Kernel theories

Believes the presence of a kernel theory is the defining characteristic of a “design theory”.

This is seen as difficult and leads to a softening of the requirements for a kernel theory – e.g. Markus et al (2002) allowing any practitioner theory-in-use to serve as a kernel theory, implying that the design theory is not based on scientifically validated knowledge.

Methodology of design science

Classifications of IS research methods (Benbasat, 1985; Jenkins, 1985; Galliers and Land, 1987; Chen and Hirschheim, 2004) do not recognise anything resembling constructive research methods. Iivari (1991) suggested “constructive research” as the term to denote the research methods required for constructing artifacts.

Positions building artifacts as a very creative task; hence it is difficult to define an appropriate method for artifact building. Having constructive research methods is essential for the identity of IS as a design science. The rigor of its methods is what distinguishes design science from the practice of building IT artifacts.

Suggests two ways to identify the difference

  1. There are no constructive research methods; instead, the difference lies in the evaluation. Design science requires scientific evaluation of the artifacts.
    A drawback is that this may lead to reactive research, where IS as a design science focuses on the evaluation of existing artifacts rather than building new ones.
  2. Define a rigorous approach for constructive research and use this to differentiate design science from invention in practice.

Iivari didn’t specify the constructive research methods. Talks about Nunamaker et al (1990-1991) and their suggestion that systems development methods could serve this role. Iivari doesn’t appear to think so. Pitfalls include:

  • Do SDMs allow sufficient room for creativity and serendipity which are essential for innovation?
    A significant concern when attempting to make the building process more disciplined, rigorous and transparent.
  • The most serious weakness of the Nunamaker et al suggestion is that it integrates systems development quite weakly with research activities.

Hevner et al (2004) suggest that rigor in design science research is derived from the effective use of prior research – using the existing knowledge base. Iivari claims it lies in making the construction process as transparent as possible.

The source of ideas

Iivari suggests four major sources for ideas for design research

  1. Practical problems and opportunities
    Emphasizes the practical relevance of this research. Customers are known to be a significant source of innovations (von Hippel, 2005), but practical problems may be abstracted or seen slightly differently. Design science can also create solutions long before a problem is seen/understood.
  2. Existing artifacts
    Most design science research consists of incremental improvements to existing artifacts. One must understand what has gone before, if only to evaluate the contribution.
  3. Analogies and metaphors
    Known that analogies and metaphors stimulate creativity.
  4. Theories
    i.e. kernel theories can serve as inspiration

Design science and action research

Many authors have associated design science and action research, since both attempt to change the world. Iivari suggests that they differ in a number of ways

  • Historically
    Action research – socio-technical design movement. Design science – engineering.
  • Practically
    Action research – focused on “treating social illnesses” within organisations and other institutions. Technology change may be part of the treatment, but the focus is more on adopting than building technology.
    DSR – focus on the construction of artifacts, most having material embodiment. Usually done in laboratories, clearly separated from potential clients.
  • Ontologically
    DSR – in engineering/medicine adopts a realistic/materialistic ontology.
    Action research – accepts a more nominalistic, idealistic and constructivist ontology.

    Materialism attaches primacy to Popper’s World 1, idealism to World 2. Action research is also interested in the institutions of World 3.

  • Epistemologically
    Consequently, design research, especially in engineering and medicine, has a positivistic epistemology in terms of both the knowledge applied from reference disciplines and the knowledge produced. Action research is strongly based on an anti-positivistic epistemology. The very idea of AR is anti-positivistic, as each client is unique.
  • Methodologically.

Cole et al (2005) take the alternate perspective that design science and AR share important assumptions regarding ontology and epistemology. Cole et al implicitly limit design science to IS in an organisational context; if so, shouldn’t the ontology and epistemology of DSR be different? Juhani is doubtful about this, based on his work evaluating systems development approaches – but he said earlier that systems development approaches aren’t a good match for constructive research, for DSR. Can he make this connection here?

Ethics of design science

Design science shapes the world. “Even though it may be questionable whether any research can be value-free, it is absolutely clear that design science research cannot be.” which suggests that the basic values of research should be expressed as explicitly as possible.

Juhani then uses his own work (1991) to identify three roles (types of ethics?)

  1. Means-end oriented
    Knowledge is provided to achieve an end without questioning the legitimacy of that end.
    Evaluation here is interested in how effectively the artifact helps achieve the end.
  2. Interpretive
    The goal is to enrich understanding of action. Goals are not clear; the focus is on unintended consequences.
    Evaluation seeks to achieve a rich understanding of how an IT artifact is really appropriated and used and what its effects are, without focusing on the given ends.
  3. Critical
    Seeks to identify and remove domination and ideological practice. Goals can be subjected to critical analysis.
    Evaluation focuses on how the IT artifact enforces or removes unjustified domination or ideological practices.
Most DSR is means-end oriented, but it can be critical (e.g. Scandinavian trade-unionist systems development approaches).

Questions the values of IS research – whose values and what values dominate?

Conclusions

Introduces the 12 theses summarised right up the top


The many Ps

For those of you who don’t know, I’m developing something called the Ps Framework for sensemaking around e-learning (read the earlier post for more information).

Jocene, one of my current colleagues and someone who is having to engage with the Ps Framework, has somewhat regularly complained about IT folk and their interest in the Ps, Cs, etc.

She seems to have some connection with someone else with a similar line of thought. Jason Woodruff in this post outlines and complains about some of the “P Words” that are floating around. Always useful to keep things in perspective.

The other reason I mention this is the “synergistic” way I came across Jason’s post from about a year ago. I’ve just moved my website from a personal server to WordPress. When I posted my most recent post about the Ps Framework, wordpress.com included links to what it thought were related posts at the bottom. Jason’s was one of them.

The “web 2.0” move, to a site more connected with others, seems to have paid off in a small way.

The Ps Framework – avoiding perceptual blindness?

The Ps Framework is a taxonomy/framework that has arisen out of my attempts to understand the literature around e-learning within universities. Its main aim is to serve as the structure for chapter 2 of my thesis. This blog post seeks to summarise some of the origins and potential futures of the Ps Framework and also to explain why I think it might be useful.

Origins

The Ps Framework started life as “the missing Ps”. As part of my thesis I was seeking to identify what I thought was missing from the literature and practice of e-learning within universities, and to demonstrate how the information systems design theory (ISDT) I’m developing addresses some of these holes. The first public airing of the missing Ps was in the following presentation.

Wow, in finding that slideshow presentation, I’ve just discovered the presentation has been viewed 3900+ times. And that’s for a presentation that was incomplete because time got away from me. If you want, you can view the video of the presentation. I should note that the original presentation made a lot of use of background images, but Slideshare’s limitation on file size meant that I had to remove them from the online version. The video will show you the full version and you can also (at least for now) download the full powerpoint file.

Since then the missing Ps have evolved into the Ps Framework and been used for two publications

  1. PLEs: Framing one future for lifelong learning, e-learning and universities
    Presentation and paper.
  2. The Ps Framework: Mapping the educational technology landscape for the PLEs@CQUni project
    Presentation (including video) and paper.

And in the near-term future they might actually be used for their intended purpose – chapter 2 of my thesis.

Rationale

In the 15+ years I’ve been muddling with e-learning within universities, the defining characteristic I would ascribe to both the practice and research within this sphere is inattentional blindness, with “bounded rationality” not too far behind.

Take a look at the following video

Bet you can't do this

When it comes to e-learning, the same problem arises. No matter who looks at e-learning, their answer to the question “how to do e-learning” is almost always limited by their background and experience. They do not see the whole picture; they focus only on what they know, and that drives how they answer the question. Some examples include:

  • If you employ a consultant who is an expert in product X or approach Y, they will almost certainly recommend product X or approach Y, regardless of whether or not it is a good fit for you or your organisation.
  • If an expert in IT looks at the problem, any solution you will get will emphasise security, IT governance, scalability, uptime, and other IT issues, rather than learning and teaching.
  • If an expert in lifelong learning looks at the problem, any solution you get will emphasise that.

The Ps Framework is based on a couple of assumptions:

  1. People will always make decisions based on what is familiar to them.
  2. The implementation of e-learning within a university is an example of a wicked design problem.
    Some of the characteristics of a wicked design problem listed on the wikipedia page include:

    • There is no definitive formulation of a wicked problem.
    • Stakeholders have radically different world views and different frames for understanding the problem.
    • No unique “correct” view of the problem.

A lot of the limitations in the practice of e-learning within universities come from people who don’t agree with or recognise these assumptions – people who are not aware of, or fail to engage with, the variety of perspectives, and who don’t recognise that their own perspectives are inherently limited.

Lastly, if you accept some of the other characteristics of wicked problems:

  • The problem is not understood until after formulation of a solution.
  • Constraints and resources to solve the problem change over time.
  • Consequences difficult to imagine.
  • Considerable uncertainty, ambiguity.

It becomes almost impossible to see how any single perspective, no matter how diverse and well informed, could remain a reasonable perspective over time.

The Ps Framework – summarising potential different perspectives

Initially the Ps Framework was developed simply to highlight the limitations of the common perspectives within e-learning practice and literature. This was the approach used in the first presentation. Some examples.

  • Product – almost all university-based e-learning (from the late 1990s to now) assumes that the organisation must choose an LMS (e.g. Blackboard, Sakai, Moodle etc.). The only major change, in terms of organisational practice, over recent years has been the recognition that it is okay if the product is open source.
  • Process – almost all university-based e-learning is teleological. It ignores, and in many cases actively attacks, ateleological development. Jones and Muldoon (2007) go into more detail on this point.
  • Place – almost all university-based e-learning, like most organisational practice, assumes that the nature of the organisation in which it operates is simple. Dave Snowden’s Cynefin framework identifies four other types of system. Each of these systems has radically different characteristics, which means many of the assumptions behind strategies used in the simple type of system are completely and utterly inappropriate.

The Ps Framework as part of a sensemaking process

The most recent paper using the Ps Framework (Presentation (including video) and paper) is starting a move away from the Ps Framework embodying a pre-defined checklist of different perspectives.

The paper still, to a certain extent, reads as a simple list of one perspective on the educational technology landscape. Through discussions with others, the intent of the paper has become to move towards using the Ps Framework as a tool in a sensemaking process in the pre-decision stage of an educational technology project.

That is, use the Ps Framework, with an as yet undeveloped process, to (see the illustrative sketch after this list):

  • Develop a collection of very different perspectives of the situation and the project.
  • Have the decision makers actively engage with those different perspectives to improve the quality and possibly consistency of their perspectives of the problem/project.
  • As the project progresses, revisit the perspectives of many different folk in order to understand where things are up to, and where they should go now.
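
As a purely illustrative aside, here is one hypothetical way the first step – collecting very different perspectives against the Ps – might be recorded. The component names (Product, Process, Place) come from the framework; the stakeholder roles and views below are invented assumptions, and the sketch is in no way part of the framework itself.

    # Purely illustrative, hypothetical sketch: recording stakeholder
    # perspectives against Ps Framework components in the pre-decision stage.
    # Component names come from the post; roles and views are invented.
    from collections import defaultdict

    perspectives = defaultdict(list)  # component -> [(stakeholder, view), ...]

    def record(component, stakeholder, view):
        """Capture one stakeholder's perspective on one of the Ps."""
        perspectives[component].append((stakeholder, view))

    record("Product", "IT division", "We must select and standardise on one LMS.")
    record("Product", "Teaching staff", "Any new system costs us familiarity and time.")
    record("Place", "Researcher", "The organisation is complex, not simple.")

    # Decision makers then engage with the diversity of views per component.
    for component, views in perspectives.items():
        print(component)
        for stakeholder, view in views:
            print(f"  {stakeholder}: {view}")

The point of any such structure would simply be to keep competing perspectives visible side by side, rather than collapsing them into a single “correct” view.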

Representing the Ps Framework

An increasingly regular request is for a pretty diagram or table to summarise the Ps Framework, to make it easier for folk to understand its components. In the past I have been guilty of producing rather ugly and simple diagrams to represent the Ps Framework.

The very first version, from the first presentation, tried to establish some sort of order or hierarchy.

Version 1 of the Ps Framework

The 2008 version keeps some of that, but tries to emphasise the on-going impact each of the Ps has on the others, and that place is the fundamental underlying consideration.

Version 2 of the Ps Framework

One source of my reluctance to have a diagram is that there is an expectation to show relationships between the components. I believe there are problems with that. In the latest paper I wrote:

The context of implementing educational technology within a university is too complex for such a simple reductionist view. It is also likely that different actors within a particular organization will have very different perspectives on the components of the Ps Framework in any given context.

Any diagram that shows some sort of connection is liable to lead people to see relationships that aren’t there. This is also likely to limit/impact its use as a sensemaking aid. Too much structure will bias folk and their perspectives.

Perhaps there is a need for multiple, diverse representations of the Ps Framework.

Time to drink some rum

Look for incompetence before you go paranoid

I suggest that many of the problems organisations face can be traced back to a few observations

  • There are limited resources, the organisation can’t do everything.
  • Most of the important problems faced by the organisation are wicked problems.
    There is no one correct solution, or even an easy way to identify the “best” solution.
  • Large organisations are inherently “multi-cultural”.
    The different professions which make up an organisation have different world views.

Consequently, when it comes to solving a difficult problem there are on-going battles between the different sub-cultures about how best to solve it. Battles that can often (and sometimes quickly) decline into political battles that get quite nasty. So nasty that people can often feel (sometimes rightly so) that others are out to get them.

For quite some time I’ve used the following phrase as an alternative perspective

Look for incompetence before you go paranoid.

To be honest, the original intent was probably to suggest that the other folk know less (incompetence). My understanding of it has now evolved to the stage that, most of the time, their “incompetence” simply means that they know differently. Their different cultural perspective is showing through. They have a different way of looking at the world.

The trouble is that every perspective or world view brings its own blind spots. People with a certain perspective simply can’t see things that others can. An example/technical definition of this is inattentional blindness or perceptual blindness.

An example wicked problem

An example wicked problem is the selection of an information system to support e-learning within a university. A decision being faced at my current institution. The decision has been made to go from Blackboard v6.3 to an open source learning management system – either Sakai or Moodle.

One of the reasons behind the move to an open source LMS is the reduction in cost. There is a range of perspectives on this reason. In a recent post on her blog, Jocene describes one of these perspectives, held by someone at the institution.

He was a fan of the system. Apparently it only costs the university $55k per annum to run it as the CMS. Now I haven’t actually shopped around, but it sounds like a bargain to me. Anyway, we are moving away from it, apparently, to something more expensive and less familiar to the end-users

It would be interesting to hear the argument for how Blackboard is cheaper, at least on this point. I believe the $55K mentioned is meant to represent licence fees paid to Blackboard for using their product. Given that the open source product has no licence fees, I am not sure how it can be more expensive.

Of course this doesn’t factor in support costs etc., but the support costs are going to be pretty much the same. The lack of familiarity for users is going to be a problem, but we have to move away from Blackboard 6.3 anyway (it will no longer be supported by the vendor at the end of next year), and even if we moved to a more recent version of Blackboard, that version would be as unfamiliar to the users as either open source option.
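
To make the arithmetic behind that argument concrete, here is a rough back-of-envelope sketch. Only the $55K licence fee comes from the discussion above; the support and migration figures are hypothetical assumptions, purely for illustration.

    # Hypothetical back-of-envelope comparison. Only the $55K licence fee
    # is from the post; the support and migration figures are invented.
    LICENCE_FEE = 55_000   # annual Blackboard licence fee (from the post)
    SUPPORT = 200_000      # assumed roughly the same either way
    MIGRATION = 100_000    # a migration is required either way

    blackboard_upgrade = LICENCE_FEE + SUPPORT + MIGRATION
    open_source_move = 0 + SUPPORT + MIGRATION

    # If support and migration really are comparable, the licence fee is
    # the only remaining difference, and it favours the open source option.
    print(blackboard_upgrade - open_source_move)  # 55000

The sketch only holds, of course, if support and migration costs really are comparable – which is exactly the kind of assumption the different sub-cultures are likely to contest.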

The above shows just two very different views of this problem. If you delve deeper into the conceptions of the problem held by the various sub-cultures at the institution, you would be almost certain to find many, many more. And these are likely to create significant discussions.

Perceptual blindness

I’m willing to bet that very few, if any, of those perceptions, especially those held by people directly involved in the selection process, contain anything like the perspective Jocene goes on to express in her blog post.

I wonder if the person Jocene was talking with is able to see the alternate perspective Jocene expresses?

Paraphrased by me, that perspective sees a level of control in the position of the IT support person: in their defining of what systems can be used, and in their use of language to exclude non-IT people. The perspective raises the question of whether the rise of PLEs, Web 2.0 and social media will clash with this level of control, and even turn the tables and put the control back in the hands of the “users”.

Aside: There’s the old question, “There are only two industries in the world that call their clients/customers ‘users’. Can you guess which ones?
The computer industry and the illicit drug trade”.

The warlike atmosphere

For me this discussion is starting to develop into a dichotomy. On one side you have the “controlling IT folk” who are denigrating users and making things more difficult for them. On the other side you have the downtrodden user fighting back with the help of the liberating technology of PLEs, Web 2.0 etc.

In trying to make that point I looked at the Wikipedia page on dichotomy, which includes the following interesting and relevant quote:

In The Argument Culture (1998), Deborah Tannen suggests that the dialogue of Western culture is characterized by a warlike atmosphere in which the winning side has truth (like a trophy). In such a dialogue, the middle alternatives are virtually ignored.

Too often it appears that the differences in opinion within organisations lead to a warlike atmosphere, creating an environment where the middle alternatives, usually the more appropriate alternatives, are ignored. An environment in which there can only be one winner.

An environment where paranoia leads to the assumption that the bastards are out to get me and I had better fight back, rather than an understanding that the bastards are actually “incompetent” (i.e. they have a different perspective on the situation) and that there may actually be value in engaging in a dialogue.

PLEs and universities

Each year EDUCAUSE (a US-based “nonprofit association whose mission is to advance higher education by promoting the intelligent use of information technology”) surveys “technology leaders in higher education” about their top concerns. The 2007 Top-Ten IT Issues survey results indicate that the #1 concern is “Funding IT”. This had been the #1 concern from 2003 to 2005 and “lost out” in 2006 to a combined concern (Security and Identity/Access Management), which was split in 2007.

In this type of environment, it’s not surprising that IT leaders are keen to control how and what is done with IT at universities. They have to pay for it, and they are concerned about how they can pay for it. IT is expensive; the only way you can save money is to ensure that it is used efficiently, and you need IT expertise to make those judgements.

There are a number of folk, including me, who have argued that the changes behind PLEs, Web 2.0 etc. are creating a paradigm change for IT departments. A paradigm change is not an easy thing to handle.

Consequently any project that seeks to introduce the use of PLEs or even simply Web 2.0 technologies into an existing organisation is going to have to deal with a likely paradigm change. A change that, according to Kuhn, only happens through a complex social process. An engaged dialogue, rather than a war.

Design Based Research vs. Mixed Methods: The Differences and Commonalities

This post is a summary and some reflection on a discussion paper posted to ITForum. It’s by Goknur Kaplan Akilli and is titled Design Based Research vs. Mixed Methods: The Differences and Commonalities. The author is a PhD candidate with some interesting research runs on the board.

The following contains two main sections

  1. Reflections – my, as yet incomplete, meanderings on the paper.
  2. Summary – an attempt to understand the paper.

Reflections

This paper seems to indicate that the education discipline, like information systems, is struggling with understanding where design research fits. How is it different? How to do it well? Reading this paper should help me, given that one of my current tasks is to re-write chapter 3 of my thesis and essentially set out what I understand about these questions.

Given my background in information systems much of my thoughts on the following are influenced by that work. One representation of that work is given on the design research in information systems page. However, I don’t agree with a number of the points made there.

Questions and points of disagreement or departure

DBR: a methodology or a paradigm

There appears to be some confusion over whether DBR is a methodology or a paradigm, which may come back to a lack of agreement on the difference between the author and her cited sources.

In the last paragraph of the section on DBR the author suggests

Lastly, the immaturity of the methodology is another criticism (Kelly, 2004; Wang & Hannafin, 2005), which consists of methodological challenges that need to be addressed if DBR is to be developed “from a loose set of methods into a rigorous methodology” (Kelly, 2004, p.116).

In the conclusion the author suggests

DBR is more of a generic paradigm rather than a method in the way that mixed methods research is. DBR offers a new worldview of theory development and refinement along with design to construct design sciences of education.

So which is it, paradigm or method?

Dede’s “conditions of success” criticism

In a couple of sentences the author describes Dede’s (2004, 2005) criticisms of DBR related to “conditions of success”.

For instance, Dede (2004) argues that there seem to be hardly any standards to decide whether a design should be dropped, or sustained and further explored due to its promising nature, by differentiating it from its “conditions of success” (p.109). However this is not possible, since the findings in DBR are strongly bounded with contextual variables shaping the design’s “desirability, practicality and effectiveness” (Dede, 2005, p.7).

At this point in time, I don’t understand (at all) this criticism. Appears I have some more reading to do.

Interventions embody theory

In characterising DBR, the author follows (DBRC 2003) in saying that the interventions arising from DBR embody specific theoretical claims about teaching and learning. I’m assuming that one reason for this is that, as research, DBR should be purposeful about its interventions and shouldn’t simply be trying any idea that crops up; it needs to be informed by theory. Within the information systems field and its approach to design research, the theories which a design embodies are called “kernel theories”, a term coined by Walls et al (1992).

One of the foundations of this work is Simon’s The Sciences of the Artificial, the work that underpins the interest in design research in a number of fields, including information systems.

Simon talks about the idea that it is possible to design a successful artifact that makes a contribution without being fully aware of all of the theories/knowledge which underpin the artifact. The example often used is that the folk who built the first airplanes had little understanding of the aeronautical sciences, the physics that explained why their machines flew.

There is also the problem that simply following learning and teaching theories may lead to inattentional blindness (aka perceptual blindness) – a situation where the established theories of learning and teaching limit what you can actually see or envision happening in a given situation.

The possibility of requiring all educational DBR to embody existing learning and teaching theory is troubling if you agree that the vast majority of learning and teaching theories have been developed by more “traditional” research – an approach which, according to the DBR proponents, has some significant flaws. Hence the patterns embodied by current theories of learning and teaching may have flaws which limit possibilities.

A later characteristic of DBR – interactive, collaborative, iterative and flexible processes – addresses this somewhat, in the recognition that DBR is flexible enough to react to situations where the expected outcomes do not occur in reality, leading to a re-thinking of these understandings of the world.

So perhaps the characteristic re: embodying theory should be understood as meaning that, at the end of the DBR process, the work should embody some sort of theory around learning and teaching. But perhaps, when it started, that theory was not well understood or espoused, or perhaps a new one developed as the iterative process was followed.

The role of theory

Throughout the paper, and the literature it quotes, there appear to be some issues with the definition of theory, including:

  • No clear definition of what the author or the literature thinks theory is (or not).
  • Many different terms that are close to theory but seem to indicate a difference, e.g. prototheories, design theory, design principles, “usable knowledge”.

All this seems to point to a lack of agreement on a fairly fundamental building block of this argument – especially if you agree that the ultimate aim of research is to generate knowledge, which should typically be expressed as theory.

This is especially troubling given that one of the stated criticisms of DBR is that it often doesn’t make a significant contribution to theory. This might well be expected if the current understanding of theory within the education discipline is more appropriate to traditional research, and somewhat underdone or limiting given the nature of design research.

The nature of theory also raises its head in terms of the problems facing DBR in “universality of findings”.

I also wonder if my connection with Shirley and her work on the nature of theory in information systems colours my perspective. I think the very limited reading I’ve done of the education-based DBR literature also shows here, as I believe some of that literature does address this issue, though not with the same outcomes as Shirley’s work.

Misc questions

Much of the design research thinking in information systems is focused on the IT (or IS) artifact. The aim of design research is to construct or develop theory to guide the construction of an IT artifact. What, if anything, is the similar aim in education?

Summary

The basic aim of the paper is to establish that there is a difference between DBR and mixed methods as a research methodology. It has two main sections, one on each of the respective approaches. One assumes (given that I haven’t read the paper) that these two sections explain the differences and serve the author’s purpose.

In the author’s conclusion she argues

  • Mixed methods is a research method – a third methodology that arose from the qualitative/quantitative paradigm wars.
  • DBR is more of a generic paradigm; she claims it is a “wicked” paradigm, since it deals with wicked problems (Rittel & Webber, 1973).
  • DBR offers a new worldview of theory development and refinement.
  • But it also offers a newly-emerging research methodology drawing on different fields of design and education, including mixed methods.
  • DBR produces knowledge that is
    • dynamic,
      It is knowledge that changes in relation to context, shaped by time, place, actors and actions.
    • usable,
      Knowledge that informs theories and real-world practices.
    • glocal.
      Local in that it produces tentative generalizations drawn from initial implementations; global because these generalisations can be “globalised” through studies with similar contexts.

Design-Based Research

First, the author establishes some of the variety in perspectives, or at least terminology, used around DBR

Design Based Research Collective (DBRC) (2003, p.5) characterizes DBR as a research paradigm that “blends empirical educational research with the theory-driven design of learning environments,”

Wang and Hannafin (2005) define it as “a systematic but flexible methodology [italics added] aimed to improve educational practices through iterative analysis, design, development, and implementation, based on collaboration among researchers and practitioners in real-world settings, and leading to contextually-sensitive design principles and theories”

DBR aims to develop and refine theories via closely linked strategies rather than testing intact theories using traditional methodologies (Edelson, 2002).

Suggests that the origins of DBR were to

  • develop a design science of education – a connection back to Simon (1969).
  • develop a methodology to help develop design theory (Collins, 1992)
  • prevent the detachment of educational research (laboratory settings) from problems and issues of everyday practice
  • close the credibility gap (Levin & O’Donnell, 1999)
  • develop more “usable knowledge”.

The main characteristics of DBR are

  • Pragmatic.
    It is based in real-world situations. Attempts to improve those through interventions, but at the same time make a contribution to theory. The value of theory is in its utility to practitioners and other designers.
  • Theory-driven and grounded in real-world contexts.
    The interventions embody specific theoretical claims about teaching and learning.
  • Uses a process that is interactive, collaborative, iterative and flexible.
    It continues to respond to the findings within the real world setting.
  • Is integrative through the richness and variety of theories, methods and procedures utilized to meet research needs.
    Multiple (mixed) methods are used to analyse and refine the intervention.
  • Contextualised.
    It cannot be thought of as independent from context; it must involve authentic settings.

Criticisms of DBR include

  • “Conditions of success” – mentioned above as something I don’t get
  • Absence of theoretical foundation or contribution
    Dede (2004) suggests this is due to different skills for creative designers and rigorous scholars. Also the problem of “innovation fascination” leading to under-conceptualized research in order to try the new toy.

    It is suggested that DBR is over-methodologized and has a tendency towards excessive data collection (Nona, are you reading this? 😉 ), which results in only tiny contributions to theory.

  • Generalisation.
    The very contextual nature of DBR is seen as making it difficult to generalise to other contexts. There are also arguments that the close interaction between researcher and practitioner necessarily limits the ability to be rigorous or objective.
  • Immaturity of the methodology
    Suggests DBR is more a loose set of methods than a rigorous methodology (Kelly, 2004). He suggests that DBR studies are described as a set of processes, rather than describing the essential underlying conceptual structure.

Mixed methods research

Defined as a methodology that uses multiple approaches in all stages of research.

Theoretical assumptions include

  • Pragmatist philosophy
    Researchers avoid philosophical arguments about research methods and mix approaches based on the utility the approach will give within a particular problem or context.
  • Compatibility thesis
    Assumption that quantitative and qualitative methods are compatible and can be mixed.
  • Principle of mixed methods research
    Methods are mixed in a way that uses their complementary strengths and non-overlapping weaknesses.

Four additional criteria

  1. Sequence of data collection approaches.
    Concurrently/sequentially, intra or inter-method mixing. Connected with “data triangulation” and “method triangulation”.
  2. Which method was given priority
  3. Stage of integration – where did the mixing or connecting of methods occur
  4. Theoretical perspectives – the researchers’ personal stances toward the topics.

From these a diverse typology of mixed methods research is outlined

A strength of mixed methods is the ability to answer both exploratory and confirmatory questions at the same time – i.e. verify and generate theory.

Another strength of mixed methods research is the availability of information about how to do such research well.

The major criticism of it is the “incompatibility thesis”, which argues that quantitative and qualitative research paradigms should not be mixed (Onwuegbuzie and Leech, 2005).

References

Dede, C. (2004). If design-based research is the answer, what is the question? Journal of the Learning Sciences, 13(1), 105-114.

Dede, C. (2005). Why design-based research is both important and difficult. Educational Technology, 45(1), 5-8.

Design-Based Research Collective. (2003). Design-based research: An emerging paradigm for educational inquiry. Educational Researcher, 32(1), 5-8, 35-37.

diSessa, A. A., & Cobb, P. (2004). Ontological innovation and the role of theory in design experiments. Journal of the Learning Sciences, 13(1), 77-103.

Kelly, A. E. (2004). Design research in education: Yes, but is it methodological? Journal of the Learning Sciences, 13(1), 115-128.

Onwuegbuzie, A., & Leech, N. (2005). Taking the “Q” out of research: Teaching research methodology courses without the divide between quantitative and qualitative paradigms. Quality and Quantity, 39, 267-296.

Walls, J., Widmeyer, G., & El Sawy, O. A. (1992). Building an Information System Design Theory for Vigilant EIS. Information Systems Research, 3(1), 36-58.