Assembling the heterogeneous elements for (digital) learning

Category: information systems

From thinking to tinkering: The grassroots of strategic information systems

What follows is a long overdue summary of Ciborra (1992). I think it will have a lot of insight for how universities implement e-learning. The abstract for Ciborra (1992) is

When building a Strategic Information System (SIS), it may not be economically sound for a firm to be an innovator through the strategic deployment of information technology. The decreasing costs of the technology and the power of imitation may quickly curtail any competitive advantage acquired through an SIS. On the other hand, the iron law of market competition prescribes that those who do not imitate superior solutions are driven out of business. This means that any successful SIS becomes a competitive necessity for every player in the industry. Tapping standard models of strategy analysis and data sources for industry analysis will lead to similar systems and enhance, rather than decrease, imitation. How then should “true” SISs be developed? In order to avoid easy imitation, they should emerge from the grass roots of the organization, out of end-user hacking, computing, and tinkering. In this way the innovative SIS is going to be highly entrenched with the specific culture of the firm. Top management needs to appreciate local fluctuations in practices as a repository of unique innovations and commit adequate resources to their development, even if they fly in the face of traditional approaches. Rather than looking for standard models in the business strategy literature, SISs should be looked for in the theory and practice of organizational learning and innovation, both incremental and radical.

My final thoughts

The connection with e-learning

Learning and teaching is the core business of a university. For the 20+ years I’ve worked in Australian Higher Education there have been calls for universities to become more distinct. It would then seem logical that the information systems used to support, enhance and transform (as if there are many that do that) learning and teaching (I’ll use e-learning systems in the following) should be seen as Strategic Information Systems.

Since the late 1990s the implementation of e-learning systems has been strongly influenced by traditional approaches to strategic and operational management. The adoption of ERP systems was in no small way a major contributor to this. This recent article (HT: @katemfd) shows the lengths to which universities are going when they select an LMS (sadly for many e-learning == LMS).

I wonder how much of the process is seen as being for strategic advantage. Part, or perhaps all, of Ciborra’s argument for tinkering is on the basis of generating strategic advantage. The question remains whether universities see e-learning as a source of strategic advantage (anymore). Perhaps they don’t see selection of the LMS as a strategic advantage, but given the lemming-like rush toward “we have to have a MOOC” of many VCs it would seem that technology enhanced learning (apologies to @sthcrft) is still seen as a potential “disruptor”/strategic advantage.

For me this approach embodies the rational analytic theme to strategy that Ciborra critiques. The tinkering approach is what is missing from university e-learning and its absence is (IMHO) the reason much of it is less than stellar.

Ciborra argues that strategic advantage comes from systems where development is treated as an innovation process, where innovation is defined as creating new knowledge “about resources, goals, tasks, markets, products and processes” (p. 304). To me this is the same as saying to treat the development of these systems as a learning process, perhaps more appropriately a constructionist learning process. Not only does such a process provide institutional strategic advantage, it should improve the quality of e-learning.

The current rhetoric/reality gap in e-learning arises from not only an absence, but active prevention and rooting out, of tinkering and bricolage. An absence of learning.

The deficit model problem

Underpinning Ciborra’s approach is that the existing skills and competencies within an organisation provide both the source and the constraint on innovation/learning.

A problem with university e-learning is the deficit model of most existing staff, i.e. most senior management, central L&T and middle managers (e.g. ADL&T) have a deficit model of academic staff. They aren’t good enough. They don’t know enough. They have to complete a formal teaching qualification before they can be effective teachers. We have to nail down systems so they don’t do anything different.

Consequently, existing skills and competencies are only seen as a constraint on innovation/learning. They are never seen as a source.

Ironically, the same problem arises in the view of students held by the teaching academics that are disparaged by central L&T etc.

The difficulties

The very notion of something being “unanalyzable” would be very difficult for many involved in University management and information technology to accept. Let alone deciding to use it as a foundation for the design of systems.

Summary of the paper

Introduction

Traditional approaches for designing information systems are based on “a set of guidelines” about how best to use IT in a competitive environment and “a planning and implementation strategy” (p. 297).

However, the “wealth of ‘how to build an SIS’ recipes” during the 1990s failed to “yield a commensurate number of successful cases” at least not measured against the rise of systems in the 1980s. Reviewing the literature suggests a number of reasons, including

  • Theoretical literature emphasises rational assessment by top management as the means for strategy formulation, ignoring alternative conceptions from the innovation literature that value learning more than thinking, and experimentation as a means for revealing new directions.
  • Examining precedent-setting SISs suggests that serendipity, reinvention and other factors were important in their creation. These are missing from the rational approach.

So there are empirical and theoretical grounds for a new kind of guidelines for SIS design.

Organisations should ask

  1. Does it pay to be innovative?
  2. Are SISs offering competitive advantage or are they competitive necessity?
  3. How can a firm implement systems that are not easily copied and thus generate returns?

In terms of e-learning, this applies:

the paradox of micro-economics: competition tends to force standardization of solutions and equalization of production and coordination costs among participants.

i.e. the pressures to standardise.

The argument is that an SIS must be based on new practical and conceptual foundations

  • Basing an SIS on something that can’t be analysed, like organisational culture, will help avoid easy imitation. Leveraging the unique sources of practice and know-how at the firm and industry level can be the source of sustained advantage.
  • SIS development should be closer to prototyping and engaging with end-users’ ingenuity than has been realised.

    The capability of integrating unique ideas and practical design solutions at the end-user level turns out to be more important than the adoption of structured approaches to systems development or industry analysis (Schoen 1979; Ciborra and Lanzara, 1990)

Questionable advantage

During the 1980s a range of early adopters of strategic information systems (SISs) – think old style airline reservation systems – emerged, bringing benefit to some organisations and bankruptcy to those that didn’t adopt. This gave rise to a range of frameworks for identifying SISs.

I’m guessing some of these contributed to the rise of ERP systems.

But the history of those cited success stories suggests that an SIS only provides an ephemeral advantage before being copied. One study suggests 92% of systems followed industry-wide trends. Only three were original.

I imagine the percentage in university e-learning would be significantly higher. i.e. you can’t get fired if you implement an LMS (or an eportfolio).

To avoid the imitation problem there are suggestions to figure out the lead time for competitors to copy. But that doesn’t avoid the problem, especially given the rise of consultants and services to help overcome it.

After all, if every university can throw millions of dollars at Accenture etc they’ll all end up with the same crappy systems.

Shifts in model of strategic thinking and competition

This is where the traditional approaches to strategy formulation get questioned.

i.e. “management should first engage in a purely cognitive process” that involves

  1. appraise the environment (e.g. SWOT analysis)
  2. identify success factors/distinctive competencies
  3. translate those into a range of competitive strategy alternatives
  4. select the optimal strategy
  5. plan it in sufficient details
  6. implement

At this stage I would add “fail to respond to how much the requirements have changed” and start over again as you employ new senior leadership.

This model is seen in most SIS models.

Suggests that in reality actual strategy formulation involves incrementalism, muddling through, myopic and evolutionary decision making. “Structures tend to influence strategy formulation before they can be impacted by the new vision” (p. 300)

References Mintzberg (1990) to question this school of thought in three ways

  1. Assumes that the environment is highly predictable and events unfold in predicted sequences, when in fact implementation surprises happen. Resulting in the clash between inflexible plans and the need for revision.
  2. Assumes that the strategist is an objective decision maker not influenced by “frames of reference, cultural biases, or ingrained, routinized ways of action” (p. 301). Contrary to a raft of research.
  3. Strategy is seen as an intentional design process rather than as learning “the continuous acquisition of knowledge in various forms”. Quotes a range of folk to argue that strategy must be based on effective adaptation and learning involving both “incremental, trial-and-error learning, and radical second-order learning” (p. 301)

The models of competition implicit in SIS frameworks tend to rely on theories of business strategy from industrial organisation economics. i.e. returns are determined by industry structure. To generate advantage a firm must change the structural characteristics by “creating barriers to entry, product differentiation, links with suppliers” (p. 301).

There are alternative models

  • Chamberlin’s (1933) theory of monopolistic competition

    Firms are heterogeneous and compete on resource and asset differences – “technical know-how, reputation, ability for teamwork, organisational culture and skills, and other ‘invisible assets’ (Itami, 1987)” (p. 301)

    Differences enable high return strategies. You compete by cultivating unique strengths and capabilities and defending against imitation.

  • Schumpeter’s take based on innovation in product, market or technology

    Innovation arises from creative destruction, not strategic planning. The ability to guess and learn, and luck, appear to be the competitive factors.

Links these with Mintzberg’s critique of rational analytic approaches and identifies two themes in business strategy

  1. Rational analytic

    Formulate strategy in advance based on industry analysis. Plan and then implement. Gains advantage relative to firms in the same industry structure.

  2. Tinkering (my use of the phrase)

    Strategy difficult to plan before the fact. Advantage arises from exploiting unique characteristics of the firm and unleashing its innovating capabilities

Reconsidering the empirical evidence

Turns to an examination of four well-known SISs based on the two themes and other considerations from above. This examination shows that these “cases emphasize the discrepancy between ideal plans for an SIS and the realities of implementation” (p. 302), i.e.

The system was not developed according to a company-wide strategic plan; rather, it was the outcome of an evolutionary, piecemeal process that included the ingenious tactical use of systems already available.

i.e. bricolage and even more revealing

the conventional MIS unit was responsible not only for initial neglect of the new strategic applications within McKesson, but also, subsequently, for the slow pace of company-wide learning about McKesson’s new information systems

Another system “was supposed to address an internal inefficiency” (p. 303) not some grand strategic goal.

And further

The most frequently cited SIS successes of the 1980s, then, tell the same story. Innovative SISs are not fully designed top-down or introduced in one shot; rather, they are tried out through prototyping and tinkering. In contrast, strategy formulation and design take place in pre-existing cognitive frames and organizational contexts that usually prevent designers and sponsors from seeing and exploiting the potential for innovation. (p. 303)

New foundations for SIS design

SIS development must be treated as an innovation process. The skills/competencies in an organisation are both a source and a constraint on innovation. The aim is to create knowledge.

New knowledge can be created in two non-exclusive ways

  1. Tinkering.

    Rely on local information and routine behaviour. Learning by doing, incremental decision making and muddling through.

    Accessing more diverse and distant information, when an adequate level of competence is not present, would instead lead to errors and further divergence from optimal performance (Heiner, 1983) (p. 304)

    People close to the operational level have to be able to tinker to solve new problems. “local cues from a situation are trusted and exploited in a somewhat unreflective way, aiming at ad hoc solutions by heuristics rather than high theory”

    The value of this approach is to keep development of an SIS close to the competencies of the organisation and ongoing fluctuations.

  2. Radical learning

    “entails restructuring the cognitive and organisational backgrounds that give meaning to the practices, routines and skills at hand” (p. 304). It requires more than analysis and requirements specifications. It “aims at restructuring the context of both business policy and systems development” and requires “intervening in situations and designing-in-action”.

    The change in context allows new ways of looking at the capabilities and devising new strategies. The sheer difference becomes difficult to imitate.

SIS planning by oxymorons

Time to translate those theoretical observations into practical guidelines.

Argues that the way to develop an SIS is to proceed by oxymoron, fusing “opposites in practice and being exposed to the mismatches that are bound to occur” (p. 305). Defines seven:

  • 4 to bolster incremental learning
    1. Value bricolage strategically
    2. Design tinkering

      This is important

      Activities, settings, and systems have to be arranged so that invention and prototyping by end-users can flourish, together with open experimentation (p. 305)

      Set up the organisation to favour local innovation. e.g. ad hoc project teams. ethnographic studies.

    3. Establish systematic serendipity

      Open experimentation results in largely incomplete designs, the constant intermingling of implementation and refinement, concurrent or simultaneous conception and execution – NOT sequential

      An ideal context for serendipity to emerge and lead to unexpected solutions.

    4. Thrive on gradual breakthroughs.

      In a fluctuating environment the ideas that arise are likely to include those that don’t align with established organisational routines. The raw material for innovation. “management should appreciate and learn about such emerging practices”

  • 3 for radical learning and innovation
    1. Practice unskilled learning

      Radically innovative approaches may be seen as incompetent when judged by old routines and norms. Management should value this behaviour as an attempt to unlearn old ways of thinking and doing. It’s where new perspectives arise.

    2. Strive for failure

      Going for excellence suggests doing better what you already do, which generates routinized and efficient systems – the competency trap. Creative reflection over failures can suggest paths to novel ideas and designs, as well as the recognition of discontinuities and flex points.

    3. Achieve collaborative inimitability

      Don’t be afraid to collaborate with competitors. Expose the org to new cultures and ideas.

These seven oxymorons can represent a new “systematic” approach for the establishment of an organizational environment where new information—and thus new systems can be generated. Precisely because they are paradoxical, they can unfreeze existing routines, cognitive frames and behaviors; they favor learning over monitoring and innovation. (p. 306)

References

Ciborra, C. (1992). From thinking to tinkering: The grassroots of strategic information systems. The Information Society, 8(4), 297–309.

Is IT a service industry, or is it "eating the world"?

In an earlier post I wondered whether the way high school classes in Information Technology (IT)/Computer Science (CS) are being taught is turning students off and, if so, whether this is why enrolment numbers are dropping. In the comments on that post Tony suggests some other reasons for this decline, including the observation that IT courses in local schools (both Tony and I live in Queensland, Australia) are primarily seen to serve the needs of students who want to be IT professionals. The further suggestion is that since

IT is a service-based industry, there only needs to be 5%-10% of the population focused on it as a profession

Now I can agree somewhat with this perspective. It matches some of what I observe. It also reminds me of Nicholas Carr’s 2003 Harvard Business Review article titled IT doesn’t matter which included the following

The point is, however, that the technology’s potential for differentiating one company from the pack – its strategic potential – inexorably diminishes as it becomes accessible and affordable to all

Instead of being strategic, Carr sees IT becoming infrastructure somewhat like electricity etc.

The rise of the cloud seems to reinforce this perspective. Increasingly there is no strategic advantage for an institution having its own guru Systems Administrators running servers and managing networks. Instead they can outsource this to “the cloud” or more often service providers. For example, a number of Australian Universities have outsourced the hosting of their Learning Management Systems.

Combine this with the nerd image of IT, and you can see why more high school students aren’t taking classes in IT.

But what if software ate the world?

And then comes the recent article from Marc Andreessen on “Why software is eating the world”. In his own words

My own theory is that we are in the middle of a dramatic and broad technological and economic shift in which software companies are poised to take over large swathes of the economy

If true, this sort of shift suggests that having some IT, especially software, knowledge and capability could be a useful thing. The prediction is that in industries with a significant physical component (e.g. oil and gas) the opportunity is there for existing companies. But in other industries, start-ups will have opportunities.

Andreessen argues that this shift is starting to happen now for much the same reason that Carr argued that IT didn’t matter anymore. i.e. the technology needed to fully harness software has become infrastructure, it’s become invisible. Huge numbers of people have smartphones and Internet access. The IT services industry and the cloud make it simple to develop a global software application.

Of course, one of the problems with this argument is confirmation bias, as put in this comment from the Slashdot post on the Andreessen article

THIS JUST IN
An expert of [field of study] believes [field of study] will change the world. Also emphasizes that other people are not taking [field of study] seriously.

What does this mean for high school IT classes

One of the problems that Andreessen identifies is that

many people in the U.S. and around the world lack the education and skills required to participate in the great new companies coming out of the software revolution.

Given the dearth of enrolments in high school IPT in local schools and universities, I imagine that the same problem exists here in Australia. I believe this is also a major point that Rushkoff makes in his book “Program or be programmed”.

So, obviously more people should enrol in the IT classes in high school.

No, I don’t think so. At least not as most stand at the moment.

This connects back to a point from my initial post. I believe that the current curriculum and teaching methods for these courses are generally not appropriate for the purpose of preparing people – beyond just the future IT professionals – for this world that software is eating.

The current curriculum appears aimed at providing the service providers. The folk who will keep the infrastructure going. What is needed is curriculum and teaching methods that will prepare the folk who are going to identify opportunities and transform industries. Or on a smaller scale, identify opportunities for how the IT infrastructure can be harnessed to improve their lives.

Dilbert as an expository instantiation

A few recent posts have been first draft excerpts from my Information Systems Design Theory (ISDT) for emergent university e-learning systems. Academics being somewhat pedantic about these things, an ISDT is meant to have a number of specific components. One of these is the expository instantiation, which is meant to act as both an explanatory device and a platform for testing (Gregor and Jones, 2007), i.e. it’s meant to help explain the theory and also provide examples of testing the theory.

The trouble is that today’s Dilbert cartoon is probably as good an explanation of what is currently the third principle of implementation for my ISDT.

Dilbert.com

I’m sure that most folk working in a context where they’ve had to use a corporate information system have experienced something like this. A small change – either to fix a problem or improve the system – simply can’t be made because of the nature of the technology or the processes used to make the changes. The inability to make these changes is a major problem for enterprise systems.

The idea from the ISDT is that the development and support team for an emergent university e-learning system should be able to make small scale changes quickly without having to push them up the governance hierarchy. Where possible the team should have the skills, insight, judgement and trust so that “small scale” is actually quite large.

An example

The Webfuse e-learning system that informed much of the ISDT provides one example. Behrens (2009) quotes a user of Webfuse on one example of how it was responsive

I remember talking to [a Webfuse developer] and saying how I was having these problems with uploading our final results into [the Enterprise Resource Planning (ERP) system] for the faculty. He basically said, “No problem, we can get our system to handle that”… and ‘Hey presto!’ there was this new piece of functionality added to the system… You felt really involved… You didn’t feel as though you had to jump through hoops to get something done.

Then this is compared with a quote from one of the managers responsible for the enterprise system

We just can’t react in the same way that the Webfuse system can, we are dealing with a really large and complex ERP system. We also have to keep any changes to a minimum because of the fact that it is an ERP. I can see why users get so frustrated with the central system and our support of it. Sometimes, with all the processes we deal with it can take weeks, months, years and sometimes never to get a response back to the user.

Is that Dilbert or what?

The problem with LMS

Fulfilling this requirement is one of the areas where most LMS create problems. Most universities/organisations are getting into the situation where the LMS (even Moodle) is approaching the “complex ERP system” problem described in the last quote above. Changing the LMS is too fraught with potential dangers for these changes to be made quickly. Most organisations don’t try, so we’re back to a Dilbert moment.

Hence, I think there are two problems facing universities trying to fulfil principle #3:

  1. Having the right people in the support and development team with the right experience, insight and judgement is not a simple thing and is directly opposed to the current common practice which is seeking to minimise having these people. Instead there’s reliance on helpdesk staff and trainers.
  2. The product problem. i.e. it’s too large and difficult to change quickly and safely. I think there’s some interesting work to be done here within Moodle and other open source LMS. How do you balance the “flexibility” of open source with the complexity of maintaining a stable institutional implementation?

References

Behrens, S. (2009). Shadow systems: the good, the bad and the ugly. Communications of the ACM, 52(2), 124-129.

Gregor, S., & Jones, D. (2007). The anatomy of a design theory. Journal of the Association for Information Systems, 8(5), 312-335.

Principles of form and function

The aim of my thesis is to formulate an information systems design theory for e-learning. Even though I have a publication or two that have described early versions of the ISDT, I’ve never been really happy with them. However, I’m getting close to the end of this process, at least for the purposes of getting the thesis submitted.

The following is a first draft of the “Principles of form and function”, one of the primary components of an ISDT as identified by Gregor and Jones (2007). I’ll be putting up a draft of the principles of implementation in a little while (UPDATE principles of implementation now up). These are still just approaching first draft stage, they need a bit more reflection and some comments from my esteemed supervisor. Happy to hear thoughts.

By the way, the working title for this ISDT is now “An ISDT for emergent university e-learning systems”.

Principles of form and function

Gregor and Jones (2007) describe the aim of the principles of form and function as defining the structure, organisation, and functioning of the design product or design method. The ISDT described in Chapter 4 was specifically aimed at the World-Wide Web as shown in its title, “An ISDT for web-based learning systems”. Such technology-specific assumptions are missing from the ISDT described in this chapter to avoid technological obsolescence. By not relying on a specific technology the ISDT can avoid a common problem with design research – the perishability of findings – and enable the on-going evolution of any instantiation to continue regardless of the technology.

The principles of form and function for this ISDT are presented here as divided into three groupings: integrated and independent services; adaptive and inclusive architecture; and, scaffolding, context-sensitive conglomerations. Each of these groupings and the related principles are described in the following sub-sections and illustrated through examples from Webfuse. The underlying aim of the following principles of form and function is to provide a system that is easy to modify and focused on providing context-specific services. The ISDT’s principles of implementation (Section 5.6.4) are designed to work with the principles of form and function in order to enable the design of an emergent university e-learning information system.

Integrated and independent services

The emergent nature of this ISDT means that, rather than prescribe a specific set of services that an instantiation should provide, the focus here is on providing mechanisms to quickly add and modify new services in response to local need. It is assumed that an instantiation would provide an initial set of services (see principle 4) with which system use could begin. Subsequent services would be added in response to observed need.

An emergent university e-learning system should:

  1. Provide a method or methods for packaging and using necessary e-learning services from a variety of sources and of a variety of types.
    For example, Webfuse provided two methods for user-level packaging of services – page types and Wf applications – and also used design patterns and object-oriented design for packaging of implementation-level services. The types of services packaged through these means included: information stored in databases; various operations on that data; external services such as enterprise authentication services; open source COTS; and, remote applications such as blogging tools.
  2. Provide numerous ways to enable different packages to interact and integrate.
    Webfuse provided a number of methods through which the packaging mechanisms described in the previous point could be integrated. For example, Wf applications provided a simple, consistent interface that enabled easy integration from numerous sources. It was through this approach that Wf applications such as email merge, course list, and course photo album were integrated into numerous other services. To allow staff to experience what students see on StudentMyCQU, the ViewStudentMyCQU application was implemented as a wrapper around the StudentMyCQU application.
  3. Provide a packaging mechanism that allows for a level of independence and duplication.
    Within Webfuse, modifications to page types could be made with little or no effect on other page types. It was also possible to have multiple page types of the same type. For example, there were three different web-based discussion forums with slightly different functionality preferred by different users. Similarly, the use of the Model-View-Controller design pattern in Wf applications enabled the same data to be represented in many different forms. For example, class lists could be viewed by campus, with or without student photos, as a CSV file, as an HTML page, etc.
  4. Provide an initial collection of services that provide a necessary minimum of common e-learning functionality covering: information distribution, communication, assessment, and administration.
    The initial collection of services for Webfuse in 2000 included the existing page types and a range of support services (see Section 4.4.3). These provided sufficient services for academics to begin using e-learning. It was this use that provided the opportunity to observe, learn and subsequently add, remove and modify available services (see Section 5.3).
  5. Focus on packaging existing software or services for integration into the system, rather than developing custom-built versions of existing functionality.
    With Webfuse this was mostly done through the use of the page types as software wrappers around existing open source software as described in Chapter 4. The BAM Wf application (see 5.3.6) integrated student use of existing blog engines (e.g. http://wordpress.com) into Webfuse via standardised XML formats.
  6. Present this collection of services in a way that for staff and students resembles a single system.
    With Webfuse, whether users were managing incidents of academic misconduct, finding the phone number of a student, responding to a student query on a discussion forum, or uploading a Word document they believed they were using a single system. Via Staff MyCQU they could access all services in a way that fit with their requirements.
  7. Minimise disruption to the user experience of the system.
    From 1997 through 2009, the authentication mechanism used by Webfuse changed at least four times. Users of Webfuse saw no visible change. Similarly, Webfuse page types were re-designed from purely procedural code to being heavily object-oriented. The only changes in the user interface for page types were where new services were added.
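
The Model-View-Controller separation mentioned in point 3 above can be sketched briefly. This is an illustrative example only (Webfuse itself was written in Perl, and the names and data here are invented): one class-list model feeds several interchangeable views, so a new representation can be added without touching the data or the other views.

```python
# Illustrative MVC sketch (not Webfuse code): one model, several views
# of the same class-list data.

class ClassList:
    """Model: holds student records, knows nothing about presentation."""
    def __init__(self, students):
        self.students = students  # list of dicts

    def by_campus(self, campus):
        """Return only the students enrolled at the given campus."""
        return [s for s in self.students if s["campus"] == campus]

def csv_view(students):
    """View: render the data as CSV."""
    lines = ["name,campus"]
    lines += [f'{s["name"]},{s["campus"]}' for s in students]
    return "\n".join(lines)

def html_view(students):
    """View: render the same data as an HTML list."""
    items = "".join(f'<li>{s["name"]} ({s["campus"]})</li>' for s in students)
    return f"<ul>{items}</ul>"

model = ClassList([
    {"name": "Alice", "campus": "Rockhampton"},
    {"name": "Bob", "campus": "Mackay"},
])

# Controller role: pick a subset of the model and a view; neither needs
# to change when the other does.
print(csv_view(model.by_campus("Mackay")))
print(html_view(model.students))
```

This kind of separation is what allows a single class list to appear by campus, as CSV, or as an HTML page without duplicating the underlying data handling.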

Adaptive and inclusive architecture

Sommerville (2001) defines software architecture as the collection of sub-systems within the software and the framework that provides the necessary control and communication mechanisms for these sub-systems. The principles for integrated and independent services described in the previous section are the “sub-systems” for an emergent university e-learning system. Such a system, like all large information systems, needs some form of system architecture. The major difference for this ISDT is that traditional architectural concerns such as consistency and efficiency are not as important as being adaptive and inclusive.

The system architecture for an emergent university e-learning system should:

  1. Be inclusive by supporting the integration and control of the broadest possible collection of services.
    The approach to software wrappers adopted as part of the Webfuse page types was to enable the integration of any external service at the expense of ease of implementation. Consequently, the Webfuse page types architecture integrated a range of applications using very different software technologies, including a chat room that was a Java application; a page counter implemented in the C programming language; a lecture page type that combined numerous different applications; and three different discussion forums implemented in Perl. In addition to the page types, Webfuse also relied heavily on the architecture provided by the Apache web server for access control, authentication, and other services. The BAM Wf application (Section 5.3.6) used RSS and Atom feeds as a method for integrating disparate blog applications. Each of these approaches embodies a very different architectural model, which increases the cost of implementation but also increases the breadth of services that can be integrated and controlled.
  2. Provide an architecture that is adaptive to changes in requirements and context.
    One approach is the use of an architectural model that provides high levels of maintainability through fine-grained, self-contained components (Sommerville 2001). This was initially achieved in Webfuse through the page types architecture. However, a long-lived information system needs more than this. Sommerville (2001) suggests that major architectural changes are not a normal part of software maintenance, yet as a system that operated for 13 years in a Web environment, Webfuse had to undergo major architectural changes. In early 2000, performance problems arising from increased demand for dynamic web applications (student quizzes) resulted in a significant change in Webfuse architecture. This change was aided by Webfuse’s reliance on the Apache web server, whose continual evolution provided the scaffolding for this architectural change.
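
The feed-based integration mentioned in point 1 can be sketched. Assuming only that each external blog exposes a standard RSS 2.0 feed (the feed content below is invented, and this is not BAM's actual code), a consumer needs nothing beyond standard XML parsing to pull in posts from any compliant blog engine:

```python
# Hedged sketch of feed-based integration: disparate blog applications
# only need to share a standard XML format (RSS 2.0 here) with the
# consuming system. The sample feed is invented for illustration.
import xml.etree.ElementTree as ET

SAMPLE_RSS = """<?xml version="1.0"?>
<rss version="2.0"><channel>
  <title>Student blog</title>
  <item><title>Week 1 reflection</title><link>http://example.com/1</link></item>
  <item><title>Week 2 reflection</title><link>http://example.com/2</link></item>
</channel></rss>"""

def extract_posts(rss_xml):
    """Return (title, link) pairs from an RSS 2.0 feed string."""
    root = ET.fromstring(rss_xml)
    return [(item.findtext("title"), item.findtext("link"))
            for item in root.iter("item")]

posts = extract_posts(SAMPLE_RSS)
print(posts)
```

Because the integration point is a published standard rather than any one blog engine's API, the same consumer works unchanged whether the student uses WordPress or some other compliant engine.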

The perspective for this ISDT is that traditional homogenous approaches to software architecture (e.g. component architectures) offer numerous advantages. However, there are some drawbacks. For example, a component architecture can only integrate components that have been written to meet the specifications of the component architecture. Any functionality not available within that component architecture is not available to the system. To some extent such a limitation closes off possibilities for diversity – which this ISDT views as inherent in university learning and teaching – and future emergent development. This does not rule out the use of component architectures within an emergent university e-learning system, but it does mean that such a system would also be using other architectural models at the same time to ensure it was adaptive and inclusive.

Scaffolding, context-sensitive conglomerations

The design of e-learning in universities requires the combination of skills from a variety of different professions (e.g. instructional design, web design etc), and yet is most often performed by academics with limited knowledge of any of these professions. This limited knowledge creates significant workload for the academics and contributes to the limited quality of much e-learning. Adding experts in these fields to help with course design is expensive and somewhat counter to the traditional practice of learning and teaching within universities. This suggests that e-learning in universities needs approaches that allow the effective capture of expertise in a form that can be re-used by non-experts without repeated direct interaction with experts. Such an approach could aim to reduce perceived workload and increase the quality of e-learning.

An emergent university e-learning information system should:

  1. Provide the ability to easily develop, including end user development, larger conglomerations of packaged services.
    A conglomeration is not simply an e-learning service such as a discussion forum. Instead it provides additional scaffolding around such services, possibly combining multiple services, to achieve a higher-level task. While many conglomerations would be expert designed and developed, offering support for end-user development would increase system flexibility. The Webfuse default course site approach (Section 5.3.5) is one example of a conglomeration. A default course site combines a number of separate page types (services), specific graphical and instructional designs, and existing institutional content into a course website with a minimum of human input. Another form of conglomeration that developed with Webfuse was Staff MyCQU. This “portal” grew to become a conglomeration of integrated Wf applications designed to package a range of services academics required for learning and teaching.
  2. Ensure that conglomerations provide a range of scaffolding to aid users, increase adoption and increase quality.
    There is likely to be some distance between the knowledge of the user and that required to effectively use e-learning services. Scaffolding provided by the conglomerations should seek to bridge this distance, encourage good practice, and help the user develop additional skills. For example, over time an “outstanding tasks” element was added to Staff MyCQU to remind staff of unfinished work in a range of Wf applications. The BAM Wf application was designed to reduce the workload involved in tracking and marking individual student reflective journals (Jones and Luck 2009). A more recent example, focused on instructional design, is the wizard included in the new version of the Desire2Learn LMS, which guides academics through course creation via course objectives.
  3. Embed opportunities for collaboration and interaction into conglomerations.
    An essential aim of scaffolding conglomerations is enabling and encouraging academics to learn more about how to effectively use e-learning. While the importance of community and social interaction to learning is widely recognised, most professional development opportunities occur in isolation (Bransford, Brown et al. 2000). Conglomerations should aim to provide opportunities for academics to observe, question and discuss use of the technology. Examples from Webfuse are limited to the ability to observe. For example, all Webfuse course sites were, by default, open for all to see. The CourseHistory Wf application allowed staff to see the grade breakdown for all offerings of any course. A better example would have been if the CourseHistory application encouraged and enabled discussions about grade breakdowns.
  4. Ensure that conglomerations are context-sensitive.
    Effective integration with the specific institutional context enables conglomerations to leverage existing resources and reduce cognitive dissonance. For example, the Webfuse default course site conglomeration was integrated with a range of CQU specific systems, processes and resources. The Webfuse online assignment submission system evolved a number of CQU specific features that significantly increased perceptions of usefulness and ease-of-use (Behrens, Jamieson et al. 2005).
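
As a rough illustration of the conglomeration idea (hypothetical names and data throughout; this is not Webfuse code), a default-course-site builder might combine separate page-type services with existing institutional content into a complete site with minimal human input:

```python
# Hypothetical sketch of a "conglomeration": individual page-type
# services plus existing institutional content, assembled into a
# default course site. All names and data here are invented.

def default_course_site(course_code, services, institutional_data):
    """Assemble a course site by running each service over existing data."""
    site = {"course": course_code, "pages": []}
    for service in services:          # each service is one "page type"
        site["pages"].append(service(course_code, institutional_data))
    return site

# Two stand-in page-type services that draw on institutional content.
def synopsis_page(code, data):
    return ("synopsis", data["synopsis"][code])

def staff_page(code, data):
    return ("staff", data["coordinator"][code])

data = {
    "synopsis": {"COIT11222": "Introductory programming."},
    "coordinator": {"COIT11222": "A. Lecturer"},
}

site = default_course_site("COIT11222", [synopsis_page, staff_page], data)
print(site)
```

The scaffolding value comes from the assembly step: the academic supplies only a course code, while the conglomeration contributes the design decisions and the institutional data.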

References

Behrens, S., Jamieson, K., Jones, D., & Cranston, M. (2005). Predicting system success using the Technology Acceptance Model: A case study. Paper presented at the Australasian Conference on Information Systems’2005, Sydney.

Bransford, J., Brown, A., & Cocking, R. (2000). How people learn: brain, mind, experience, and school. Washington, D.C.: National Academy Press.

Gregor, S., & Jones, D. (2007). The anatomy of a design theory. Journal of the Association for Information Systems, 8(5), 312-335.

Jones, D., & Luck, J. (2009). Blog Aggregation Management: Reducing the Aggravation of Managing Student Blogging. Paper presented at the World Conference on Education Multimedia, Hypermedia and Telecommunications 2009. from http://www.editlib.org/p/31530.

Sommerville, I. (2001). Software Engineering (6th ed.): Addison-Wesley.

How strict a blueprint do ISDTs provide?

Am working on the final ISDT for the thesis. An Information Systems Design Theory (ISDT) is a theory for design and action. It aims to provide general principles that help practitioners design information systems. Design theory provides guidance about how to build an artifact (process) and what the artifact should look like when built (product/design principles) (Walls, Widmeyer et al. 1992; Gregor 2002). Walls et al (1992) see an ISDT as an integrated set of prescriptions consisting of a particular class of user requirements (meta-requirements), a type of system solution with distinctive features (meta-design) and a set of effective development practices (design method). Each of these components of an ISDT can be informed by kernel theories, either academic or practitioner theory-in-use (Sarker and Lee 2002), that enable the formulation of empirically testable predictions relating the design theory to outcomes (Markus, Majchrzak et al. 2002).

My question

I’m just about happy with the “ISDT for emergent university e-learning systems” that I’ve developed. A key feature of the ISDT is the “emergent” bit. This implies that the specific context within which the ISDT might be applied is going to heavily influence the final system. To some extent there is a chance that aspects of the ISDT should be ignored based on the nature of the specific context. Which brings me to my questions:

  1. How far can the ISDT go in saying, “ignore principle X” if it doesn’t make sense?
  2. How much of the ISDT has to be followed for the resulting system to be informed by the ISDT?
  3. If most of the ISDT is optional based on contextual factors, how much use is the ISDT?
  4. How much and what sort of specific guidance does an ISDT have to give to be useful and/or worthwhile?

Class of systems

One potential line of response to this is based on the “class of systems” idea. The original definition provided by Walls et al (1992) for the meta-design component indicates that it “Describes a class of artefacts hypothesized to meet the meta-requirements” and not a specific instantiation. van Aken (2004) suggests that rather than a specific prescription for a specific situation (an instantiation), the intent should be for a general prescription for a class of problems. van Aken (2004) arrives at this idea through the use of Bunge’s idea of a technological rule.

van Aken (2004) goes on to explain the role of the practitioner in the use of a technological rule/ISDT

Choosing the right solution concept and using it as a design exemplar to design a specific variant of it presumes considerable competence on the part of practitioners. They need a thorough understanding both of the rule and of the particulars of the specific case and they need the skills to translate the general into the specific. Much of the training of students in the design sciences is devoted to learning technological rules and to developing skills in their application. In medicine and engineering, technological rules are not developed for laymen, but for competent professionals.

This seems to offer some support for the idea that this problem is not really a problem.

Emergent

It appears that the idea of “emergent” is then just a greater emphasis on context than is generally the case in practice. There is, I believe, a significant difference between emergent/agile development and traditional approaches; it’s probably worthwhile making the distinction in a mild way when introducing the ISDT and then reinforcing it in the artifact mutability and principles of implementation sections.

The first stab

The following paragraph is a first draft of the last paragraph in the introduction to the ISDT. It starts alright, but I’m not sure I’ve really captured (or understand) what I’m trying to get at with this. Is it just an attempt to signpost perspectives included below? Need to be able to make this clearer I think.

It is widely accepted that an ISDT – or the related concept of a technological rule – is not meant to describe a specific instantiation, but instead to provide a general prescription for a class of problems (Walls, Widmeyer et al. 1992; van Aken 2004). The ISDT presented here is intended to offer a prescription for e-learning information systems for universities. In addition to addressing this general class of problems, the ISDT also includes in its prescription specific advice – provided in the principles of implementation and artifact mutability components of the ISDT – intended to be somewhat more general again. This is captured in the use of the word “emergent” in the title of the ISDT, intended in the sense adopted by Truex et al (1999) where “organisational features are products of constant social negotiation and consensus building….never arriving but always in transition”. This suggests the possibility that aspects of this ISDT may also be subject to negotiation within specific social contexts and subsequently not always seen as relevant.

References

Gregor, S. (2002). Design Theory in Information Systems. Australian Journal of Information Systems, 14-22.

Markus, M. L., Majchrzak, A., & Gasser, L. (2002). A Design Theory for Systems that Support Emergent Knowledge Processes. MIS Quarterly, 26(3), 179-212.

van Aken, J. (2004). Management research based on the paradigm of the design sciences: The quest for field-tested and grounded technological rules. Journal of Management Studies, 41(2), 219-246.

Walls, J., Widmeyer, G., & El Sawy, O. A. (1992). Building an Information System Design Theory for Vigilant EIS. Information Systems Research, 3(1), 36-58.

Some reasons why business intelligence tools aren't the right fit

The following started as an attempt to develop an argument that business intelligence/data warehouse tools are not a perfect fit for what is broadly called academic analytics. In fact, as I was writing this post I realised that it’s actually an argument that business intelligence tools aren’t a perfect fit for what I’m interested in doing, which is not academic analytics.

The following is by no means a flawless, complete argument but simply an attempt to make explicit some of the disquiet I have had. Please, feel free, and I actively encourage you, to point out the flaws in the following. The main reasons for writing this are:

  1. see what form the final argument might take;
  2. see if I can convince myself of the argument; and
  3. see if others can see some value.

Background – the indicators project

Some colleagues and I are currently involved in some work we’re calling The Indicators Project which

aims to build on and extend prior work in the analysis of usage data from Learning Management Systems

In our first paper we presented some findings that contradicted some established ideas around LMS usage, due to differences in our student body and the breadth and diversity of the data we used.

The indicators project is especially interested in what we can find out about LMS usage that can help improve learning and teaching. We’re particularly interested in what analysis across time, across LMSs and across institutions can reveal that we don’t currently know.

It’s somewhat related to the idea of academic analytics.

Background – academic analytics

According to Wikipedia academic analytics is “the term for business intelligence used in an academic setting”. In turn, business intelligence is described as “skills, processes, technologies, applications and practices used to support decision making”.

In an environment that is increasingly demanding techno-rational approaches to management, especially the requirement for universities to quantitatively prove that they are giving value for money, universities have started to go in for “business intelligence” in a big way. For most, this foray into “business intelligence” means setting up a data warehouse and the accompanying organisational unit to manage the data warehouse.

The “intelligence” unit is often located either within the information technology grouping or directly within the senior management structure reporting to a senior executive (i.e. a Pro or Deputy Vice Chancellor within an Australian setting). This location tendency arguably reveals the focus of such units on either the technology or servicing the needs of the senior executive they report to.

With the increasing use of technology (especially Learning Management Systems – LMS/VLE) to mediate learning and teaching there becomes available an increasing volume of data about this process. Data which may (or may not) reveal interesting insights into what is going on. Data which may be useful in decision making. i.e. Academic Analytics (aka business intelligence for learning and teaching) has arrived.

In those universities where data warehouses exist, an immediate connection is made between analysing and evaluating LMS usage data and the data warehouse. It is assumed that the best way to analyse this data is to put it into the data warehouse and allow the “intelligence” unit to do their thing.

I’m not convinced that this is the best approach and the following is my attempt to argue why.

Business Intelligence != Data Warehouse

van Dyk (2008) describes an approach in which “a business intelligence approach is followed in an attempt to take advantage [of] ICT to enable the evaluation of the effectiveness of the process of facilitating learning” and argues that the context that leads to effective data for decision making “can only be created when a deliberate business intelligence approach [is] followed”. The paper also contains a description of a data warehouse model that accomplishes exactly that. The framework is based on the work of Kimball and Ross (2002) and is shown below

The business intelligence framework

As you can see, this business intelligence framework includes a data warehouse as a core and very important part. Not surprising, as it is based on a book about data warehouses.

However, drawing on the Wikipedia business intelligence page (my emphasis added)

Often BI applications use data gathered from a data warehouse or a data mart. However, not all data warehouses are used for business intelligence nor do all business intelligence applications require a data warehouse.

I’m trying to develop an argument that a data warehouse, defined as a tool/system that is sold as a data warehouse tool, may not be the best fit for supporting decision making based around LMS usage data. In particular, it may not be the best fit for the indicators project.

But don’t you need a data warehouse?

In the early days of the indicators project we developed an image to represent what we were thinking the project would do. It’s shown below.


Overview of indicators project

There is certainly some similarity between this image and the business intelligence framework above. Both images encapsulate the following ideas:

  • You take data from somewhere, do some operations on it and stick it into a form you can query.
  • You take some time to develop queries on that standardised data that provide insights of interest to people.
  • You make that information available to folk.
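
The three steps above amount to a small extract-transform-load pipeline. A toy sketch, with invented field names standing in for raw LMS logs and an in-memory SQLite database standing in for the queryable store:

```python
# Toy extract-transform-load sketch of the three steps above. Field
# names and records are invented stand-ins for raw LMS usage logs.
import sqlite3

# "Extract": raw usage records from some source system.
raw_hits = [
    {"user": "s1", "course": "COIT11222", "hits": 40},
    {"user": "s2", "course": "COIT11222", "hits": 15},
    {"user": "s3", "course": "MGMT20001", "hits": 22},
]

# "Transform and load": normalise into a standardised, queryable form.
db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE usage (user TEXT, course TEXT, hits INTEGER)")
db.executemany("INSERT INTO usage VALUES (?, ?, ?)",
               [(r["user"], r["course"], r["hits"]) for r in raw_hits])

# "Query and share": an insight someone might actually want.
rows = db.execute(
    "SELECT course, AVG(hits) FROM usage GROUP BY course ORDER BY course"
).fetchall()
print(rows)
```

The point of the comparison is that both the indicators project and a data warehouse do some version of these steps; the disagreement below is about the weight of the tooling wrapped around them, not the steps themselves.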

So you’re doing essentially the same thing. A lot of people have spent a lot of time on the math and the IT necessary to create and manage data warehouse tools. So why wouldn’t you use a data warehouse? What’s the point of this silly argument?

The problems I have with data warehouses

My problem with data warehouses is that the nature of these systems, and how they are implemented within organisations, is a bad fit for what the indicators project wants to achieve. From above, the indicators project is especially interested in finding out what analysis of LMS usage data across time, LMS and institution can reveal beyond what is currently known.

The nature of data warehouses within universities and the tools and processes used to implement them are, from my perspective, heavyweight, teleological, proprietary and removed from the act of learning and teaching. These characteristics get in the way of what the indicators project needs to do.

Heavyweight and expensive

Institutions generally spend a lot of money on the systems, people and processes required to set up their data warehouses. Goldstein (2005) reports key findings from an ECAR study of academic analytics and suggests that more extensive platforms are more costly. When systems cost a lot they are controlled. They are after all an expensive resource that must be effectively managed. This generally means heavyweight processes of design and control.

While a significant amount of work has been done around evaluating LMS usage, there’s still a lot to discover. The very act of exploring the data – especially when going cross-institutional, cross-LMS and cross-time – will generate new insights that will require modification to how data is prepared for the system and how it is reported.

This level of responsiveness is not a characteristic of heavyweight processes and expensive systems, especially when the system’s main aim and use are focused on other purposes.

Not focused on L&T

Goldstein (2005) reports that few institutions have achieved both broad and deep usage of academic analytics with the most active users coming from central finance, central admissions and institutional research. In fact, the research asked respondents to evaluate their use of academic analytics within seven functional areas. None of those seven areas involved teaching and learning.

This seems to suggest that data warehouse use within universities is not focused on L&T. The expensive resources of the data warehouse are focused elsewhere. Which suggests that resources available to tweak and modify reports and data preparation for learning and teaching purposes will be few and far between.

Proprietary

Due to the expense of these systems universities will sensibly spend a lot of time evaluating which systems to go with. This will lead to differences in the systems chosen for use. Institutional differences will also lead to differences in the type of data being stored and the format in which it is stored.

The indicators project has an interest in going across institutions. Of comparing findings at different universities. While a data warehouse approach might work at our current institution, it probably won’t be easily transportable to another institution.

This is not to suggest that it wouldn’t be transportable, but that the cost of doing so might exceed what is politically possible within current institutions.

Not located within L&T

It is well known that the two most important factors contributing to the adoption (or not) of a piece of technology are:

  • Ease of use; and
  • Usefulness.

Academic staff at universities are not rewarded (by the institution) for spending more time on their learning and teaching. They do not receive any, let alone significant, encouragement to change their practice. Academics are generally given enough freedom to choose whether or not they use a tool, and always have the freedom to choose how they use it. That is, if they are forced to use a tool that is not easy to use and/or useful, they will not use it effectively.

The reports and dashboards associated with data warehouse tools do not live within the same space that academic staff use when they are learning and teaching. E-learning for most university staff means the institutional LMS. Systems that are not integrated into the every day, existing systems used by academic staff are less likely to be used.

The usefulness of these reports will be governed by how well they are expressed in terms that can be understood by the academic staff. Goldstein (2005) reports on there being a two-part challenge in providing effective training for academic analytics. I’m going to divide those two into three challenges (in the original the last 2 in my list were joined into one):

  1. help users learn the technology;
  2. help users understand the underlying data; and
  3. envision how the analytical tools could be applied.

To me, the existence of these challenges suggests that the technology being used is inappropriate. It is too hard or too different for users to understand, and the information being presented is too far removed from their everyday experience. That is, if they need training in how to use it, then the tool is too far removed from their existing knowledge.

Given that Goldstein (2005) found these difficulties for the “sweet spot” of business intelligence (i.e. “business and finance”, “budget and planning”, “institutional research” etc.), imagine the difficulties that will arise when attempting to apply the same technology to learning and teaching. Learning and teaching is itself inherently diverse, while the perspectives on learning and teaching held by the academics doing the teaching are several orders of magnitude more diverse.

The key point here is that the “build it and they will come” approach of putting this data into a data warehouse will not work. The academic staff will not come. A large amount of work is required to develop insights into how to identify and integrate the knowledge that arises out of the LMS data in a form that encourages adoption.

Getting academic staff to meaningfully adopt and use this information to change – hopefully improve – their teaching is much more important, difficult and expensive than the provision of a data warehouse. The wrong tool – e.g. a data warehouse – will significantly limit this much more important task.

So what

I believe any approach that uses data warehouse tools to provide “dashboards” to coal-face academics, so they can see information about the impact of their teaching and their students’ learning, will ultimately fail, or at the very least be very expensive, difficult and used in limited ways.

Is there any institution doing just this now that can prove me wrong?

What’s the solution?

That’s for another post. But what I’m thinking of is:

  • Much cheaper/simpler technology.
  • Lightweight methodology.
  • Research and coal-face informed development and testing of useful measures/information.
  • Design of additions to institutional LMS and other systems that leverage this information.

References

Goldstein, P. (2005). Key Findings. Academic Analytics: The Uses of Management Information and Technology in Higher Education. ECAR Key Findings, EDUCAUSE Center for Applied Research: 12.

Kimball, R. and M. Ross (2002). The data warehouse toolkit: The complete guide to dimensional modeling, John Wiley and Sons.

van Dyk, L. (2008). “A data warehouse model for micro-level decision making in higher education.” The Electronic Journal of e-Learning 6(3): 235-244.

The LMS/VLE as a one word language – metaphor and e-learning

I’m back from a holiday and restarting work on my thesis, in particular the process component of the Ps Framework. I’m currently working on the section that describes the two extremes, using Introna’s (1996) distinction between teleological design and ateleological design.

The following arises out of re-reading Introna (1996) and picking up some new insights that resonate with some recent thoughts I’ve been having about e-learning and Learning Management Systems (LMSs/VLEs). The following is an attempt to make sense of Introna (1996) – which is not the easiest paper to follow – and integrate it with some of my thinking.

That is, this is a work in progress.

Basic argument

Introna suggests that the dominant metaphor within the design of information systems – like LMSs/VLEs – is that of the system, and that the over-emphasis on the “system” has made systems development a one word language.

Can you imagine holding a conversation in a language with only one word? Not a great stretch of the imagination to see such a language as hugely limiting. Hence our current conversations about e-learning are also hugely limiting, as we’re making do with a one word language. Introna (1996) puts it this way

The use of a one word language will lead to the building of systems that are “dead” not alive and profoundly meaningful

The pre-dominance of the one word language in e-learning

The pre-dominance of the LMS or VLE within e-learning within a University context probably doesn’t need much of a background. While there are some growing movements away from the LMS (e.g. edupunk, e-learning 2.0 etc) I still believe the LMS is the dominant answer to the “how do we do e-learning?” question. As I wrote in (Jones and Muldoon, 2007)

The almost universal approach to the adoption of e-learning at universities has been the implementation of Learning Management Systems (LMS) such as Blackboard, WebCT, Moodle or Sakai. If not already adopted, Salmon (2005) suggests that almost every university is planning to make use of an LMS. Indeed, the speed with which the LMS strategy has spread through universities is surprising (West, Waddoups, & Graham, 2006). Particularly more surprising is the almost universal adoption within the Australian higher education sector of just two commercial LMSs, which are now owned by the same company. Interestingly this sector has traditionally aimed for diversity and innovation (Coates, James, & Baldwin, 2005). Conversely, the mindset in recent times has focused on the adoption of the one-size-fits-all LMS (Feldstein, 2006).

This is even in the light of there being little difference between LMSs. Here’s what Black, Beck et al (2007) had to say

There are more similarities than differences among learning management system (LMS) software products. Most LMSs consist of fairly generic tools such as quiz/test options, forums, a scheduling tool, collaborative work space and grading mechanisms. In fact, the Edutools Web site lists 26 LMSs that have all of these features (2006). Many LMSs also have the means to hold synchronous meetings and some ability to use various templates for instruction. Beyond these standardized features, LMSs tend to distinguish themselves from one another with micro-detailed features such as the ability to record synchronous meetings or the ability to download forum postings to read offline.

In my recent experience, the LMS model has become so endemic that it is mythic and unquestioned. Many folk can’t even envision how you might do e-learning within a University without an LMS.

Even at its best, discussion about e-learning within universities seems to get dragged back to the LMS. The one word language.

What’s the problem with this

Introna (1996) believes that information systems development (I’m going to accept that e-learning information systems are a subset of this) very much involves a social system or three. The development of an information system for use by people is an inherently social process, and communication is essential to such a process.

He connects this with authors in the social sciences who have investigated the connection between symbolism, communication and the construction of social reality. He tends to focus on Pondy (1991) but there are others. He includes the following quote from Pondy (1991)

The central hypothesis is that the use of metaphors in the organizational dialogue plays a necessary role in helping organization participants to infuse their organizational experiences with meaning and resolve apparent paradoxes and contradictions, and that this infusion of meaning or resolution of paradox is a form of organizing. In this sense, the use of metaphors help couple the organization, to tie its parts together into some kind of meaningful whole; that is, metaphors help to organize the objective facts of the situation in the minds of the participants. …That is, metaphors serve both as models of the situation and models for the situation

In looking for the predominant metaphor used in information systems development he identifies the system. Developers perform “systems analysis”; they identify the entities that make up the system, the relationships between them, etc.

The system now becomes a model of, and a model for, the symbol space that needs to be designed.

While accepting that the system metaphor has been beneficial, he also suggests that it is overused and that there are benefits to be accrued from identifying different metaphors. For example, he suggests that the “systems” metaphor works well for the design of a transaction processing system but perhaps not so well for a website, an electronic meeting or a multimedia education application.

So what?

Most immediately for me is the potential avenue these thoughts might provide for the innovation role I’m meant to be taking on. I can currently see two immediately useful applications of this thinking:

  1. Using metaphor to map the current “grammar of school” at the host institution in order to identify what current conceptions are and evaluate whether they are limiting what is possible.
    I think it’s fairly obvious from what I’ve said on this blog that I think this is the case. It also helps, or perhaps increases my pattern entrainment, that there is a connection between this and with some work my wife is doing.
  2. Developing different metaphors to develop innovative approaches to e-learning.

More broadly, I think this is another way to show and explain just how limiting and negative an influence the LMS fad has been in e-learning. More broadly again, it highlights some of the disquiet I’ve felt about the direction of the teaching and practice of information systems/technology within organisations.

More to come

Introna (1996) goes on to talk about the role that narrative and myth may have to play in information systems development. I need to follow these up as, through Dave Snowden and others, I have a growing interest in applying these ideas to e-learning.

More on that later.

References

Black, E., D. Beck, et al. (2007). “The other side of the LMS: Considering implementation and use in the adoption of an LMS in online and blended learning environments.” TechTrends 51(2): 35-39.

Coates, H., R. James, et al. (2005). “A critical examination of the effects of learning management systems on university teaching and learning.” Tertiary Education and Management 11(1): 19-36.

Feldstein, M. (2006). “Unbolting the chairs: Making learning management systems more flexible.” eLearn Magazine, 2006.

Introna, L. (1996). “Notes on ateleological information systems development.” Information Technology & People 9(4): 20-39.

Jones, D. and N. Muldoon (2007). “The teleological reason why ICTs limit choice for university learners and learning.” ICT: Providing choices for learners and learning. Proceedings ASCILITE Singapore 2007, Singapore.

Pondy, L.R. (1991). “The role of metaphor and myths in organization and in the facilitation of change.” In Pondy, L.R., Morgan, G., Frost, P. and Dandridge, T. (Eds), Organizational Symbolism, JAI Press, Greenwich, CT, pp. 157-166.

Salmon, G. (2005). “Flying not flapping: a strategic framework for e-learning and pedagogical innovation in higher education institutions.” ALT-J, Research in Learning Technology 13(3): 201-218.

West, R., G. Waddoups, et al. (2006). “Understanding the experience of instructors as they adopt a course management system.” Educational Technology Research and Development.

"An ISDT for e-learning" – Audio is now synchronized

On Friday the 20th of Feb I gave a talk at the ANU on my PhD. A previous post has some background and an overview of the presentation.

I recorded the presentation using my iPhone and the Happy Talk recorder application. I’ve finally got the audio up and synchronised with the Slideshare presentation.

Hopefully the presentation is embedded below, but I’ve had some problems embedding it in the blog (all the other Slideshare presentations have been okay).

Nope, the embedding doesn’t want to work. Bugger. Here’s a link to the presentation page on Slideshare.

Limitations of Slideshare

In this presentation, along with most of my current presentations, I use an adapted form of the “Lessig” method of presentation. A feature of this method is a large number of slides (in my case 129 slides for a 30 minute presentation) with some of the slides being used for very small time frames – some less than a second.

The Slideshare synchronisation tool appears to have a minimum time allowed for each slide – about 15 seconds. At least that is what I found with this presentation. I think perhaps the limitation is due to the interface, or possibly my inability to use it appropriately.

This limitation means that some of the slides in my talk are not exactly synchronised with the audio.

The Happy Talk Recorder

I’m very happy with it. The quality of the audio is surprisingly good, and I had little or no trouble using it. I think I will use it more.

Barriers to innovation in organisations: teleological processes, organisational structures and stepwise refinement

This video speaks to me on so many levels. It summarises many of the problems I have faced and encountered trying to implement innovative approaches to e-learning at universities over the last 15-plus years. I’m sure I am not alone.

Today, I’ve spent a lot of time not directly related to what I wanted to achieve. Consequently, I had planned not to do or look at anything else until I’d finished. But this video resonates so strongly that I couldn’t resist watching, downloading it and blogging it.

I came across the video from a post by Punya Mishra. Some more on this after the video. I should also link to the blog post on the OpenNASA site. Would your University/organisation produce something similar?

If Nona ever gets around to watching this video, I am sure she will see me in a slightly different role in the video. Until recently I had the misfortune to be in the naysayer role. That’s no longer the case. Who said no good could come of organisational restructures?

Barriers to innovation and inclusion

The benefits of being open

Coming across this video provides further evidence to support an earlier post I made today on the value of being open. I became aware of Punya’s post through the following process:

  • Almost a year ago Punya published this post on his blog that openly shares the video of a keynote he and Mat Koehler gave.
  • I came across it not long afterwards through my interest in TPACK (formerly known as TPCK).
  • About two weeks ago I decided to use part of the video in some sessions I was running on course analysis and design.
  • A couple of days ago I blogged on an important part of the presentation (not used in the sessions I ran) that resonated with my PhD work.
  • My blog software told Punya’s blog software about my post and added it as a comment to his blog.
  • This afternoon Google Alerts sent me an email that this page on Punya’s blog was linking to my blog (because of the comment – see the comments section in the right hand menu).
  • Out of interest (some might say in the interest of procrastination) I followed the link and saw the video.

I plan to use parts of this video in future presentations around my PhD research. I believe it will resonate with people so much better than me simply describing the abstract principles.

So while not directly contributing to what I wanted to do today, it has provided me with a great advantage for the future.


Is all diversity good/bad – a taxonomy of diversity in the IS discipline

In a previous post I pointed to and summarised a working paper that suggests that IS research is not all that diverse. At least at the conceptual level.

The Information Systems (IS) discipline has for a number of years been having an ongoing debate about whether or not the discipline is diverse. A part of that argument has been discussion about whether diversity is good or bad for IS, and for a discipline in general.

Too much diversity is seen as threatening the academic legitimacy and credibility of a discipline. Others have argued that too little diversity could also cause problems.

While reading the working paper titled “Metaphor, meaning and myth: Exploring diversity in information systems research” I began wondering about the definition of diversity. In particular, the questions I was thinking about were

  1. What are the different types of diversity in IS research?
    Based on the working paper I believe there are a number of different types of diversity. What are they?
  2. Are all types of diversity bad or good?
    Given I generally don’t believe in universal generalisations, my initial guess is that the answer will be “it depends”. In some contexts/purposes, some will be bad and some will be good.
  3. Is this topic worthy of a publication (or two) exploring these questions and the implications they have for IS, and also for other disciplines and research in general?
    Other disciplines have had these discussions.
  4. Lastly, what work have IS researchers already done in answering these questions, particularly the first two?
    There’s been a lot of work in this area, so surely someone has provided some answers to these questions.

What different types of diversity exist?

The working paper that sparked these questions talks about conceptual diversity.

It also references Benbasat and Weber (1996) – two of the titans of the IS discipline and this article is perhaps one of “the” articles in this area – who propose three ways of recognising research diversity

  1. Diversity in the problems addressed.
  2. Diversity in the theoretical foundations and reference disciplines used to account for IS phenomena.
  3. Diversity of research methods used to collect, analyse and interpret data.

The working paper also suggests that Vessey et al (2002) added two further characteristics

  1. Research approach.
  2. Research method.

I haven’t read the Vessey paper, but given this summary I’m a bit confused. These two additional characteristics seem to fit into the third “way” from Benbasat and Weber. Obviously some more reading is required.

In the work on my thesis I’m drawing on four classes of questions about a domain of knowledge from Gregor (2006). They are

  1. Domain questions. What phenomena are of interest in the discipline? What are the core problems or topics of interest? What are the boundaries of the discipline?
  2. Structural or ontological questions. What is theory? How is this term understood in the discipline? Of what is theory composed? What forms do contributions to knowledge take? How is theory expressed? What types of claims or statements can be made? What types of questions are addressed?
  3. Epistemological questions. How is theory constructed? How can scientific knowledge be acquired? How is theory tested? What research methods can be used? What criteria are applied to judge the soundness and rigour of research methods?
  4. Socio-political questions. How is the disciplinary knowledge understood by stakeholders against the backdrop of human affairs? Where and by whom has theory been developed? What are the history and sociology of theory evolution? Are scholars in the discipline in general agreement about current theories or do profound differences of opinion exist? How is knowledge applied? Is the knowledge expected to be relevant and useful in a practical sense? Are there social, ethical or political issues associated with the use of the disciplinary knowledge?

I wonder if these questions might form a useful basis or a contribution to a taxonomy of diversity in IS. At this stage, I think some sort of taxonomy of diversity might indeed be useful.
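To make the first question above concrete, here is a rough, purely hypothetical sketch of how the dimensions mentioned in this post – Benbasat and Weber’s (1996) three ways, Vessey et al’s (2002) additions, and Gregor’s (2006) four classes of questions – might be combined into a draft taxonomy. The groupings are my own guesses for illustration, not anything proposed in the literature.

```python
# Benbasat and Weber's (1996) three ways of recognising research diversity
benbasat_weber = [
    "problems addressed",
    "theoretical foundations and reference disciplines",
    "research methods for collecting, analysing and interpreting data",
]

# Vessey et al's (2002) two additional characteristics
vessey = ["research approach", "research method"]

# Gregor's (2006) four classes of questions about a domain of knowledge
gregor = ["domain", "structural/ontological", "epistemological", "socio-political"]

# One possible draft taxonomy: each of Gregor's question classes becomes a
# top-level category, with the existing dimensions filed underneath.
# (The filing choices here are entirely my own speculation.)
taxonomy = {
    "domain": ["problems addressed"],
    "structural/ontological": ["theoretical foundations and reference disciplines"],
    "epistemological": [
        "research methods for collecting, analysing and interpreting data",
        "research approach",
        "research method",
    ],
    "socio-political": [],  # apparently not covered by the existing dimensions
}

# Sanity check: every existing dimension is filed somewhere in the taxonomy
filed = [dim for dims in taxonomy.values() for dim in dims]
assert set(filed) == set(benbasat_weber + vessey)
```

If nothing else, laying it out this way makes the gap visible: the socio-political class of questions has no counterpart among the existing diversity dimensions, which may itself be an argument for a broader taxonomy.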

The gulf between users and IT departments

Apparently Accenture have discovered “user-determined computing” and associated issues.

The definition goes something like this

Today, home technology has outpaced enterprise technology, leaving employees frustrated by the inadequacy of the technology they use at work. As a result, employees are demanding more because of their ever-increasing familiarity and comfort level with technology. It’s an emerging phenomenon Accenture has called “user-determined computing.”

This is something I’ve been observing for a number of years and am currently struggling with in terms of my new job, in a couple of different ways. In particular, I’m trying to figure out a way to move forward. In the following I’m going to try to think/comment on these points:

  • Even though “Web 2.0 stuff” seems to be bringing this problem to the fore, it’s not new.
  • The gulf that exists between the different ends of this argument and the tension between them.
  • Question whether or not this is really a technology problem.
  • Ponder whether this is a problem that’s limited only to IT departments.

It’s not new

This problem, or aspects of it, have been discussed in a number of places. For example, CIO magazine has a collection of articles it aligns with this issue (Though having re-read them, I’m not sure how well some of them connect).

The third one seems the most complete in its coverage of this topic. I highly recommend a read.

The gulf

Other earlier work has suggested that the fundamental problem is that there is a gap or gulf, in some cases a yawning chasm, between the users’ needs and what’s provided by the IT department.

One of the CIO articles above puts it this way

And that disconnect is fundamental. Users want IT to be responsive to their individual needs and to make them more productive. CIOs want IT to be reliable, secure, scalable and compliant with an ever increasing number of government regulations. Consequently, when corporate IT designs and provides an IT system, manageability usually comes first, the user’s experience second. But the shadow IT department doesn’t give a hoot about manageability and provides its users with ways to end-run corporate IT when the interests of the two groups do not coincide.

One of the key points here is that the disconnect is fundamental. The solution is not a minor improvement to how the IT department works. To some extent the problem is so fundamental that people’s mindsets need to change.

Is this a technology problem?

Can this change? I’m not sure it can, at least in organisations where everything IT is to be solved by the IT department. Such a department, especially at the management level, is manned (and it’s usually men, at least for now) by people who have lived within IT departments and succeeded, so that they now reside at the top. In most organisations the IT folk retain final say on “technical” questions (which really aren’t technical questions) because of the ignorance and fear of senior management about “technical” questions. It’s too easy for IT folk to say “you can’t do that” and for senior management not to have a clue that it is a load of bollocks.

Of course, I should take my own advice: look for incompetence before you go paranoid. Senior IT folk, as with most people, will see the problem in the same way they have always seen it. They will always seek to solve it with solutions they’ve used before, because that’s the nature of the problem they see. One of the “technical” terms for this is inattentional blindness.

A fundamental change in approach is not likely. Dave Snowden suggests that the necessary, but not sufficient, conditions for innovation are starvation, pressure and perspective shift. Without that perspective shift, the gulf will continue to exist.

It’s not limited to IT

You can see evidence of this gulf in any relationship between “users” and a service group within an organisation (e.g. finance, human resources, quality assurance, curriculum design etc.), especially when the service group is a profession. The service group becomes so enamoured of its own problems – due to pressure from the organisation, the troubles created by the “users”, and the distance (physical, temporal, social, mental etc.) between the service group and the “users” – that it develops its own language, its own processes and tasks, and starts to lose sight of the organisation’s core business.

The most obvious end result of the gulf is when the service department starts to think it knows best. Rather than respond to the needs, perceived and otherwise, of the “users”, the service department works on what it considers best – generally something that emphasises the importance of the service division and increases its funding and standing within the organisation. You can see this sort of thing all the time with people who are meant to advise academics about how to improve their learning and teaching.

IT is just the easiest and most obvious target for this because IT is now a core part of life for most professions, because most organisations continue to see it as an overhead to be minimised rather than an investment to be maximised, and because the ongoing development of IT is changing the paradigm for IT departments.

A Paradigmatic Analysis of Information Systems As a Design Science

The following is a summary of and reflection upon

Juhani Iivari, (2007), A Paradigmatic Analysis of Information Systems As a Design Science, Scandinavian Journal of Information Systems, 19(2):39-64

Reflection

This paper is somewhat similar, at a very abstract level, to one I’ve been thinking about. However, it’s told from a different perspective, with a different intent and different outcomes (and probably much better than I could). There is enough difference that I think I can still contribute something.

One aspect of that difference would come from the fact that the foundation of my thoughts will be Shirley’s types of theories which Juhani identifies as being more complete than the framework he developed.

Questions and need for further thinking

In the epistemology of design science section, the author outlines a framework to structure IS research, somewhat equivalent to Shirley’s types of theories. Does this structure belong in the “epistemology” section or the “ontology” section?

The question of truth value and truthlikeness is something I need to read further on.

The 12 theses

The author summarises his view in 12 theses. I’ve listed them below with, where they exist, some early indications of my problems and/or thoughts – at least those that currently exist.

  1. Information Systems is ultimately an applied discipline.
    I agree. Juhani mentions the problems with the term “applied science” in the first footnote.
  2. Prescriptive research is an essential part of Information Systems as an applied discipline.
    Agreed. I would add that there has been significantly too much focus on the other forms of research – descriptive and explanatory – at the expense of prescriptive research. A flaw that has negatively impacted on the IS discipline.
  3. The design science activity of building IT artifacts is an important part of prescriptive research in Information Systems.
    I agree, however, I don’t see it as the main output or purpose of prescriptive research in information systems. At least not any more than building a quantitative survey is the main contribution/output of descriptive/explanatory research. For me, building an IT artifact is a method to test the theory being developed.
  4. The primary interest of Information Systems lies in IT applications and therefore Information Systems as a design science should be based on a sound ontology of IT artifacts and especially of IT applications.
    There’s a glimmer of agreement here, though I’m not sure how far that goes. I see IS as having a main interest in how IT applications are used by, and impact on, organisations/groups/people. For me, a focus on just IT applications is computer science.
  5. Information Systems as a design science builds IT meta-artifacts that support the development of concrete IT applications.
    Agree, but with meta-artifacts expressed as information systems design theories.
  6. The resulting IT meta-artifacts essentially entail design product and design process knowledge.
    Yes.
  7. Design product and design process knowledge, as prescriptive knowledge, forms a knowledge area of its own and cannot be reduced to the descriptive knowledge of theories and empirical regularities.
    Not certain about this one. Mention a bit more below.
  8. Constructive research methods should make the process of building IT meta-artifacts disciplined, rigorous and transparent.
    Agree.
  9. Explication of the practical problems to be solved, the existing artifacts to be improved, the analogies and metaphors to be used, and/or the kernel theories to be applied is significant in making the building process disciplined, rigorous and transparent.
    Agree, but need more time to think about whether this is complete.
  10. The term ‘design theory’ should be used only when it is based on a sound kernel theory.
    Probably disagree, see more discussion below. Need more thought.
  11. Information Systems as a design science cannot be value-free, but it may reflect means-end, interpretive or critical orientation.
    Yes agree. I wonder if there are any other additional ethical perspectives.
  12. The values of design science research should be made as explicit as possible.
    Yes.

What distinguishes design science from IT development practice

Juhani suggests the use of rigorous constructive research methods as what distinguishes practice from design science. Which leads him to admit that if a practitioner uses a constructive research method, then they are doing research.

I find this vaguely troubling

I would suggest the need to move to the output. My view assumes that an artifact is not a sufficient output for design science. If you accept that the expected output of research is the generation or testing of theory (knowledge), then the output of design science should be design theory (though I don’t like the phrase design science). An artifact can be part of the design theory, but not the sole output.

An IT practitioner will not (typically) generate design theory. They generate artifacts. A researcher aims to go the next step and generate design theory.

Does DSR have a positivistic epistemology

Juhani argues that action research and design science research are very different in terms of history, practice, ontology and epistemology. As part of this he suggests that DSR (especially in engineering and medicine) is based on a positivistic epistemology, and he argues against Cole et al’s suggestion that it might be possible for some applications of DSR around IS within organisations to have a different epistemology.

This argument is based on his work on the paradigmatic assumptions of systems development approaches, which found that all 7 IS development approaches shared a fairly realistic ontology and positivistic epistemology.

However, earlier in the paper he argues that systems development approaches are not a good match for use as constructive research methods. How, then, can an analysis of systems development approaches be used to argue anything about DSR? Yes, there is likely to be some strong overlap, but it doesn’t seem to be strong evidence.

Also, simply because these systems development approaches (and one assumes IS developers/researchers) have historically held this particular view does not mean that every practice of DSR with a different epistemology is excluded.

Test artifacts in laboratory and experimental situations as far as possible

It is suggested that action research can be used to evaluate artifacts and provide information on how to improve these artifacts. However, Juhani also suggests that design science artifacts should be tested in laboratory studies as far as possible.

I believe this closes off a major fruitful way of developing design theory. An approach that ties very much into Juhani’s first major source of ideas for design science research – practical problems and opportunities. DSR that uses action research as a methodology to not only evaluate but also inform the design of an artifact/ISDT can lead to very fruitful ideas.

Does a design theory need a kernel theory

Juhani says yes. If we do without there is a “danger that the idea of a ‘design theory’ will be (mis)used just to make our field sound more scientific without any serious attempt to strengthen the scientific foundation of the meta-artifacts proposed”.

There is something to this, but I also have some qualms/queries which I need to work through. The queries are

  • Situations where descriptive theory has to catch up with prescriptive theory.
    i.e. physics of powered flight being figured out after the Wright brothers flew.
  • Situations where descriptive theory is closing off awareness or insight.
    Someone deeply aware of descriptive theories will have a set of patterns established in their head which may limit their ability to be aware of the situation or envision different courses of action (i.e. inattentional blindness, aka perceptual blindness).

    Awareness of a situation, or the ability to avoid established descriptive theories, may highlight new and interesting solutions (yes, I think this occurrence might be rare).

There is an argument to be had about the difference between the final version of the ISDT and its formulation. It may be that a complete/formal ISDT does need to have a kernel theory or two. However, it may not have been there at the beginning.

For example, the work that forms the basis of my design theory for e-learning started without clearly stated and understood kernel theories based on formal descriptive research. However, a very early paper (Jones and Buchanan, 1996) on that work included the following:

It is hoped that the design guidelines emphasising ease of use and of providing the tools and not the rules will decrease the learning curve and increase the sense of ownership felt by academic staff.

It’s not difficult to see in that statement a connection with diffusion theory and TAM. Descriptive knowledge that has informed later iterations of this work and diffusion theory certainly gets a specific inclusion as a kernel theory in the final ISDT.

What’s the kernel theory for the IS development life cycle

In footnote 7 the author writes that Walls et al (1992) “suggest that the information systems development life-cycle is a design theory, although I am not aware of any kernel theory on which it is based.”

I agree, in so much as I’m not aware of a clear statement of the kernel theories that underpin the SDLC. I also think that the absence of such a clear statement is a potential shortcoming.

There is a world view embodied in the SDLC. For example, I believe that the SDLC assumes that the world fits into the simple or complicated domains of the Cynefin Framework and is completely inappropriate when used in other types of systems – even in the complicated domain it can be difficult. Agile/emergent development methodologies appear to be a better fit for the complex domain of Cynefin.

Which raises the question: is there value in going back and developing an ISDT for the SDLC which makes clear the assumptions that underpin it by providing kernel theories?

Irreducibility of prescriptive knowledge to descriptive knowledge. Juhani states that, since most IT artifacts aren’t strongly based on descriptive knowledge:

This makes one wonder whether the IS research community tends to exaggerate the significance of descriptive theoretical knowledge for prescriptive knowledge of how to design successful IT artifacts. In conclusion, in line with Layton (1974) I am inclined to suggest that prescriptive knowledge forms a knowledge realm of its own and is not reducible to descriptive knowledge.

That seems to be a rather large leap to me. The questions it brings to mind include

  • Does the absence of strong links mean its irreducible?
    I don’t understand how Juhani has gotten from “most IT artifacts have weak links to descriptive knowledge” to “prescriptive knowledge is not reducible to descriptive knowledge”.

    Not to suggest it’s wrong. It’s just that I’m not smart enough to make the connection, yet.

  • Is there more to this statement than meets the eye?

    Despite this weak reliance on descriptive theories people design reasonably successful IT artifacts.

    • What types of artifacts are reasonably successful? Who says? Why are they successful?
      There’s a large amount of literature about the failure of large scale information systems. Is that failure due to the weak reliance?

      We can all point to systems that are being used by people to perform tasks. But does use mean success? Does it mean that the need of the folk is strong enough that they will adapt and work around the system enough to do the task they wish to achieve? Is success generating the best possible system? How do you evaluate that?

      Perhaps the success of some systems, even with weak reliance on descriptive knowledge, simply proves how adaptable people are.

    • Does weak reliance, mean none?
      The example I give above shows a situation where, without knowledge of a specific type of descriptive knowledge (diffusion theory), a practitioner was already aware of something very similar and of a need to go that way. An example of the relevance/rigor gap?

If you haven’t noticed, I lost my way in the above. Need to come back to it. I feel there is more to unpack there.

Summary

Abstract

Discusses the following aspects of design science:

  • ontology – suggests ontology of IT artifacts, draws on Popper’s three worlds as a starting point
  • epistemology – emphasizes the irreducibility of the prescriptive knowledge of IT artifacts to theoretical descriptive knowledge, suggests a 3 level epistemology for IS – conceptual knowledge, descriptive knowledge and prescriptive knowledge
  • methodology – expresses a need for constructive research methods for disciplined, rigorous, transparent building of IT artifacts as outcomes of design science research (so as to distinguish design research from simply developing IT artifacts), also discusses connections between action research and design science research.
  • ethics – points out IS as a design science cannot be value free, distinguishes three ethical positions: means-end oriented, interpretive and critical


Introduction

Computer science has always been doing design science research. Much of the early IS research focused on systems development approaches and methods – i.e. design science research.

But the last 25 years of mainstream IS research has lost sight of these origins – due to the “hegemony of the North-American business-school-oriented IS research” over leading IS publication outlets.

The dominant research philosophy has been to develop cumulative, theory-based research to be able to make prescriptions.

A pilot analysis of practical recommendations of MISQ articles between 1996 and 2000 showed they were weak (Iivari et al. 2004)

Current upsurge in interest in design science may change this. Also important that these papers have turned attention onto how to do design science research more rigorously.

IS is increasingly being seen as an applied science, a quote from Benbasat and Zmud (2003)

our focus should be on how to best design IT artifacts and IS systems to increase their compatibility, usefulness, and ease of use or on how to best manage and support IT or IT-enabled business initiatives.

Iivari’s (1991) previous work on applying paradigms to IS development approaches or schools of thought used the Burrell and Morgan (1979) framework but expanded it in two ways to encapsulate his design science background.

  1. Added ethics as an explicit dimension
  2. incorporated constructive research to complement nomothetic and idiographic research

This essay revisits that work and applies it directly to design science research.

Ontology

States design research should be based on a sound ontology. However, he does not state explicitly (at least at this stage) why this is the case. I'm not disputing that it should be based on a sound ontology, but I want to know why Juhani thinks it should be.

Talks about Popper's (1978) three worlds as the basis for this ontology (a lecture delivered by Popper)

  • World 1 – physical objects and events, including biological entities
  • World 2 – mental objects and events
  • World 3 – products of the human mind, includes human artifacts and also covers institutions and theories

Popper notes that World 3 includes “also aeroplanes and airports and other feats of engineering.”

Iivari argues

  • institutions are social constructions that have been objectified (Berger and Luckmann, 1967)
  • truth and ‘truthlikeness’ (Niiniluoto 1999) can be used in the case of theories, but not artifacts
  • Artifacts are only more or less useful for human purposes

The computing disciplines are interested in IT artifacts. Dahlbom (1996) adopts a broad and possibly confusing interpretation of the concept of the artifact, including people and their lives. Coming back to just IT, he says

When we say we study artifacts, it is not computers or computer systems we mean, but information technology use, conceived as a complex and changing combine of people and technology. To think of this combine as an artifact means to approach it with a design attitude, asking questions like: Could this be different? What is wrong with it? How could it be improved? (p. 43).

Dahlbom also claims the discipline should be thought of as “using information technology” instead of “developing information systems” (p.34). Need to look at this more to see if there is much more to this claim than the surface interpretation.

Starts thinking about developing a sound ontology for design science. Identifies the need to answer the question about what sort of IT artifacts IS should build, especially if we wish to distinguish ourselves from computer science. In terms of ontology of artifacts mentions

  • Orlikowski and Iacono (2001) – from their work on the IT artifact
    And their list of views of technology: computational, tool, proxy and ensemble.
  • March & Smith (1995)/Hevner et al (2004) from design research
    And their constructs, models, methods and instantiations. Iivari suggests this is a very general classification, its application is not always straightforward
  • diffusion of innovations – Lyytinen and Rose (2003), refining Swanson (1994) identify
    • base innovations
    • systems development innovations
    • services – administrative process innovations (e.g. accounting systems), technological process innovations (e.g. MRP), technological service innovations (e.g. remote customer order entry), and technological integration innovations (e.g. EDI).

In my view the primary interest of Information Systems lies in IT applications.

Defines 7 archetypes of IT applications. As archetypes they may not occur in practice in their pure forms.

Role/function Metaphors Examples Connection with Orlikowski & Iacono
To automate Processor Many embedded or transaction processing systems technology as labour substitution tool
To augment Tool (proper) Many personal productivity systems; Computer aided design technology as productivity tool
To mediate Medium Email, instant messaging, chat rooms, blogs, electronic storage systems (e.g. CDs and DVDs) technology as social relations tool
To informate Information source Information systems proper technology as information processing tool
To entertain Game Computer games
To artisticize Piece of art Computer art
To accompany Pet Digital (virtual and robotic) pets

In the above table, an information system

  • is a system whose purpose “is to supply its group of users with information about a set of topics to support their activities” (Gustafsson et al, 1982, p100)
  • implies that an IS is specific to the organisational/inter-organisational context in which it is implemented
  • information content is also a central aspect

Differences between IT artifacts include

  • In design – different design approaches used for different purposes
  • In their diffusion – Swanson (1994) and Lyytinen and Rose (2003)
  • In their acceptance – Iivari’s conjecture

Proposes that IT artifacts have invaded all of Popper’s worlds

  1. IT artifacts are embedded in natural objects, e.g. to measure physical states, and nanocomputing may open up new opportunities. How IT artifacts affect natural phenomena is likely to become a significant research problem.
  2. IT artifacts are influencing our consciousness and mental states, our perceptions.
  3. Significant constituents of organisations and societies – make it feasible to develop more complex theories.

Research phenomena below influence epistemology and methodology

  1. How does the use of a mobile phone affect one’s brain temperature?
  2. How does the use of a mobile phone affect one’s perception of time and space?
  3. How do mobile phones affect the nature of work in organisations?

An ontology for design science

World Explanation Research Phenomena Examples
World 1 Nature IT artifacts + World 1 Evaluation of IT artifacts against natural phenomena
World 2 Consciousness and mental states IT artifacts + World 2 Evaluation of IT artifacts against perceptions, consciousness and mental states
World 3 Institutions IT artifacts + World 3 institutions Evaluation of organizational information systems
World 3 Theories IT artifacts + World 3 theories New types of theories made possible by IT artifacts
World 3 Artifacts: IT artifacts, IT applications, meta IT artifacts IT artifacts + World 3 artifacts Evaluation of the performance of artifacts comprising embedded computing

Epistemology of design science

Truth, utility and pragmatism. Argues against the adoption of the idea from pragmatism that truth is seen as practical utility. Artifacts, if theories are excluded, do not have any truth value. Practical action informed by theory may develop some level of truth if it consistently proves to be successful.

Draws on his earlier work in adopting a framework from economics to structure research within IS. It's again based on the type of knowledge being produced; in this case there are three types

  1. Conceptual knowledge – which has no truth value
    Includes concepts, constructs, classifications, taxonomies, typologies and conceptual frameworks.
  2. Descriptive knowledge – has truth value
    Includes observational facts, empirical regularities and theories/hypotheses which group under causal laws.
  3. Prescriptive knowledge – which has no truth value
    Design product knowledge, design process knowledge and technical norms.

The author suggests the following mapping between his framework and Shirley’s types of theory

  1. Conceptual – “Theories for analysing and predicting”
  2. Descriptive – theories for explaining and predicting and theories for explaining (as empirical regularities)
    Can include

    • observational facts – who invented what, when.
    • descriptive knowledge – TAM, Moore’s law
    • empirical regularities and explanatory theories identify causal laws that are either deterministic or probabilistic
  3. Prescriptive – theories for design and action
    Relatively speaking, prescriptive knowledge is the least well understood
    form of knowledge in Table 3.

Suggests that theories of explaining, in the form of grand theories such as actor-network theory, do not fit into his framework. But they do in Shirley's.

On the question of truth value or truthlikeness

  • Conceptual – the goal is essentialist, to identify the essence of the research territory and the relationships. May be more or less useful in developing theories at the descriptive level (quotes Bunge 1967a here).
  • Prescriptive – artifacts and recommendations do not have a truth value. Only statements about their efficiency and effectiveness have such a value

Beckman (2002) identifies four criteria of artefacts

  1. Intentional – the knife is a knife because it is used as a knife
  2. Operational – it is a knife because it works like a knife
  3. Structural – is a knife because it is shaped and has the fabric of a knife
  4. Conventional – is a knife because it fits the reference of the common concept of a ‘knife’

Juhani does not include the conventional criterion, as an artifact may not achieve community acceptance until years after invention and construction.

Prescriptive knowledge is irreducible to descriptive knowledge

Suggests that most IT systems are built divorced from descriptive knowledge. There is only a weak link between IT artifacts and descriptive knowledge. And yet IT systems are still reasonably successful.

This makes one wonder whether the IS research community tends to exaggerate the significance of descriptive theoretical knowledge for prescriptive knowledge of how to design successful IT artifacts. In conclusion, in line with Layton (1974) I am inclined to suggest that prescriptive knowledge forms a knowledge realm of its own and is not reducible to descriptive knowledge.

Kernel theories

Believes the presence of a kernel theory is the defining characteristic of a “design theory”.

This is seen as difficult and leads to a softening of requirements for a kernel theory – e.g. Markus et al (2002) allowing any practitioner theory-in-use to serve as a kernel theory. Implying the design theory is not based on scientifically validated knowledge.

Methodology of design science

Classifications of IS research methods (Benbasat, 1985; Jenkins, 1985; Galliers and Land, 1987 and Chen and Hirschheim, 2004) do not recognise anything resembling constructive research methods. Iivari (1991) suggested constructive research as the term to denote the research methods required for constructing artifacts.

Positions building artifacts as a very creative task. Hence it is difficult to define an appropriate method for artifact building. Having constructive research methods is essential for the identity of IS as a design science. The rigor of methods distinguishes the design science from the practice of building IT artifacts.

Suggests two ways to identify the difference

  1. There are no constructive research methods; instead the difference is the evaluation. Design science requires scientific evaluation of the artifacts.
    Drawback: this may lead to reactive research where IS as a design science focuses on the evaluation of existing artifacts, rather than building new ones.
  2. Define a rigorous approach for constructive research and use this to differentiate design science from invention in practice.

Iivari didn't specify the constructive research methods. Talks about Nunamaker et al (1990-1991) and their suggestion that systems development methods could serve this role. Iivari doesn't appear to think so. Pitfalls include:

  • Do SDMs allow sufficient room for creativity and serendipity which are essential for innovation?
    A significant concern when attempting to make the building process more disciplined, rigorous and transparent.
  • Most serious weakness of the Nunamaker et al suggestion is that it integrates systems development quite weakly with research activities.

Hevner et al (2004) suggests rigor in design science research is derived from the effective use of prior research – using the existing knowledge base. Iivari claims it is in making the construction process as transparent as possible.

The source of ideas

Iivari suggests four major sources for ideas for design research

  1. Practical problems and opportunities
    Emphasizes the practical relevance of this research. Customers are known to be a significant source of innovations (von Hippel 2005). But practical problems may be abstracted or seen slightly differently. Design science can also create solutions long before a problem is seen/understood.
  2. Existing artifacts
    Most design science research consists of incremental improvements to existing artifacts. Must understand what has gone before, if only to evaluate contribution.
  3. Analogies and metaphors
    Known that analogies and metaphors stimulate creativity.
  4. Theories
    i.e. kernel theories can serve as inspiration

Design science and action research

Many authors have associated design science and action research, since they both attempt to change the world. Iivari suggests that they are different in a number of ways

  • Historically
    Action research – socio-technical design movement. Design science – engineering.
  • Practically
    Action research – focused on “treating social illnesses” within organisations and other institutions. Technology change may be part of the treatment, but the focus is more on adopting than building technology.
    DSR – focus on the construction of artifacts, most having material embodiment. Usually done in laboratories, clearly separated from potential clients.
  • Ontologically,
    DSR – in engineering/medicine adopts a realistic/materialistic ontology
    Action research – accepts a more nominalistic, idealistic and constructivist ontology

    Materialism attaches primacy to Popper’s World 1, idealism to World 2. Action research is also interested in the institutions of World 3.

  • epistemologically, and
    Consequently, design research, especially in engineering and medicine, has a positivistic epistemology in terms of the knowledge applied from reference disciplines and the knowledge produced. Action research is strongly based on an anti-positivistic epistemology. The very idea of AR is anti-positivistic as each client is unique.
  • methodologically.

Cole et al (2005) take the alternate perspective that design science and AR share important assumptions regarding ontology and epistemology. Cole et al implicitly limit design science to IS in an organisational context; if so, shouldn't the ontology and epistemology of DSR be different? Juhani is doubtful about this, based on his work evaluating systems development approaches – but he said earlier that systems development approaches aren't a good match for constructive research, for DSR. Can he make this connection here?

Ethics of design science

Design science shapes the world. “Even though it may be questionable whether any research can be value-free, it is absolutely clear that design science research cannot be.” which suggests that the basic values of research should be expressed as explicitly as possible.

Juhani then uses his own work (1991) to identify three roles (?types of ethics?)

  1. Means-end oriented
    Knowledge is provided to achieve an ends without questioning the legitimacy of the ends.
    Evaluation here is interested in how effectively the artifact helps achieve the ends
  2. interpretive
    The goal is to enrich understanding of action. Goals are not clear, focus on unintended consequences.
    Evaluation seeks to achieve a rich understanding of how an IT artifact is really appropriated and used and what its effects are, without focusing on the given ends.
  3. Critical
    Seeks to identify and remove domination and ideological practice. Goals can be subjected to critical analysis.
    Evaluation focuses on how the IT artifact enforces or removes unjustified domination or ideological practices.
Most DSR is means-end oriented, but it can be critical (e.g. Scandinavian trade-unionist systems development approaches). This raises the question of the values of IS research – whose values and what values dominate?

Conclusions

Introduces the 12 theses summarised right up the top

References

Benbasat, I., & Zmud, R. (2003). The Identity Crisis within the IS Discipline: Defining and Communicating the Discipline’s core properties. MIS Quarterly, 27(2), 183-194.

Initial thoughts from CogEdge accreditation course

As I've mentioned before Myers-Briggs puts me into the INTP box, a Keirsey Architect-Rational – which, amongst many other things, means I have an interest in figuring out the structure of things.

As part of that interest in “figuring out the structure” I spent three days last week in Canberra at a Cognitive Edge accreditation course. Primarily run by Dave Snowden (you know that a man with his own Wikipedia page must be important), who along with others has significant criticisms of the Myers-Briggs stuff, the course aims to bring people up to speed with Cognitive Edge’s approach, methods and tools to management and social sciences.

Since this paper in 2000, like many software people who found a resonance with agile software development, I've been struggling to incorporate ideas with a connection to complex adaptive systems into my practice. Through that interest I've been reading Dave's blog, his publications and listening to his presentations for some time. When the opportunity to attend one of his courses arose, I jumped at the chance.

This post serves two main roles:

  1. The trip report I need to generate to explain my absence from CQU for a week.
  2. Forcing me to write down some immediate thoughts about how it might be applied at CQU before I forget.

Over the coming weeks on this blog I will attempt to engage, reflect and attempt to integrate into my context the huge amount of information that was funneled my way during the week. Some of that starts here, but I’m likely to be spending years engaging with some of the ideas.

What’s the summary

In essence the Cognitive Edge approach is to take insights from science, in particular complex adaptive systems theory, cognitive science and techniques from other disciplines and apply them to social science, in particular management.

That’s not particularly insightful or original. It’s essentially a rephrasing of the session blurb. In my defence, I don’t think I can come up with a better description and it is important to state this because the Cognitive Edge approach seriously questions much of the fundamental assumptions of current practices in management and the social sciences.

It’s also important to note that the CogEdge approach only questions these assumptions in certain contexts. The approach does not claim universality, nor does it accept claims of universality from other approaches.

That said, the CogEdge approach does provides a number of theoretical foundations upon which to question much of what passes for practices within the Australian higher education sector and within organisations more broadly. I’ll attempt to give some examples in a later section. The next few sub-sections provide a brief overview of some of these theoretical foundations. I’ll try and pick up these foundations and their implications for practice at CQU and within higher education at a later date.

The Cynefin Framework

At the centre of the CogEdge approach is the Cynefin framework.

The Wikipedia page describes it as a decision making framework. Throughout the course we were shown a range of contexts in which it can be used to guide people in making decisions. The Wikipedia page lists knowledge management, conflict resolution and leadership. During the course there were others mentioned including software development.

My summary (see the wikipedia page for a better one) is that the framework is based on the idea that there are five different types of systems (the brown bit in the middle of the above image is the fifth type of system – disorder, when you don't know which of the four other systems you're dealing with). Most existing principles are based on the idea of there being just one type of system: an ordered system. The type of system where causality is straightforward, and one that the right leader(ship group) can fully understand, designing (or more likely adopting from elsewhere) interventions that will achieve some desired outcome.

If the intervention happens to fail, then it is a problem with the implementation of the intervention. Someone failed, there wasn’t enough communication, not enough attention paid to the appropriate culture and values etc.

The Cynefin Framework suggests that there are 5 different contexts. This suggests an alternate perspective on failure: that the nature of the approach was not appropriate for the type of system.

A good example of this mismatch is the story which Dave regularly tells about the children’s birthday party. Some examples of this include: an mp3 audio description (taken from this presentation) or a blog post that points to a video offering a much more detailed description.

The kid's birthday party is an example of what the Cynefin framework calls a complex system. The traditional management by objectives approach originally suggested for use is appropriate for the complicated and simple sectors of the Cynefin framework, but not the complex.
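As an aside that is not from the course material itself, the domain-to-response mappings commonly associated with the framework (sense-categorise-respond and so on, per Snowden and Boone's published accounts) can be sketched as a toy lookup. This is purely illustrative – Cynefin is a sense-making framework, not an algorithm, and the function name here is my own invention:

```python
# The decision models commonly attached to each Cynefin domain.
# Illustrative only: the framework itself is a sense-making aid,
# not executable logic.
CYNEFIN_RESPONSES = {
    "simple": "sense -> categorise -> respond (apply best practice)",
    "complicated": "sense -> analyse -> respond (apply good practice, consult experts)",
    "complex": "probe -> sense -> respond (run safe-to-fail experiments)",
    "chaotic": "act -> sense -> respond (stabilise the situation first)",
    "disorder": "first work out which of the other four domains applies",
}

def recommended_approach(domain: str) -> str:
    """Return the decision model conventionally associated with a domain."""
    return CYNEFIN_RESPONSES[domain.lower()]

# A kid's birthday party is a complex system, so:
print(recommended_approach("complex"))
```

The point the mapping makes is the one above: management by objectives is the "sense -> analyse -> respond" row, and applying it to a complex system is a category error.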

Everything is fragmented

“Everything is fragmented” was a common refrain during the course. It draws on what cognitive science has found out about human cognition. The ideal is that human beings are rational decision makers. We gather all the data, consider the problem from all angles, perhaps consult some experts and then make the best decision (we optimize).

In reality, the human brain only gets access to small fragments of the information that is presented. We compare those small fragments against the known patterns we have in our brain (our past experience) and then choose the first match (we satisfice). The argument is that we take fragments of information and assemble them into something, somewhat meaningful.
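The optimise/satisfice distinction in the paragraph above can be sketched in a few lines of Python. This is a toy illustration I am adding, not something from the course: an optimiser scores every known pattern and picks the best match, while a satisficer takes the first pattern that is "good enough" (patterns being checked in order of familiarity):

```python
def optimise(fragments, patterns, score):
    """Exhaustively score every pattern and return the best match."""
    return max(patterns, key=lambda p: score(fragments, p))

def satisfice(fragments, patterns, score, good_enough):
    """Return the first pattern whose score clears the threshold."""
    for p in patterns:  # ordered by familiarity, i.e. past experience
        if score(fragments, p) >= good_enough:
            return p
    return None

# Score = how many of the observed fragments a pattern accounts for.
score = lambda frags, pat: len(set(frags) & set(pat))

fragments = {"smoke", "noise"}
patterns = [{"smoke", "bbq"}, {"smoke", "noise", "fire"}]

best = optimise(fragments, patterns, score)       # the "fire" pattern (score 2)
first = satisfice(fragments, patterns, score, 1)  # the "bbq" pattern (score 1)
```

The two calls disagree: the satisficer settles on the familiar "bbq" pattern because it was good enough, never examining the better "fire" match – which is exactly the fragment-to-pattern matching the paragraph describes.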

The CogEdge approach recognises this and its methods and software are designed to build on this strength.

Approach, methods and software

The CogEdge approach is called “naturalising sensemaking”. Dave offers a simple definition of sensemaking here

the way in which we make sense of the world so that we can act in it

Kurtz and Snowden provide a comparison between what passes for the traditional approaches within organisations (idealistic) and their approach (naturalistic). I’m trying to summarise this comparison in the following table.

Idealistic Naturalistic
identify the future state and implement approaches to achieve that state gain sufficient understanding of the present context and choose projects to stimulate the evolution of the system, monitor that evolution and intervene as necessary
Emphasis is on expert knowledge and their analysis and interpretation Emphasis on the inherent un-knowability of a complex system, which means affording no privilege to expert interpretation and instead favouring emergent meaning at the coal-face
Diagnosis precedes and is separate from intervention. Diagnosis/research identifies best practice and informs interventions to close the gap between now and the identified future state All diagnoses are also interventions and all interventions provide an opportunity for diagnosis

As well as providing the theoretical basis for these views the CogEdge approach also provides a collection of methods that help management actually act within a naturalistic, sense-making approach. It isn’t an approach that says step back and let it all happen.

There is also the SenseMaker Suite. Software that supports (is supported by) the methods and informed by the same theoretical insights.

Things to question

Based on the theoretical perspective taken by CogEdge it is possible to raise a range of questions (many of a very serious nature) against a range of practices currently within the Australian Higher Education sector. The following list is a collection of suggestions, I need to work more on these.

The content of this list is based on my assumption that learning and teaching within a current Australian university is a complex system and fits into the complex sector of the Cynefin framework. I believe all of the following practices only work within the simple or the complicated sectors of the Cynefin framework.

My initial list includes the following, and where possible I've attempted to list what some of the flaws might be of this approach within the complex sector of the Cynefin framework:

  • Quality assurance.
    QA assumes you document all your processes, and that, as practiced, the written-down processes are quite complete. It assumes you can predict the future. As practiced by AUQA it assumes that a small collection of auditors from outside the organisational context can come in, look around for a few days and make informed comments on the validity of what is being done. It assumes that these auditors are experts making rational decisions, not pattern-matchers fitting what they see against their past experience.
  • Carrick grants emphasising cross institutional projects to encourage adoption.
    Still thinking about this one, but my current unease is based on the belief of the uniqueness of each context and the difficulty of moving the same innovation across different institutional contexts as is.
  • Requiring teaching qualifications from new academic staff.
    There is an assumption that the quality of university learning and teaching can be increased by requiring all new academic staff to complete a graduate certificate in learning and teaching. This assumes that folk won’t game the requirement. i.e. complete the grad. cert. and then ignore the majority of what they “learnt” when they return to a context which does not value or reward good teaching. It assumes that academics will gain access to the knowledge they need to improve in such a grad cert. A situation in which they are normally not going to be developing a great deal of TPCK. i.e. the knowledge they get won’t be contextualised to their unique situation.
  • The application of traditional, plan-driven technology governance and management models to the practice of e-learning.
    Such models are inherently idealistic and simply do not work well when applied to a practice that is inherently complex.
  • Current evaluation of learning and teaching.
    The current surveys given to students at the end of term are generally out of context (i.e. applied after the student has had the positive/negative experience). The use of surveys also limits the breadth of the information that can be provided by students to the limitations enshrined in the questions. The course barometer idea we've been playing with for a long time is a small step in the right direction.

There are many more, but it’s getting past time to post this.

Possible projects

Throughout the course there were all sorts of ideas about how aspects of the CogEdge approach could be applied to improve learning and teaching at CQU. Of course, many of these have been lost or are still in my notebooks waiting to be saved.

A first step would be to fix the practices which I believe are now highly questionable outlined in the previous section. Some others include

  • Implement a learning and teaching innovation scheme based on some of the ideas of the Grameen bank.
    e.g. if at least 3 academics from different disciplines can develop an idea for a particular L&T innovation and agree to help each other implement it in each of their courses, then it gets supported immediately. No evaluation by an “expert panel”.
  • Expand/integrate the course barometer idea to collect stories from students (and staff?) during the term and have those stories placed into the SenseMaker software.
    This could significantly increase CQU’s ability to pick up weak signals about trouble (but also about things that are working) and be able to intervene. Not to mention generating a strong collection of evidence to use with AUQA etc.
  • A number of the different CogEdge methods to help create a context in which quality learning and teaching arise more naturally.

There are many others, but it’s time to get this post, posted.

Disclaimers

I've been a believer in complexity informed, bottom-up approaches for a long time. My mind has a collection of patterns about this stuff to which I am positively inclined. Hence it is no great surprise that the CogEdge approach resonates very strongly with me.

Your mileage may vary.

In fact, I'd imagine that most hard-core, plan-driven IT folk, those in the business process re-engineering and quality assurance worlds, and others from a traditional top-down management school probably disagree strongly with all of the above.

If so, please feel free to comment. Let’s get a dialectic going.

I'm also still processing all of the material covered in the three day course and in the additional readings. This post was done over a few days in different locations, so there are certain to be inconsistencies, typos, poor grammar and basic mistakes.

If so, please feel free to correct.

Dealing with "users", freedom and shadow systems

Apparently Accenture have discovered “user-determined computing” and associated issues.

The definition goes something like this

Today, home technology has outpaced enterprise technology, leaving employees frustrated by the inadequacy of the technology they use at work. As a result, employees are demanding more because of their ever-increasing familiarity and comfort level with technology. It’s an emerging phenomenon Accenture has called “user-determined computing.”

It’s not new

This problem, or aspects of it, have been discussed in a number of places. For example, CIO magazine has a collection of articles it aligns with this issue

This has connections to the literature on workarounds and shadow systems. Practices by which people within organisations workaround the official organisational systems or hierarchies and do things their own way.

This is not a problem limited to IT departments. I work within a group responsible for curriculum design, e-learning and materials development at a University. We’re a provider of services for academic staff. Those staff can and do workaround the services we provide.

The question is, what should we do? How should we handle this?

Reactions from IT folk

I find it interesting that a common knee-jerk reaction from IT folk tends towards the negative and/or aggressive. Check out some of the comments on this blog post or one of the Time to rethink your relationship with end-users CIO articles.

This is often seen in the official reaction of IT departments to shadow systems. “SHUT THEM DOWN!!!!”. It's a discourse that has been circulating at my institution in recent times.

Having been a creator and heavy user of shadow systems it’s not an approach which I believe is productive. In fact, some colleagues and I have argued that there is a much better approach. From the abstract

Results of the analysis indicate that shadow systems may be useful indicators of a range of problems with enterprise system implementation. It appears that close examination of shadow systems may help both practitioners and researchers improve enterprise system implementation and evolution.

The gulf

The users who know too much CIO article puts it this way

And that disconnect is fundamental. Users want IT to be responsive to their individual needs and to make them more productive. CIOs want IT to be reliable, secure, scalable and compliant with an ever increasing number of government regulations. Consequently, when corporate IT designs and provides an IT system, manageability usually comes first, the user’s experience second. But the shadow IT department doesn’t give a hoot about manageability and provides its users with ways to end-run corporate IT when the interests of the two groups do not coincide.

Other earlier work has suggested that this gap or gulf, in some cases a yawning chasm, is created by a number of different factors.

Perhaps it is the fundamental nature of some of the factors creating the gap that contributes to the negative reactions. The perspectives creating the gap are so fundamental that the people holding them never question them. They don’t see that their view is actually counter-productive (in some situations) or that there are alternatives. They simply can’t understand the apparent stupidity of the alternate perspective, which has hugely negative ramifications.

Super-rational versus complexity

One of the fundamental outlooks which contribute to this gap is that most IT, and most organisations, are based on the ideal of top-down design (teleological design). I’ve written about this previously.

That previous writing includes one of the more interesting characterisations of the difference in these two fundamentally different perspectives. I’ve included it as an mp3. It’s by Dave Snowden, and is an excerpt from a presentation he gave in Helsinki on sense-making and strategy. In the excerpt he describes two approaches to organising a child’s birthday party. One based on traditional top-down approaches and another based on complexity.

What should we do?

This is a real problem which we have to address. How do we do it?

The “Users who know too much” CIO article suggests the following principles as starting points

  1. Find out how people really work
    This connects with ideas in our earlier articles. Look at the shadow systems people are using and understand the factors leading them to use them. We need to know much more about how and why staff are doing curriculum design, e-learning etc.
  2. Say yes to evolution
    On reading the article I wonder if “don’t say no” might not be a better name for this principle. One of the nice quotes in the article is “No one will jump through hoops. They’ll go around them.”. We have to make it easy and safe for folk to do their own thing. Not just understand what they are doing, but allow them to evolve and do different things and keep an eye on why, what and how they do it.
  3. Ask yourself if the threat is real
    There is often a reason why IT believes a shadow system is bad – security, inefficiency etc. This principle suggests spending a lot of time considering whether or not this is really a big problem. In our line of work that might be equated to telling an academic that a particular learning/teaching approach is less than good.

    Another quote from the article: “When a CIO…is setting himself up as a tin idol, a moral arbiter. That’s a guaranteed way to antagonize users. And that’s never a good idea.”

  4. Enforce rules, don’t make them.
    Some recent local experience reinforces the importance of this. It’s not the support group saying no. It’s the rules that were created by the appropriate folk within the business. As an addition to this I would suggest: “Make sure everyone knows who made the rules.”.
  5. Be invisible.
    This principle relates to the “important things” a service division should do. For example, an IT department is responsible for ensuring security of important data. The processes used to do that should be invisible. It shouldn’t cause the users grief in order to be secure. It should just happen.
  6. Messy but fertile beats neat but sterile.
    It’s not included in the article as one of the principles, but it is used as the closing section and I think it deserves to be included. Too much of what goes on in organisations is based on the idea of having tidy diagrams, one way to do something, of being neat and sterile. “messiness isn’t as bad as stagnation” and “If you want to be an innovator and leverage IT to get a competitive advantage, there has to be some controlled chaos.”

Another approach

Nicholas Carr argues for one response in terms of IT departments.

CAUDIT CIO's top 10 issues list – and what it says about them (to me)

According to its website, CAUDIT is described as

The Council of Australian University Directors of IT (CAUDIT), includes the CIOs or IT Directors of every university in Australia, New Zealand, Papua New Guinea and Fiji, as well as the CIOs of prestigious research institutions, CSIRO and AIMS.

Last week I came across the CAUDIT CIO’s top 10 issues list for 2006. The list is

  1. Business Continuity / Disaster Recovery
  2. Identity Management : Authentication, Authorisation, Access
  3. Funding / Resourcing
  4. Work Force Planning: Recruitment, Training, Succession, Retention, Change Management
  5. Security
  6. Governance
  7. Service Management : Support and Delivery: Availability, Capacity, Change Management
  8. Information Management : Storage, Archiving, Records Management
  9. Legacy Systems : Administration : Student Management / ERP
  10. Strategic Planning

What I found interesting was that e-learning, teaching and learning, research and other core tasks for universities did not rate a mention.

In fact, the entire top 10 issues list has an internal focus: how IT does some task. There is no mention of how IT will focus on something important to the organisation.

Sure, security, continuity, governance etc. are all important and enable the institution and its aims. If they weren’t there, the institution would be in trouble.

What about strategic planning? Isn’t that outward looking? Maybe. But it’s usually more inward looking than outward.

At least the EDUCAUSE list included something along those lines.

  1. Security and Identity Management
  2. Funding IT
  3. Administrative/ERP/Information Systems
  4. Disaster Recovery/Business Continuity
  5. Faculty Development, Support, and Training
  6. Infrastructure
  7. Strategic Planning
  8. Governance, Organization, and Leadership
  9. E-Learning/Distributed Teaching and Learning
  10. Web Systems and Services

The missing Ps – Process

The Missing Ps framework is my attempt to generate a way of identifying the flaws in the methods used by Universities to select an LMS.

In this section I’m expanding out my thoughts associated with process. It will include

  • The plan-driven assumption
    The almost automatic assumption that a plan-driven process will be adopted. The almost complete ignorance of the adaptive alternative.
  • IT governance model
  • Process and tool alignment
  • The importance of process change

The Plan-driven assumption

Over-emphasis on plan-driven development at the cost of adaptive approaches.
This is the one I’ve banged on about in previous papers (e.g. an early one, and the most recent one). The ongoing acceptance of agile development methodologies, the Enterprise 2.0 meme incorporating emergence, and the idea of rapid incrementalism from the two Johns make pushing this view a bit easier.

There are a number of reasons why this plan-driven approach is not appropriate for e-learning

  • What e-learning is, and how best to do it, are both very open questions.
    It is impossible to plan properly because what we know now is not going to be best practice in the future. Examples include the arrival of blogs and social software onto the scene, as well as LAMS and the learning design folks.
  • A large portion of any system cost is enhancement.
    This is a finding from software engineering research.

There are problems with the plan driven approach

  • The blind men and an elephant story
    The type of top-down design characteristic of plan driven development leads to the loss of the whole.

Some Peter Drucker quotes

from “The Effective Executive”

“Most discussions of the knowledge worker’s task start with the advice to plan one’s work. This sounds eminently plausible. The only thing wrong with it is that it rarely works. The plans always remain on paper, always remain good intentions. They seldom turn into achievement.”

“Innovation and Entrepreneurship”

“‘Planning’ as the term is commonly understood is actually incompatible with an entrepreneurial society and economy…innovation, almost by definition, has to be decentralized, ad hoc, autonomous, specific and microeconomic.”

IT Governance

The Wikipedia definition

Corporate governance aims to:

  • align the actions of the individual parts of an organisation toward aggregate mutual benefit
  • provide the means by which each individual part of the organisation can trust that the other parts each make their contribution to the mutual benefit of the organisation and that none gain unfairly at the expense of others
  • provide a means by which information can quickly flow between the various stakeholders to ensure that the changing nature of both the stakeholder needs and desires and the environment in which the organisation operates get effectively factored into decision processes

Need to look more into the idea of IT governance and get some nice definitions.

How this is typically implemented in organisations is through a hierarchical committee structure. That is, there is some central committee that meets to decide what should happen with IT. That committee has representatives from each part of the organisation. If someone at the coal-face identifies a way to make a potentially large improvement, their suggestion has to climb up the hierarchy, either within their part of the organisation or up the IT division, until it makes it to the committee. Along the way some small suggestions may get implemented, but anything that is large, crosses organisational boundaries or is a significant change from current practice will generally have to wend its way to the committee.

This is a rather negative description of the problem. On the surface this approach does appear to be somewhat logical if you buy into the rational model of people, organisations and decision making.

The problems with this model include

  • Even if it works it is incredibly slow.
    Often these committees meet no more than four times a year. Not exactly agile.
  • At any stage the idea can get knocked on the head.
    Any one of the people on the ladder up to the committee can kill off the idea, especially if it is perceived as significantly different, difficult or threatening.
  • It assumes the committee itself always acts rationally in the best interests of the organisation.
    e.g. the killing off of the AIS idea.
  • It assumes that the decisions of this committee will actually be followed and acted upon.
  • It assumes that people at the coal-face will even bother to start the ball rolling
    This working paper from the Harvard Business School reports on research that indicates that people don’t speak up.

    Qualitative data collected in 190 interviews with employees from all levels and functions suggest that fear of speaking up, even with pro-organizational suggestions, is pervasive and, for many, a source of intense negative affect.

Process and tool alignment

When you’ve been told to use a particular tool, you can’t use a process that doesn’t fit the tool. Or if you do, there are going to be problems of inefficiency or poor quality.

The implication for higher education when adopting an LMS (or any information system) is that how things are currently done and the system being adopted have to achieve some sort of alignment.

The problem with the implementation of many information systems is that it is assumed that it is the tool that cannot be changed and instead how things are done in the organisation must be changed to meet the tool. With enterprise resource planning systems this is seen as a good thing because these systems are meant to encapsulate “best practice”. But this assumption is highly questionable.

It also ignores the difficulty of forcing process change on an organisation, especially when the organisation is full of knowledge workers, as a university is. This difficulty means that process change is often ignored or not completed, which leads to inefficiencies and poor quality.

Need to mention the idea that the organisation now becomes captive to the system. When the system changes, often due to the vendor’s needs, not the organisation’s, the organisation must go through yet another round of difficult process change (or simply ignore it).

There is an alternative. Modern information technology can be implemented in a way that makes it significantly more malleable than previously thought. It is possible to mould the technology to fit the organisation. This makes it possible to enable a conversation between the organisation, its members and the technology, where both are modified to provide significantly improved processes. …this needs to be expanded.

The importance of process change

One way to build a comprehensive model is to place IT in a historical context. Economists and business historians agree that IT is the latest in a series of general-purpose technologies (GPTs), innovations so important that they cause jumps in an economy’s normal march of progress. Electric power, the transistor, and the laser are examples of GPTs that came about in the nineteenth and twentieth centuries. Companies can incorporate some general purpose technologies, like transistors, into products, and others, like electricity, into processes, but all of them share specific characteristics. The performance of such technologies improves dramatically over time. As people become more familiar with GPTs and let go of their old ways of thinking, they find a great many uses for these innovations. Crucially, general purpose technologies deliver greater benefits as people invent or develop complements that multiply the power, impact, and uses of GPTs. For instance, in 1970, fiber-optic cables enabled companies to employ lasers, which had already been in use for a decade, for data transmission. (McAfee 2006)
