Assembling the heterogeneous elements for (digital) learning

Category: design theory

Exploring knowledge reuse in design for digital learning: tweaks, H5P, constructive templates and CASA

The following has been accepted for presentation at ASCILITE’2019. It’s based on work described in earlier blog posts.


Abstract

Higher education is being challenged to improve the quality of learning and teaching while at the same time dealing with challenges such as reduced funding and increasing complexity. Design for learning has been proposed as one way to address this challenge, but a question remains around how to sustainably harness all the diverse knowledge required for effective design for digital learning. This paper proposes some initial design principles embodied in the idea of Context-Appropriate Scaffolding Assemblages (CASA) as one potential answer. These principles arose out of prior theory and work, contemporary digital learning practices and the early cycles of an Action Design Research process that has developed two digital ensemble artefacts employed in over 30 courses (units, subjects). Early experience with this approach suggests it can successfully increase the level of design knowledge embedded in digital learning experiences, identify and address shortcomings with current practice, and have a positive impact on the quality of the learning environment.

Keywords: Design for Learning, Digital learning, NGDLE.

Introduction

Learning and teaching within higher education continues to be faced with significant, diverse and on-going challenges that increase the difficulty of providing the high-quality learning experiences necessary to produce graduates of the standard society expects (Bennett, Lockyer, & Agostinho, 2018). Goodyear (2015) groups these challenges into four categories: massification and the subsequent diversification of needs and expectations; growing expectations of producing work-ready graduates; rapidly changing technologies, creating risk and uncertainty; and, dwindling public funding and competing demands on time. Reconceptualising teaching as design for learning has been identified as a key strategy to sustainably, and at scale, respond to these challenges in a way that offers improvements in learning and teaching (Bennett et al., 2018; Goodyear, 2015). Design for learning aims to improve learning processes and outcomes through the creation of tasks, environments, and social structures that are conducive to effective learning (Goodyear, 2015; Goodyear & Dimitriadis, 2013). The ability of universities to develop the capacity of teaching staff to enhance student learning through design for learning is of increasing financial and strategic importance (Alhadad, Thompson, Knight, Lewis, & Lodge, 2018).

Designing learning experiences that successfully integrate digital tools is a wicked problem: one that requires the utilisation of expert knowledge across numerous fields to design solutions that respond appropriately to the unique, incomplete, contextual, and complex nature of learning (Mishra & Koehler, 2008). The shift to teaching as design for learning requires different skills and knowledge, but also brings shifts in the conception of teaching and the identity of the teacher (Gregory & Lodge, 2015). Effective implementation of design for learning requires detailed understanding of pedagogy and design and places cognitive, emotional and social demands on teachers (Alhadad et al., 2018). The ability of teachers to deal with this load has a significant impact on learners, learning, and outcomes (Bezuidenhout, 2018). Academic staff report perceptions that expertise in digital technology and instructional design will be increasingly important to their future work, but that these are also the areas where they have the least competency and the highest need for training (Roberts, 2018). Helping teachers integrate digital technology effectively into learning and teaching has been at or near the top of issues facing higher education over several years (Dahlstrom, 2015). However, the nature of this required knowledge is often underestimated by common conceptions of the knowledge required by university teachers (Goodyear, 2015). Responding effectively will not be achieved through a single institutional technology, structure, or design, but instead will require an “amalgamation of strategies and supportive resources” (Alhadad et al., 2018, pp. 427-429). Approaches that do not pay enough attention to the impact on teacher workload run the risk of less than optimal learner outcomes (Gregory & Lodge, 2015).

Universities have adopted several different strategies to ameliorate the difficulty of successfully engaging in design for digital learning. For decades a common solution has been that course design, especially involving the adoption of new methods and technologies, should involve systematic planning by a team of people with appropriate expertise in content, education, technology and other required areas (Dekkers & Andrews, 2000). The use of collaborative design teams with an appropriate, complementary mix of skills, knowledge and experience mirrors the practice in other design fields (Alhadad et al., 2018). However, the prevalence of this practice in higher education has been low, both then (Dekkers & Andrews, 2000) and now. The combination of the high demand for and limited availability of people with the necessary knowledge means that many teaching staff miss out (Bennett, Agostinho, & Lockyer, 2017). A complementary approach is professional development that provides teaching staff with the necessary knowledge of digital technology and instructional design (Roberts, 2018). However, access to professional development is not always possible and funding for professional development and training has rarely kept up with the funding for hardware and infrastructure (Mathes, 2019). There has been work focused on developing methods, tools and repositories to help analyse, capture and encourage reuse of learning designs across disciplines and sectors (Bennett et al., 2017). However, it appears that design for learning continues to struggle to enter mainstream practice (Mor, Craft, & Maina, 2015) with design work undertaken by teachers apparently not including the use of formal methods or systematic representations (Bennett et al., 2017). There does, however, remain on-going demand from academic staff for customisable and reusable ideas for design (Goodyear, 2005): approaches that respond to academic concerns about workload and time (Gregory & Lodge, 2015) and do not require radical changes to existing work practices nor the development of complex knowledge and skills (Goodyear, 2005).

If there are limitations with current common approaches, what other approaches might exist? This leads to the research question of this study:

How might the diverse knowledge required for effective design for digital learning be shared and used sustainably and at scale?

An Action Design Research (ADR) process is being applied to develop one answer to this question. ADR is used to describe the design, development and evaluation of two digital artefacts – the Card Interface and the Content Interface – and the subsequent formulation of initial design principles that offer a potential answer to the research question. The paper starts by describing the research context and research method. The evolution of each of the two digital artefacts is then described. This experience is then abstracted into six design principles encapsulated in the concept of Context-Appropriate Scaffolding Assemblages (CASA). Finally, the conclusions and implications of this work are discussed.

Research context and method

This research project started in late 2018 within the Learning and Teaching (L&T) section of the Arts, Education and Law (AEL) Group at Griffith University. Staff within the AEL L&T section work with the AEL’s teachers to improve the quality of learning and teaching across about 1300 courses (units, subjects) and 68 programs (degrees). This work seeks to bridge the gaps between the macro-level institutional and technological vision and the practical, coal-face realities of teaching and learning (micro-level). In late 2018 the macro-level vision at Griffith University consisted of current and long-term usage of the Blackboard Learn Learning Management System (LMS) along with a recent decision to move to the Blackboard Ultra LMS. In this context, a challenge was balancing the need to help teaching staff continue to improve learning and teaching within the existing learning environment while at the same time helping the institution develop, refine, and achieve its new macro-level vision. It is within this context that the first offering of Griffith University’s Bachelor of Creative Industries (BCI) program would occur in 2019. The BCI is a future-focused program designed to attract creatives who aspire to a career in the creative industries by instilling an entrepreneurial mindset to engage and challenge the practice and business of the creative industries. Implementation of the program was supported through a year-long strategic project including a project manager and educational developer from the AEL L&T section working with a Program Director and other academic staff. This study starts in late 2018 with a focus on developing the course sites for the seven first year BCI courses. A focus of this work was to develop a striking and innovative design that mirrored the program’s aims and approach, and that could be maintained by the relevant teaching staff beyond the project’s protected niche. This raised the question of how to ensure that the design knowledge required to maintain a digital learning environment into the future would be available within the teaching team.

To answer this question an Action Design Research (Sein, Henfridsson, Purao, & Rossi, 2011) process was adopted. ADR is a merging of Action Research with Design Research developed within the Information Systems discipline. ADR aims to use the analysis of the continuing emergence of theory-ingrained, digital artefacts within a context as the basis for developing generalised outcomes, including design principles (Sein et al., 2011). A key assumption of ADR is that digital artefacts are not established or fixed. Instead, digital artefacts are ensembles that arise within a context and continue to emerge through development, use and refinement (Sein et al., 2011). A critical element of ADR is that the specific problem being addressed – the design of online learning environments for courses within the BCI program – is established as an example of a broader class of problems – how to sustainably and at scale share and reuse the diverse knowledge required for effective design for digital learning (Sein et al., 2011). This shift moves ADR work beyond design – as practised by any learning designer – to research intending to provide guidance on how others might address similar challenges in other contexts that belong to the broader class of design problems.

Figure 1 provides a representation of the ADR four-stage process and the seven principles on which ADR is based. Stages 1 through 3 represent the process through which ensemble digital artefacts are developed, used and evolved within a specific context. The next two sections of this paper describe the emergence of two artefacts developed for the BCI program as they cycled through the first three ADR stages numerous times. The fourth stage of ADR – Formalisation of Learning – aims to abstract the situated knowledge gained during the emergence of digital artefacts into design principles that provide guidance for addressing a class of field problems (Sein et al., 2011). The third section of this paper formalises the learning gained in the form of six initial design principles structured around the concept of Context-Appropriate Scaffolding Assemblages (CASA).


Figure 1 – ADR Method: Stages and Principles (adapted from Sein et al., 2011, p. 41)

Card Interface (artefact 1, ADR stages 1-3)

In response to the adoption of a trimester academic calendar, Griffith University encourages the adoption of a modular approach to course design. It is recommended that course profiles use modules to group and describe the teaching and learning activities. Subsequently, it has become common practice for this modular structure to be used within the course site using the Blackboard Learn content area functionality. Doing this well is not straightforward. Blackboard Learn has several functional limitations in legibility, design consistency, content arrangement and content adjustment that make it difficult to achieve quality visual design (Bartuskova, Krejcar, & Soukal, 2015). Usability analysis has also found that the Blackboard content area is inflexible, inefficient to use, and creates confusion for teaching staff regardless of their level of user experience (Kunene & Petrides, 2017). Overcoming these limitations requires levels of technical and design knowledge not typically held by teaching staff. Without this knowledge the resulting designs typically range from purely textual (e.g. the left-hand side of Figure 2) through to exemplars of poor design choices, including the likes of blinking text, poor layout, questionable colour choices, and inconsistent design. While specialist design staff can and have been used to provide the necessary design knowledge to implement contextually-appropriate, effective designs, such an approach does not scale. For example, any subsequent modification typically requires the re-engagement of the design staff.

To overcome this challenge the Blackboard Learn user community has developed a collection of related solutions (Abrahamson & Hillman, 2016; Plaisted & Tkachov, 2011) that use Javascript to package the necessary design knowledge into a form that can be used by teachers. Griffith University has for some time used one of these solutions, the Blackboard Tweaks building block (Plaisted & Tkachov, 2011) developed at the Queensland University of Technology. One of the tweaks offered by this building block – the Themed Course Table – has been widely used by teaching staff to generate a tabular representation of course modules (e.g. the right-hand side of Figure 2). However, experience has shown that the level of knowledge required to maintain and update the Themed Course Table can challenge some teaching staff. For example, re-ordering modules can be difficult for some, and the dates commonly used within the table must be manually added and then modified when copied from one offering to another. Finally, the inherently text-based and tabular design of the Themed Course Table is also increasingly dated. This was an important limitation for the Bachelor of Creative Industries. An alternative was required.

Figure 2 – Example Blackboard Learn Content Areas: Textual versus Themed Course Table

That alternative would use the same approach as the Themed Course Table to achieve a more appropriate outcome. The approach used by the Themed Course Table, other related examples from the Blackboard community, and the H5P authoring tool (Singh & Scholz, 2017) are contemporary examples of constructive templates (Nanard, Nanard, & Kahn, 1998). Constructive templates arose from the hypermedia discipline to encourage the reuse of design knowledge and have been found to reduce cost and improve consistency, reliability and quality while enabling content experts to author and maintain hypermedia systems (Nanard et al., 1998). Constructive templates encapsulate a specific collection of design knowledge required to scaffold the structured provision of necessary data and generate design instances. For example, the Themed Course Table supports the provision of data through the Blackboard content area interface. It then uses design knowledge embedded within the tweak to transform that data into a table. Given these examples and the author’s prior positive experience with the use of constructive templates within digital learning (Jones, 2011), the initial plan for the BCI Course Content area was to replace the Themed Course Table “template” with one adopting both a more contemporary visual design and a forward-oriented view of design for learning. Dimitriadis and Goodyear (2013) argue that design for learning needs to be more forward-oriented and consider what features will be required in each of the lifecycle stages of a learning activity. That is, as the Themed Course Table replacement is being designed, consider what specific features will be required during configuration, orchestration, and reflection and re-design.

The first step in developing a replacement was to explore contemporary web interface practices for a table replacement. Due to its responsiveness to different devices, highly visual presentation, and widespread use amongst Internet and social media services, a card-based interface was chosen. Based on the metaphor of a paper card, this interface brings together all data for a particular object with an option to add contextual information. Common practice with card-based interfaces is to embed into a card memorable images related to the card content (see Figure 3). Within the context of a course module overview such a practice has the potential to positively impact student cognition, emotions, interest, and motivation (Leutner, 2014; Mayer, 2017). A practical advantage of card-based interfaces is that their widespread use means there are numerous widely available resources to aid implementation. This was especially important to the BCI project team, as it did not have significant graphical and client-side design knowledge to draw upon.

Next, a prototype was developed to test how effectively a card-based interface would represent a course’s learning modules. An iterative process was used to translate features and existing practice from the Course Theme Table to a card-based interface. Feedback from other design staff influenced the evolution of the prototype. It also highlighted differences of opinion about some of the visual elements such as the size of the cards, the number of cards per row, and the inclusion of the date in the top left-hand corner. Eventually the prototype card interface was shown to the BCI teaching team for input and approval. With approval given, a collection of Javascript and HTML was created to transform a specifically formatted Blackboard content area into a card interface.
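
To give a concrete flavour of the approach, the sketch below shows roughly how such a Javascript transformation can work. It is illustrative only, not the actual Card Interface source, and all selectors, class names and markup are hypothetical.

```javascript
// Illustrative sketch only; not the actual Card Interface source.
// Assumes each item in a specifically formatted Blackboard content area
// holds a module title, link, image and short description. All selectors
// and class names here are hypothetical.
$(document).ready(function () {
  var cards = $('<div class="card-deck"></div>');

  $('#content_listContainer > li').each(function () {
    var title = $(this).find('h3').first().text().trim();
    var link  = $(this).find('a').first().attr('href');
    var image = $(this).find('img').first().attr('src') || 'placeholder.png';
    var blurb = $(this).find('.details').first().text().trim();

    // The design knowledge (layout, typography, imagery) is embedded here,
    // so authors only supply the data via the normal Blackboard interface.
    cards.append(
      '<div class="card">' +
      '<a href="' + link + '"><img class="card-img" src="' + image + '" alt=""></a>' +
      '<div class="card-body"><h3>' + title + '</h3><p>' + blurb + '</p></div>' +
      '</div>');
  });

  // Replace the standard Blackboard list with the card-based interface.
  $('#content_listContainer').replaceWith(cards);
});
```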

Figure 3 shows just two of the six different styles of card-based interface currently supported by the Card Interface. This illustrates a key feature of the original conception of constructive templates – separation of content from presentation (Nanard et al., 1998) – allowing for different representations of the same content. The left-hand image in Figure 3 and the inclusion of dates on some cards illustrate one way the Card Interface supports a forward-oriented approach to design. Initially, the module dates are specified during the configuration of a course site. However, the dates typically only apply to the initial offering of the course and will need to be manually changed for subsequent offerings. To address this the Card Interface knows the trimester weekly dates from the university academic calendar. Dates to be included on the Card Interface can then be provided using the week number (e.g. Week 1, Week 5 etc.). The Card Interface identifies the trimester a course offering belongs to and translates all week numbers into the appropriate calendar dates.

Figure 3 – Two early visualisations of the Card Interface
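
As a sketch of how the week-to-date translation described above might be implemented (the trimester identifiers, start dates and function name below are invented for illustration; the real Card Interface draws on the university academic calendar):

```javascript
// Illustrative sketch of translating "Week N" labels into calendar dates.
// The trimester start dates below are invented examples, not Griffith's
// actual academic calendar.
var TRIMESTER_STARTS = {
  '2019-T1': new Date(2019, 1, 25), // Monday of Week 1 (JS months are 0-based)
  '2019-T2': new Date(2019, 5, 10),
  '2019-T3': new Date(2019, 9, 28)
};

// Convert a week number within a given trimester to the Monday of that week.
function weekToDate(trimester, weekNumber) {
  var start = TRIMESTER_STARTS[trimester];
  var date = new Date(start.getTime());
  date.setDate(date.getDate() + (weekNumber - 1) * 7);
  return date;
}

// A card labelled "Week 5" in a course identified as 2019 Trimester 1
// would then display Monday 25 March 2019.
weekToDate('2019-T1', 5);
```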

Despite being designed for the BCI program, the Card Interface was first used outside it. In late 2018 a librarian working on a Study Skills site learned of the Card Interface from a colleague. Working without any additional support, the librarian was able to use the Card Interface to represent 28 modules spread over 12 content areas. Implementation of the Card Interface in the BCI courses started by drawing on existing learning module content from course profiles. Google Image Search was used to identify visually striking images that could be associated with each module (e.g. the left-hand side of Figure 3). The Card Interface was also used on the BCI program’s Blackboard site. However, the program site had a broader purpose, leading to different design decisions and the adoption of a different style of card-based interface (see the right-hand image in Figure 3).

Anecdotal feedback from BCI staff and students suggests that the initial implementation and use of the Card Interface was positive. In addition, the visual improvements offered by the Card Interface over both the standard Blackboard Content Area and the Themed Course Table tweak led to interest from other courses and programs. As of late July 2019, the Card Interface has been used in over 55 content areas in over 30 Blackboard sites. Adoption has occurred at both the program and individual course level, driven by exposure within the AEL L&T team or by academics seeing it and wanting it. Widespread use has generated different requirements, leading to creative uses of the Card Interface (e.g. the use of animated GIFs as card images) and the addition of new functionality (e.g. the ability to embed a video, instead of an image). Requirements from another strategic project led to a customisation of the Card Interface to provide an overview of assessment items, rather than modules.

With its adoption in multiple courses and use for different purposes, the Card Interface appears to have successfully encapsulated a collection of design knowledge into a form that can be readily adopted and adapted. Use of that knowledge has improved the resulting design. Contributing factors to this success include: building on existing practice; providing advantages above and beyond existing practice; and the capability for both teaching and support staff to rapidly customise the Card Interface. Further work is required to gain greater and more objective insight into the impact of the Card Interface on the student experience and outcomes of learning and teaching.

Content Interface (artefact 2, ADR stages 1-3)

The Card Interface provides a visual overview of course modules. The next challenge for the BCI project was the design, implementation and support of the learning activities and resources that form the content of those course modules: a task that is inherently more creative and important, typically involves significantly more content, and must be completed using the same, problematic Blackboard interface. This requirement is known to encourage teaching staff to avoid the interface by using offline documents and slides (Bartuskova et al., 2015). This is despite evidence that failing to leverage affordances of the online environment can create a disengaging student experience (Stone & O’Shea, 2019) and that course content is a significant influence on students’ perceptions of course quality (Peltier, Schibrowsky, & Drago, 2007). Adding to the difficulty, the BCI teaching staff had little or no recent experience with Blackboard, and contracted staff did not have access to it at all. This raised the question of how to support the design, implementation and re-design of effective modular, online learning resources and activities for the BCI.

Observation of, and experience with, the Blackboard interface identified three main issues. First, staff did not know how to use, or did not have access to, the Blackboard content interface. Second, the Blackboard authoring interface provides limited authoring functionality. For example, beyond issues identified in the literature (Bartuskova et al., 2015; Kunene & Petrides, 2017) there is no support for standard authoring functionality such as grammar checking, reference management, commenting, and version control. Lastly, once the content is placed within Blackboard the user interface is limited and quite dated. On the plus side, the Blackboard interface does provide the ability to integrate a variety of different activities such as discussion forums, quizzes etc. The intent was to address the issues while at the same time retaining the ability to use the Blackboard activities.

For better or worse, the most common content creation tool for most University staff is Microsoft Word. Anecdotal observation suggests that many staff have adopted the practice of drafting content in Word before copying and pasting it into Blackboard. The Content Interface is designed to transform Word documents into good quality online learning activities and resources (see Figure 4). This is done by using an open source converter to semantically transform Word to HTML that is then copied and pasted into Blackboard. A collection of design knowledge embedded into Javascript then transforms the HTML in several ways. Semantic elements such as activities and readings are visually transformed. All external web links are modified to open in a new tab to avoid a common Blackboard error. The document is transformed into an accordion interface with a vertical list of headings that can be clicked on to display associated content. This progressive reveal: allows readers to get an overall picture of the module before focusing on the details; provides greater control over how they engage with the content; and is particularly useful on mobile platforms (Budiu, 2015; Loranger, 2014).

Figure 4 – Example Module as a Word document and in the Content Interface in Blackboard
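
A minimal sketch of the two transformations described above, assuming jQuery and the jQuery UI accordion widget (the paper later notes jQuery underpins the accordion); the container selector and heading level are hypothetical:

```javascript
// Illustrative sketch only; the actual Content Interface does more
// (activity and reading styling, error handling, etc.).
$(document).ready(function () {
  // Open external links in a new tab to avoid a common Blackboard error.
  $('#contentHolder a[href^="http"]').attr('target', '_blank');

  // The converted Word document arrives as flat HTML: each h2 heading
  // followed by its paragraphs. Wrap each heading's content in a div so
  // that jQuery UI's accordion (header/panel pairs) can be applied.
  $('#contentHolder h2').each(function () {
    $(this).nextUntil('h2').wrapAll('<div></div>');
  });

  $('#contentHolder').accordion({
    header: 'h2',
    collapsible: true,
    active: false,           // start with every module section closed
    heightStyle: 'content'   // size each panel to its own content
  });
});
```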

To date, the Content Interface has been used to develop over 75 modules in 13 different Blackboard sites, most of these within the seven BCI courses. Experience using the still incomplete Content Interface suggests that there are significant advantages. For example, Library staff have adopted it to create research skills modules that are used in multiple course sites. Experience in the BCI shows that sharing documents through OneDrive and using comments and track changes enables the Word documents to become boundary objects helping the course development team co-create the module learning activities and resources. Where staff are comfortable with Word as an authoring environment, the authoring process is more efficient. The resulting accordion interface offers an improvement over the standard Blackboard interface. However, creating documents with Word is not without its challenges, especially the use of Word styles and templates. Also, the extra steps required can be perceived as problematic when minor edits need to be made, and when direct editing within Blackboard is perceived to be easier and quicker, especially for time-poor teaching staff. Better integration between Blackboard and OneDrive will help. More advantage is possible when the Content Interface is further contextually customised to offer forward-oriented functionality specific to the module learning design.

Initial Design Principles (ADR stage 4)

This section engages with the final stage of the ADR process – formalisation of learning – to produce design principles that help provide actionable insight for practitioners. The following six design principles help guide the development of Context-Appropriate Scaffolding Assemblages (CASA) that help to sustainably and at scale share and reuse the design knowledge necessary for effective design for digital learning. The design principles are grouped using the three components of the CASA acronym.

Context-Appropriate

1. A CASA should address a specific contextual need within a specific activity system.
The highest quality learning and teaching involves the development of appropriate context-specific approaches (Mishra & Koehler, 2006). A CASA should not be implemented at an institutional level. Such top-down projects are unable to pay enough attention to contextually specific needs as they aim for a solution that works in all contexts. Instead, a CASA should be designed in response to a specific need arising in a course or a small group of related courses. Following Ellis & Goodyear (2019) the focus in designing a CASA should not be the needs of individual students, but instead on the whole activity system. That is, consideration should be given to the complex assemblage of learners, teachers, content, pedagogy, technology, organisational structures and the physical environment with an emphasis on encouraging students to successfully engage in intended learning activities. For example, both the Card and Content Interfaces arose from working with a group of seven courses in the BCI program as the result of two separate, but related, needs. While the issues addressed by these CASA apply to many courses, the ability to develop and test solutions at a small scale was beneficial. Rather than a focus primarily on individual learners, the solutions were heavily influenced by an analysis of the available tools (e.g. Blackboard Tweaks, Office365), practices (e.g. modularisation and learning activities described in course profiles), and other components of the activity systems.

2. CASA should be built using and result in generative technologies. To maximise and maintain contextual appropriateness, a CASA must be able to be designed and redesigned as easily as possible. Zittrain (2008) labels technologies as generative or sterile. Generative technologies have a “capacity to produce unanticipated change through unfiltered contributions from broad and varied audiences” (Zittrain, 2008, p. 70). Sterile technologies prevent this. Generative technologies enable convivial systems where people can be “actively engaged in generating creative extensions to the artefacts given to them” (Fischer & Girgensohn, 1990, p. 183). It is the end-user modifiability of generative technology that is crucial to knowledge-based design environments and enables response to unanticipated, contextual requirements (Fischer & Girgensohn, 1990). Implementing CASA using generative technologies allows easy design for specific contexts. Ensuring that CASA are implemented as generative technologies enables easy redesign for other contexts. Generativity, like other technological affordances, arises from the relationship between the technology and the people using the technology. Not only is it necessary to use technology that is easier to modify, it is necessary to be able to draw upon appropriate technological skills. This could mean having people with those technological skills available to educational design teams. It could also mean having a network of intra- and inter-institutional CASA users and developers collaboratively sharing CASA and the knowledge required for use and development, like that available in the H5P community (Singh & Scholz, 2017).

For example, development of the Card and Content Interfaces was only possible due to Blackboard Learn supporting the embedding of Javascript. The value of this generative capability is evident through the numerous projects (Abrahamson & Hillman, 2016; Plaisted & Tkachov, 2011) from the Blackboard community that leverage this capability; a capability that has been removed in Blackboard’s next-generation LMS, Ultra. The use of Office365 by the Content Interface illustrates the rise of digital platforms that are generative and raise questions that challenge how innovation through digital technologies is enabled and managed (Yoo, Boland, Lyytinen, & Majchrzak, 2012). Using the generative jQuery library to implement the Content Interface’s accordion enables modification of the accordion look and feel through use of jQuery’s theme roller and library of existing themes. The separation of content from presentation in the Card Interface has enabled at least six redesigns for different purposes. This work was possible because the BCI development team had ready access to the necessary technological skills and was able to draw upon a wide collection of open source software and online support.

3. CASA development should be strategically aligned and supported. Services to support design for learning within Australian universities are limited and insufficient for the demand (Bennett et al., 2017). Services capable of supporting the development of CASA are likely to be more limited. Hence appropriate decisions need to be made about how and what CASA are designed, re-designed and supported. Resources used to develop CASA are best allocated in line with institutional strategic projects. CASA development should proceed with consideration to the “manageably small set of particularly valued activity systems” (Ellis & Goodyear, 2019, p. 188) within the institution and be undertaken with institutionally approved and supported generative technologies. For example, the Card and Content Interfaces arose from an AEL strategic project. Both interfaces were focused on providing contextually-appropriate customisation and support for the institutionally important activity system of creating modular learning activities and resources. Where possible these example CASA have used institutionally approved digital technologies (e.g. OneDrive and Blackboard). The sterile nature of existing institutional infrastructure has made it necessary to use more generative technologies (e.g. Amazon Web Services) that are neither officially approved nor supported. However, the approach used does build upon an approach from an existing institutionally approved technology – Blackboard Tweaks (Plaisted & Tkachov, 2011).

Scaffolding

4. CASA should package appropriate design knowledge to enable (re-)use by teachers and students. Drawing on ideas from constructive templates (Nanard et al., 1998), CASA should package the diverse design knowledge required to respond to a contextually-appropriate need in a way that allows this design knowledge to be easily reused in different instances. CASA enable the sustainable reuse of contextually applied design knowledge in learning activity systems and subsequently reduce cost and improve quality and consistency. For example, the Card Interface combines the knowledge from web design and multimedia learning research (Leutner, 2014; Mayer, 2017) in a way that has allowed teaching staff to generate a visual overview of the modules in numerous course sites. The Content Interface combines existing knowledge of the Microsoft Word ecosystem with web design knowledge to improve the design, use and revision of modular content.

5. CASA should actively support a forward-oriented approach to design for learning. To “thrive outside of the protective niches of project-based innovation” (Dimitriadis & Goodyear, 2013, p. 1) the design of a CASA must not focus only on initial implementation. Instead, CASA design must explicitly consider and include functionality to support the configuration, orchestration, and reflection and re-design of the CASA. For example, the Card Interface leverages contextual knowledge to enable dates to be specified independent of the calendar to automate re-design for subsequent course offerings. As CASA tend to embody a learning design, it should be possible to improve each CASA’s support for orchestration by implementing checkpoint and process analytics (Lockyer, Heathcote, & Dawson, 2013) specific to the CASA’s embedded learning design.

Assemblages

6. CASA are conceptualised and treated as contextual assemblages. Like all technologies, CASA are assemblies of other technologies (Arthur, 2009), where technologies are understood to include techniques such as organisational processes and pedagogies, as well as hardware and software. But a contextual assemblage is more than just technology. It includes consideration of and connections with the policies, practices, funding, literacies and discourse across levels from the societal down through the sector, organisational, personal, individual, formal and informal. These are the elements that make up the mess and nuance of the context, where the practice of educational technology gets complex (Cottom, 2019). A CASA must be generative in order to be designed and re-designed to respond to this contextual complexity. A CASA needs to be inherently heterogeneous, ephemeral, local, and emergent. This need is opposed and ill-suited to the dominant rational system view underpinning common digital learning practice, which sees technologies as planned, structured, consistent, deterministic, and systematic. Instead, connecting back to design principle one, CASA should be designed in recognition of the importance and complex intertwining of the human, social and organisational elements in any attempt to use digital technologies, playing down the usefulness of distinctions between developer and user, or pedagogy and technology. For example, the Card Interface does not use the Lego approach to assembly that informs the Next Generation Digital Learning Environment (NGDLE) (Brown, Dehoney, & Millichap, 2015) and underpins technologies such as the Learning Tools Interoperability (LTI) standard. Instead of combining clearly distinct blocks with clearly defined connectors, the Card and Content Interfaces are intertwined with and modify the Blackboard user interface to connect with the specifics of context. This suggests that the Lego approach is useful, perhaps even necessary, but not sufficient.

Conclusions, Implications, and Further Work

Universities are faced with the strategically important question of how to sustainably and at scale leverage the knowledge required for effective design for digital learning. The early stages of an Action Design Research (ADR) process have been used to formulate one potential answer in the form of six design principles encapsulated in the idea of Context-Appropriate Scaffolding Assemblages (CASA). To date, the ADR process has resulted in the development and use of two prototype CASA within a suite of seven courses and, within six months, their subsequent adoption in another 24 courses. CASA draw on the idea of constructive templates to capture diverse design knowledge in a form that enables use of that knowledge by teachers and students to effectively address contextually specific needs. By adopting a forward-oriented view of design for learning, CASA offer functionality to support configuration, orchestration, and reflection and re-design in order to encourage on-going use beyond the protected project niche of initial implementation. The use of generative technologies and an assemblage perspective enables CASA development to be driven by and re-designed to fit the specific needs of different activity systems and contexts. Such work will be most effective when it is strategically aligned and supported with the aim of supporting and refining institutionally valued activity systems.

Use of the Card and Content Interfaces within and beyond the original project suggests that these CASA have successfully encapsulated the necessary design knowledge to address shortcomings with current practice and had a positive impact on the quality of the digital learning environment. But it’s early days. These CASA can be improved by more completely following the CASA design principles. For example, the Content Interface currently offers only generic support for module design. Significantly greater benefits would arise from customising the Content Interface to support specific learning designs and provide contextually appropriate forward-oriented functionality. More experience is needed to provide insight into how this can be done effectively. Further work is required to establish if, how and what impact the use of CASA has on the quality of the learning environment and the experience and outcomes of both learning and teaching. Further work could also explore the questions raised by the CASA design principles about existing digital learning practice. The generative principle raises questions about whether moves away from leveraging the generativity of web technology – such as the design of Blackboard Ultra and the increasing focus on mobile apps – will make it more difficult to integrate contextually specific design knowledge. Do reported difficulties accessing student engagement data with H5P activities (Singh & Scholz, 2017) suggest that the H5P community could fruitfully pay more attention to supporting a forward-oriented design approach? Does the assemblage principle point to potential limitations with some conceptualisations and implementations of the next generation of digital learning environments?

References

Abrahamson, A., & Hillman, D. (2016). Customize Learn with CSS and Javascript injection. Presented at BbWorld 16, Las Vegas, NV. Retrieved from https://community.blackboard.com/docs/DOC-2103

Alhadad, S. S. J., Thompson, K., Knight, S., Lewis, M., & Lodge, J. M. (2018). Analytics-enabled Teaching As Design: Reconceptualisation and Call for Research. Proceedings of the 8th International Conference on Learning Analytics and Knowledge, 427–435.

Arthur, W. B. (2009). The Nature of Technology: what it is and how it evolves. New York, USA: Free Press.

Bartuskova, A., Krejcar, O., & Soukal, I. (2015). Framework of Design Requirements for E-learning Applied on Blackboard Learning System. In M. Núñez, N. T. Nguyen, D. Camacho, & B. Trawiński (Eds.), Computational Collective Intelligence (pp. 471–480). Springer International Publishing.

Bennett, S., Agostinho, S., & Lockyer, L. (2017). The process of designing for learning: understanding university teachers’ design work. Educational Technology Research & Development, 65(1), 125–145.

Bennett, S., Lockyer, L., & Agostinho, S. (2018). Towards sustainable technology-enhanced innovation in higher education: Advancing learning design by understanding and supporting teacher design practice. British Journal of Educational Technology, 49(6), 1014–1026.

Bezuidenhout, A. (2018). Analysing the Importance-Competence Gap of Distance Educators with the Increased Utilisation of Online Learning Strategies in a Developing World Context. International Review of Research in Open and Distributed Learning, 19(3), 263–281.

Brown, M., Dehoney, J., & Millichap, N. (2015). The Next Generation Digital Learning Environment: A Report on Research (p. 11). Louisville, CO: EDUCAUSE.

Budiu, R. (2015). Accordions on Mobile. Retrieved July 18, 2019, from Nielsen Norman Group website: https://www.nngroup.com/articles/mobile-accordions/

Cottom, T. M. (2019). Rethinking the Context of Edtech. EDUCAUSE Review, 54(3). Retrieved from https://er.educause.edu/articles/2019/8/rethinking-the-context-of-edtech

Dahlstrom, E. (2015). Educational Technology and Faculty Development in Higher Education. Retrieved from ECAR website: https://library.educause.edu/resources/2015/6/educational-technology-and-faculty-development-in-higher-education

Dekkers, J., & Andrews, T. (2000). A meta-analysis of flexible delivery in selected Australian tertiary institutions: How flexible is flexible delivery? In L. Richardson & J. Lidstone (Eds.), Proceedings of ASET-HERDSA 2000 Conference (pp. 172-182).

Dimitriadis, Y., & Goodyear, P. (2013). Forward-oriented design for learning: illustrating the approach. Research in Learning Technology, 21, 1–13.

Ellis, R. A., & Goodyear, P. (2019). The Education Ecology of Universities: Integrating Learning, Strategy and the Academy. Routledge.

Fischer, G., & Girgensohn, A. (1990). End-user Modifiability in Design Environments. Proceedings of the SIGCHI Conference on Human Factors in Computing Systems, 183–192.

Goodyear, P. (2005). Educational design and networked learning: Patterns, pattern languages and design practice. Australasian Journal of Educational Technology, 21(1). https://doi.org/10.14742/ajet.1344

Goodyear, P. (2015). Teaching As Design. HERDSA Review of Higher Education, 2, 27–59.

Goodyear, P., & Dimitriadis, Y. (2013). In medias res: reframing design for learning. Research in Learning Technology, 21, 1–13.

Gregory, M. S. J., & Lodge, J. M. (2015). Academic workload: the silent barrier to the implementation of technology-enhanced learning strategies in higher education. Distance Education, 36(2), 210–230.

Jones, D. (2011). An Information Systems Design Theory for E-learning (PhD, Australian National University). Retrieved from https://openresearch-repository.anu.edu.au/handle/1885/8370

Kunene, K. N., & Petrides, L. (2017). Mind the LMS Content Producer: Blackboard usability for improved productivity and user satisfaction. Information Systems, 14.

Leutner, D. (2014). Motivation and emotion as mediators in multimedia learning. Learning and Instruction, 29, 174–175.

Lockyer, L., Heathcote, E., & Dawson, S. (2013). Informing Pedagogical Action: Aligning Learning Analytics With Learning Design. American Behavioral Scientist, 57(10), 1439–1459.

Loranger, H. (2014). Accordions for Complex Website Content on Desktops. Retrieved July 18, 2019, from Nielsen Norman Group website: https://www.nngroup.com/articles/accordions-complex-content/

Mathes, J. (2019). Global quality in online, open, flexible and technology enhanced education: An analysis of strengths, weaknesses, opportunities and threats. Retrieved from International Council for Open and Distance Education website: https://www.icde.org/knowledge-hub/report-global-quality-in-online-education

Mayer, R. E. (2017). Using multimedia for e-learning. Journal of Computer Assisted Learning, 33(5), 403–423.

Mishra, P., & Koehler, M. (2006). Technological pedagogical content knowledge: A framework for teacher knowledge. Teachers College Record, 108(6), 1017–1054.

Mor, Y., Craft, B., & Maina, M. (2015). Introduction – Learning Design: Definitions, Current Issues and Grand Challenges. In M. Maina, B. Craft, & Y. Mor (Eds.), The Art & Science of Learning Design (pp. ix–xxvi). Rotterdam: Sense Publishers.

Nanard, M., Nanard, J., & Kahn, P. (1998). Pushing Reuse in Hypermedia Design: Golden Rules, Design Patterns and Constructive Templates. 11–20. ACM.

Peltier, J. W., Schibrowsky, J. A., & Drago, W. (2007). The Interdependence of the Factors Influencing the Perceived Quality of the Online Learning Experience: A Causal Model. Journal of Marketing Education; Boulder, 29(2), 140–153.

Plaisted, T., & Tkachov, N. (2011). Blackboard Tweaks: Tools for Academics, Designers and Programmers. Retrieved July 2, 2019, from http://tweaks.github.io/Tweaks/index.html

Roberts, J. (2018). Future and changing roles of staff in distance education: A study to identify training and professional development needs. Distance Education, 39(1), 37–53.

Sein, M. K., Henfridsson, O., Purao, S., & Rossi, M. (2011). Action Design Research. MIS Quarterly, 35(1), 37–56.

Singh, S., & Scholz, K. (2017). Using an e-authoring tool (H5P) to support blended learning: Librarians’ experience. In H. Partridge, K. Davis, & J. Thomas (Eds.), Me, Us, IT! Proceedings ASCILITE2017: 34th International Conference on Innovation, Practice and Research in the Use of Educational Technologies in Tertiary Education (pp. 158–162).

Stone, C., & O’Shea, S. (2019). Older, online and first: Recommendations for retention and success. Australasian Journal of Educational Technology, 35(1). https://doi.org/10.14742/ajet.3913

Yoo, Y., Boland, R. J., Lyytinen, K., & Majchrzak, A. (2012). Organizing for Innovation in the Digitized World. Organization Science, 23(5), 1398–1408.

Zittrain, J. (2008). The Future of the Internet–And How to Stop It. Yale University Press.

Explaining ISDT and its place in the research process

The following is an initial, under-construction attempt to explain (first to myself) what role an Information Systems Design Theory (ISDT) plays in the research process, working my way toward a decent explanation for PhD students.

It does this by linking the components of an ISDT with one explanation of a research project, hopefully connecting the better known concept (research project) with the less known (ISDT). It also uses as an example the ISDT for emergent university e-learning that was developed as part of my PhD thesis.

Anatomy of an ISDT?

The following uses Gregor and Jones’ (2007) specification of a design theory, as summarised in the following table adapted from Gregor and Jones (2007, p. 322). Reading the expanded descriptions of each of the components of an ISDT in Gregor and Jones (2007) will likely be a useful companion to the following.

Core components

Purpose and scope (the causa finalis): “What the system is for,” the set of meta-requirements or goals that specifies the type of artifact to which the theory applies and in conjunction also defines the scope, or boundaries, of the theory.
Constructs (the causa materialis): Representations of the entities of interest in the theory.
Principle of form and function (the causa formalis): The abstract “blueprint” or architecture that describes an IS artifact, either product or method/intervention.
Artifact mutability: The changes in state of the artifact anticipated in the theory, that is, what degree of artifact change is encompassed by the theory.
Testable propositions: Truth statements about the design theory.
Justificatory knowledge: The underlying knowledge or theory from the natural or social or design sciences that gives a basis and explanation for the design (kernel theories).

Additional components

Principles of implementation (the causa efficiens): A description of processes for implementing the theory (either product or method) in specific contexts.
Expository instantiation: A physical implementation of the artifact that can assist in representing the theory both as an expository device and for purposes of testing.

Problem and question

This explanation assumes that all research starts with a problem and a question. It’s important (as for all research) that the problem/question be interesting and important. The book “Craft of research” talks about identifying your research question.

Problem

An ISDT is generated through design-based research (DBR), which for me at least means that it tends to deal with “how” research questions. For example, here is the research question from my thesis:

How to design, implement and support an information system that effectively and efficiently supports e-learning within an institution of higher education?

The challenge here is to develop some knowledge that helps answer this type of question. Baker (2014) talks a bit more about research questions in DBR.

Hopefully the question driving my thesis research is clear from the above. The thesis includes additional background to establish that the related problem (developing IS for e-learning in higher ed) is an important one worthy of research.

You want to help someone/many people do something?

DBR aims to help someone do something. The aim of an ISDT is to provide guidance to someone to build an information system that solves an identified problem.

But you’re not interested in the technology?

Avison and Eliot (2006) suggest that in comparison to other IT-related disciplines (e.g. computer science, computer engineering) the information systems discipline focuses more on the application of technology and the subsequent interactions between people/organisations (soft issues) and the technology. It’s not just focused on the technologies. They include the following quote from Lee’s (2001) editorial in MISQ:

that research in the information systems field examines more than just the technological system, or just the social system, or even the two side by side; in addition, it investigates the phenomena that emerge when the two interact…the emergent socio-technical phenomena

By answering the research question, you’re hoping that people can develop an information system.

Problem & solution (IS)

But you’re not a consultant

The problem with the last image is that it’s a bit like what a consultant does. Someone has a problem, you come in and create something that solves their problem. DBR is not consultancy. It aims to develop generalised knowledge that can inform the development of multiple different information systems. Different people in different contexts should be able to develop information systems appropriate to their requirements.

Problem & multiple solutions

How do they know how to do that?

What’s missing in the last diagram is the knowledge that people will use to develop any information system. This knowledge is the answer to the research question/problem that you aim to develop. This is where the ISDT enters the picture. It is a representation of that knowledge that people use to develop an appropriate information system.

Problem and how

The ISDT encapsulates knowledge about how to build a particular type of information system that effectively answers the research question/problem, i.e. it encapsulates knowledge that serves a particular purpose and scope.

But an ISDT is not just a black/grey box. It has components as outlined in the table above.

Purpose and scope

Johanson and Hasselbring (2018) argue that one of the problems with computer scientists and software engineers is that they wish to focus on general principles at the cost of specific principles. Hence such folk develop artifacts like the software development life cycle (and lots more) that are deemed to be general ways to develop information systems. See Vessey (1997) for more on this, including the lovely quote from Plauger (1993):

If you believe that one size fits all, you are living in a panty-hose commercial

Assuming that your research question/problem is not terribly generic, it should be possible for you to identify the purpose and scope for your ISDT. For example, here’s the summary of the purpose and scope from my ISDT for emergent university e-learning:

  1. Provide ICT functionality to support learning and teaching within a university environment (e-learning).
  2. Seek to provide context specific functionality that is more likely to be adopted and integrated into everyday practice for staff and students.
  3. Encourage and enable learning about how e-learning is used. Support and subsequently evolve the system based on that learning.

Your research started with a problem (turned into a question). That problem should have defined something important for someone; something important that your ISDT is going to provide the knowledge they need to solve by building an information system, i.e. something that is not just hardware and software, but considers those elements, the associated soft issues and the interactions between them.

Your ISDT – in the form of purpose and scope – overlaps/connects with your research question/problem

Problem and purpose

But as illustrated by the example research question and purpose and scope used here, they are different. My original research question is fairly generic. The purpose and scope has a fair bit more detail. Where did that detail come from?

Justificatory knowledge informs purpose and scope

An ISDT should include justificatory knowledge: knowledge and theory from the natural, social or design sciences that informs how you understand and respond to the research problem. The example purpose and scope in the previous section includes explicit mention of ‘context specific functionality’ and ‘integrated into everyday practice’ (amongst others). Each of these narrows the specific purpose and scope. This ISDT is not just about developing a system that will support L&T in a tertiary setting. It’s also quite explicit that the system should encourage adoption and evolution.

This particular narrowing is informed by a variety of theoretical insights that form part of the justificatory knowledge of my ISDT: theoretical insights drawn from end-user development and distributed cognition. This is one of the reasons why my ISDT is for emergent university e-learning.

This gives some insight into how a different ISDT for university e-learning could take a different approach (informed by different justificatory knowledge). e.g. I’d argue that current approaches to university e-learning tacitly have the purpose (perhaps priority) of being efficient and achieving institutional goals, rather than encouraging adoption, contextual functionality, and emergence.

Design research aims to make use of existing knowledge and theory to construct artefacts that improve some situation (Simon, 1996). How you understand the “some situation” should be informed by your justificatory knowledge.

Justificatory knowledge

But what do they do?

You want your ISDT to help people design effective/appropriate information systems. But how do people know what to do to develop a good information system? Where’s that knowledge in the ISDT to help them do this?

This is where the design principles enter the picture. There are two sets of design principles:

  1. principles of form and function; and

    i.e. what are you going to build? What features does it have?

  2. principles of implementation.

    i.e. how should you build it? What steps should you follow to efficiently and effectively put the IS in place?

These principles should be abstract enough that they can inform the design of different information systems. They should also be directly connected to your ISDT’s justificatory knowledge. You don’t just pull them out of the air, or use them because that’s the way you’ve always done it.

Principles

For the emergent university e-learning ISDT there were 13 principles of form and function, organised into three groups with explicit links to justificatory knowledge:

  1. Comprehensive, integrated and independent services – software wrappers
  2. Adaptive and inclusive system architecture – system of systems; best of breed; service orientated architectures; end-user development; micro-kernel architecture
  3. Scaffolding, context-sensitive conglomerations – constructive templates; end-user development; distributed cognition.

And there were 11 principles of implementation, organised into another three groups, again each explicitly linked to justificatory knowledge:

  1. Multi-skilled, integrated development and support team – job rotation; multi-skilling; organisational learning; situated learning/action; communities of practice; knowledge-based theory of organisational capability
  2. An adopter-focused, emergent development process – emergent/ateleological development
  3. A supportive organisational context – organisational fit; strategic alignment; bricolage; mindful innovation

What can they expect to happen?

The premise of an ISDT is that if someone is able to successfully follow the design principles, then they can expect to develop an IS that solves the particular problem in a way that is better than other systems.

If people follow these principles then, based on your justificatory knowledge, you are claiming that certain things will happen. These are the testable propositions.

Propositions

The ISDT for emergent university e-learning has five testable propositions:

  1. Be able to provide the functionality and services necessary to support university e-learning.
  2. Over time provide a set of functionality that is specific to the institutional context.
  3. Over time show increasing levels and quality of adoption by staff and students.
  4. Better enable and encourage the university, its e-learning information systems, and its staff and students to observe and respond to new learning about and insight into the design, support and use of university e-learning.
  5. Through the combination of the above, provide a level of differentiation and competitive advantage to the host institution.

Each of these is based in some way on justificatory knowledge, as instantiated by the design principles.

The first testable proposition is essentially that the resulting IS will be fit for purpose/address the necessary requirements. The remaining propositions identify how the resulting IS will be better than others. Propositions that can be tested once such an IS is instantiated.

How do you/they know it works?

Just because you can develop a theory doesn’t mean it will work. However, if you have a working version of an information system designed using the ISDT (i.e. an instantiation of the ISDT) then it becomes a bit easier to understand. An instantiation also helps identify issues with the ISDT, which can then be refined (more on this below). An instantiation can also help explain the ISDT.

Constructs

In the above, the arrows between the instantiation, the design principles, and the testable propositions are intended to indicate how the instantiation should be informed/predicted by the principles and propositions, but also how the experience of building and using the instantiation can influence the principles (more on this below) and propositions.

In spite of the argument above, not everyone assumes that an instantiation is necessary (it’s listed as an additional component in the ISDT specification). As the previous paragraphs suggest, I think an instantiation is a necessary component.

The emergent university e-learning ISDT was based on a system called Webfuse used at a particular University from 1996 through 2010 (or so).

Important concepts

The research problem (and scope) and the justificatory knowledge all embody a particular perspective on the world. Rather than trying to understand everything about, and every perspective on, the research problem (beyond the scope of mere mortals), an ISDT focuses attention on a particular view of the research problem. Certain elements are deemed to be more important and more interesting to a particular ISDT than others. These more interesting elements become the constructs of the ISDT: they define the meaning of these interesting elements and become some of the fundamental building blocks of the ISDT.

The following image perhaps doesn’t capture the importance of constructs.
ISDT and research process

The following table summarises the constructs from the ISDT for emergent university e-learning

Construct – Definition

  • e-learning – The use of information and communications technology to support and enhance learning and teaching in higher education institutions (OECD, 2005c)
  • Service – An e-learning related function or application such as a discussion forum, chat room, online quiz etc.
  • Package – The mechanism through which all services are integrated into and managed within the system.
  • Conglomerations – Groupings of services that provide scaffolding and context-specific support for the performance of high-level e-learning tasks (e.g., creating a course site with a specific design; using a discussion forum to host debate; using blogs to encourage reflection)

What happens next?

Lee’s (2001) quote above mentioned “emergent socio-technical phenomena”: the idea that when technology meets society something different emerges, and keeps on emerging. Not just because of the combination of these two complex categories of interest, but also because digital technology itself is protean/mutable. Digital technologies change rapidly as the underlying technology evolves, but they are also inherently protean: they can be programmed.

The importance of this feature means that artifact mutability – how you expect an IS built following your ISDT will/should/could change – is a core component of an ISDT.

Mutability

For an ISDT for emergent e-learning, artifact mutability (what happens next) was a key consideration, described as:

As an ISDT for emergent e-learning systems the ability to learn and evolve in response to system use is a key part of the purpose of this ISDT. It is actively supported by the principles of form and function, as well as the principles of implementation.

i.e. mutability was a first order consideration in this ISDT, which actively tried to encourage and enable it in a positive way through both sets of principles.

How do you do it? (Research approach)

The above has tried to explain the components of an ISDT by starting with a typical research question/problem. It doesn’t address the difficult question of how you formulate an ISDT. In particular, how do you do it with some rigour, practical relevance, etc.? Answering these questions is somewhat independent of the components of an ISDT. Whatever research approach you use should appropriately produce the various components, but how you develop them is separate from the ISDT itself.

My thesis work followed the iterative action research process from Markus et al. (2002).

Iterative action research for formulating ISDT

A related, but more common, research process within design-based research out of the education discipline is Reeves’ (2006) approach. I’ll expand upon this one, but there are overlaps.

DBR cycle

As Reeves’ image suggests, these aren’t four distinct, sequential phases. Instead, they tend toward almost concurrent tasks that you step back and forth between. As you engage in iterative cycles of testing and refinement (3rd phase) you learn something that modifies your understanding of the practical problem (1st), perhaps the principles (2nd), and perhaps highlights something to reflect upon.

However, at least initially, I wonder whether you should spend some time developing an initial version of your ISDT (and all its components) fairly early in the research cycle. This forms the foundation/mechanism by which you move backward and forward between the different stages.

i.e. as you develop a specific solution (instantiation) of your ISDT, you might find yourself having to undertake a particular step or develop a particular feature that isn’t explained by your existing design principles.

Approach

Similarly, you might be observing an instantiation in action by gathering/analysing data etc. (Phase 3), or perhaps reflecting upon what’s happened, and realise that a particular issue isn’t covered, or that your initial assumptions were wrong, leading to more refinement.

That refinement may in turn lead to changes in the instantiation(s) and thus more opportunities to learn and refine.

References

Avison, D., & Eliot, S. (2006). Scoping the discipline of information systems. In J. L. King & K. Lyytinen (Eds.), Information systems: the state of the field (pp. 3–18). Chichester, UK: John Wiley & Sons.

Bakker, A. (2014). Research Questions in Design-Based Research (pp. 1–6). Retrieved from http://www.fi.uu.nl/en/summerschool/docs2014/design_research_michiel/Research Questions in DesignBasedResearch2014-08-26.pdf

Gregor, S., & Jones, D. (2007). The anatomy of a design theory. Journal of the Association for Information Systems, 8(5), 312–335.

Johanson, A., & Hasselbring, W. (2018). Software Engineering for Computational Science: Past, Present, Future. Computing in Science & Engineering. https://doi.org/10.1109/MCSE.2018.108162940

Markus, M. L., Majchrzak, A., & Gasser, L. (2002). A design theory for systems that support emergent knowledge processes. MIS Quarterly, 26(3), 179–212.

Reeves, T. (2006). Design research from a technology perspective. In J. van den Akker, K. Gravemeijer, S. McKenney, & N. Nieveen (Eds.), Educational Design Research (pp. 52–66). Milton Park, UK: Routledge.

Simon, H. (1996). The sciences of the artificial (3rd ed.). MIT Press.

Vessey, I. (1997). Problems Versus Solutions: The Role of the Application Domain in Software. In Papers Presented at the Seventh Workshop on Empirical Studies of Programmers (pp. 233–240). New York, NY, USA: ACM. https://doi.org/10.1145/266399.266419

Fixing one part of the PeopleSoft gradebook

The following is a development log of an attempt to fix one aspect of the PeopleSoft gradebook used at my current institution.

Why and what?

The problem

At the end of semester all assignment marks end up in the PeopleSoft gradebook, an old school web information system that the academic in charge of a course has to use to do some last minute checks and changes. One of those changes is to update the grade for students who are within 0.5 of a grade level; e.g. a student with a mark of 49.6 shouldn’t get an F, they should get a C (the passing grade).

PeopleSoft won’t do this. The academic has to manually scroll through the list of students (ordered alphabetically by student name) looking for those that fall in this range. Once found, the new grade has to be manually entered into a textbox. This is a problem, especially if your class has a couple of hundred students.

The solution

The solution developed below is a Greasemonkey script that will automate this process. Once installed, it will:

  1. Detect that the PeopleSoft gradebook is being displayed.
  2. Look for any students within 0.5 of a grade level (this rule is sketched after the list).
  3. For each of these students found
    • Change the background for that row to red.
    • Place the upgraded grade in the appropriate textbox.
  4. Look for any students who have already been upgraded, change the background of their row to green.
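
The “within 0.5 of a grade level” rule at the heart of steps 2 and 3 could be sketched as follows. This is a minimal sketch only: the boundaries and grade labels below are illustrative placeholders, not the institution’s actual cut-offs.

[code lang="javascript"]
// Hypothetical grade boundaries: a mark within 0.5 below a boundary should be
// bumped up to the grade at that boundary. These values are placeholders only.
var GRADE_BOUNDARIES = [
    { cutoff: 50, grade: 'C' },
    { cutoff: 65, grade: 'B' },
    { cutoff: 75, grade: 'A' }
];

// Return the grade a mark should be upgraded to, or null if no upgrade applies.
function suggestedUpgrade(mark) {
    for (var i = 0; i < GRADE_BOUNDARIES.length; i++) {
        var boundary = GRADE_BOUNDARIES[i];
        if (mark >= boundary.cutoff - 0.5 && mark < boundary.cutoff) {
            return boundary.grade;
        }
    }
    return null;
}
[/code]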

How?

Identifying the gradebook

The first problem is that the PeopleSoft gradebook uses iframes, which complicates things a little. Especially in identifying the appropriate iframe and then getting the script to activate only when the appropriate document is loaded. Not to mention, no great surprise, that we’re talking some really ugly HTML here.


The actual data for each student is spread over a row with XXX main cells, each containing div elements with specific ids (the $0 appears to increment per student):

  • win0divHCR_PERSON_NM_I_NAME$0 – span HCR_PERSON_NAM_I_NAME$0 contains the name
  • win0divSTDNT_GRADE_HDR_EMPLID$0 – span STDNT_GRADE_HDR_EMPLID$0 – contains the EMPLID
  • win0divSTDNT_GRADE_HDR_GRADE_AVG_CURRENT$0 – span STDNT_GRADE_HDR_GRADE_AVG_CURRENT$0 – has the result.
  • win0divSTDNT_GRADE_HDR_COURSE_GRADE_CALC$0 – span STDNT_GRADE_HDR_COURSE_GRADE_CALC$0 – has the grade
  • input text box with id STDNT_GRADE_HDR_CRSE_GRADE_INPUT$0 is where the changed grade might get entered.

It appears to be part of a form with a URL ending in SA_LEARNING_MANAGEMENT.LAM_CLASS_GRADE.GBL, appearing in an IFRAME with id ptifrmtgtframe – which I assume is a generic iframe used on all the pages.

So the plan is for the script to:

  1. Only respond for the broad URL associated with the institutional gradebook.
    Done via the standard Greasemonkey approach (see the metadata sketch after this list).
  2. Only kick into action on the loading of the iframe with id ptifrmtgtframe.
    This appears to work.
    [code lang="javascript"]
    // Run my_func whenever the gradebook iframe (re)loads
    var theFrame;
    theFrame = document.getElementById('ptifrmtgtframe');
    theFrame.addEventListener( "load", my_func, true );
    [/code]
  3. Check for the form SA_LEARNING_MANAGEMENT.LAM_CLASS_GRADE.GBL, or perhaps the presence of the ids from the table above.
    Have modified the above to pass the frame in and was using that to determine the presence of the textbox. The problem is that there is a further complication in the interface: jumping to a specific page in the gradebook (there are three) is done via "javascript:submitAction_win0(document.win0…..)". This isn’t showing up as an onload event for the frame.

    Found this post which talks about one potential solution but also points to someone who’s been doing this for much longer and in more detail.

  4. Have they included the number of students in the HTML? – no, doesn’t look like it.
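
For reference, the “standard Greasemonkey approach” in step 1 is the userscript metadata block. A minimal sketch, where both the @namespace and @include values are placeholders (the real @include would match the institutional PeopleSoft URL):

[code lang="javascript"]
// ==UserScript==
// @name        gradebookFix
// @namespace   http://example.edu/userscripts
// @include     https://peoplesoft.example.edu/psp/*
// @version     0.1
// ==/UserScript==
// Greasemonkey only injects the script into pages matching @include,
// which takes care of step 1 of the plan.
[/code]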

A rough attempt to understand what is going on

  1. The Faculty Centre loads with the list of courses.
    The standard entry into gradebookFix is run at this stage – an alert is shown. And then the iframes load.
  2. Clicking on the gradebook icon triggers the current iframe’s load event and shows the three different gradebook icons.
    The my_func function is run via an event listener on the onLoad event of the ptifrmtgtframe iframe. But this is only run once, as….
  3. Clicking on “cumulative grades” doesn’t load a new iframe; it calls the javascript:submitAction_win0 method.

The aim is to modify the click on the particular link so that something else happens. How about:

  1. Modify onload to look for that link and add an onclick event.
    The id for the link is DERIVED_SSR_LAM_SSS_LINK_ANCHOR3. The problem is that attempting to add an event listener to this is not working, i.e. a call to getElementById is not working. Aghh, that’s because these things aren’t normal JavaScript objects, but special Greasemonkey-wrapped stuff.
    [code lang="javascript"]
    // Access the link via the iframe's contentDocument rather than the top-level document
    var theLink = theFrame["contentDocument"].getElementById('DERIVED_SSR_LAM_SSS_LINK_ANCHOR3');

    theLink.addEventListener( "click", function(){ alert( "CLICK ON LINK CUMULATIVE" ); }, false );
    [/code]

  2. Have a function that is called on click.
    The struggle here is that the click is actually the start of a query that results in the content being changed, but not in a way that is necessarily recognised by Greasemonkey.

    Perhaps a timeout and then another bit of code like this might work. This could be tested simply by re-adding the on-click. This will sort of work but, again, is only set when the iframe loads for the first time. If any other navigation happens it won’t re-add any changes.

    Have added it to the other two main links for the gradebook. Possibly this will be a sufficient kludge for now.

  3. Looks like we need to capture the submitAction_win0 method after all.
    Nope, have figured out a kludge (a guess at it is sketched below).
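
A guess at what that kludge might look like (a sketch only, not the actual script: the ANCHOR1 and ANCHOR2 ids are assumptions based on the ANCHOR3 naming pattern, and the delay is arbitrary):

[code lang="javascript"]
// Re-attach click listeners to the three gradebook links each time one is used,
// since submitAction_win0 replaces the page content without firing a load event.
// The ANCHOR1 and ANCHOR2 ids are guesses based on the ANCHOR3 naming pattern.
function addLinkListeners(theFrame) {
    var ids = [ 'DERIVED_SSR_LAM_SSS_LINK_ANCHOR1',
                'DERIVED_SSR_LAM_SSS_LINK_ANCHOR2',
                'DERIVED_SSR_LAM_SSS_LINK_ANCHOR3' ];
    for (var i = 0; i < ids.length; i++) {
        var link = theFrame["contentDocument"].getElementById(ids[i]);
        if (link === null) continue;
        link.addEventListener("click", function() {
            // Wait (arbitrarily) for the new content to arrive, then re-process
            // the page and re-attach these listeners.
            setTimeout(function() {
                my_func(theFrame);          // assumes my_func takes the frame, as above
                addLinkListeners(theFrame);
            }, 2000);
        }, false);
    }
}
[/code]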

Identifying the student rows

The following code segment will change the background/font colour of the first student’s name:
[code lang="javascript"]
// Highlight the name cell for the first student (the $0 row)
function updateResults(element) {
    var name = element.getElementById('win0divHCR_PERSON_NM_I_NAME$0');
    name.style.backgroundColor = 'red';
    name.style.color = 'white';
}
[/code]

The list above specifies the names of the different student fields. The only difference per student is the number after the dollar sign – from 0 up to the last student.

Steps required here

  1. Identify how many students are on the page.
    This will be useful for a loop to go through each. XPath might offer a possibility? jQuery? A simple while loop could also do the trick. Will go with that.
  2. Determine what to change.
    The plan is:

    • RED – needs attention, i.e. marks that should be over-ridden, with the suggested override in place.
    • GREEN – those that have already been over-ridden previously.
    • no colour/change – correct as is.

All done. Seems to work.
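
Putting those pieces together, the final loop presumably looked something like the sketch below. This is a reconstruction rather than the original script; it reuses the hypothetical suggestedUpgrade() helper sketched earlier.

[code lang="javascript"]
// Walk every student row on the page, colouring rows and pre-filling suggested grades.
// A sketch only: it assumes the hypothetical suggestedUpgrade() helper from above.
function updateResults(doc) {
    var i = 0;
    while (true) {
        // Stop when there is no row for student $i
        var name = doc.getElementById('win0divHCR_PERSON_NM_I_NAME$' + i);
        if (name === null) break;

        var result = doc.getElementById('STDNT_GRADE_HDR_GRADE_AVG_CURRENT$' + i);
        var input = doc.getElementById('STDNT_GRADE_HDR_CRSE_GRADE_INPUT$' + i);
        var mark = parseFloat(result.textContent);
        var upgrade = suggestedUpgrade(mark);

        if (input.value !== '') {
            name.style.backgroundColor = 'green';  // already over-ridden previously
        } else if (upgrade !== null) {
            name.style.backgroundColor = 'red';    // needs attention
            input.value = upgrade;                 // pre-fill the suggested grade
        }
        i += 1;
    }
}
[/code]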

Breaking BAD to bridge the reality/rhetoric chasm

The following is a copy of a paper accepted at ASCILITE’2014 (and nominated for best paper) written by myself and Damien Clark (CQUniversity – @damoclarky). The official conference version of the paper is available as a PDF.

Presentation slides are available on Slideshare and Google Slides.

The source code for the Moodle Activity Viewer is available on github. As are some of the scripts produced at USQ.

Abstract

The reality of using digital technologies to enhance learning and teaching has a history of falling short of the rhetoric. Past attempts at bridging this chasm have tried: increasing the perceived value of teaching; improving the pedagogical and technological knowledge of academics; redesigning organisational policies, processes and support structures; and, designing and deploying better pedagogical techniques and technologies. Few appear to have had any significant, widespread impact, perhaps because of the limitations of the (often implicit) theoretical foundations of the institutional implementation of e-learning. Using a design-based research approach, this paper develops an alternate theoretical framework (the BAD framework) for institutional e-learning and uses that framework to analyse the development, evolution, and very different applications of the Moodle Activity Viewer (MAV) at two separate universities. Based on this experience it is argued that the reality/rhetoric chasm is more likely to be bridged by interweaving the BAD framework into existing practice.

Keywords: bricolage, learning analytics, e-learning, augmented browsing, Moodle.

Introduction

In a newspaper article (Laxon, 2013) Professor Mark Brown makes the following comment on the quality of contemporary University e-learning:

E-learning’s a bit like teenage sex. Everyone says they’re doing it but not many people really are and those that are doing it are doing it very poorly. (n.p).

E-learning – defined by the OECD (2005) as the use of information and communications technology (ICT) to support and enhance learning and teaching – has been around for so long that there have been numerous debates about replacing it with other phrases. Regardless of the term used, there “has been a long-standing tendency in education for digital technologies to eventually fall short of the exaggerated expectations” (Selwyn, 2012, n.p.). Writing in the early 1990s Geoghagen (1994) seeks to understand why a three decade long “vision of a pedagogical utopia” (n.p.) promised by instructional technologies has failed to eventuate. Ten years on, Salmon (2005) notes that e-learning within universities is still struggling to move beyond projects driven by innovators and engage a significant percentage of students and staff. Even more recently, concerns remain about how much technology is being used to effectively enhance student learning (Kirkwood & Price, 2013). Given that “Australian universities have made very large investments in corporate educational technologies” (Holt et al., 2013, p. 388) it is increasingly important to understand and address the reality/rhetoric chasm around e-learning.

Not surprisingly the literature provides a variety of answers to this complex question. Weimer (2007) observes that academics come to the task of teaching with immense amounts of content knowledge, but little or no knowledge of teaching and learning, beyond perhaps their personal experience. A situation which may not change significantly given that academics are expected to engage equally in research and teaching and yet work towards promotion criteria that are perceived to primarily value achievements in research (Zellweger, 2005). It has been argued that the limitations of the Learning Management System (LMS) – the most common university e-learning tool – make the LMS less than suitable for more effective learner-centred approaches and is contributing to growing educator dissatisfaction (Rahman & Dron, 2012). It’s also been argued that the “limited digital fluency of lecturers and professors is a great challenge” (Johnson, Adams Becker, Cummins, & Estrada, 2014, p. 3) for the creative leveraging of emerging technologies. Another contributing factor is likely to be Selwyn’s (2008) suggestion that educational technologists have failed to be cognisant of “the more critical analyses of technology that have come to the fore in other social science and humanities disciplines” (p. 83). Of particular interest here is the observation of Goodyear et al (2014) that the “influence of the physical setting (digital and material) on learning activity is often important, but is under-researched and under-theorised: it is often taken for granted” (p. 138).

This paper reports on the initial stages of a design-based research project that aims to bridge the e-learning reality/rhetoric chasm by exploring and harnessing alternative theoretical foundations for the institutional implementation of e-learning. The paper starts by comparing and contrasting two different theoretical foundations of institutional e-learning. The SET framework is suggested as a description of the mostly implicit assumptions underpinning most contemporary approaches. The BAD framework is proposed as an alternative and perhaps complementary framework that better captures the reality of what happens and if effectively integrated into institutional practices may help bridge the chasm. The development of a technology – the Moodle Activity Viewer (MAV) – and its use at two different universities is then used to illustrate the benefits and limitations of the SET and BAD frameworks, and how the two can be fruitfully combined. The paper closes with some discussion of implications and future work.

Breaking BAD versus SET in your ways

The work described here is part of an on-going cycle of design-based research that aims to develop new artefacts and theories that can help bridge the e-learning reality/rhetoric chasm. We believe that bridging this chasm is of theoretical and practical significance to the sector and to us personally. The interventions we describe in the following sections arose out of our day-to-day work and were informed by a range of theoretical perspectives. This section offers a brief description of the theoretical frameworks that have informed and been refined by this work. This is important as design-based research should depart from a problem (McKenney & Reeves, 2013), be grounded in practice, theory-driven and seek to refine both theory and practice (Wang & Hannafin, 2005). The frameworks described here are important because they identify a mindset (the SET framework) that contributes significantly to the on-going difficulty in bridging the e-learning reality/rhetoric chasm, and offers an alternate mindset (the BAD framework) that provides principles that can help bridge the chasm. The SET and BAD frameworks are broadly incommensurable ways of answering three important, inter-related questions about the implementation of e-learning. While the SET framework represents the most commonly accepted mindset used in practice, both frameworks are evident in both the literature and in practice. Table 1 provides an overview of both frameworks.

Table 1: The BAD and SET frameworks for e-learning implementation
What work gets done?
SET: Strategy – following a global plan intended to achieve a pre-identified desired future state.
BAD: Bricolage – local piecemeal action responding to emerging contingencies.

How ICT is perceived?
SET: Established – ICT is a hard technology and cannot be changed. People and their practices must be modified to fit the fixed functionality of the technology.
BAD: Affordances – ICT is a soft technology that can be modified to meet the needs of its users, their context, and what they would like to achieve.

How you see the world?
SET: Tree-like – the world is relatively stable and predictable. It can be understood through logical decomposition into a hierarchy of distinct black boxes.
BAD: Distributed – the world is complex, dynamic, and consists of interdependent assemblages of diverse actors (human and not) connected via complex networks.

What work gets done: Bricolage or Strategic

The majority of contemporary Australian universities follow a strategic approach to deciding what work gets done. Numerous environmental challenges and influences have led to universities being treated as businesses with an increasing prevalence of managers using “strategic control and a focus on outputs which can be quantified and compared” (Reid, 2009, p. 575) to manage academic activities. A strategic approach involves the creation of a vision identifying a desired future state and the development of operational plans to bring about the desired future state. The only work that is deemed acceptable is that which fits within the established operational plan and is seen to contribute to the desired future state. All other work is deemed inefficient. The strategic approach is evident at all levels of institutional e-learning. Inglis (2007) describes how government required Australian universities to have institutional learning and teaching strategic plans published on their websites. The strategic or planning-by-objectives (e.g. learning outcomes, graduate attributes) approach also underpins how course design is largely assumed to occur with Visscher-Voerman and Gustafson (2004) finding that it underpins “a majority of the instructional design models in the literature” (p. 77). The strategic approach is so ingrained that it is often forgotten that these ideas have not always existed (Kezar, 2001), have significant flaws, and that there is at least one alternate perspective.

Bricolage, “the art of creating with what is at hand” (Scribner, 2005, p. 297) or “designing immediately” (Büscher, Gill, Mogensen, & Shapiro, 2001, p. 23) involves the manipulation and creative repurposing of existing, and often unlikely, resources into new arrangements to solve a concrete, contextualized problem. Ciborra (1992) argues that bricolage – defined as the “capability of integrating unique ideas and practical design solutions at the end-user level” (p. 299) – is more important in developing organisational applications of ICT that provide competitive advantage than traditional strategic approaches. Scribner (2005) and other authors have used bricolage to understand the creative and considered repurposing of readily available resources that teachers use to engage in the difficult task of helping people learn. Bricolage is not without its problems. There are risks associated with extremes of both the strategic and bricolage approaches to how work gets done (Jones, Luck, McConachie, & Danaher, 2005). In the context of institutional e-learning, the problem is that at the moment the strategic is crowding out bricolage. For example, Groom and Lamb (2014) observe that the cost of supporting an enterprise learning tool (e.g. LMS) limits resources for user-driven innovation, in part because it draws “attention and users away” (n.p) from the strategic tool (i.e. LMS). The demands of sustaining the large and complex strategic tool dominates priorities and leads to “IT organizations … defined by what’s necessary rather than what’s possible” (Groom & Lamb, 2014, n.p). There would appear to be some significant benefit to exploring a dynamic and flexible interplay between the strategic and bricolage approaches to deciding what work gets done.

How ICT is perceived: Affordances or Established

The established view sees ICT as a hard technology (Dron, 2013). What can be done with hard technology is fixed in advance either by embedding it in the technology or “in inflexible human processes, rules and procedures needed for the technology’s operation” (Dron, 2013, p. 35). An example of this is the IT person quoted by Sturgess and Nouwens (2004) as suggesting in the context of an LMS evaluation process that “we should seek to change people’s behavior because information technology systems are difficult to change” (n.p). This way of perceiving ICTs assumes that the functionality provided by technology is established and cannot be changed. This creates the problem identified by Rushkoff (2010) where “instead of optimizing our machines for humanity – or even the benefit of some particular group – we are optimizing humans for machinery” (p. 15). Perhaps in no small way the established view of ICT in e-learning contributes to Dede’s (2008) observation that “widely used instructional technology applications have less variety in approach than a low-end fast-food restaurant” (p. 58). The established view of ICT challenges Kay’s (1984) discussion of the “protean nature of the computer” (p. 59) as “the first metamedium, and as such has degrees of freedom and expression never before encountered” (p. 59). The problem is that digital technology is “biased toward those with the capacity to write code” (Rushkoff, 2010, p. 128) and increasingly those who can code have been focused on avoiding it.

The established view of ICT represents a narrow view of technological change and human agency. When unable to achieve a desired outcome, people will use the available knowledge and resources to create an alternative path, they will create a workaround (Koopman & Hoffman, 2003). For example, Hannon (2013) talks about the “hidden effort” (p. 175) of “meso-level practitioners – teaching academics, learning technologies, and academic developers” (p. 175) to bridge the gaps created by centralised technologies. The established view represents the designer-centred idea of achieving “perfect” software (Koopman & Hoffman, 2003), rather than recognising the need for on-going adaptation due to the diversity, complexity and on-going change inherent in university e-learning. The established view also ignores Kay’s (1984) description of the computer as offering “degrees of freedom and expression never before encountered” (p. 59). The established view does not leverage the affordance of ICT for change and freedom. Following Goodyear et al (2014), affordances are not a feature of a technology, but rather it is a relationship between the technology and the people using the technology. Within university e-learning the affordance for change has been limited due to both the perceived nature of the technology – best practice guidelines for integrated systems such as LMS and ERP recommend vanilla implementation (Robey, Ross, & Boudreau, 2002) – and the people – the apparent low digital fluency of academics (Johnson, Adams Becker, Cummins, & Estrada, 2014, p. 3). However, this is changing. There are faculty and students who are increasingly digitally fluent (e.g. the authors of this paper) and easily capable of harnessing the advent of technologies that “help to make bricolage an attainable reality” (Büscher et al., 2001, p. 24) such as the IMS LTI standards, APIs (Lane, 2014) and augmented browsing (Dai, Tsai, Tsai, & Hsu, 2011). An affordances perspective of ICT seeks to leverage the capacity for ICT to be manipulated so that it offers the best possible affordances for learners and teachers. A move away from the established “design of an artefact towards emergent design of technology-in-use, particularly by the users” (Johri, 2011, p. 212).

How you see the world: Distributed or Tree-like

The methods used to solve most of the large and complex problems that make up institutional e-learning rely upon a tree-like or hierarchical conception of the world. To manage a university it is broken up into a tree-like structure consisting of divisions, faculties, schools, and so on. The organisation of the formal learning and teaching done at the university relies upon a tree-like structure of degrees, majors/minors, courses or units, learning outcomes, weeks, lectures, tutorials, etc. The information systems used to enable formal learning and teaching mirror the tree-like structure of the organisation with separation into different systems responsible for student records, learning management, learning content management etc. The individual information systems themselves are broken up into tree-like structures reliant on modular design. These tree-like structures are the result of the reliance on methods that use analysis and logical decomposition to reduce larger complex wholes into smaller more easily understood and manageable parts (Truex, Baskerville, & Travis, 2000). These methods produce tree-like structures of independent, largely black-boxed components that interact through formally approved mechanisms that typically involve oversight or approval from further up the hierarchy. For example, a request for a new feature in an LMS must wend its way up the tree-like governance structure until it is considered at the institutional level, compared against institutional priorities and ranked against other requests, before possibly being passed down to the other organisational black-box that can fulfill that request. There are numerous limitations associated with tree-like structures. For example, Holt et al (2013) identify just one of these limitations when they argue that the growing complexity of institutional e-learning means that no one leader at the top of a hierarchical tree has the knowledge to “possibly contend with the complexity of issues” (p. 389).

The solution suggested by Holt et al (2013) is distributed leadership which is in turn based on broader theoretical foundations of distributed cognition, social learning, as well as network and activity theories. A theoretical foundation that can be seen in a broad array of distributed ways of looking at the world. For example, in terms of learning, Siemens (2008) lists the foundations of connectivism: as activity theory; distributed and embodied cognition; complexity; and network theory. At the core of connectivism is the “thesis that knowledge is distributed across a network of connections and therefore learning consists of the ability to construct and traverse those networks” (Downes, 2011, n.p). Johri (2011) links much of this same foundation to socio-materiality and suggests that it offers “a key theoretical perspective that can be leveraged to advance research, design and use of learning technologies” (p. 210). Podolny & Page (1998) apply the distributed view to governance and organisations and describe it as meaning that two or more actors are able to undertake repeated interactions over a period of time without having a centralised authority responsible for resolving any issues arising from those interactions. Rather than the responsibility and capability for specific actions being seen as belonging to any particular organisational member or group (tree-like), the responsibility and capability is distributed across a network of individuals, groups and technologies. The distributed view sees institutional e-learning as a complex, dynamic, and interdependent assemblage of diverse actors (both human and not) distributed in complex networks.

It is our argument that being aware of the differences in thinking between the SET and BAD frameworks offers insight that can guide the design of interventions that are more likely to bridge the e-learning reality/rhetoric chasm. The following sections describe the development and adaptation of the Moodle Activity Viewer (MAV) at both CQUni and USQ as an example of what is possible when breaking BAD.

Breaking BAD and the development of MAV

The second author works for Learning and Teaching Services at CQUniversity (CQUni). In late 2012, he was working on a guide for teaching staff titled “How can I enhance my teaching practice?”. In contributing to the “Designing effective course structure” section of this guide, the author asked a range of rhetorical questions including “How do you know which resources your students access the most, and the least?”. Providing an answer to this question for the reader took more effort than expected. There are reports available in Moodle 2.2 (the version being used by CQUni at the time) that can be used to answer this question. However, they suffer from a number of limitations including: duplicated report names; unclear differences between reports; usage values include both staff and student activity; poor speed of generation; and, a tabular format. It was apparent that these limitations were acting as a barrier to reflection on course design. This was especially problematic, as the institution had placed increased emphasis on generating and responding to student feedback (CQUniversity, 2012). Annual course enhancement reports – introduced in 2010 – required teaching staff to respond to feedback from students and highlight enhancements to be made for the course’s next offering (CQUniversity, 2011). Information about activity and resource usage on the course Moodle site was seen by some to be useful in completing these reports. However, there was no apparent strategic or organisational imperative to address issues with the Moodle reports and it appeared likely that the aging version of Moodle (version 2.2) would persist for some time given other organisational priorities. As a stopgap solution the author and a colleague engaged in some bricolage and began writing SQL queries for the Moodle database and generating Excel spreadsheets. Whilst this approach provided more useful data, the spreadsheets were manually generated on request and the teaching staff had to bridge the conceptual gap between the information within the Excel spreadsheet and their Moodle course site.

In the months following, the author started thinking about a better approach. While CQUni had implemented a range of customisations to the institution’s Moodle instance, substantial changes required a clear understanding of the final requirements, alignment with strategic imperatives, and support of the senior management. At this stage of the process it was not overly clear what the final requirements of a solution would be, hence more experimentation was required to better understand the problem and possible solutions, prior to making the case for modifying Moodle.  While the author did not have the ability to change the institution’s version of Moodle itself, he did have access to: a copy of the Moodle database; access to a server computer; and software development abilities. Any bridging of this particular gap would need to draw on available resources (bricolage) and not disturb or impact critical high-availability services such as Moodle. Given uncertainty about what functionality might best enable reflection on course design any potential solution would also need to enable a significant level of agility and experimentation (bricolage).

The technical solution that seemed to best fulfill these requirements was augmented browsing. Dai et al (2011) define augmented browsing as “an effective means for dynamically adding supplementary information to a webpage without having users navigate away from the page” (p. 2418). The use of augmented browsing to add functionality to a LMS is not new.  Leony et al (2012) created a browser add-on that embeds learning analytics graphs directly within the Moodle LMS course home page. Dawson et al (2011) used what is known as bookmarklets to generate interactive sociograms to visualise student learning networks as part of SNAPP.  The problems that drove SNAPP’s use of augmented browsing – complex and difficult to interpret LMS reports and the difficulty of getting suggestions from teaching staff integrated into an institution LMS (Dawson et al., 2011) – mirror those faced at CQU.

Through a process of bricolage the Moodle Activity Viewer (MAV) was developed as an add-on for the Firefox web browser. More specifically, the MAV is built upon another popular Firefox add-on called Greasemonkey, and in Greasemonkey terms MAV is known as a userscript.  However, for the purposes of this paper, the MAV will be referred to more generally as an add-on to the browser. The intent was that the MAV would generate a heat map and embed it directly onto any web page produced by Moodle. A heat map shades each of the links in a web page with a spectrum of colours where the deeper red shades indicate links that are being clicked on more often (see Figure 1). The implementation of the MAV is completely separate from the institutional Moodle instance meaning its use has no impact on the production Moodle environment. Once the MAV add-on is installed into Firefox, and with it turned on, any web page from a Moodle course site can have a heat map overlaid on all Moodle links in that page. This process starts with the MAV add-on recognising a newly loaded page as belonging to a Moodle course site. When this occurs the MAV will generate a query asking for usage figures associated with every relevant Moodle link on that web page. This query is sent to the MAV server hosted on an available server computer. The MAV server translates the query into appropriate queries that will extract the necessary information from the Moodle database. As implemented at CQU, the MAV server relies on a copy of the Moodle database that is updated daily. While not necessary, use of a copy of the Moodle database ensures that there is no risk of disrupting the production Moodle instance.
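
As an illustration (not from the paper itself), the client-side flow just described might be sketched in Greasemonkey userscript terms as follows. This is a rough sketch, not the MAV source code (which is linked above): the server URL, the response format, and the heatColour() helper are all assumptions.

[code lang="javascript"]
// Illustrative only: a sketch of the MAV client-side flow described above.
// The server URL, response shape and heatColour() helper are assumptions.
function overlayHeatmap(doc) {
    // 1. Recognise a Moodle course page from its URL
    var match = doc.location.href.match(/course\/view\.php\?id=(\d+)/);
    if (!match) return;

    // 2. Ask the MAV server for usage counts for links in this course
    GM_xmlhttpRequest({
        method: "GET",
        url: "http://mav.example.edu/usage?course=" + match[1],
        onload: function(response) {
            // Assume a response mapping link URLs to click counts
            var counts = JSON.parse(response.responseText);
            var links = doc.getElementsByTagName('a');
            for (var i = 0; i < links.length; i++) {
                var clicks = counts[links[i].href];
                if (clicks !== undefined) {
                    // 3. Shade each link: deeper red means more clicks
                    links[i].style.backgroundColor = heatColour(clicks);
                }
            }
        }
    });
}
[/code]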

The MAV add-on can be configured to generate overlays based on the number of clicks on a link, or the number of students who have clicked on a link. It can also be configured to limit the overlays to particular groups of students or to a particular student. When used on the main course page, MAV provides an overview of how students are using all of the course resources. Looking at a discussion forum page with the MAV enabled allows the viewer to analyse which threads or messages are receiving the most attention. Hence MAV can provide a simple form of process analytics (Lockyer, Heathcote, & Dawson, 2013).

An initial proof-of-concept implementation of the MAV was developed by April 2013. A few weeks later this implementation was demonstrated to the “Moodle 2 Project Board” to seek approval to continue development. The plan was to engage in small trials with academic staff and evolve the tool. The intent was that this would generate a blueprint for the implementation of heat maps within Moodle itself.  The low-risk nature of the approach contributed to approval to continue. However, by July 2013, the institution downsized through an organisational restructure and resources in the IT department were subsequently reduced.  As part of this restructure, and in an effort to reduce costs, the IT Department set out to reduce the level of in-house systems development in favour of more established “vanilla” systems (off-the-shelf with limited or no customisations).  This new strategy made it unlikely that the MAV would be re-implemented directly within Moodle, meaning the augmented browsing approach might be viable longer term. As the MAV was being developed and refined, it was being tested by a small group of teaching staff within the creator’s team. Then in September 2013, the first official trial was launched making the MAV available to all staff within one of CQUniversity’s schools.

Figure 1: How MAV works (click on the image to see a larger version)

Early in March 2012, prior to the genesis of the MAV, the second author and a colleague developed a proposal for a student retention project. It was informed by ongoing research into learning analytics at the institution and motivated by a strategic institutional imperative to improve student retention (CQUniversity, 2011).  It was not until October 2013 – after the commencement of the first trial of the MAV – that a revised version of the proposal received final approval and the project commenced in November under the name EASICONNECT.  Part of the EASICONNECT project was the inclusion of an early alerts system for disengaged students called EASI (Early Alert Student Indicators) to identify disengaged students early, and provide simple tools to nudge the students to re-engage, with the hope of improving student retention. In 2013, between the proposal submission and final approval of the EASICONNECT Project, EASI under a different name (Student Support Indicators – SSI) was created as a proof-of-concept and used in a series of small term-based trials, evolving similarly to the MAV. One of the amendments made to the approved proposal by the project sponsor (management) was the inclusion of the MAV as a project deliverable in the EASICONNECT project.

Neither EASI nor the MAV were strictly the results of strategic plans. Both systems arose from bricolage being undertaken by two members of CQUni’s Learning and Teaching Services that was later recognised as contributing to the strategic aims of the institution. With the eventual approval of the EASICONNECT project, the creators of EASI and the MAV worked more closely together on these tools and the obvious linkages between them were developed further. Initially this meant modifying the MAV so staff participating in the EASI trial could easily navigate from the MAV to EASI. In Term 1, 2014 EASI introduced links for each student in a course, that when clicked, would open the Moodle course site with the MAV enabled only for the selected student. While EASI showed a summary of the number of clicks made by the student in the course site, the MAV could then contextualise this information, revealing where those clicks took place directly within Moodle. In Term 2, 2014 a feature often requested by teaching staff was added to the MAV that would identify students who had and hadn’t clicked on links. The MAV also provided an option for staff to open EASI to initiate an email nudge to either group of students. Figure 2 provides a comparison of week-to-week usage of MAV between term 1 and 2, of 2014. The graphs show usage in terms of the number of page views and number of staff using the system, with the Term 2 figures including up until the end of Week 10 (of 15).

Both MAV and its sister project EASI were initiated as a form of bricolage. It was only later that both projects enjoyed the synthesised environment of a strategic project that provided the space and institutional permission for this work to scale and continue to merge. MAV arose due to the limited affordances offered by the LMS and the promise that different ICT could be harnessed to enhance the perceived affordances. Remembering that affordances are not something innate to a tool, but are instead co-constitutive between tool, user and context; the on-going use of bricolage allowed the potential affordances of the tool to evolve in response to use by teaching staff. Through this approach MAV has been able to evolve from potentially offering affordances of value to teaching staff as part of “design for reflection and redesign” (Dimitriadis & Goodyear, 2013) to also offering potential affordances for “design for orchestration” (Dimitriadis & Goodyear, 2013).

Figure 2: 2014 MAV usage at CQUni: comparison between T1 and T2 (click on images to see larger versions of the graphs). The two graphs show MAV usage as page views and as the number of staff.

Implementing MAV as a browser add-on also enables a break from the tree-like conceptions that underpin the design of large integrated systems like an LMS. The tree-like conception is so evident in the Moodle LMS that it is visible in the name. Moodle is an acronym for Modular Object-Oriented Dynamic Learning Environment. With Modular capturing the fact that “Moodle is built in a highly modular fashion” (Dougiamas & Taylor, 2003, p. 173), meaning that logical decomposition is used to break the large integrated system into small components or modules. This modular architecture allows the rapid development and addition of independent plugins and is a key enabler of the flexibility of Moodle. However, this is based on each of the modules being largely independent of each other, which has the consequence of making it more difficult to have functionality that crosses modular boundaries, such as taking usage information from the logging systems and integrating that information into all of the modules that work together to produce a web page generated by Moodle.

Extending MAV at another institution

In 2012 the first author commenced work within the Faculty of Education at the University of Southern Queensland (USQ). The majority of the allocated teaching load involved two offerings of EDC3100, ICTs and Pedagogy. EDC3100 is a large (300+ on-campus and online students first semester, and ~100 totally online second semester) core, third year course for Bachelor of Education (BEdu) students. The author expected that USQ would have high quality systems and processes to support large, online courses. This was due to USQ’s significant reputation in the practice and research of distance and online education; its then stated vision “To be recognised as a world leader in open and flexible higher education” (USQ, 2012, p. 5); and the observation that “by 2012 up to 70% of students in the Bachelor of Education were studying at least some subjects online” (Albion, 2014, p. 1163). The experience of teaching EDC3100 quickly revealed an e-learning reality/rhetoric chasm.

As a core course EDC3100 students study at all of USQ’s campuses, a Malaysian partner, and online from across Australia and the world. The students are studying to become teachers in early childhood, primary, secondary and VET settings. The course is designed so that the “Study Desk” (the Moodle course site) is an essential source of information and support for all students. The course design makes heavy use of discussion forums for a range of learning activities. Given the size and diversity of the student population there are times when it is beneficial for teaching staff to customise their responses to the student’s context and specialisation. For instance, an example from the Australian Curriculum may be appropriate for a primary or lower secondary pre-service teacher based in Australia, but inappropriate for a VET pre-service teacher. Whilst the Moodle discussion forum draws on user profiles to identify authors of posts, the available information is limited to that provided centrally via the institution and by the users. For EDC3100 this means that a student’s campus is apparent through their membership of the Moodle groups automatically created by USQ’s systems, however, seeing this requires navigating away from the discussion forum. The student’s specialisation is not visible in Moodle. The only way this information is available is to ask an administrative staff member with the appropriate student records access to generate a spreadsheet (and then update the spreadsheet as students add and drop the course) that includes this specific information. The lack of easy access to this information constrains the ability of teaching staff to effectively intervene.

One explanation for the existence of this gap is the limitations of the SET approach to institutional e-learning systems. The tree-based practice of logical decomposition results in distinct tasks – such as the management of student demographic and enrolment data (Peoplesoft), and the practice of online learning (Moodle) – being supported by different information systems with different data models and owned by different organisational units. Logical decomposition allows each of these individual systems and their owners to focus on the efficiency of their primary task. However, it comes at the cost of making it more difficult to both recognise and respond to requirements that go across the tasks (e.g. teaching). It is even more difficult when the requirement is specific to a subset of the organisation. For example, ensuring that information about the specialisation of BEdu students is evident in Moodle is only of interest to some of the staff teaching into the BEdu. Even if this barrier could be overcome, modifying the Moodle discussion forum to make this type of information more visible would be highly unlikely due to the cost, difficulty and (quite understandable) reluctance to make changes to enterprise software inherent in the established-view of technology.

To address this need the MAV add-on was modified to recognise USQ Moodle web pages that contain links to student profiles (e.g. a forum post). On recognising such a page the modified version of MAV queries a database populated using the manually provided spreadsheet described above. MAV uses that information to add to each student profile link a popup dialog that provides student information such as specialisation and campus without leaving the page. Adding different information (e.g. activity completion, GPA etc.) to this dialog can proceed without the approval of any centralised authority. The MAV server and the database run on the author’s laptop and the author has the skill to modify the database and write new code for both the MAV server and client. As such it’s an example of Podolny and Page’s (1998) distributed approach to governance. The only limitation is whether or not the necessary information can be retrieved in a format that can be easily imported into the database.

Conclusions, implications and future work

Future work will focus on continuing an on-going cycle of design-based research exploring how and with what impacts the BAD framework can be fruitfully integrated into the practice of institutional e-learning. To aid this process we are exploring how MAV, its various modifications, and descendants can be effectively developed and shared within and between institutions. As a first step, the CQU MAV code has been released on GitHub (https://github.com/damoclark/mav); development is occurring in the open and interested collaborators are welcome. A particular interest is in exploring and evaluating the use of MAV to implement scaffolding and context-sensitive conglomerations. Proposed in Jones (2012), a conglomeration seeks to enhance the affordances offered by any standard e-learning tool (e.g. a discussion forum) with a range of additional and often contextually specific information and functionality. Both uses of MAV described above are simple examples of a conglomeration. Of particular interest is whether these conglomerations can be used to explore whether Goodyear’s (2009) idea that “research-based evidence and the fruits of successful teaching experience can be embodied in the resources that teachers use at design time” can be extended to institutional e-learning tools.

Perhaps the biggest challenge to this work arises from the observation that the SET framework forms the foundation for current institutional practice and that the SET and BAD frameworks are largely incommensurable. At CQU, MAV has benefited from recognition and support of senior management; yet, it still challenges the assumptions of those operating solely through the SET framework. The incommensurable nature of the SET and BAD frameworks imply that any attempts to fruitfully merge the two will need to deal with existing, and sometimes strongly held assumptions and mindsets. For example, rather than require the IT division to formally approve and develop all applications of ICT, their focus should perhaps turn (at least in part) to enabling and encouraging “ways to make work-arounds easier for users to create, document and share” (Koopman & Hoffman, 2003, p. 74) through organisational “settings, and systems … arranged so that invention and prototyping by end-users can flourish” (Ciborra, 1992, p. 305). Similarly, rather than academic staff development focusing on ensuring that the appropriate knowledge is embedded in the heads of teaching staff (e.g. formal teaching qualifications), there should be a shift to a focus on ensuring that the appropriate knowledge is embedded within the network of actors – both people and artefacts – distributed within and perhaps outside the institution. Rather than accept “the over-hyped, pre-configured digital products and practices that are being imported continually into university settings” (Selwyn, 2013, p. 3), perhaps universities should instead recognise that “a genuine grassroots interest needs to be developed in the co-creation of alternative educational technologies. In short, mass participation is needed in the development of digital technology for university educators by university educators” (p. 3).

Biggs (2012) conceptualises the job of a teacher as being responsible for creating a learning context in which “all students are more likely to use the higher order learning processes which ‘academic’ students use spontaneously” (p. 39). If this perspective is taken one step back, then it is the responsibility of a university to create an institutional context in which all teaching staff are more likely to create the type of learning context which ‘good’ teachers create spontaneously. The on-going existence of the e-learning reality/rhetoric chasm suggests many universities are yet to achieve this goal. This paper has argued that this is due in part to the institutional implementation of e-learning being based on a limited SET of theoretical conceptions. The paper has compared the SET framework with the BAD framework and argued that the BAD framework provides a more promising theoretical foundation for bridging this chasm. It has illustrated the strengths and weaknesses of these two frameworks through a description of the origins and on-going use of the Moodle Activity Viewer (MAV) at two institutions. The suggestion here is not that institutions should see the BAD framework as a replacement for the SET framework, but rather that they should engage in some bricolage and explore how contextually appropriate mixtures of both frameworks can help bridge their e-learning reality/rhetoric chasm. Perhaps universities need to break a little BAD?

References

Albion, P. (2014). From Creation to Curation: Evolution of an Authentic ‘Assessment for Learning’ Task. In M. Searson & M. Ochoa (Eds.), Society for Information Technology & Teacher Education International Conference (pp. 1160-1168). Chesapeake, VA: AACE.

Biggs, J. (2012). What the student does: teaching for enhanced learning. Higher Education Research & Development, 31(1), 39-55. doi:10.1080/07294360.2012.642839

Büscher, M., Gill, S., Mogensen, P., & Shapiro, D. (2001). Landscapes of practice: bricolage as a method for situated design. Computer Supported Cooperative Work, 10(1), 1-28.

Ciborra, C. (1992). From thinking to tinkering: The grassroots of strategic information systems. The Information Society, 8(4), 297-309.

CQUniversity. (2011). CQUniversity Annual Report 2010 (p. 136). Rockhampton.

CQUniversity. (2012). CQUniversity Annual Report 2011 (p. 84). Rockhampton.

Dai, H. J., Tsai, W. C., Tsai, R. T. H., & Hsu, W. L. (2011). Enhancing search results with semantic annotation using augmented browsing. IJCAI Proceedings – International Joint Conference on Artificial Intelligence, 22(3), 2418-2423.

Dawson, S., Bakharia, A., Lockyer, L., & Heathcote, E. (2011). “Seeing” networks: visualising and evaluating student learning networks Final Report 2011. Canberra: Australian Learning and Teaching Council.

Dede, C. (2008). Theoretical perspectives influencing the use of information technology in teaching and learning. In J. Voogt & G. Knezek (Eds.), International Handbook of Information Technology in Primary and Secondary Education (pp. 43-62). New York: Springer.

Dimitriadis, Y., & Goodyear, P. (2013). Forward-oriented design for learning: illustrating the approach. Research in Learning Technology, 21, 1-13. Retrieved from http://www.researchinlearningtechnology.net/index.php/rlt/article/view/20290

Downes, S. (2011). “Connectivism” and Connective Knowledge. Retrieved from http://www.huffingtonpost.com/stephen-downes/connectivism-and-connecti_b_804653.html

Dron, J. (2013). Soft is hard and hard is easy: learning technologies and social media. Form@ Re-Open Journal per La Formazione in Rete, 13(1), 32-43. Retrieved from http://fupress.net/index.php/formare/article/view/12613

Geoghegan, W. (1994). Whatever happened to instructional technology? Paper presented at the 22nd Annual Conference of The International Business Schools Computing Association. Baltimore, MD.

Goodyear, P. (2009). Teaching, technology and educational design: The architecture of productive learning environments (pp. 1-37). Sydney. Retrieved from http://www.olt.gov.au/system/files/resources/Goodyear%2C P ALTC Fellowship report 2010.pdf

Goodyear, P., Carvalho, L., & Dohn, N. B. (2014). Design for networked learning: framing relations between participants’ activities and the physical setting. In S. Bayne, M. de Laat, T. Ryberg, & C. Sinclair (Eds.), Ninth International Conference on Networked Learning 2014 (pp. 137-144). Edinburgh, Scotland. Retrieved from http://www.networkedlearningconference.org.uk/abstracts/pdf/goodyear.pdf

Groom, J., & Lamb, B. (2014). Reclaiming innovation. EDUCAUSE Review, 1-12. Retrieved from http://www.educause.edu/visuals/shared/er/extras/2014/ReclaimingInnovation/default.html

Hannon, J. (2013). Incommensurate practices: sociomaterial entanglements of learning technology implementation. Journal of Computer Assisted Learning, 29(2), 168-178. doi:10.1111/j.1365-2729.2012.00480.x

Holt, D., Palmer, S., Munro, J., Solomonides, I., Gosper, M., Hicks, M., … Hollenbeck, R. (2013). Leading the quality management of online learning environments in Australian higher education. Australasian Journal of Educational Technology, 29(3), 387-402. Retrieved from http://www.ascilite.org.au/ajet/submission/index.php/AJET/article/view/84

Inglis, A. (2007). Approaches taken by Australian universities to documenting institutional e-learning strategies. In R. J. Atkinson, C. McBeath, S.K. Soong, & C. Cheers (Eds.), ICT: Providing Choices for Learners and Learning. Proceedings ASCILITE Singapore 2007 (pp. 419-427). Retrieved from http://www.ascilite.org.au/conferences/singapore07/procs/inglis.pdf

Johnson, L., Adams Becker, S., Cummins, M., & Estrada, V. (2014). 2014 NMC Technology Outlook for Australian Tertiary Education: A Horizon Project Regional Report. Austin, Texas. Retrieved from http://www.nmc.org/publications/2014-technology-outlook-au

Johri, A. (2011). The socio-materiality of learning practices and implications for the field of learning technology. Research in Learning Technology, 19(3), 207-217. Retrieved from http://researchinlearningtechnology.net/coaction/index.php/rlt/article/view/17110

Jones, D. (2012). The life and death of Webfuse: principles for learning and leading into the future. In M. Brown, M. Hartnett, & T. Stewart (Eds.), Future challenges, sustainable futures. Proceedings ascilite Wellington 2012 (pp. 414-423). Wellington, NZ.

Jones, D., Luck, J., McConachie, J., & Danaher, P. A. (2005). The teleological brake on ICTs in open and distance learning. In 17th Biennial Conference of the Open and Distance Learning Association of Australia. Adelaide.

Kay, A. (1984). Computer Software. Scientific American, 251(3), 53-59.

Kezar, A. (2001). Understanding and Facilitating Organizational Change in the 21st Century: Recent Research and Conceptualizations. ASHE-ERIC Higher Education Report, 28(4).

Kirkwood, A., & Price, L. (2013). Technology-enhanced learning and teaching in higher education: what is “enhanced” and how do we know? A critical literature review. Learning, Media and Technology, (August), 1-31. doi:10.1080/17439884.2013.770404

Koopman, P., & Hoffman, R. (2003). Work-arounds, make-work and kludges. Intelligent Systems, IEEE, 18(6), 70-75.

Lane, K. (2014). The University of API (p. 28). Retrieved from http://university.apievangelist.com/white-paper.html

Laxon, A. (2013, September 14). Exams go online for university students. The New Zealand Herald.

Lockyer, L., Heathcote, E., & Dawson, S. (2013). Informing Pedagogical Action: Aligning Learning Analytics With Learning Design. American Behavioral Scientist, 57(10), 1439-1459. doi:10.1177/0002764213479367

McKenney, S., & Reeves, T. C. (2013). Systematic Review of Design-Based Research Progress: Is a Little Knowledge a Dangerous Thing? Educational Researcher, 42(2), 97-100. doi:10.3102/0013189X12463781

OECD. (2005). E-Learning in Tertiary Education: Where do we stand? (p. 289). Paris, France: Centre for Educational Research and Innovation, Organisation for Economic Co-operation and Development. Retrieved from http://new.sourceoecd.org/education/9264009205

Podolny, J., & Page, K. (1998). Network forms of organization. Annual Review of Sociology, 24, 57-76.

Rahman, N., & Dron, J. (2012). Challenges and opportunities for learning analytics when formal teaching meets social spaces. In 2nd International Conference on Learning Analytics and Knowledge (pp. 54-58). Vancouver, British Columbia: ACM Press. doi:10.1145/2330601.2330619

Reid, I. C. (2009). The contradictory managerialism of university quality assurance. Journal of Education Policy, 24(5), 575-593. doi:10.1080/02680930903131242

Robey, D., Ross, W., & Boudreau, M.-C. (2002). Learning to implement enterprise systems: An exploratory study of the dialectics of change. Journal of Management Information Systems, 19(1), 17-46.

Rushkoff, D. (2010). Program or be programmed: Ten commands for a digital age. New York: OR Books.

Salmon, G. (2005). Flying not flapping: a strategic framework for e-learning and pedagogical innovation in higher education institutions. ALT-J, Research in Learning Technology, 13(3), 201-218.

Scribner, J. (2005). The problems of practice: Bricolage as a metaphor for teachers’ work and learning. Alberta Journal of Educational Research, 51(4), 295-310. Retrieved from http://ajer.journalhosting.ucalgary.ca/ajer/index.php/ajer/article/view/587

Selwyn, N. (2008). From state‐of‐the‐art to state‐of‐the‐actual? Introduction to a special issue. Technology, Pedagogy and Education, 17(2), 83-87. doi:10.1080/14759390802098573

Selwyn, N. (2012). Social media in higher education. The Europa World of Learning. Retrieved from http://www.educationarena.com/pdf/sample/sample-essay-selwyn.pdf

Selwyn, N. (2013). Digital technologies in universities: problems posing as solutions? Learning, Media and Technology, 38(1), 1-3. doi:10.1080/17439884.2013.759965

Siemens, G. (2008). What is the unique idea in Connectivism? Retrieved July 13, 2014, from http://www.connectivism.ca/?p=116

Sturgess, P., & Nouwens, F. (2004). Evaluation of online learning management systems. Turkish Online Journal of Distance Education, 5(3). Retrieved from http://tojde.anadolu.edu.tr/tojde15/articles/sturgess.htm

Truex, D., Baskerville, R., & Travis, J. (2000). Amethodical systems development: the deferred meaning of systems development methods. Accounting Management and Information Technologies, 10, 53-79.

USQ. (2012). University of Southern Queensland 2011 Annual Report. Toowoomba. doi:10.1037/e543872012-001

Visscher-Voerman, I., & Gustafson, K. (2004). Paradigms in the theory and practice of education and training design. Educational Technology Research and Development, 52(2), 69-89.

Wang, F., & Hannafin, M. (2005). Design-Based Research and Technology-Enhanced Learning Environments. Educational Technology Research and Development, 53(4), 5-23.

Weimer, M. (2007). Intriguing connections but not with the past. International Journal for Academic Development, 12(1), 5-8.

Zellweger, F. (2005). Strategic Management of Educational Technology: The Importance of Leadership and Management. Riga, Latvia.

Staff need to be using the same tools they use to teach to also learn

The title of this post is from a presentation by someone at a University responsible for the institutional e-learning systems. It doesn’t matter which university because I imagine it’s a line that has been used at quite a few of them. It does matter that I think it’s completely wrong-headed and illustrates perfectly the problem with institutional e-learning systems and the processes and people that support them.

They are designed to ensure people use the provided systems, rather than what’s best for learning.

Image: “philosophy” by erix!, on Flickr (Creative Commons Attribution 2.0 Generic)

They’ll be better at the LMS if we use the LMS to support them

The idea is that any staff development that occurs should be done via the LMS and other institutional e-learning systems. The benefit is that learning through these tools not only addresses a learning need, it also provides teachers with experience from the perspective of a student.

Who learns with an LMS?

What would happen if I ran a survey asking people what tools they use to learn every day?

I’d imagine tools like Google, Twitter, Diigo, Pinterest etc would be near the top. I don’t imagine an LMS would be anywhere near the top.

It’s a focus on the selected tool (hammer), not on learning (the egg)

The problem with this statement

Staff need to be using the same tools they use to teach to also learn

is that it reflects the mindset that what’s best for learning is using the tools that have already been adopted by the institution. Those tools are the starting point.

What’s not the starting point are the tools people are already using, or the tools that are better for learning. Especially for the time when they stop studying at the institution. This connects to my recent post about the failure of institutional eportfolios.

Another example is getting help with Moodle. Moodle is the LMS used by the institution for which I work. When I want to learn something related to Moodle I use Google, which invariably takes me to either the main Moodle site or some of the good quality Moodle-related resources shared on the websites of other institutions (e.g. UNSW). It is my understanding that I will never find any of the Moodle how-to resources created by my current institution because they reside in a Moodle instance that isn’t searchable by Google. An example of how the focus is on the tool, not on how people actually learn.

Another example is past experience when talking about BIM. BIM is essentially a tool to enable the use of individual student blogs. But whenever central L&T folk at a Moodle institution hear blogs, their first question is something like, “Did you know that Moodle has blogs built-in?”.

If all you have is a hammer….

The network challenge to the LMS mindset

It’s been an intellectually draining few days with a range of visitors to the institution talking about a range of different topics. It started on Monday with George Siemens giving a talk titled “The future of the University: Learning in distributed networks” (video and audio from the talk available here). A key point – at least one I remember – was that the structure of universities follows the structure of information and the current trend is toward distributed networks. On Tuesday, Mark Drechsler ran a session on Extending Moodle using LTI which raises some interesting questions about how the LMS mindset will handle the challenge of a distributed network mindset.

LMS != distributed network

The LMS is an integrated, enterprise system sourced from a single vendor. The institution as a whole decides which of the available systems it will use and then implements it on a server. The students and staff of that institution must now make use of that system. Changes to that system are controlled by a governance structure that may or may not approve the addition or removal of functionality. Only a designated IT group (outsourced or local) is actually able to make the change. The “network” has a single node.

The typical mindset encouraged by an LMS when designing learning is not “what is the best way to engage student learning?” It’s “what is the best way to engage student learning within the constraints of the functionality currently provided by the LMS?” I wrote more about the limitations of this model in Jones (2012) and almost incessantly over the last few years. Chapter 2 of my PhD thesis covers a fair bit of this argument.

Over recent years most institutions have realised that a single-node network consisting of the LMS isn’t sufficient for their needs. The network has gained a few new nodes such as a lecture capture system, a content repository, an eportfolio system and a range of others. However, this “network” of services isn’t really a distributed network in that it’s still only institution-approved processes that can add to the network. I, as an academic, or one of my students, can’t decide we’d like to add a service that is integrated into this institutional network.

Sure, we can use standard hyperlinks to link off to Google docs or any one of the huge array of external services that are out there. An extreme example is my kludge for using BIM this year, where I’m hosting a version of BIM on my laptop because for various reasons (including many of my own making) BIM couldn’t get installed into the institutional version of Moodle in time.

The trouble is that these kludges are not members of the distributed learning systems network associated with the institution. The most obvious indicator of this is the amount of manual work I need to engage in to get information about students from the institutional system into my local install of BIM and then to get information out of my local install of BIM back into the institutional ecosystem.

To have seamless integration into the institutional LMS network requires going through the institutional governance structure. Now there are good reasons for this, but many of them arise from the problem of the LMS not being a network. Some examples include

  • a “bad” addition to the LMS could bring the system down for everyone;

    If the LMS were a network, then this wouldn’t happen. The additions would be on another node so that if the addition was “bad” only that node would be impacted. If nodes could be added by individuals, then only that individual’s applications would be impacted.

  • not enough people are going to use the addition;

    To make it worthwhile to integrate something into the system, there has to be the likelihood that a large number of people are going to use it. Otherwise it’s not worth the effort. The cost of adding something to an integrated system is high. With a network approach the cost of adding a new feature should be low enough to make it economical for only one person to use it.

  • who’s going to help people use the new addition;

    Since a large number of people have to be able to use the system, this raises the question of who is going to support those people. In a network approach, there isn’t this need. In fact, I may decide I don’t want other academics using the service I’ve added.

  • the inertia problem;

    The other major impact of this high cost of integrating a tool into the LMS is inertia. The cost of making changes and the cost of negative impacts mean great care must be taken with changes. This means that rapid on-going improvement is difficult, leading to inertia. Small-scale improvements suffer from a starvation problem.

  • the trust problem;

    Since it’s a high cost, high risk situation then only a very limited group of people (i.e. central IT) are allowed to make changes and only after approval of another limited group of people (the governance folk).

  • vanilla implementation.

    All of the above leads to vanilla implementations. It’s too difficult to manage the above, so let’s implement the system as is. I’ve heard stories of institutions moving away from more flexible systems (e.g. Moodle) back toward more constrained commercial systems because it removes what element of choice there is. If there’s no choice, then there’s no need for complex discussions. It’s easier to be vanilla.

The LTI Challenge

The Learning Tools Interoperability (LTI) standard, or more precisely its integration into various LMS, offers a challenge to this LMS mindset. LTI offers the possibility – at least for some – of turning all this into more of a network than an integrated system. The following will illustrate what I mean. What I wonder is how well the existing governance structures around institutional LMS – with their non-distributed-network mindset – will respond to this possibility.

Will they

  1. Recognise it as a significant advantage and engage in exploring how they can effectively encourage and support this shift?
  2. Shut it down because it doesn’t match the LMS mindset?

BIM and LTI

In the very near future, BIM will be installed into the institutional Moodle install for use by others. I have always feared this step because – due to the reasons expressed above – once BIM is installed I will not be able to modify it quickly.

LTI apparently offers a solution to this via the following approach (a rough sketch of the signing that underpins the final step appears after the list)

  1. I set up a version of Moodle on one of the freely available hosted services.

    This would be my install of Moodle, equivalent to what I run on my laptop. No-one else would rely on this version. I could make changes to it without affecting anyone. It’s a separate node in the network relied upon by my course. I can install a version of BIM on it and modify it to my heart’s content, confident that no-one else will be impacted by changes.

  2. Install the Moodle LTI Provider module on my version of Moodle.
  3. Set up a course on my version of Moodle, create a BIM activity and add it to the LTI provider module.

    This allows any other LTI enabled system to connect to and use this BIM activity as if it were running within that system, when it is actually running on my version of Moodle. Of course, this is only possible when they have the appropriate URL and secret.

  4. Go to the institutional version of Moodle and the course in which my students are enrolled and add an “External Tool” (the Moodle name for an LTI consumer) that connects to BIM running on my version of Moodle.

    From the student (and other staff) perspective, using this version of BIM would essentially look the same as using the version of BIM on the institutional Moodle.
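
For the technically curious, the following is a minimal sketch of what the institutional Moodle (the LTI consumer) does when it launches the external BIM activity: it signs the launch parameters with the shared secret using OAuth 1.0 HMAC-SHA1, and the provider only accepts the launch if it can recompute the same signature. The URL, consumer key, secret and LTI parameter values here are placeholders, not real configuration.

    #!/usr/bin/perl
    # Sketch of the OAuth 1.0 signing that underpins an LTI 1.x launch.
    # URL, consumer key and shared secret are placeholders.
    use strict;
    use warnings;
    use URI::Escape qw(uri_escape);
    use Digest::HMAC_SHA1 qw(hmac_sha1);
    use MIME::Base64 qw(encode_base64);

    # RFC 3986 percent-encoding, as required by OAuth 1.0
    sub enc { uri_escape( $_[0], '^A-Za-z0-9\-._~' ) }

    my $launch_url = 'https://my-moodle.example.com/local/ltiprovider/tool.php';
    my $secret     = 'shared-secret';

    my %params = (
        oauth_consumer_key     => 'institutional-moodle',
        oauth_nonce            => time . $$,
        oauth_timestamp        => time,
        oauth_signature_method => 'HMAC-SHA1',
        oauth_version          => '1.0',
        lti_message_type       => 'basic-lti-launch-request',
        lti_version            => 'LTI-1p0',
        resource_link_id       => 'bim-activity-1',
        user_id                => 'student-1234',
        roles                  => 'Learner',
    );

    # Base string: method & encoded URL & encoded, sorted parameter string
    my $param_string = join '&',
        map { enc($_) . '=' . enc( $params{$_} ) } sort keys %params;
    my $base_string = join '&', 'POST', enc($launch_url), enc($param_string);

    # Sign with the shared secret (an LTI launch has no token secret)
    $params{oauth_signature} =
        encode_base64( hmac_sha1( $base_string, enc($secret) . '&' ), '' );

    # %params is now POSTed to $launch_url as an ordinary HTML form; the
    # provider recomputes the signature and rejects the launch on mismatch.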

LTI allows the institutional LMS to become a network. A network that I can add nodes to which are actually part of the network in terms of sharing information easily. It’s a network where I control the node I added, meaning it no longer suffers from the constraints of the institutional LMS.

The downsides and the likely institutional response

This is not an approach currently within the reach of many academics. It’s not an approach required by many academics. But that’s the beauty of a network over an integrated system: you don’t need to be constrained by the lowest common denominator. Different requirements can be addressed differently.

In terms of technical support, there would be none. i.e. you couldn’t expect the institutional helpdesk to be able to help diagnose problems with my Moodle install. I would have to take on the role of troubleshooting and ensure that the students, if they have problems, aren’t asking the helpdesk.

Perhaps more difficult are questions around student data. I got in trouble last year for using a Google spreadsheet to organise students into groups due to students entering their information onto a system not owned by the institution (even though the student email system is outsourced to Google). I imagine having some student information within a BIM activity on an externally hosted server that hasn’t been officially vetted and approved by the institution would be seen as problematic. In fact, I seem to recollect a suggestion that we should not be using any old Web 2.0 tool in our teaching without clearing it first with the institutional IT folk.

Which brings me back to my questions above. Will the organisational folk

  1. Recognise the LTI-enabled network capability as a significant advantage and engage in exploring how they can effectively encourage and support this shift?
  2. Shut LTI down (or at least restrict it) because it doesn’t match the LMS mindset?

    How long before the LTI consumer module in the institutional LMS is turned off?

LTI seems to continue what I see as the inexorable trend to a more networked approach, or, as framed earlier, enabling the best-of-breed approach to developing these systems. LTI enables the loose coupling of systems. Interesting times ahead.

References

Jones, D. (2012). The life and death of Webfuse : principles for learning and leading into the future. In M. Brown, M. Hartnett, & T. Stewart (Eds.), Future challenges, sustainable futures. Proceedings ascilite Wellington 2012 (pp. 414–423). Wellington, NZ.

Identifying and filling some TPACK holes

The following post started over the weekend. I’m adding this little preface as a result of the wasted hours I spent yesterday battling badly designed systems and the subsequent stories I’ve heard from others today. One of those stories revolved around how shrinking available time and poorly designed systems are driving one academic to make a change to her course that she knows is pedagogically inappropriate, but which is necessary due to the constraints of these systems.

And today (after a comment from Mark Brown in his Moodlemoot’AU 2013 keynote last week) I came across this blog post from Larry Cuban titled “Blaming Doctors and Teachers for Underuse of High-tech tools”. It includes the following quote

For many doctors, IT-designed digital record-keeping is a Rube Goldberg designed system.

which sums up nicely my perspective of the systems I’ve just had to deal with.

Cuban’s post finishes with three suggested reasons why he thinks doctors and teachers get blamed for resisting technology. Personally, I think he’s missed the impact of “enterprise” IT projects, including

  • Can’t make the boss look bad.

    Increasingly, IT projects around e-learning have become “enterprise”, i.e. big. For big projects, the best-practice manual requires that the project be visibly led by someone in the upper echelons of senior management. When large IT projects fail to deliver the goods, you can’t make this senior leader look bad. So someone else has to be blamed.

  • The upgrade boat.

    When you implement a large IT project, it has to evolve and change. Most large systems – including open source systems like Moodle – do this via a vendor-driven upgrade process. So every year or so the system will be upgraded. An organisation can’t fall behind versions of a system, because eventually they are no longer supported. So, significant resources have to be invested in regularly upgrading the system. Those resources contribute to the inertia of change. You can’t change the system to suit local requirements because all the resources are invested in the upgrade boat. Plus, if you did make a change, then you’d miss the boat.

  • The technology dip.

    The upgrade boat creates another problem: the technology dip. Underwood and Dillon (2011) describe the technology dip as a dip in educational outcomes that arises after the introduction of technological change. As teachers and students grapple with the changes in technology, they have less time and energy to expend on learning and teaching. When an upgrade boat comes every 12 months, the technology dip becomes a regular part of life.

The weekend start to this post

Back from Moodlemoot’AU 2013 and time to finalise results and prepare course sites for next semester. Both are due by Monday. The argument from my presentation at the Moot was that the presence of “TPACK holes” (or misalignment) causes problems. The following is a slide from the talk which illustrates the point.

[Slide 14 from the presentation]

I’d be surprised if anyone thought this was an earth breaking insight. It’s kind of obvious. If this was the case then I wouldn’t expect institutional e-learning to be replete with examples of this. The following is an attempt to document some of the TPACK holes I’m experiencing in the tasks I have to complete this weekend. It’s also an example of recording the gap outlined in this post.

Those who haven’t submitted

Of the 300+ students in my course there are some that have had extensions, but haven’t submitted their final assignment. They are most likely failing the course. I’d like to contact them and double check that all is ok. I’m not alone in this; I know most people do it. All of my assignments are submitted via an online submission system, but there is no direct support in this system for this task.

The assignment system will give me a spreadsheet of those who haven’t submitted. But it doesn’t provide an email address for those students, nor does it connect with other information about the students, for example, those who have dropped the course or have failed other core requirements. Focusing on those students with extensions works around part of that problem, but I still have to get the email addresses.
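
If the email addresses can be exported from another system (e.g. a class list), the gap can be bridged with a few lines of Perl. The following is a sketch only; the file names and column positions are hypothetical and would need checking against the real exports.

    #!/usr/bin/perl
    # Sketch: add email addresses to the "not yet submitted" spreadsheet
    # by joining it against a class list exported from another system.
    # File names and column positions are hypothetical.
    use strict;
    use warnings;
    use Text::CSV;

    my $csv = Text::CSV->new( { binary => 1, auto_diag => 1 } );

    # Build student number -> email from the class list
    # (number assumed in column 0, email in column 3)
    my %email;
    open my $class, '<', 'class_list.csv' or die $!;
    while ( my $row = $csv->getline($class) ) {
        $email{ $row->[0] } = $row->[3];
    }
    close $class;

    # Print each non-submitter with their email appended
    open my $missing, '<', 'not_submitted.csv' or die $!;
    while ( my $row = $csv->getline($missing) ) {
        my ( $number, $name ) = @{$row}[ 0, 1 ];
        print join( ',', $number, $name, $email{$number} // 'NO EMAIL FOUND' ), "\n";
    }
    close $missing;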

Warning markers about late submissions

The markers for the course have done a stellar job, but there are still a few late assignments to arrive. In thanking the markers I want to warn them of the assignments still to come, but even with fewer than 10 assignments outstanding this is more difficult than it sounds for the following reasons

  • The online assignment submission system treats “not yet submitted” assignments as different from submitted assignments, and it is only for submitted assignments that you can allocate students to markers. You can’t allocate before submission.
  • The online assignment submission system doesn’t know about all the different types of students. e.g. overseas students studying with a university partner are listed as “Toowoomba, web” by the system. I have to go check the student records system (or some other system) to determine the answer.
  • The single sign-on for the student records system doesn’t work with the Chrome browser (at least in my context) and I have to open up Safari to get into the student records system.

Contacting students in a course

I’d like to send a welcome message to students in a course prior to the Moodle site being made available.

The institution’s version of Peoplesoft provides such a notify method (working in Chrome, not Safari) but doesn’t allow the attachement of any files to the notification.

I can copy the email addresses of students from that Peoplesoft system, but Peoplesoft uses commas to separate the email addresses, meaning I can’t copy and paste the list into the Outlook client (which expects semi-colons as the separator).
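
The separator mismatch, at least, is trivially scriptable. Assuming the Peoplesoft list has been pasted into a file, a Perl one-liner does the conversion:

    # Convert the comma-separated Peoplesoft list into the
    # semicolon-separated form Outlook expects
    perl -pe 's/\s*,\s*/;/g' peoplesoft_emails.txt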

Changing dates in a study schedule

Paint me as old school, but I believe there remains value to students in having a study schedule that maps out the semester. A Moodle site home page doesn’t cut it. I’ve got a reasonable one set up for the course from last semester, but a new semester means new dates. So I’m having to manually change the dates, something that could be automated.
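
For example, something like the following could shift every date in the schedule forward by a fixed number of weeks. A sketch only: it assumes the dates all appear in a single “4 March 2013” style format, and the offset is hypothetical.

    #!/usr/bin/perl
    # Sketch: shift every date in a study schedule forward by a fixed
    # number of weeks. Assumes dates appear as e.g. "4 March 2013";
    # the format and the offset are hypothetical.
    use strict;
    use warnings;
    use Time::Piece;
    use Time::Seconds;

    my $weeks = 26;    # offset between the old semester and the new one

    while ( my $line = <> ) {
        $line =~ s{(\d{1,2} \w+ \d{4})}{
            my $t = Time::Piece->strptime( $1, '%d %B %Y' );
            ( $t + ONE_WEEK * $weeks )->strftime('%d %B %Y');
        }ge;
        print $line;
    }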

Processing final results

As someone in charge of a course, part of my responsibility is to check the overall results for students, ensure that it’s all okay as per formal policy, and then put them through the formal approval processes. The trouble is that none of the systems provided by the institution support this. I can’t see all student results in a single system in a form that allows me to examine and analyse them.

All the results will eventually end up in a Peoplesoft gradebook system, in which the results are broken up based on the students’ “mode” of learning, i.e. one category for each of the 3 different campuses and another for online students. But I cannot actually get any information out of it in a usable form; it is only available in a range of different web pages. If the Peoplesoft web interface were halfway decent this wouldn’t be such a problem, but dealing with it is incredibly time consuming, especially in a course with 300+ students.

I need to get all the information into a spreadsheet so that I can examine, compare etc. I think I’m going to need

  • Student name, number and email address (just in case contact is needed), campus/online.

    Traditionally, this will come from Peoplesoft. Some of it might be in EASE (the online assignment submission system).

  • Mark for each assignment and their Professional Experience.

    The assignment marks are in EASE. The PE mark is in the Moodle gradebook.

    There is a question as to whether or not the Moodle gradebook will have an indication of whether they have an exemption for PE.

EASE provides the following spreadsheets, and you’re not the only one to wonder why these two spreadsheets weren’t combined into one.

  1. name, number, submission details, grades, marker.
  2. name, number, campus, mode, extension date, status.

Moodle gradebook will provide a spreadsheet with

  • firstname, surname, number…..email address, Professional Experience result

Looks like the process will have to be

  1. Download Moodle gradebook spreadsheet.
  2. Download EASE spreadsheet #1 and #2 (see above) for Assignment 1.
  3. Download EASE spreadsheet #1 and #2 (see above) for Assignment 2.
  4. Download EASE spreadsheet #1 and #2 (see above) for Assignment 3.
  5. Bring these together into a spreadsheet.

    One option would be to use Excel. Another simpler method (for me) might be to use Perl. I know Perl much better than Excel and frankly it will be more automated with Perl than it would be with Excel (I believe).

    Perl script to extract data from the CSV files, stick it in a database for safe keeping and then generate an Excel spreadsheet with all the information? Perhaps. (A rough sketch of the merge appears below.)

Final spreadsheet might be

  • Student number, name, email address, campus/mode,
  • marker would be good, but there’ll be different markers for each assignment.
  • a1 mark, a2 mark, a3 mark, PE mark, total, grade
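
The following is a rough sketch of what that Perl merge might look like. A sketch only: every file name and column position is hypothetical and would need checking against the real EASE and Moodle exports, and a production version would use proper CSV quoting on output.

    #!/usr/bin/perl
    # Sketch: merge the Moodle gradebook export and the per-assignment
    # EASE spreadsheets into one results spreadsheet, keyed on student
    # number. File names and column positions are hypothetical.
    use strict;
    use warnings;
    use Text::CSV;

    my $csv = Text::CSV->new( { binary => 1, auto_diag => 1 } );
    my %student;    # student number -> accumulated details

    sub slurp {
        my ( $file, $handler ) = @_;
        open my $fh, '<', $file or die "$file: $!";
        $csv->getline($fh);    # skip the header row
        while ( my $row = $csv->getline($fh) ) { $handler->($row) }
        close $fh;
    }

    # Moodle gradebook: number, name, email, PE result (columns assumed)
    slurp 'gradebook.csv', sub {
        my ($r) = @_;
        @{ $student{ $r->[0] } }{qw(name email pe)} = @{$r}[ 1, 2, 3 ];
    };

    # Per assignment: EASE spreadsheet #2 has campus/mode, #1 has the mark
    for my $a ( 1 .. 3 ) {
        slurp "ease_a${a}_details.csv",
            sub { $student{ $_[0][0] }{mode} = $_[0][2] };
        slurp "ease_a${a}_grades.csv",
            sub { $student{ $_[0][0] }{"a$a"} = $_[0][3] };
    }

    # One row per student, ready for eyeballing in Excel
    print join( ',', qw(number name email mode a1 a2 a3 pe total) ), "\n";
    for my $n ( sort keys %student ) {
        my $s     = $student{$n};
        my $total = 0;
        $total += $s->{$_} // 0 for qw(a1 a2 a3 pe);
        my @cols = map { $s->{$_} // '' } qw(name email mode a1 a2 a3 pe);
        print join( ',', $n, @cols, $total ), "\n";
    }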

An obvious extension would be to highlight students who are in situations that I need to look more closely at.

A further extension would be to have the Perl script do comparisons of marking between markers, results between campuses, generate statistics etc.

Also, it would probably be better to have the Perl script download the spreadsheets directly, rather than doing it manually. But that’s a process I haven’t tried yet. Actually, over the last week I did try this, but the institution uses a single sign-on method that involves JavaScript, which breaks the traditional Perl approaches. There is a potential method involving Selenium, but that’s apparently a little flaky – a task for later.

Slumming it with Peoplesoft

I got the spreadsheet process working. It helped a lot. But in the end I still had to deal with the Peoplesoft gradebook and the kludged connection between it and the online assignment submission system. Even though the spreadsheet helped reduce a bit of work, it didn’t cover all of the significant cracks. In the absence of better systems, these are cracks that have to be covered over by human beings completing tasks for which evolution has poorly equipped them. Lots of repetitive, manual copying of information from one computer application to another. Not a process destined to be completed without human error.

Documenting the gap between "state of the art" and "state of the actual"

Came across Perrotta and Evans (2013) in my morning random ramblings through my PLN and was particularly struck by this

a rising awareness of a gap between ‘state of art’ experimental studies on learning and technology and the ‘state of the actual’ (Selwyn, 2011), that is, the messy realities of schooling where compromise, pragmatism and politics take centre stage, and where the technological transformation promised by enthusiasts over the last three decades failed to materialize. (pp. 261-262)

For my own selfish reasons (i.e. I work within the “state of the actual”), my research interests are in understanding and figuring out how to improve the “state of the actual”. My Moodlemoot’AU 2013 presentation next week is an attempt to establish the rationale for and map out one set of interventions I’m hoping to undertake. This post is an attempt to make explicit some on-going thinking about this and related work. In particular, I’m trying to come up with a research project to document the “state of the actual” with the aim of trying to figure out how to intervene, but also, hopefully, to inform policy makers.

Some questions I need to think about

  1. What literature do I need to look at that documents the reality of working with current generation university information systems?
  2. What’s a good research method – especially data capture – to get the detail of the state of the actual?

Why this is important

A few observations can and have been made about the quality of institutional learning and teaching, especially university e-learning. These are

  1. It’s not that good.

    This is the core problem. It needs to be better.

  2. The current practices being adopted to remedy these problems aren’t working.

    Doing more of the same isn’t going to fix this problem. It’s time to look elsewhere.

  3. The workload for teaching staff is high and increasing.

    This is my personal problem, but I also think it’s indicative of a broader issue, i.e. many of the current practices aimed at improving quality assume a “blame the teacher” approach. Sure, there are some pretty poor academics, but most of the teachers I know are trying the best they can.

My proposition

Good TPACK == Good learning and teaching

Good quality learning and teaching requires good TPACK – Technological Pedagogical and Content Knowledge. The quote I use in the abstract for the Moodlemoot presentation offers a good summary (emphasis added)

Quality teaching requires developing a nuanced understanding of the complex relationships between technology, content, and pedagogy, and using this understanding to develop appropriate, context-specific strategies and representations. Productive technology integration in teaching needs to consider all three issues not in isolation, but rather within the complex relationships in the system defined by the three key elements. (Mishra & Koehler, 2006, p. 1029)

For some people the above is obvious. You can’t have quality teaching without a nuanced and context-specific understanding of the complex relationships between technology, pedagogy and content. Beyond this simple statement there are a lot of different perspectives on the nature of this understanding, the nature of the three components and their relationships. For now, I’m not getting engaged in those. Instead, I’m simply arguing that

the better the quality of the TPACK, then the better the quality of the learning and teaching

Knowledge is not found (just) in the teacher

The current organisational responses to improving the quality of learning and teaching are almost entirely focused on increasing the level of TPACK held by the teacher. This is done by a variety of means

  1. Require formal teaching qualifications for all teachers.

    Because obviously, if you have a teaching qualification then you have better TPACK and the quality of your teaching will be better. Which is obviously why the online courses taught by folk from the Education disciplines are the best.

  2. Running training sessions introducing new tools.
  3. “Scaffolding” staff by requiring them to follow minimum standards and other policies.

This is where I quote Loveless (2011)

Our theoretical understandings of pedagogy have developed beyond Shulman’s early characteristics of teacher knowledge as static and located in the individual. They now incorporate understandings of the construction of knowledge through distributed cognition, design, interaction, integration, context, complexity, dialogue, conversation, concepts and relationships. (p. 304)

Better tools == Better TPACK == Better quality learning and teaching

TPACK isn’t just found in the head of the academic. It’s found in the tools, the interaction etc they engage in. The problem that interests me is that the quality of the tools etc found in the “state of the actual” within university e-learning is incredibly bad. Especially in terms of helping the generation of TPACK.

Norman (1993) argues “that technology can make us smart” (p. 3) through our ability to create artifacts that expand our capabilities. Due, however, to the “machine-centered view of the design of machines and, for that matter, the understanding of people” (Norman, 1993, p. 9) our artifacts, rather than aiding cognition, “more often interferes and confuses than aids and clarifies” (p. 9). Without appropriately designed artifacts “human beings perform poorly or cannot perform at all” (Dickelman, 1995, p. 24). Norman (1993) identifies the long history of tool/artifact making amongst human beings and suggests that

The technology of artifacts is essential for the growth in human knowledge and mental capabilities (p. 5)

Documenting the “state of the actual”

So, one of the questions I’m interested in is just how well are the current artifacts being used in institutional e-learning helping “the growth in human knowledge and mental capabilities”?

For a long time, I’ve talked with a range of people about a research project that would aim to capture the experiences of those at the coal face to answer this question. The hoops I am having to currently jump through in trying to bring together a raft of disparate information systems to finalise results for 300+ students has really got me thinking about this process.

As a first step, I’m thinking I’ll take the time to document this process. Not to mention my next task which is the creation/modification of three course sites for the courses I’m teaching next semester. The combination of both these tasks at the same time could be quite revealing.

References

Mishra, P., & Koehler, M. (2006). Technological pedagogical content knowledge: A framework for teacher knowledge. Teachers College Record, 108(6), 1017–1054.

Norman, D. A. (1993). Things that make us smart: defending human attributes in the age of the machine. Reading, MA: Addison-Wesley.

Perrotta, C., & Evans, M. A. (2013). Instructional design or school politics? A discussion of “orchestration” in TEL research. Journal of Computer Assisted Learning, 29(3), 260–269. doi:10.1111/j.1365-2729.2012.00494.x

Does institutional e-learning have a TPACK problem?

The following is the first attempt to expand upon an idea that’s been bubbling along for the last few weeks. It arises from a combination of recent experiences, including

  • Working through the institutional processes to get BIM installed on the institutional Moodle.
  • Using BIM in my own teaching and the resulting changes (and maybe something along these lines) that will be made.
  • Talking about TPACK to students in the ICTs and Pedagogy course.
  • On-going observations of what passes for institutional e-learning within some Australian Universities (and which is likely fairly common across the sector).

Note: the focus here is on the practice of e-learning within Universities and the institutionally provided systems and processes.

The problem(s)

A couple of problems that spark this thinking

  1. How people and institutions identify the tools available/required.
  2. How the tools provide appropriate support, especially pedagogical, to the people using them.

Which tools?

One of the questions I was asked to address in my presentation requesting that BIM be installed on the institutional LMS was something along the lines of “Why would other people want to use this tool? We can’t install a tool just for one person.”

Well, one answer was that a quick Google search of the institution’s course specifications revealed 30+ 2012 courses using reflective journals of varying types. BIM is a tool designed primarily to support the use of reflective learning journals by students via individual blogs.

I was quite surprised to find 30+ courses already doing this. This generated some questions

  • How are they managing the workload and the limitations of traditional approaches?
    The origins of BIM go back to when I took over a course that was using a reflective journal assessment task, implemented by students keeping journals as Word documents and submitting them at the end of semester. There were problems.
  • I wonder how many of the IT and central L&T people knew that there were 30+ courses already using this approach?
    In this context, it would be quite easy to draw the conclusion that the IT and central L&T folk are there to help people with the existing tools and to keep their own workload to a minimum by controlling what new tools are added to the mix, rather than look for opportunities for innovation within the institution. Which leads to…
  • I wonder why the institution wasn’t already actively looking for tools to help these folk?
    Especially given that reflective learning journals (diaries etc.) are “recognised as a significant tool in promoting active learning” (Thorpe, 2004, p. 327) while at the same time they are also “demanding and time-consuming for both students and educators” (Thorpe, 2004, p. 339)

A combination of those questions/factors seems to contribute to recent findings that e-learning technologies (Tynan et al., 2012)

have increased both the number and type of teaching tasks undertaken by staff, with a consequent increase in their work hours

and (Bright, 2012, n.p.)

Lecturers who move into the online learning environment often discover that the workload involved not only changes, but can be overwhelming as they cope with using digital technologies. Questions arise, given the dissatisfaction of lecturers with lowering morale and increasing workload, whether future expansion of this teaching component in tertiary institutions is sustainable.

How the tools provide support?

One of the problems I’m facing with BIM is that the pedagogical approach I originally used and which drove the design of BIM is not the pedagogical approach I’m using now. The features and functions in BIM currently, don’t match what I want to do pedagogically. I’m lucky, I can change the system. But not many folk are in this boat.

And this isn’t the first time we’ve faced this problem. Reaburn et al (2009) used BIM’s predecessor in a “work integrated learning” course where the students were working in a professional context. They got by, but this pedagogical approach had yet again different requirements.

TPACK

“Technological Pedagogical Content Knowledge (TPACK) is a framework that identifies the knowledge teachers need to teach effectively with technology” (Koehler, n.d.), i.e. it identifies a range of different types of knowledge that are useful, perhaps required, for the effective use of technology in teaching and learning. While it has its detractors, I believe that TPACK can provide a useful lens for examining the problems with institutional e-learning and perhaps identify some suggestions for how institutional e-learning (and e-learning tools) can be better designed.

To start, TPACK proposes that successful e-learning (I’m going to use that as short-hand for the use of technology in learning and teaching) requires the following types of knowledge (with my very brief descriptions)

  • Technological knowledge (TK) – how to use technologies.
  • Pedagogical knowledge (PK) – how to teach.
  • Content knowledge (CK) – knowledge of what the students are meant to be learning.

Within institutional e-learning you can see this separation in organisational structures and also the assumptions of some of the folk involved. i.e.

  • Technological knowledge – is housed in the institutional IT division.
  • Pedagogical knowledge – is housed in the central L&T division.
  • Content knowledge – academics and faculties are the silos of content knowledge.

Obviously there is overlap. Most academics have some form of TK, PK and CK. But when it comes to the source of expertise around TK, it’s the IT division, and so on.

TPACK proposes that there are combinations of these three types of knowledge that offer important insights

  • Pedagogical Content Knowledge (PCK) – the idea that certain types of content is best taught using certain types of pedagogy.
  • Technological Pedagogical Knowledge (TPK) – the knowledge that certain types of technologies work well with certain types of pedagogy (e.g. teaching critical analysis using a calculator probably isn’t a good combination)
  • Technological Content Knowledge (TCK) – that content areas draw on technologies in unique ways (e.g. mathematicians use certain types of technologies that aren’t used by historians)

Lastly, TPACK suggests that there is a type of knowledge in which all of the above is combined, and when used effectively this is where the best examples of e-learning arise: TPACK – Technological, Pedagogical and Content Knowledge.

The problem I see is that institutional e-learning, its tools, its processes and its organisational structures are getting in the way of allowing the generation and application of effective TPACK.

Some Implications

Running out of time, so some quick implications that I take from the above and want to explore some more. These are going to be framed mostly around my work with BIM, but there are potentially some implications for broader institutional e-learning systems which I’ll briefly touch on.

BIM’s evolution is best when I’m teaching with it

Assuming that I have the time, the best insights for the future development of BIM have arisen when I’m using BIM in my teaching. When I’m able to apply the TPACK that I have to identify ways the tool can help me. When I’m not using BIM in my teaching I don’t have the same experience.

At this very moment, however, I’m only really able to apply this TPACK because I’m running BIM on my laptop (and using a bit of data munging to bridge the gap between it and the institutional systems). This means I am able to modify BIM in response to a need, test it out and use it almost immediately. When/if I begin using BIM on the institutional version of Moodle, I won’t have this ability. At best, I might hope for the opportunity for a new version of BIM to be installed at the end of the semester.

There are reasons why institutional systems have these constraints. The problem is that these constraints get in the way of generating and applying TPACK and thus limit the quality of the institutional e-learning.

I also wonder if there’s a connection here and the adoption of Web 2.0 and other non-institutional tools by academics. i.e. do they find it easier to generate and apply TPACK to these external tools because they don’t have the same problems and constraints as the institutional e-learning tools?

BIM and multiple pedagogies

Arising from the above point is the recognition that BIM needs to be able to support multiple pedagogical approaches. i.e. the PK around reflective learning journals reveals many different pedagogical approaches. If BIM as an e-learning tool is going to effectively support these pedagogies then new forms of TPK need to be produced. i.e. BIM itself needs to know about and support the different reflective journal pedagogies.

There’s a lot of talk about how various systems are designed to support a particular pedagogical approach. However, I wonder just how many of these systems actually provide real TPK assistance? For example, the design of Moodle “is guided by a ‘social constructionist pedagogy'” but it’s pretty easy to see examples of how it’s not used that way when course sites are designed.

There are a range of reasons for this. Not the least of which is that the focus of teachers and academics creating course sites is often focused on more pragmatic tasks. But part of the problem is also, I propose, the level of TPK provided by Moodle. The level of technological support it provides for people to recognise, understand and apply that pedagogical approach.

There’s a two-edged sword here. Providing more TPK may help people adopt this approach, but it can also close off opportunities for different approaches. Scaffolding can quickly become a cage. Too much focus on a particular approach also closes off opportunities for adoption.

But on the other hand, the limited amount of specific TPK provided by the e-learning tools is, I propose, a major contributing factor to the workload issues around institutional e-learning. The tools aren’t providing enough direct support for what teachers want to achieve. So the people have to bridge the gap. They have to do more work.

BIM and distributed cognition – generating TPACK

One of the concerns raised in the committee that had to approve the adoption of BIM was about the level of support. How is the institution going to support academics who want to use BIM? The assumption being that we can’t provide the tool without some level of support and training.

This is a valid concern. But I believe there are two assumptions underpinning it which I’d like to question and for which I’d like to explore alternatives. The assumptions are

  1. You can’t learn how to use the tool, simply by using the tool.
    If you buy a good computer/console game, you don’t need to read the instructions. Stick it in and play. The games are designed to scaffold your entry into the game. I haven’t yet met an institutional e-learning tool that can claim the same. Some of this arises, I believe, from the limited amount of TPK most tools provide. But it’s also how the tool is designed. How can BIM be designed to support this?
  2. The introduction of anything new has to be accompanied by professional development and other forms of formal support.
    This arises from the previous point but it also connected to a previous post titled “Professional development is created, not provided”. In part, this is because the IT folk and the central L&T folk see their job as (and some have their effectiveness measured by) providing professional development sessions or the number of helpdesk calls they process.

It’s difficult to generate TPACK

I believe that the current practices, processes and tools used by institutional e-learning systems make it difficult for the individuals and organisations involved to develop TPACK. Consequently the quality of institutional e-learning suffers. This contributes to the poor quality of most institutional e-learning, the limited adoption of features beyond content distribution and forums, and is part of the reason behind the perceptions of increasing workload around e-learning.

If this is the case, then can it be addressed? How?

References

Bright, S. (2012). eLearning lecturer workload: working smarter or working harder? In M. Brown, M. Hartnett, & T. Stewart (Eds.), ASCILITE’2012. Wellington, NZ.

Reaburn, P., Muldoon, N., & Bookallil, C. (2009). Blended spaces, work based learning and constructive alignment: Impacts on student engagement. In Same places, different spaces. Proceedings ascilite Auckland 2009 (pp. 820–831). Auckland, NZ.

Thorpe, K. (2004). Reflective learning journals: From concept to practice. Reflective Practice: International and Multidisciplinary Perspectives, 5(3), 327–343.

Tynan, B., Ryan, Y., Hinton, L., & Mills, L. (2012). Out of hours: Final report of the project e-Teaching leadership: planning and implementing a benefits-oriented costs model for technology-enhanced learning. Strawberry Hills, Australia.

The life and death of Webfuse: What's wrong with industrial e-learning and how to fix it

The following is a collection of presentation resources (i.e. the slides) for the ASCILITE’2012 presentation of this paper. The paper and presentation are a summary of the outcomes of my PhD work. The thesis goes into much more detail.

Abstract

Drawing on the 14-year life and death of an integrated online learning environment used by tens of thousands of people, this paper argues that many of the principles and practices underpinning industrial e-learning – the current dominant institutional model – are inappropriate. The paper illustrates how industrial e-learning can limit outcomes of tertiary e-learning and limits the abilities of universities to respond to uncertainty and effectively explore the future of learning. It limits their ability to learn. The paper proposes one alternate set of successfully implemented principles and practices as being more appropriate for institutions seeking to learn for the future and lead in a climate of change.

Slides

The slides are available on Slideshare and should show up below. These slides are the extended version, prior to the cutting required to fit within the 20 minute time limit.


The illusion we understand the past fosters overconfidence in our ability to predict the future

As mentioned in the last post I’m currently reading Thinking, Fast and Slow by Daniel Kahneman. The title of this post comes from this quote from that book

The illusion that we understand the past fosters overconfidence in our ability to predict the future

Earlier in the same paragraph Kahneman writes

As Nassim Taleb pointed out in The Black Swan, our tendency to construct and believe coherent narratives of the past makes it difficult for us to accept the limits of our forecasting ability.

Later in the same chapter, Kahneman writes (my emphasis)

The main point of this chapter is not that people who attempt to predict the future make many errors; that goes without saying. The first lesson is that errors of prediction are inevitable because the world is unpredictable. The second is that high subjective confidence is not to be trusted as an indicator of accuracy (low confidence could be more informative).

The connection to e-learning and the LMS

I read this section of Kahneman’s book while at lunch. On returning I found that @sthcrft had written about “The post-LMS non-apocalypse” in part influenced by @timklapdor’s post from earlier this week Sit down we need to talk about the LMS.

In @sthcrft’s post she tries (and by her own admission somewhat fails) to describe what a “post-LMS” world might look like. She’s being asked to predict the future, which, given the above (and a range of other perspectives), is a silly thing to try to do. And this is my main problem with the current top-down, “management science” driven approach being adopted by universities. An approach that is predicated on the assumption that you can predict the future. But, before moving on to management, let’s just focus on the management of IT systems and the LMS.

(About to paraphrase some of my own comments on @sthcrft’s post).

I have a problem with the LMS as a product model. It has serious flaws. But in seeking to replace the LMS, most universities are continuing to use the same process model. The plan-driven process model that underpins all enterprise information systems procurement/development assumes you can predict the future. In this case, that you can predict all of the features that are ever going to be required by all of the potential users of the system.
Not going to happen.

Even though I like @timklapdor’s idea of the environment as a much better product model, it will suffer from exactly the same problems if it is developed/implemented without changing the process model and all that it brings with it. The focus on the plan-driven process model ends up producing hierarchical organisations with the wrong types of people/roles and the wrong types of inter-connections between them to deal with the “post-LMS” world.

This is one of the reasons why I don’t think the adoption of open source LMSes (e.g. Moodle) is going to produce any significant changes in the practice of e-learning.

This is the point I will try to make in a 2012 ASCILITE paper. In that same paper, I’ll briefly touch on an alternative. For the longer version of that story – made significantly inaccessible through the requirements of academic writing – see my thesis.

Management and narratives

On a related note, a conversation with a colleague today reinforced the idea that one of the primary tasks taken on by senior managers (e.g. Vice-Chancellors) of a certain type is the creation of coherent narratives. Creating a positive narrative of the institution, its direction and its accomplishments seems to have become a necessary tool for demonstrating that the senior manager has made a positive contribution to the institution. It’s a narrative destined to please all stakeholders, perhaps especially the senior manager’s set of potential employers.

I wonder about the impact that this increasing emphasis on a coherent institutional narrative has on the belief of those within organisations that you can and should predict the future? I wonder if this type of narrative is preventing organisations from preparing to fulfil Alan Kay’s maxim

The best way to predict the future is to make it

Perhaps organisations with certain types of leaders are so busy focused on predicting the future that they can’t actually make it?
Management is all about constructing coherent narratives.

Lessons for the meta-level of networked learning?

This semester I’m teaching EDU8117, Networked and Global Learning, one of the Masters level courses here at USQ. It’s been an interesting experience because I’m essentially supporting the design – a very detailed “constructive alignment” design – prepared by someone else. The following is a belated start of my plan to engage in the course at some level like a student. The requirement was to use one of a few provided quotes attempting to define either networked learning or global learning and link it to personal experience. A first step in developing a research article on the topic.

Networked learning

As a nascent/closet connectivist, networked learning is the term in this pair that is of most interest – though both are increasingly relevant to my current practice. All three of the quotes around networked learning spoke to various aspects of my experience; however, the Bonzo and Parchoma (2010, p. 912) quote really resonated, especially this part (my emphasis added)

that social media is a collection of ideas about community, openness, flexibility, collaboration, transformation and it is all user-centred. If education and educational institutions can understand and adopt these principles, perhaps there is a chance for significant change in how we teach and learn in formal and informal settings. The challenge is to discover how to facilitate this change.

At the moment I have yet to read the rest of the article – it is somewhat ironic that I am focusing on networked learning whilst struggling with limited network access due to the limitations of a local telecommunications company – so I will have to assume that Bonzo and Parchoma are putting this collection of ideas from social media forward as important ideas for networked learning.

What strikes me about this quote is that I think the majority of what passes for institutional support for networked learning – in my context I am talking about Australian universities (though I believe there are significant similarities in universities across the world) – is failing, or at least struggling mightily, “to discover how to facilitate this change”.

This perspective comes from two main sources:

  1. my PhD thesis; and,
    The thesis argued that how universities tend to implement e-learning is completely wrong for the nature of e-learning and formulated an alternate design theory. Interestingly, a primary difference between the “wrong” (how they are doing it now) and the “right” (my design theory) way is how well they match (or don’t) Bonzo and Parchoma’s (2010) collection of ideas from social media.
  2. my recent experience starting work as a teaching academic at a new university.
    In my prior roles – through most of the noughties – I was in an environment where I had significant technical knowledge and access. This meant that when I taught I was able to engage in an awful lot of bricolage 1. In the main because the “LMS” I was using was one that I had designed to be user-centred, flexible and open, and I still had the access to make changes.

    On arriving at my new institution, I am now just a normal academic user of the institutional LMS, which means I’m stuck with what I’m given. What I’ve been given – the “LMS” and other systems – are missing great swathes of functionality and there is no way I can engage in bricolage to transform an aspect of the system into something more useful or interesting.

Meta-networked learning

Which brings me to a way in which I’m interested in extending this “definition” of networked learning to a community. Typically networked learning – at least within an institutional setting – is focused on how the students and the teachers are engaging in networked learning. More specifically, how they are using the LMS and associated institutional systems (because you can get in trouble for using something different). Whilst this level of interest in networked learning is important and something I need to engage in as a teaching academic within an institution, I feel what I can do at this level is significantly constrained because the meta-level of networked learning is broken.

I’m defining the meta-level of networked learning as how the network of people (teaching staff, support staff, management, students), communities, technologies, policies, and processes within an institution learns about how to implement networked learning. That is, how the network of all these elements works (or doesn’t) together to enable the other level of networked learning.

Perhaps the major problem I see with the meta-level of networked learning is that it isn’t thought of as a learning process. Instead, it is widely seen as the roll-out of an institutional, enterprise software system under the auspices of some senior member of staff. A conception that does not allow much space for being about “community, openness, flexibility, collaboration, transformation and it is all user-centred” (Bonzo and Parchoma, 2010, p. 912). Consequently, I wonder: “If education and educational institutions can understand and adopt these principles” (p. 912) and apply them to the meta-level of networked learning, then “perhaps there is a chance for significant change in how we teach and learn in formal and informal settings” (p. 912). As always, “The challenge is to discover how to facilitate this change” (p. 912). Beyond that, I wonder what impact such a change might have on the experience of the institution’s learners, teachers and other staff. Indeed, what impact it might have on the institution itself.

References

Bonzo, J., & Parchoma, G. (2010). The Paradox of Social Media and Higher Education Institutions. Networked Learning: Seventh International Conference (pp. 912–918). Retrieved from http://lancaster.academia.edu/GaleParchoma/Papers/301035/The_Paradox_of_Social_Media_and_Higher_Education_Institutions

Hovorka, D., & Germonprez, M. (2009). Tinkering, tailoring and bricolage: Implications for theories of design. AMCIS’2009. Retrieved from http://aisel.aisnet.org/amcis2009/488

1 Hovorka and Germonprez (2009) cite Gabriel (2002) and Ciborra (2002) as describing bricolage as “as a way of describing modes of use characterized by tinkering, improvisation, and the resulting serendipitous, unexpected outcomes”.

Reducing meaningless freedom and a Mahara feature request

Note: An update to this post included at the end.

I’m currently finalising results for a course with 250+ students spread across multiple campuses and online. The final large assignment – worth 70% of the final mark – requires that students create a portfolio (are people still using the term “eportfolio”?) in Mahara and submit it via the institutional assignment submission system.

Due to the nature of the portfolio content and concerns about the privacy of school students (it’s a pre-service teacher course), the portfolio cannot be opened up to everyone. Not to mention the fact that many of the students retain a fear that someone is going to copy their work. So, students have to create this multi-page, multi-resource portfolio in Mahara and make sure that certain people can access it.

With 250+ students it was always going to be the case that a decent handful would have problems, even with reasonable instructions. And it is this decent handful that is creating extra workload for the teaching staff. Additional workload that could be avoided if a principle we formulated in early work around assignment submission – reduce meaningless freedom – was applied to Mahara.

The following describes that principle and outlines a feature request for Mahara that might help.

Reduce meaningless freedom

Online assignment submission was one of the first applications of online learning we explored back in the mid-1990s in our courses with large numbers of distance education students (Jones and Jamieson, 1997). The early systems were not that well designed and increased the workload on the marker (sorry Kieren). However, they did help with a range of improvements over the traditional physical assignment submission process.

From this experience, we developed the principle of “reduce meaningless freedom”. Here’s how it was described in Jones (1999)

An important lesson from the on-going development of online assignment submission is to reduce the amount of “meaningless freedom” available to students. Early systems relied on students submitting assignments via email attachments. The freedom to choose file formats, mail programs and types of attachments significantly increased the amount of work required to mark assignments. Moving to a Web-based system where student freedom is reduced to choosing which file to upload was a significant improvement.

The problem is that when marking large numbers of assignments, you want to get into a routine. You can only get into a routine if certain important aspects are the same (e.g. file formats, file names, the ability to access a Mahara portfolio). The trouble is that if there is any flexibility in a process completed by a large number of people, they will complete it in different ways (including not completing it properly).

This is not a problem that can be solved by improving the instructions or applying a checklist. With a large enough number of people, there will be people who can’t follow those instructions or ignore the checklist.

Consequently, the information system has to be designed to remove any freedom to vary from the process. It shouldn’t remove all freedom, just the freedoms that aren’t important to the outcome of the process but will increase the workload of processing. For example, we want the students to be able to express their creativity with their Mahara portfolios. We just don’t want them to have the freedom to submit a URL for a portfolio that we can’t access.

The system should remove, or at least limit, this freedom.
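
To make that concrete, here is a rough Python sketch of what removing this particular freedom might look like at submission time: the submission system simply refuses portfolio URLs that a marker would not be able to open. The host name, URL and login-redirect heuristic are assumptions for illustration, not a description of any actual institutional system.

```python
# A minimal sketch of "reduce meaningless freedom" at submission time.
# Hypothetical: the host name and the login-redirect heuristic are
# illustrative assumptions, not any real institution's system.
import requests

ALLOWED_HOST = "mahara.example.edu.au"  # hypothetical institutional Mahara

def validate_portfolio_url(url: str) -> tuple[bool, str]:
    """Return (ok, reason), rejecting URLs markers won't be able to open."""
    if ALLOWED_HOST not in url:
        return False, "URL is not on the institutional Mahara server"
    try:
        # An unauthenticated request approximates what a marker without
        # special permissions would see.
        response = requests.get(url, timeout=10, allow_redirects=True)
    except requests.RequestException as exc:
        return False, f"URL could not be fetched: {exc}"
    if response.status_code != 200:
        return False, f"URL returned HTTP {response.status_code}"
    if "login" in response.url:
        # Redirected to a login page: a secret URL was not submitted.
        return False, "URL requires login - submit a secret URL instead"
    return True, "URL is accessible"

ok, reason = validate_portfolio_url(
    "https://mahara.example.edu.au/view/view.php?t=SECRET")
print(ok, reason)
```

A submission system with a check like this would reject an inaccessible URL at the point of submission, when the student can still fix it, rather than at marking time.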

Mahara feature request

The problem here is that for people new to Mahara, it is very difficult to check who can access a complex, multi-page portfolio. There is a way to give access to such a portfolio: you create a collection, add the pages to that collection, and create a secret URL for the collection. This is the process we described to students. The trouble is that the students have to choose to follow this process, and they are free not to.

You could be hard about this and be very explicit: “If you don’t do this you will fail!”. But that doesn’t create the positive learning environment I’d like to have in my courses, and it fails to recognise that our tools should be helping us achieve our goals.

What would be useful, is if Mahara had a “show/check access” feature. Where a person creating a Mahara portfolio could submit the URL and Mahara would generate a report of who could access which components of that portfolio. It would recurse through all the Mahara links accessible from that URL and report on who could access those links.

Having this as a feature that people have to choose to use still involves some freedom. To remove that freedom a bit more, this process could run in the background and the outcome could be made visible via the Mahara interface. For example, when editing a page that contains links to other parts of Mahara, the interface could add an appropriate label explaining who can access those links.
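
For what it’s worth, here is a back-of-the-envelope sketch of the sort of crawl such a “show/check access” report might perform. It is purely illustrative – the URL, the same-host restriction and the anonymous-fetch heuristic are my assumptions, not Mahara’s actual API or permission model.

```python
# An illustrative sketch of a "show/check access" report: starting from
# a portfolio URL, fetch each page as an anonymous user, collect the
# links it contains back into the same site, and recurse. A real
# implementation would restrict the crawl to the portfolio's own pages.
from urllib.parse import urljoin, urlparse
import requests
from bs4 import BeautifulSoup

def access_report(start_url: str) -> dict[str, str]:
    """Map each reachable URL to 'accessible' or a failure reason."""
    host = urlparse(start_url).netloc
    report, queue, seen = {}, [start_url], {start_url}
    while queue:
        url = queue.pop()
        try:
            resp = requests.get(url, timeout=10)
        except requests.RequestException as exc:
            report[url] = f"unreachable: {exc}"
            continue
        if resp.status_code != 200 or "login" in resp.url:
            report[url] = "not visible to anonymous users"
            continue
        report[url] = "accessible"
        # Follow only links back into the same Mahara instance.
        for a in BeautifulSoup(resp.text, "html.parser").find_all("a", href=True):
            link = urljoin(url, a["href"])
            if urlparse(link).netloc == host and link not in seen:
                seen.add(link)
                queue.append(link)
    return report

report = access_report("https://mahara.example.edu.au/view/view.php?t=SECRET")
for url, status in report.items():
    print(status, "-", url)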

An update

Thanks to @icampus21.com it was revealed to me that Mahara already has this feature: its share facility does show what is accessible. Kudos to the Mahara developers. Now, I don’t follow @icampus21.com on Twitter and I’m pretty sure they don’t follow me. So this is a nice bit of learning thanks to Twitter, hashtags and @icampus21.com.

This raises the question of why I wasn’t already aware of this. After all, I’m responsible for this course and somewhat computer literate. A significant part of the answer has to be the limitations of my approach to learning about Mahara. But other contributing factors would include that this feature is neither explicitly obvious from using Mahara nor covered in the preparation/resources provided by my current institution.

One perspective is that there is too much freedom in the way the institution allows the use of Mahara in courses. It should have removed the freedom that let me get this far into a semester without being aware of this feature. But perhaps it can also be addressed by making the feature more explicit/obvious within Mahara?

Then there is the whole robustness versus resilience perspective as argued by Dave Snowden.

References

Jones, D., & Jamieson, B. (1997). Three Generations of Online Assignment Management. In R. Kevill, R. Oliver, & R. Phillips (Eds.), ASCILITE’1997 (pp. 317-323). Perth, Australia.

Jones, D. (1999). Solving some problems with university education: Part II. Proceedings of AUSWEB’99. Ballina, Australia.

People and e-learning – limitations and an alternative

So the last of three sections examining the limitations of industrial e-learning and suggesting an alternative. Time to write the conclusion, read the paper over again and cut it down to size.

People

The characteristics of the product and process of industrial e-learning (e.g. the focus on long periods of stable use and the importance of efficient use of the chosen LMS) are directly reinforced by, and directly impact, the people and roles involved with tertiary e-learning. This section briefly examines just four examples of this impact, including:

  1. The negative impact of organizational hierarchies on communication and knowledge sharing.
    The logical decomposition inherent in teleological design creates numerous, often significant, organizational boundaries between the people involved with e-learning. Such boundaries are seen as inhibiting the ability to integrate knowledge across the organization. The following comments from Rossi and Luck (2011, p. 68) partially illustrate this problem:

    During training sessions … several people made suggestions and raised issues with the structure and use of Moodle. As these suggestions and issues were not recorded and the trainers did not feed them back to the programmers … This resulted in frustration for academic staff when teaching with Moodle for the first time as the problems were not fixed before teaching started.

  2. Chinese whispers.
    Within an appropriate governance structure, the need for changes to an LMS would typically flow up from the users to a central committee made up of senior leaders from the faculties, Information Technology and central learning and teaching, normally with some representation from teaching staff and students. The length of the communication chain means the original need becomes like a game of Chinese Whispers as it is interpreted through the experiences and biases of those involved, leading to this impression reported by Rossi and Luck (2011, p. 69)

    The longer the communication chain, the less likely it was that academic users’ concerns would be communicated correctly to the people who could fix the problems.

    The cost of traversing this chain of communication means it is typically not worth the effort of raising small-scale changes.

    Not to mention killing creativity – a link on which just came through my Twitter feed thanks to @kyliebudge.

  3. Mixed purposes.
    Logical decomposition also encourages different organizational units to focus on their part of the problem and lose sight of the whole picture. An IT division evaluated on its ability to minimize cost and maximize availability is not likely to want to support technologies in which it has limited expertise. This is one explanation for why the leader of an IT division would direct the division’s representatives on an LMS selection panel to ensure that the panel selected an LMS implemented in Java. Or why the latest version of the Oracle DBMS – the DBMS supported by the IT division – would be chosen to support a new Moodle installation, even though it hadn’t been tested with Moodle and best-practice advice was to avoid Oracle. A decision that led to weeks at the start of the “go live” term where Moodle was largely unavailable.
  4. The perils of senior leadership.
    Having the support and engagement of a senior leader at an institution is often seen as a critical success factor for an LMS implementation. But when the successful completion of the project is tied to the leader’s progression within the leadership hierarchy, it can create a situation where the project will be deemed a success regardless of the outcome.

As an alternative, the Webfuse system relied on a multi-skilled, integrated development and support team. This meant that the small team was responsible for training, helpdesk support, and systems development. The helpdesk person handling a user’s problem was typically also a Webfuse developer who was empowered to make small changes without formal governance approval. Behrens (2009, p. 127) quotes a manager in CQU’s IT division describing the types of changes made to Webfuse as “not even on the priority radar” under traditional IT management techniques. The developers were also located within the faculty, so they interacted with academic staff in the corridors and the staff room. This context created an approach to the support of an e-learning system with all the hallmarks of social constructivism, situated cognition, and communities of practice. The type of collaborative and supportive environment identified by Tickle et al (2009), in which academics learn through attempts to solve genuine educational problems rather than being shown how to adapt their needs to the constraints of the LMS.

References

Behrens, S. (2009). Shadow systems: the good, the bad and the ugly. Communications of the ACM, 52(2), 124-129.

Rossi, D., & Luck, J. (2011). Wrestling, wrangling and reaping: An exploration of educational practice and the transference of academic knowledge and skill in online learning contexts. Studies in Learning, Evaluation, Innovation and Development, 8(1), 60-75. Retrieved from http://www.sleid.cqu.edu.au/include/getdoc.php?id=1122&article=391&mode=pdf

Tickle, K., Muldoon, N., & Tennent, B. (2009). Moodle and the institutional repositioning of learning and teaching at CQUniversity. Auckland, NZ. Retrieved from http://www.ascilite.org.au/conferences/auckland09/procs/tickle.pdf

Introducing the alternative

The last couple of posts have attempted to (in the confines of an #ascilite12 paper) summarise some constraints with the dominant product and process models used in industrial e-learning and suggest an alternative. The following – which probably should have been posted first – describes how and where this alternative comes from.

As all this is meant to go into an academic paper, the following starts with a discussion of “research methods” before moving on to describe some of the reasons why this alternative approach might have some merit.

As with the prior posts, this is all still first draft stuff.

Research methods and limitations

From the initial stages of its design the Webfuse system was intended to be a vehicle for both practice (it hosted over 3000 course sites from 1997-2009) and research. Underpinning the evolution of Webfuse was an on-going process of cyclical action research that sought to continually improve the system through insights from theory and observation of use. This commenced in 1996 and continued, at varying levels of intensity, through to 2009 when the system ceased directly supporting e-learning. This work has contributed in varying ways to over 25 peer-reviewed publications. Webfuse has also been studied by other researchers investigating institutional adoption of e-learning systems (Danaher, Luck, & McConachie, 2005) and shadow systems in the context of ERP implementation (Behrens, 2009; Behrens & Sedera, 2004).

Starting in 2001, the design of Webfuse became the focus of a PhD thesis (Jones, 2011) that made two contributions towards understanding e-learning implementation within universities: the Ps Framework and an Information Systems Design Theory (ISDT). The Ps Framework arose out of an analysis of existing e-learning implementation practices and as a tool to enable the comparison of alternate approaches (Jones, Vallack, & Fitzgerald-Hood, 2008). The formulated ISDT – An ISDT for emergent university e-learning systems – offers guidance for e-learning implementation that brings a number of proposed advantages over industrial e-learning. These contributions to knowledge arose from an action research process that combined broad theoretical knowledge – the principles of the ISDT are supported by insights from a range of kernel theories – with empirical evidence arising from the design and support of a successful e-learning system. Rather than present the complete ISDT – due primarily to space constraints – this paper focuses on how three important components of e-learning can be re-conceptualised through the principles of the ISDT.

The ISDT – and the sub-set of principles presented in this paper – seeks to provide theoretical guidance about how to develop and support information systems for university e-learning that are capable of responding to the dominant characteristics (diversity, uncertainty and rapid change) of university e-learning. This is achieved through a combination of product (principles of form and function) and process (principles of implementation) that focuses on developing a deep and evolving understanding of the context and use of e-learning. It is the ability to use that understanding to make rapid changes to the system that ultimately encourages and enables adoption and on-going adaptation. The ISDT suggests that any instantiation built following it will support e-learning in a way that: is specific to the institutional context; results in greater quality, quantity and variety of adoption; and improves the differentiation and competitive advantage of the host institution.

As with all research, the work described within this paper has a number of limitations that should be kept in mind when considering its findings. Through its use of action research, this work suffers, to varying degrees, from the same limitations as all action research. Baskerville and Wood-Harper (1996) identify these limitations as: (1) lack of impartiality of the researcher; (2) lack of discipline; (3) being mistaken for consulting; and (4) context-dependency leading to difficulty in generalizing findings. These limitations have been addressed within this study through a variety of means including: a history of peer-reviewed publications throughout the process; the use of objective data sources; the generation of theory; and an on-going process of testing. Consequently, the resulting ISDT and the principles described here have not been “proven”. That was not the aim of this work. Instead, the intent was to gather sufficient empirical and theoretical support to build and propose a coherent and useful alternative to industrial e-learning. The question of proof and further testing of the ISDT in similar and different contexts provides – as in all research aiming to generate theory – an avenue for future research.

On the value of Webfuse

This section aims to show that there is some value in considering Webfuse. It summarises the empirical support for the ISDT and the principles described here by presenting evidence that the development of Webfuse led to a range of features specific to the institution and to greater levels of adoption. It is important to note that from 1997 through 2005 Webfuse was funded and controlled by one of five faculties at CQUniversity. Webfuse did not become a system controlled by the central IT division until 2005/2006, as a result of organizational restructures. During the life-span of Webfuse, CQU adopted three different official, institutional LMS: WebCT (1999), Blackboard (2004), and Moodle (2010).

Specific to the context

During the period from 1999 through 2002 the “Webfuse faculty” saw a significant increase in the complexity of its teaching model, including the addition of numerous international campuses situated within capital cities and a doubling of student numbers, primarily through full-fee-paying overseas students. By 2002, the “Webfuse faculty” was teaching 30% of all students at the university. Due to the significant increase in the complexity of teaching in this context, a range of teaching management and support services were integrated into Webfuse, including: staff and student “portals”, an online assignment submission and management system, a results upload application, an informal review of grade system, a timetable generator, a student photo gallery, an academic misconduct database, an email merge facility, and assignment extension systems.

The value of these systems to the faculty is illustrated by this quote from the Faculty annual report for 2003 cited by Danaher, Luck & McConachie (2005, p. 39)

[t]he best thing about teaching and learning in this faculty in 2003 would be the development of technologically progressive academic information systems that provide better service to our students and staff and make our teaching more effective. Webfuse and MyInfocom development has greatly assisted staff to cope with the complexities of delivering courses across a large multi-site operation.

By 2003 the faculties not using Webfuse were actively negotiating to enable their staff to have access to these services. In 2009 alone, over 12,000 students and 1,100 staff made use of them. Even though no longer officially supported, a few of these services continued to be used by the university in the middle of 2012.

Quotes from staff using the Webfuse systems reported in various publications (Behrens, 2009; Behrens, Jamieson, Jones, & Cranston, 2005; Jones, Cranston, Behrens, & Jamieson, 2005) also provide some insights into how well Webfuse supported the specific context at CQUni.

my positive experience with other Infocom systems gives me confidence that OASIS would be no different. The systems team have a very good track record that inspires confidence

The key to easy use of OASIS is that it is not a off the shelf product that is sooooo generic that it has lost its way as a course delivery tool.

I remember talking to [a Webfuse developer] and saying how I was having these problems with uploading our final results into [the Enterprise Resource Planning (ERP) system] for the faculty. He basically said, “No problem, we can get our system to handle that”…and ‘Hey presto!’ there was this new piece of functionality added to the system … You felt really involved … You didn’t feel as though you had to jump through hoops to get something done.

Beyond context specific systems supporting the management of learning and teaching, Webfuse also included a number of context specific learning and teaching innovations. A short list of examples includes:

  • the course barometer;
    Based on an innovation (Svensson, Andersson, Gadd, & Johnsson, 1999) seen at a conference, the barometer was designed to provide students with a simple, anonymous method for providing informal, formative feedback about a course (Jones, 2002). Initially intended only for the author’s courses, the barometer became a required part of all Webfuse course sites from 2001 through 2005. In 2007/2008 the barometers were used as part of a whole-of-institution attempt to encourage formative feedback in both Webfuse and Blackboard.
  • Blog Aggregation Management (BAM); and
    BAM allowed students to create individual, externally hosted web logs (blogs) and use them as reflective journals. Students registered their external blog with BAM, which then mirrored all of the students’ blog posts on an institutional server and provided a management and marking interface for teaching staff. Created by the author for use in his own teaching in 2006, BAM was subsequently used in 26 course offerings by 2050+ students and ported to Moodle as BIM (Jones & Luck, 2009). In reviewing BAM, the ELI guide to blogging (Coghlan et al., 2007) observed:
    One of the most compelling aspects of the project was the simple way it married Web 2.0 applications with institutional systems. This approach has the potential to give institutional teaching and learning systems greater efficacy and agility by making use of the many free or inexpensive—but useful—tools like blogs proliferating on the Internet and to liberate institutional computing staff and resources for other efforts.
  • A Web 2.0 course site.
    While it looked like a normal course website, none of the functionality – including discussion, wiki, blog, portfolio and resource sharing – was implemented by Webfuse. Instead, freely available and externally hosted Web 2.0 tools and services provided all of the functionality. For example, each student had a portfolio and a weblog provided by the site http://redbubble.com. The content of the default course site was populated by using BAM to aggregate RSS feeds (generated by the external tools), which were then parsed and displayed by Javascript functions within the course site pages. Typically students and staff did not visit the default course site, as they could access all content by using a course OPML file and an appropriate reader application. (A sketch of this style of feed aggregation follows the list.)
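
The basic mechanics of this sort of aggregation are fairly simple. The following Python sketch shows the BAM-style core – pulling registered student feeds into one local collection – using the feedparser library. The student IDs, feed URLs and data shapes are invented for illustration; the real BAM obviously did much more (mirroring content, providing a marking interface, and so on).

```python
# An illustrative sketch of BAM-style feed aggregation. The registered
# feed URLs and student IDs below are hypothetical.
import feedparser

registered_feeds = {
    "s0123456": "https://student-one.example.com/rss",
    "s0654321": "https://student-two.example.com/feed",
}

def mirror_posts(feeds: dict[str, str]) -> list[dict]:
    """Collect every post from every registered student feed."""
    posts = []
    for student_id, url in feeds.items():
        for entry in feedparser.parse(url).entries:
            posts.append({
                "student": student_id,
                "title": entry.get("title", "(untitled)"),
                "link": entry.get("link", ""),
                "published": entry.get("published", ""),
            })
    # A real system would parse dates properly (entry.published_parsed)
    # and store the posts; sorting on the raw string is a crude stand-in.
    return sorted(posts, key=lambda p: p["published"], reverse=True)

for post in mirror_posts(registered_feeds):
    print(post["student"], post["published"], post["title"])
```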

Even within the constraints placed on its development, Webfuse was able to provide an array of e-learning applications that either are not present in industrial LMSes, were added to them much later than the Webfuse services, or exist there with significantly reduced functionality.

Greater levels of adoption

Encouraging staff adoption of the Webfuse system was one of the main issues raised in the original Webfuse paper (Jones & Buchanan, 1996). Difficulties in encouraging high levels of quality use of e-learning within universities have remained a theme throughout the literature. Initial use of Webfuse in 1997 and 1998 was not all that successful in achieving this goal, with only five of 60 academic staff – including the designer of Webfuse, who made 50% of all edits using the system – making any significant use of Webfuse by early 1999 (Jones & Lynch, 1999). These limitations were addressed from 1999 onwards by a range of changes to the system, how it was supported, and the organizational context. The following illustrates the success of these changes by comparing Webfuse adoption with that of the official LMS (WebCT 1999-2003/4; Blackboard 2004-2009) used primarily by the non-Webfuse faculties. It first examines the number of course sites and then feature adoption.

From 1997, Webfuse automatically created a default course site for all faculty courses by drawing on a range of existing course-related information. For the official institutional LMS, course sites were typically created on request and had to be populated by the academics. By the end of 2003 – four years after the initial introduction of WebCT as the official institutional LMS – only 15% (141) of courses from the non-Webfuse faculties had WebCT course sites. At the same time, 100% (302) of the courses from the Webfuse faculty had course sites. Because academics had to populate WebCT and Blackboard course sites, the presence of a course website doesn’t necessarily imply use. For example, Tickle et al (2009) report that 21% of the 417 Blackboard courses being migrated to Moodle in 2010 contained no documents.

Research examining the adoption of specific categories of LMS features provides a more useful insight into LMS usage. Figures 1 through 4 use the research model proposed by Malikowski, Thompson, & Theis (2007) to compare the adoption of LMS features between Webfuse (the thick continuous lines in each figure), CQUni’s version of Blackboard (the dashed lines), and the range of adoption rates found in the literature by Malikowski et al (2007) (the two dotted lines in each figure). This is done for four of the five LMS feature categories identified by Malikowski et al (2007): content transmission (Figure 1), class interaction (Figure 2), student assessment (Figure 3), and course evaluation (Figure 4).

(Click on the graphs to see large versions)

Figure 1: Adoption of content transmission features: Webfuse, Blackboard and Malikowski
Figure 2: Adoption of class interaction features: Webfuse, Blackboard and Malikowski (archives of most pre-2002 course mailing lists are missing)
Figure 3: Adoption of student assessment features: Webfuse, Blackboard and Malikowski
Figure 4: Adoption of course evaluation features: Webfuse, Blackboard and Malikowski

The Webfuse usage data included in Figures 1 through 4 only include actual feature use by academics or students. For example, although from 2001 through 2005 100% of Webfuse courses contained a course evaluation feature called the course barometer, only courses where the barometer was actually used by students are included in Figure 4. Similarly, all Webfuse default course sites contained content (either automatically added from existing data repositories or copied across from a previous term), so Figure 1 only includes data for those Webfuse course sites where teaching staff modified or added content.
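
To make the “actual use” distinction concrete, the following sketch shows the sort of calculation behind these figures: a course only counts towards a feature category if the logs show real staff or student activity, not the mere presence of the feature. The data shape is invented for illustration; it is not the actual Webfuse or Blackboard log format.

```python
# An illustrative sketch of per-category feature adoption rates.
# The rows are hypothetical: (course_id, feature_category, actual_use).
from collections import defaultdict

usage = [
    ("COIT11133", "content transmission", True),
    ("COIT11133", "class interaction", True),
    ("EDED20456", "content transmission", False),  # auto-created content only
    ("EDED20456", "course evaluation", True),
]

def adoption_rates(rows: list[tuple[str, str, bool]]) -> dict[str, float]:
    """Percentage of all courses with actual use, per feature category."""
    courses = {course for course, _, _ in rows}
    used = defaultdict(set)
    for course, category, actual in rows:
        if actual:
            used[category].add(course)
    return {cat: 100 * len(ids) / len(courses) for cat, ids in used.items()}

print(adoption_rates(usage))
# {'content transmission': 50.0, 'class interaction': 50.0, 'course evaluation': 50.0}
```

Note that if the numerator can include courses hosted in another LMS (as happened when Blackboard course sites used Webfuse features) while the denominator counts only Webfuse courses, rates above 100% become possible, as in Figures 2 and 3.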

Figures 2 and 3 indicate Webfuse adoption rates of greater than 100%. This is possible because a number of Webfuse features – including the EmailMerge and online assignment submission and management applications – were being used in course sites hosted on Blackboard. Webfuse was seen as providing services that Blackboard did not provide, or that were significantly better than what Blackboard did provide. Similarly, the spike in Webfuse course evaluation feature adoption in 2008 to 51.6% is due to a CQU wide push to improve formative feedback across all courses that relied on the Webfuse course barometer feature.

Excluding use by non-Webfuse courses and focusing on the period 2003-2006, Figures 2 and 3 show that adoption of Webfuse class interaction and student assessment features was significantly higher than for the equivalent Blackboard features at CQU. It is also significantly higher than the adoption rates found by Malikowski et al (2007) in the broader literature, and appears somewhat higher than that found amongst Semester 1, 2008 courses at the University of Western Sydney and Griffith University by Rankine et al (2009) – though it should be noted that Rankine et al (2009) used different sampling and feature categorization strategies that make this comparison tentative.

References

Behrens, S. (2009). Shadow systems: the good, the bad and the ugly. Communications of the ACM, 52(2), 124-129.

Behrens, S., Jamieson, K., Jones, D., & Cranston, M. (2005). Predicting system success using the Technology Acceptance Model: A case study. 16th Australasian Conference on Information Systems. Sydney. Retrieved from http://cgit.nutn.edu.tw:8080/cgit/PaperDL/tkw_090717140108.pdf

Behrens, S., & Sedera, W. (2004). Why do shadow systems exist after an ERP implementation? Lessons from a case study. In C.-P. Wei (Ed.), (pp. 1713-1726). Shanghai, China.

Coghlan, E., Crawford, J., Little, J., Lomas, C., Lombardi, M., Oblinger, D., & Windham, C. (2007). ELI Discovery Tool: Guide to Blogging. EDUCAUSE. Retrieved from http://www-cdn.educause.edu/eli/GuideToBlogging/13552

Danaher, P. A., Luck, J., & McConachie, J. (2005). The stories that documents tell: Changing technology options from Blackboard, Webfuse and the Content Management System at Central Queensland University. Studies in Learning, Evaluation, Innovation and Development, 2(1), 34-43.

Jones, D. (2002). Student Feedback, Anonymity, Observable Change and Course Barometers. In P. Barker & S. Rebelsky (Eds.), World Conference on Educational Multimedia, Hypermedia and Telecommunications 2002 (pp. 884-889). Denver, Colorado: AACE.

Jones, D. (2011). An Information Systems Design Theory for E-learning. Philosophy. Australian National University. Retrieved from https://djon.es/blog/research/phd-thesis/

Jones, D., & Buchanan, R. (1996). The design of an integrated online learning environment. In A. Christie, B. Vaughan, & P. James (Eds.), Making New Connections, ASCILITE’1996 (pp. 331-345). Adelaide.

Jones, D., Cranston, M., Behrens, S., & Jamieson, K. (2005). What makes ICT implementation successful: A case study of online assignment submission. Adelaide.

Jones, D., & Luck, J. (2009). Blog Aggregation Management: Reducing the Aggravation of Managing Student Blogging. In G. Siemns & C. Fulford (Eds.), World Conference on Educational Multimedia, Hypermedia and Telecommunications 2009 (pp. 398-406). Chesapeake, VA: AACE. Retrieved from http://www.editlib.org/p/31530

Jones, D., & Lynch, T. (1999). A Model for the Design of Web-based Systems that supports Adoption, Appropriation and Evolution. In Y. Deshpande & S. Murugesan (Eds.), (pp. 47-56). Los Angeles.

Jones, D., Vallack, J., & Fitzgerald-Hood, N. (2008). The Ps Framework: Mapping the landscape for the PLEs@CQUni project. Hello! Where are you in the landscape of educational technology? ASCILITE’2008. Melbourne.

Malikowski, S., Thompson, M., & Theis, J. (2007). A model for research into course management systems: bridging technology and learning theory. Journal of Educational Computing Research, 36(2), 149-173.

Rankine, L., Stevenson, L., Malfroy, J., & Ashford-Rowe, K. (2009). Benchmarking across universities: A framework for LMS analysis. Ascilite 2009. Same places, different spaces (pp. 815-819). Auckland. Retrieved from http://www.ascilite.org.au/conferences/auckland09/procs/rankine.pdf

Svensson, L., Andersson, R., Gadd, M., & Johnsson, A. (1999). Course-Barometer: Compensating for the loss of informal feedback in distance education (pp. 1612-1613). Seattle, Washington: AACE.

Tickle, K., Muldoon, N., & Tennent, B. (2009). Moodle and the institutional repositioning of learning and teaching at CQUniversity. Auckland, NZ. Retrieved from http://www.ascilite.org.au/conferences/auckland09/procs/tickle.pdf

The e-learning process – limitations and an alternative

And here’s the follow-up to the well-received “LMS Product” post. This is the second section looking at the limitations of how industrial e-learning is implemented, this time focusing on the process used. Not really happy with this one; space limitations are making it difficult to do a good job of the description.

Process

It has become a maxim of modern society that without objectives, without purpose, there can be no success; the setting and achieving of goals has become the essence of “success” (Introna, 1996). Many, if not most, universities follow, or at least profess to follow, a purpose-driven approach to setting strategic directions (Jones, Luck, McConachie, & Danaher, 2005). This is how institutional leaders demonstrate their strategic insight, their rationality and their leadership. This is not a great surprise, since such purpose-driven processes – labelled teleological processes by Introna (1996) – have dominated theory and practice to such an extent that they have become ingrained, even though the debate between the “planning school” and the “learning school” of process thought has been one of the most pervasive debates in management (Clegg, 2002).

Prior papers (Jones et al., 2005; Jones & Muldoon, 2007) have used the nine attributes of a design process formulated by Introna (1996) to argue that purpose driven processes are particularly inappropriate to the practice of tertiary e-learning. The same papers have presented and illustrated the alternative, ateleological processes. The limitations of teleological processes can be illustrated by examining Introna’s (1996) three necessary requirements for teleological design processes

  1. The system’s behaviour must be relatively stable and predictable.
    As mentioned in the previous section, stability and predictability do not sound like appropriate adjectives for e-learning, especially into the future, and especially given the popular rhetoric about organizations in the present era no longer being stable but instead continuously adapting to shifting environments that place them in a state of constantly seeking stability while never achieving it (Truex, Baskerville, & Klein, 1999).
  2. The designers must be able to manipulate the system’s behaviour directly.
    Social systems cannot be “designed” in the same way as technical systems; at best they can be indirectly influenced (Introna, 1996). Technology development and diffusion needs cooperation; however, it takes place in a competitive and conflictual atmosphere where different social groups – each with their own interpretation of the technology and the problem to be solved – are inevitably involved and seek to shape outcomes (Allen, 2000). Academics are trained not to accept propositions uncritically and subsequently cannot be expected to adopt strategies without question or adaptation (Gibbs, Habeshaw, & Yorke, 2000).
  3. The designers must be able to determine accurately the goals or criteria for success.
    The uncertain and confused arena of social behaviour and autonomous human action makes predetermination impossible (Truex, Baskerville, et al., 2000). Allen (2000) argues that change in organizational and social settings involving technology is by nature undetermined.

For example, Tickle et al (2009) offer one description of the teleological process used to transition CQUni to the Moodle LMS in 2009. One of the institutional policies introduced as part of this process was the adoption of Minimum Service Standards for course delivery (Tickle et al., 2009, p. 1047). These standards were intended to act as a starting point for “integrating learning and teaching strategies that could influence students study habits” and to “encourage academic staff to look beyond existing practices and consider the useful features of the new LMS” (Tickle et al., 2009, p. 1042). In order to assure the quality of this process, a web-based checklist was implemented in another institutional system, with the expectation that the course coordinator and moderator would actively check that the course site met the minimum standards. A senior lecturer widely recognized as a quality teacher described the process for dealing with the minimum standards checklist as

I go in and tick all the boxes, the moderator goes in and ticks all the boxes and the school secretary does the same thing. It’s just like the exam check list.

The minimum standards checklist was removed in 2011.

A teleological process is not interested in learning and changing, only in achieving the established purpose. The philosophical assumptions of teleological processes – modernism and rationality – are in direct contradiction with the views of learning meant to underpin the best learning and teaching. Rossi and Luck (2011, p. 62) describe how “[c]onstructivist views of learning pervade contemporary educational literature, represent the dominant learning theory and are frequently associated with online learning”. Wise and Quealy (2006, p. 899) argue, however, that

while a social constructivist framework may be ideal for understanding the way people learn, it is at odds not only with the implicit instructional design agenda, but also with current university elearning governance and infrastructure.

Staff development sessions become focused on helping the institution achieve efficient and effective use of the LMS, rather than quality learning and teaching. This leads to staff developers being “seen as the university’s ‘agent’” (Pettit, 2005, p. 253). There is a reason why Clegg (2002) refers to teleological approaches as the “planning school” of process thought and the alternative, ateleological approach as the “learning school”.

The ISDT abstracted from the Webfuse work includes 11 principles of implementation (i.e. process) divided into three groups. Two of the groupings refer more to people and will be covered in the next section. The remaining grouping focused explicitly on the process and was titled “An adopter-focused, emergent development process”. Webfuse achieved this by using an information systems development process based on principles of emergent development (Truex et al., 1999) and ateleological design (Introna, 1996). The Webfuse development team was employed and located within the faculty. This allowed for a much more in-depth knowledge of individual and organizational needs and an explicit focus on responding to those needs. The quote earlier in this paper about the origins of the results uploading system is indicative of this. Lastly, at its best, Webfuse was able to strike a balance between teleological and ateleological processes thanks to a Faculty Dean who recognized the significant limitations of a top-down approach.

This process, when combined with a flexible and responsive product, better enabled the Webfuse team to work with the academics and students using the system to actively modify and construct the system in response to what was learned while using it. It was an approach much more in line with a social constructivist philosophy.

References

Allen, J. (2000). Information systems as technological innovation. Information Technology & People, 13(3), 210-221.

Clegg, S. (2002). Management and organization paradoxes. Philadelphia, PA: John Benjamins Publishing.

Gibbs, G., Habeshaw, T., & Yorke, M. (2000). Institutional learning and teaching strategies in English higher education. Higher Education, 40(3), 351-372.

Introna, L. (1996). Notes on ateleological information systems development. Information Technology & People, 9(4), 20-39.

Jones, D., Luck, J., McConachie, J., & Danaher, P. A. (2005). The teleological brake on ICTs in open and distance learning. Adelaide.

Jones, D., & Muldoon, N. (2007). The teleological reason why ICTs limit choice for university learners and learning. In R. J. Atkinson, C. McBeath, S. K. A. Soong, & C. Cheers (Eds.), (pp. 450-459). Singapore. Retrieved from http://www.ascilite.org.au/conferences/singapore07/procs/jones-d.pdf

Pettit, J. (2005). Conferencing and Workshops: a blend for staff development. Education, Communication & Information, 5(3), 251-263. doi:10.1080/14636310500350505

Rossi, D., & Luck, J. (2011). Wrestling, wrangling and reaping: An exploration of educational practice and the transference of academic knowledge and skill in online learning contexts. Studies in Learning, Evaluation, Innovation and Development, 8(1), 60-75. Retrieved from http://www.sleid.cqu.edu.au/include/getdoc.php?id=1122&article=391&mode=pdf

Tickle, K., Muldoon, N., & Tennent, B. (2009). Moodle and the institutional repositioning of learning and teaching at CQUniversity. Auckland, NZ. Retrieved from http://www.ascilite.org.au/conferences/auckland09/procs/tickle.pdf

Truex, D., Baskerville, R., & Klein, H. (1999). Growing systems in emergent organizations. Communications of the ACM, 42(8), 117-123.

Wise, L., & Quealy, J. (2006). LMS Governance Project Report. Melbourne, Australia: Melbourne-Monash Collaboration in Education Technologies. Retrieved from http://www.infodiv.unimelb.edu.au/telars/talmet/melbmonash/media/LMSGovernanceFinalReport.pdf
