Assembling the heterogeneous elements for (digital) learning



Gatherers, Weavers and Augmenters: Three principles for dynamic and sustainable delivery of quality learning and teaching

Henry Cook, Steven Booten and I gave the following presentation at the THETA conference in Brisbane in April 2023.

Below you will find:

  • Summary – a few paragraphs summarising the presentation.
  • Slides – copies of the slides used.
  • Software – some of the software produced/used as part of the work.
  • References – used in the summary and the slides.
  • Abstract – the original conference abstract.

Summary

The presentation used our experience as part of a team migrating 1500+ course sites from Blackboard to Canvas to explore a broader challenge: one recently expressed in the Productivity Commission’s “Advancing Prosperity” report, with its recommendations to grow access to tertiary education while containing cost and improving quality. This challenge of simultaneously maximising cost efficiency, quality, and access (diversity & scale) is seen as a key issue for higher education (Ryan et al., 2021). It has even been labelled the “Iron Triangle” because – unless you change the circumstances and conditions – improving one indicator will almost inevitably lead to deterioration in the others (Mulder, 2013). The pandemic emergency response is the most recent example: necessarily rapid changes to access (moving from face-to-face to online) required significant costs (staff workload) to produce outcomes that are perceived to be of questionable quality.

Leading to the question we wanted to answer:

How do you stretch the iron triangle (i.e. maximise cost efficiency, quality, and accessibility)?

In the presentation, we demonstrated that the fundamental tasks (gather and weave) of an LMS migration are manual and repetitive, making it impossible to stretch the iron triangle. We illustrated why this is the case, demonstrated how we addressed this limitation, and proposed three principles for broader application. We argue that the three principles can be usefully applied beyond LMS migration to business as usual.

Gatherers and weavers – what we do

Our job is to help academic staff design, implement, and maintain quality learning tasks and environments. We suggest that the core tasks required to do this are to gather and weave disparate strands of knowledge, ways of knowing (especially various forms of design and contextual knowledge and knowing), and technologies (broadly defined). For example, a course site is the result of gathering and weaving together such disparate strands as: content knowledge (e.g. learning materials); administrative information (e.g. due dates, timetables etc); design knowledge (e.g. pedagogical, presentation, visual etc); and information & functionality from various technologies (e.g. course profiles, Echo360, various components of the LMS etc).

An LMS migration is a variation on this work. It has a larger scope (all courses) and a more focused purpose (migrate from one LMS to another), but it still involves the same core tasks of gathering and weaving. Our argument is that to maximise the cost efficiency, accessibility, and quality of this work you must do the same to these core tasks. Early in our LMS migration it was obvious that this was not the case. The presentation included a few illustrative examples; there were many more that could have been used, both from the migration and from business as usual, all illustrating the overly manual and repetitive nature of the gathering and weaving required by contemporary institutional learning environments.

Three principles for automating & augmenting gathering & weaving – what we did

Digital technology has long been seen as a key enabler for improving productivity through its ability to automate processes and augment human capabilities. Digital technology is increasingly pervasive in the learning and teaching environment, especially in the context of an LMS migration. But none of the available technologies were actively helping automate or augment gathering and weaving. The presentation included numerous examples of how we changed this. From this work we identified three principles.

  1. On-going activity focused (re-)entanglement.
    Our work was focused on high-level activities (e.g. analysis, migration, quality assurance, and course design across 100s of course sites). These activities were not supported by any single technology, hence the manual gathering and weaving. By starting small and continually responding to changes and lessons learned, we stretched the iron triangle by digitally gathering and weaving disparate component technologies into assemblages that were fit for the activities.
  2. Contextual digital augmentation.
    Little to none of the specific contextual and design knowledge required for these activities was available digitally. We focused on usefully capturing this knowledge digitally so it could be integrated into the activity-based assemblages.
  3. Meso-level focus.
    Existing component technologies generally provide universal solutions for the institution or all users of the technology, requiring manual gathering and weaving to fit the contextual needs of each individual variation. By leveraging the previous two principles we were able to provide technologies that were fit for meso-level solutions: for example, all courses in a program or school, or all courses that use a complex learning activity like interactive orals.

Connections with other work

Much of the above is informed by, or echoes, research and practice in related fields. It’s not just the three of us. The presentation made explicit connections with the following:

  • Learning and teaching;
    Fawns’ (2022) work on entangled pedagogy as encapsulating the mutual shaping of technology, teaching methods, purposes, values and context (gathering and weaving). Dron’s (2022) re-definition of educational technology drawing on Arthur’s (2009) definition of technology. Work on activity-centred design – which understands teaching as a distributed activity – as key to both good learning and teaching (Markauskaite et al., 2023) and to institutional management (Ellis & Goodyear, 2019). Lastly – at least in the presentation – the nature of and need for epistemic fluency (Markauskaite et al., 2023).
  • Digital technology; and,
    Drawing on numerous contemporary practices within digital technology that break the false dilemma of “buy or build”, such as the project to product movement (Philip & Thirion, 2021); Robotic Process Automation; citizen development; and the idea of lightweight IT development (Bygstad, 2017).
  • Leadership/strategy.
    Briefly linking the underlying assumptions of all of the above as examples of the move away from corporate and reductionist strategies that reduce people to “smooth users” toward possible futures that see us as more “collective agents” (Macgilchrist et al., 2020). A shift seen as necessary – as argued by Markauskaite et al. (2023) – to more likely lead to the “even richer convergence of ‘natural’, ‘human’ and ‘digital’” required to respond effectively to global challenges.

There’s much more.

Slides

The presentation includes three videos, which are available if you download the slides.

Related Software

Canvas QA is a Python script that will perform Quality Assurance checks on numerous Canvas courses and create a QA Report web page in each course’s Files area. The QA Report lists all the issues discovered and provides some scaffolding to address the issues.
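
As a rough illustration of the approach (not the actual script), the sketch below uses the community canvasapi Python library; the course ids and the specific check are hypothetical stand-ins for the checks the real script performs.

```python
# A minimal sketch of a Canvas QA-style check, assuming the community
# "canvasapi" library (pip install canvasapi). Course ids, the check, and
# the reporting are hypothetical illustrations, not the actual script.
from canvasapi import Canvas

API_URL = "https://your-institution.instructure.com"  # assumed institution URL
API_KEY = "YOUR_API_TOKEN"  # an API token with access to the courses

canvas = Canvas(API_URL, API_KEY)

def qa_course(course_id):
    """Return a list of QA issues found in one course site."""
    course = canvas.get_course(course_id)
    issues = []
    for module in course.get_modules():
        for item in module.get_module_items():
            # Example check: external links without a meaningful title
            if item.type == "ExternalUrl" and not item.title.strip():
                issues.append(f"Module '{module.name}': untitled external link")
    return issues

for course_id in [12345, 12346]:  # hypothetical course ids
    for issue in qa_course(course_id):
        print(course_id, issue)
```

The real script goes further, generating an HTML QA Report for each course and uploading it to the course’s Files area along with the scaffolding to address each issue.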

Canvas Collections helps improve the visual design and usability/findability of the Canvas modules page. It is Javascript that can be installed by institutions into Canvas or by individuals as a userscript. It enables the injection of design and context specific information into the vanilla Canvas modules page.

Word2Canvas converts a Word document into a Canvas module to offer improvements to the authoring process in some contexts. At Griffith University, it was used as part of the migration process where Blackboard course site content was automatically converted into appropriate Word documents. With a slight edit, these Word documents could be loaded directly into Canvas.
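
word2canvas itself is a Javascript userscript, but the underlying idea can be sketched in Python; the sketch below assumes the mammoth (Word to HTML) and canvasapi libraries, and all ids and names are illustrative.

```python
# Illustrative sketch only: convert a Word document to HTML and load it into
# a new Canvas module. Assumes the "mammoth" and "canvasapi" libraries; the
# actual word2canvas is a Javascript userscript with its own document format.
import mammoth
from canvasapi import Canvas

canvas = Canvas("https://your-institution.instructure.com", "YOUR_API_TOKEN")

def word_to_module(course_id, module_name, docx_path):
    """Create a Canvas module containing one page built from a Word document."""
    course = canvas.get_course(course_id)
    # Semantically convert the Word document to HTML
    with open(docx_path, "rb") as docx:
        html = mammoth.convert_to_html(docx).value
    # Create a Canvas page holding the converted content
    page = course.create_page(wiki_page={"title": module_name, "body": html})
    # Create a module and add the page to it
    module = course.create_module(module={"name": module_name})
    module.create_module_item(module_item={"type": "Page", "page_url": page.url})
    return module
```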

References

Arthur, W. B. (2009). The Nature of Technology: What it is and how it evolves. Free Press.

Bessant, S. E. F., Robinson, Z. P., & Ormerod, R. M. (2015). Neoliberalism, new public management and the sustainable development agenda of higher education: History, contradictions and synergies. Environmental Education Research, 21(3), 417–432. https://doi.org/10.1080/13504622.2014.993933

Bygstad, B. (2017). Generative Innovation: A Comparison of Lightweight and Heavyweight IT. Journal of Information Technology, 32(2), 180–193. https://doi.org/10.1057/jit.2016.15

Cassidy, C. (2023, April 10). ‘Appallingly unethical’: Why Australian universities are at breaking point. The Guardian. https://www.theguardian.com/australia-news/2023/apr/10/appallingly-unethical-why-australian-universities-are-at-breaking-point

Ellis, R. A., & Goodyear, P. (2019). The Education Ecology of Universities: Integrating Learning, Strategy and the Academy. Routledge.

Fawns, T. (2022). An Entangled Pedagogy: Looking Beyond the Pedagogy—Technology Dichotomy. Postdigital Science and Education, 4(3), 711–728. https://doi.org/10.1007/s42438-022-00302-7

Hagler, B. (2020). Council Post: Build Vs. Buy: Why Most Businesses Should Buy Their Next Software Solution. Forbes. Retrieved April 15, 2023, from https://www.forbes.com/sites/forbestechcouncil/2020/03/04/build-vs-buy-why-most-businesses-should-buy-their-next-software-solution/

Inside Track Staff. (2022, October 19). Citizen developers use Microsoft Power Apps to build an intelligent launch assistant. Inside Track Blog. https://www.microsoft.com/insidetrack/blog/citizen-developers-use-microsoft-power-apps-to-build-intelligent-launch-assistant/

Lodge, J., Matthews, K., Kubler, M., & Johnstone, M. (2022). Modes of Delivery in Higher Education (p. 159). https://www.education.gov.au/higher-education-standards-panel-hesp/resources/modes-delivery-report

Macgilchrist, F., Allert, H., & Bruch, A. (2020). Students and society in the 2020s. Three future ‘histories’ of education and technology. Learning, Media and Technology, 45(0), 76–89. https://doi.org/10.1080/17439884.2019.1656235

Markauskaite, L., Carvalho, L., & Fawns, T. (2023). The role of teachers in a sustainable university: From digital competencies to postdigital capabilities. Educational Technology Research and Development, 71(1), 181–198. https://doi.org/10.1007/s11423-023-10199-z

Mulder, F. (2013). The LOGIC of National Policies and Strategies for Open Educational Resources. International Review of Research in Open and Distributed Learning, 14(2), 96–105. https://doi.org/10.19173/irrodl.v14i2.1536

Philip, M., & Thirion, Y. (2021). From Project to Product. In P. Gregory & P. Kruchten (Eds.), Agile Processes in Software Engineering and Extreme Programming – Workshops (pp. 207–212). Springer International Publishing. https://doi.org/10.1007/978-3-030-88583-0_21

Ryan, T., French, S., & Kennedy, G. (2021). Beyond the Iron Triangle: Improving the quality of teaching and learning at scale. Studies in Higher Education, 46(7), 1383–1394. https://doi.org/10.1080/03075079.2019.1679763

Schmidt, A. (2017). Augmenting Human Intellect and Amplifying Perception and Cognition. IEEE Pervasive Computing, 16(1), 6–10. https://doi.org/10.1109/MPRV.2017.8

Smee, B. (2023, March 6). ‘No actual teaching’: Alarm bells over online courses outsourced by Australian universities. The Guardian. https://www.theguardian.com/australia-news/2023/mar/07/no-actual-teaching-alarm-bells-over-online-courses-outsourced-by-australian-universities

Abstract

The pandemic reinforced higher education’s difficulty in responding to the long-observed challenge of how to sustainably, and at scale, fulfill diverse requirements for quality learning and teaching (Bennett et al., 2018; Ellis & Goodyear, 2019). The difficulty has increased due to many issues, including: competition with the private sector for digital talent; battling concerns over the casualisation and perceived importance of teaching; and growing expectations around ethics, diversity, and sustainability. That this challenge is unresolved and becoming increasingly difficult suggests a need for innovative practices in both learning and teaching, and in how learning and teaching is enabled. Starting in 2019, and accelerated by a Learning Management System (LMS) migration starting in 2021, a small group has been refining and using an alternate set of principles and practices to respond to this challenge by developing reusable orchestrations – organised arrangements of actions, tools, methods, and processes (Dron, 2022) – to sustainably, and at scale, fulfill diverse requirements for quality learning and teaching. This leads to a process where requirements are informed through collegial networks of learning and teaching stakeholders that weigh strategic and contextual concerns to inform priority and approach, helping to share knowledge and concerns and to develop institutional capability laterally, in recognition of available educator expertise.

The presentation will be structured around three common tasks: quality assurance of course sites; migrating content between two LMS; and designing effective course sites. For each task a comparison will be made between the group’s innovative orchestrations and standard institutional/vendor orchestrations. These comparisons will: demonstrate the benefits of the innovative orchestrations; outline the development process; and explain the three principles informing this work – 1) contextual digital augmentation, 2) meso-level automation, and 3) generativity and adaptive reuse. The comparisons will also be used to establish the practical and theoretical inspirations for the approach, including: Robotic Process Automation (RPA) and citizen development; convivial technologies (Illich, 1973); lightweight IT development (Bygstad, 2017); and socio-material understandings of educational technology (Dron, 2022). The breadth of the work will be illustrated through an overview of the growing catalogue of orchestrations using a gatherers, weavers, and augmenters taxonomy.

References

Bennett, S., Lockyer, L., & Agostinho, S. (2018). Towards sustainable technology-enhanced innovation in higher education: Advancing learning design by understanding and supporting teacher design practice. British Journal of Educational Technology, 49(6), 1014–1026. https://doi.org/10.1111/bjet.12683

Bygstad, B. (2017). Generative Innovation: A Comparison of Lightweight and Heavyweight IT. Journal of Information Technology, 32(2), 180–193. https://doi.org/10.1057/jit.2016.15

Dron, J. (2022). Educational technology: What it is and how it works. AI & SOCIETY, 37, 155–166. https://doi.org/10.1007/s00146-021-01195-z

Ellis, R. A., & Goodyear, P. (2019). The Education Ecology of Universities: Integrating Learning, Strategy and the Academy. Routledge.

Illich, I. (1973). Tools for Conviviality. Harper and Row.

Exploring knowledge reuse in design for digital learning: tweaks, H5P, constructive templates and CASA

The following has been accepted for presentation at ASCILITE’2019. It’s based on work described in earlier blog posts.


Abstract

Higher education is being challenged to improve the quality of learning and teaching while at the same time dealing with challenges such as reduced funding and increasing complexity. Design for learning has been proposed as one way to address this challenge, but a question remains around how to sustainably harness all the diverse knowledge required for effective design for digital learning. This paper proposes some initial design principles embodied in the idea of Context-Appropriate Scaffolding Assemblages (CASA) as one potential answer. These principles arose out of prior theory and work, contemporary digital learning practices and the early cycles of an Action Design Research process that has developed two digital ensemble artefacts employed in over 30 courses (units, subjects). Early experience with this approach suggests it can successfully increase the level of design knowledge embedded in digital learning experiences, identify and address shortcomings with current practice, and have a positive impact on the quality of the learning environment.

Keywords: Design for Learning, Digital learning, NGDLE.

Introduction

Learning and teaching within higher education continues to be faced with significant, diverse and on-going challenges that increase the difficulty of providing the high-quality learning experiences necessary to produce graduates of the standard society expects (Bennett, Lockyer, & Agostinho, 2018). Goodyear (2015) groups these challenges into four categories: massification and the subsequent diversification of needs and expectations; growing expectations of producing work-ready graduates; rapidly changing technologies, creating risk and uncertainty; and dwindling public funding and competing demands on time. Reconceptualising teaching as design for learning has been identified as a key strategy to sustainably, and at scale, respond to these challenges in a way that offers improvements in learning and teaching (Bennett et al., 2018; Goodyear, 2015). Design for learning aims to improve learning processes and outcomes through the creation of tasks, environments, and social structures that are conducive to effective learning (Goodyear, 2015; Goodyear & Dimitriadis, 2013). The ability of universities to develop the capacity of teaching staff to enhance student learning through design for learning is of increasing financial and strategic importance (Alhadad, Thompson, Knight, Lewis, & Lodge, 2018).

Designing learning experiences that successfully integrate digital tools is a wicked problem: one that requires the utilisation of expert knowledge across numerous fields to design solutions that respond appropriately to the unique, incomplete, contextual, and complex nature of learning (Mishra & Koehler, 2008). The shift to teaching as design for learning requires different skills and knowledge, but also brings shifts in the conception of teaching and the identity of the teacher (Gregory & Lodge, 2015). Effective implementation of design for learning requires detailed understanding of pedagogy and design and places cognitive, emotional and social demands on teachers (Alhadad et al., 2018). The ability of teachers to deal with this load has a significant impact on learners, learning, and outcomes (Bezuidenhout, 2018). Academic staff report perceptions that expertise in digital technology and instructional design will be increasingly important to their future work, but that these are also the areas where they have the least competency and the highest need for training (Roberts, 2018). Helping teachers integrate digital technology effectively into learning and teaching has been at or near the top of issues facing higher education for several years (Dahlstrom, 2015). However, the nature of this required knowledge is often underestimated by common conceptions of the knowledge required by university teachers (Goodyear, 2015). Responding effectively will not be achieved through a single institutional technology, structure, or design; it will instead require an “amalgamation of strategies and supportive resources” (Alhadad et al., 2018, pp. 427-429). Approaches that do not pay enough attention to the impact on teacher workload run the risk of less than optimal learner outcomes (Gregory & Lodge, 2015).

Universities have adopted several different strategies to ameliorate the difficulty of successfully engaging in design for digital learning. For decades a common solution has been that course design, especially involving the adoption of new methods and technologies, should involve systematic planning by a team of people with appropriate expertise in content, education, technology and other required areas (Dekkers & Andrews, 2000). The use of collaborative design teams with an appropriate, complementary mix of skills, knowledge and experience mirrors the practice in other design fields (Alhadad et al., 2018). However, the prevalence of this practice in higher education has been low, both then (Dekkers & Andrews, 2000) and now. The combination of high demand and the limited availability of people with the necessary knowledge means that many teaching staff miss out (Bennett, Agostinho, & Lockyer, 2017). A complementary approach is professional development that provides teaching staff with the necessary knowledge of digital technology and instructional design (Roberts, 2018). However, access to professional development is not always possible and funding for professional development and training has rarely kept up with the funding for hardware and infrastructure (Mathes, 2019). There has been work focused on developing methods, tools and repositories to help analyse, capture and encourage reuse of learning designs across disciplines and sectors (Bennett et al., 2017). However, it appears that design for learning continues to struggle to enter mainstream practice (Mor, Craft, & Maina, 2015), with design work undertaken by teachers apparently not including the use of formal methods or systematic representations (Bennett et al., 2017). There does, however, remain on-going demand from academic staff for customisable and reusable ideas for design (Goodyear, 2005) – approaches that respond to academic concerns about workload and time (Gregory & Lodge, 2015) and do not require radical changes to existing work practices nor the development of complex knowledge and skills (Goodyear, 2005).

If there are limitations with current common approaches, what other approaches might exist? This leads to the research question of this study:

How might the diverse knowledge required for effective design for digital learning be shared and used sustainably and at scale?

An Action Design Research (ADR) process is being applied to develop one answer to this question. ADR is used to describe the design, development and evaluation of two digital artefacts – the Card Interface and the Content Interface – and the subsequent formulation of initial design principles that offer a potential answer to the research question. The paper starts by describing the research context and research method. The evolution of each of the two digital artefacts is then described. This experience is then abstracted into six design principles encapsulated in the concept of Context-Appropriate Scaffolding Assemblages (CASA). Finally, the conclusions and implications of this work are discussed.

Research context and method

This research project started in late 2018 within the Learning and Teaching (L&T) section of the Arts, Education and Law (AEL) Group at Griffith University. Staff within the AEL L&T section work with the AEL’s teachers to improve the quality of learning and teaching across about 1300 courses (units, subjects) and 68 programs (degrees). This work seeks to bridge the gaps between the macro-level institutional and technological vision and the practical, coal-face realities of teaching and learning (micro-level). In late 2018 the macro-level vision at Griffith University consisted of current and long-term usage of the Blackboard Learn Learning Management System (LMS) along with a recent decision to move to the Blackboard Ultra LMS. In this context, a challenge was balancing the need to help teaching staff continue to improve learning and teaching within the existing learning environment while at the same time helping the institution develop, refine, and achieve its new macro-level vision. It is within this context that the first offering of Griffith University’s Bachelor of Creative Industries (BCI) program would occur in 2019. The BCI is a future-focused program designed to attract creatives who aspire to a career in the creative industries by instilling an entrepreneurial mindset to engage and challenge the practice and business of the creative industries. Implementation of the program was supported through a year-long strategic project including a project manager and educational developer from the AEL L&T section working with a Program Director and other academic staff. This study starts in late 2018 with a focus on developing the course sites for the seven first-year BCI courses. A focus of this work was to develop a striking and innovative design that mirrored the program’s aims and approach – a design that could be maintained by the relevant teaching staff beyond the project’s protected niche. This raised the question of how to ensure that the design knowledge required to maintain a digital learning environment into the future would be available within the teaching team.

To answer this question an Action Design Research (Sein, Henfridsson, Purao, & Rossi, 2011) process was adopted. ADR is a merging of Action Research with Design Research developed within the Information Systems discipline. ADR aims to use the analysis of the continuing emergence of theory-ingrained, digital artefacts within a context as the basis for developing generalised outcomes, including design principles (Sein et al., 2011). A key assumption of ADR is that digital artefacts are not established or fixed. Instead, digital artefacts are ensembles that arise within a context and continue to emerge through development, use and refinement (Sein et al., 2011). A critical element of ADR is that the specific problem being addressed – the design of online learning environments for courses within the BCI program – is established as an example of a broader class of problems – how to sustainably and at scale share and reuse the diverse knowledge required for effective design for digital learning (Sein et al., 2011). This shift moves ADR work beyond design – as practised by any learning designer – to research intended to provide guidance on how others might address similar challenges in other contexts that belong to the broader class of design problems.

Figure 1 provides a representation of the ADR four-stage process and the seven principles on which ADR is based. Stages 1 through 3 represent the process through which ensemble digital artefacts are developed, used and evolved within a specific context. The next two sections of this paper describe the emergence of two artefacts developed for the BCI program as they cycled through the first three ADR stages numerous times. The fourth stage of ADR – Formalisation of Learning – aims to abstract the situated knowledge gained during the emergence of digital artefacts into design principles that provide guidance for addressing a class of field problems (Sein et al., 2011). The third section of this paper formalises the learning gained in the form of six initial design principles structured around the concept of Contextually Appropriate Scaffolding Assemblages (CASA).


Figure 1 – ADR Method: Stages and Principles (adapted from Sein et al., 2011, p. 41)

Card Interface (artefact 1, ADR stages 1-3)

In response to the adoption of a trimester academic calendar, Griffith University encourages the adoption of a modular approach to course design. It is recommended that course profiles use modules to group and describe the teaching and learning activities. Subsequently, it has become common practice for this modular structure to be used within the course site using the Blackboard Learn content area functionality. To do this well is not straightforward. Blackboard Learn has several functional limitations in legibility, design consistency, content arrangement and content adjustment that make it difficult to achieve quality visual design (Bartuskova, Krejcar, & Soukal, 2015). Usability analysis has also found that the Blackboard content area is inflexible, inefficient to use, and creates confusion for teaching staff regardless of their level of user experience (Kunene & Petrides, 2017). Overcoming these limitations requires levels of technical and design knowledge not typically held by teaching staff. Without this knowledge the resulting designs typically range from purely textual (e.g. the left-hand side of Figure 2) through to exemplars of poor design choices, including the likes of blinking text, poor layout, questionable colour choices, and inconsistent design. While specialist design staff can and have been used to provide the necessary design knowledge to implement contextually-appropriate, effective designs, such an approach does not scale. For example, any subsequent modification typically requires the re-engagement of the design staff.

To overcome this challenge the Blackboard Learn user community has developed a collection of related solutions (Abhrahamson & Hillman, 2016; Plaisted & Tkachov, 2011) that use Javascript to package the necessary design knowledge into a form that can be used by teachers. Griffith University has for some time used one of these solutions, the Blackboard Tweaks building block (Plaisted & Tkachov, 2011) developed at the Queensland University of Technology. One of the tweaks offered by this building block – the Themed Course Table – has been widely used by teaching staff to generate a tabular representation of course modules (e.g. the right-hand side of Figure 2). However, experience has shown that the level of knowledge required to maintain and update the Themed Course Table can challenge some teaching staff. For example, re-ordering modules can be difficult for some, and the dates commonly used within the table must be manually added and then modified when copied from one offering to another. Finally, the inherently text-based and tabular design of the Themed Course Table is also increasingly dated. This was an important limitation for the Bachelor of Creative Industries. An alternative was required.

Figure 2 – Example Blackboard Learn Content Areas: Textual versus Themed Course Table

That alternative would use the same approach as the Themed Course Table to achieve a more appropriate outcome. The approach used by the Themed Course Table, other related examples from the Blackboard community, and the H5P authoring tool (Singh & Scholz, 2017) are contemporary examples of constructive templates (Nanard, Nanard, & Kahn, 1998). Constructive templates arose from the hypermedia discipline to encourage the reuse of design knowledge and have been found to reduce cost and improve consistency, reliability and quality while enabling content experts to author and maintain hypermedia systems (Nanard et al., 1998). Constructive templates encapsulate a specific collection of design knowledge required to scaffold the structured provision of necessary data and generate design instances. For example, the Themed Course Table supports the provision of data through the Blackboard content area interface. It then uses design knowledge embedded within the tweak to transform that data into a table. Given these examples and the author’s prior positive experience with the use of constructive templates within digital learning (Jones, 2011), the initial plan for the BCI Course Content area was to replace the Course Theme Table “template” to adopt both a more contemporary visual design, and a forward-oriented view of design for learning. Dimitriadis and Goodyear (2013) argue that design for learning needs to be more forward-oriented and consider what features will be required in each of the lifecycle stages of a learning activity. That is, as the Course Theme Table replacement is being designed, consider what specific features will be required during configuration, orchestration, and reflection and re-design.
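
To make the constructive template idea concrete, here is a minimal, hypothetical sketch: the design knowledge (layout and styling decisions) lives in a template function, while content experts maintain only structured data, so either can change independently.

```python
# A minimal, hypothetical constructive template: design knowledge is
# encapsulated in the template; authors supply and maintain only the data.
MODULES = [
    {"title": "Module 1: Origins", "week": 1, "image": "origins.png"},
    {"title": "Module 2: Practices", "week": 2, "image": "practices.png"},
]

def card(module):
    """Generate one card; the design knowledge lives here, not with authors."""
    return (
        '<div class="card">'
        f'<img src="{module["image"]}" alt="">'
        f'<h3>{module["title"]}</h3>'
        f'<p>Week {module["week"]}</p>'
        "</div>"
    )

html = "\n".join(card(m) for m in MODULES)
```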

The first step in developing a replacement was to explore contemporary web interface practices for a table replacement. Due to its responsiveness to different devices, highly visual presentation, and widespread use amongst Internet and social media services, a card-based interface was chosen. Based on the metaphor of a paper card, this interface brings together all data for a particular object with an option to add contextual information. Common practice with card-based interfaces is to embed into a card memorable images related to the card content (see Figure 3). Within the context of a course module overview such a practice has the potential to positively impact student cognition, emotions, interest, and motivation (Leutner, 2014; Mayer, 2017). A practical advantage of card-based interfaces is that their widespread use means there are numerous widely available resources to aid implementation. This was especially important to the BCI project team, as it did not have significant graphical and client-side design knowledge to draw upon.

Next, a prototype was developed to test how effectively a card-based interface would represent a course’s learning modules. An iterative process was used to translate features and existing practice from the Course Theme Table to a card-based interface. Feedback from other design staff influenced the evolution of the prototype. It also highlighted differences of opinion about some of the visual elements such as the size of the cards, the number of cards per row, and the inclusion of the date in the top left-hand corner. Eventually the prototype card interface was shown to the BCI teaching team for input and approval. With approval given, a collection of Javascript and HTML was created to transform a specifically formatted Blackboard content area into a card interface.

Figure 3 shows just two of the six different styles of card-based interface currently supported by the Card Interface. This illustrates a key feature of the original conception of constructive templates – separation of content from presentation (Nanard et al., 1998) – allowing for different representations of the same content. The left-hand image in Figure 3 and the inclusion of dates on some cards illustrates one way the Card Interface supports a forward-oriented approach to design. Initially, the module dates are specified during the configuration of a course site. However, the dates typically only apply to the initial offering of the course and will need to be manually changed for subsequent offerings. To address this the Card Interface knows the trimester weekly dates from the university academic calendar. Dates to be included on the Card Interface can then be provided using the week number (e.g. Week 1, Week 5 etc.). The Card Interface identifies the trimester a course offering belongs to and translates all week numbers into the appropriate calendar dates.
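
A minimal sketch of this date translation, with a hypothetical calendar table standing in for the institutional academic calendar (the actual Card Interface implements this in Javascript):

```python
# Hypothetical sketch of translating "Week n" into calendar dates.
from datetime import date, timedelta

TRIMESTER_START = {  # stand-in for the university academic calendar
    "2019-T1": date(2019, 2, 25),
}

def week_to_date(trimester, week):
    """Return the Monday of the given teaching week."""
    return TRIMESTER_START[trimester] + timedelta(weeks=week - 1)

print(week_to_date("2019-T1", 5))  # 2019-03-25
```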

Figure 3 – Two early visualisations of the Card Interface

Despite being designed for the BCI program, the first use of the Card Interface was not in the BCI program. Instead, in late 2018 a librarian working on a Study Skills site learned of the Card Interface from a colleague. Working without any additional support, the librarian was able to use the Card Interface to represent 28 modules spread over 12 content areas. Implementation of the Card Interface in the BCI courses started by drawing on existing learning module content from course profiles. Google Image Search was used to identify visually striking images that could be associated with each module (e.g. the left-hand side of Figure 3). The Card Interface was also used on the BCI program’s Blackboard site. However, the program site had a broader purpose leading to different design decisions and the adoption of a different style of card-based interface (see the right-hand image in Figure 3).

Anecdotal feedback from BCI staff and students suggest that the initial implementation and use of the Card Interface was positive. In addition, the visual improvements offered by the Card Interface over both the standard Blackboard Content Area and the Course Theme Table tweak led to interest from other courses and programs. As of late July 2019, the Card Interface has been used in over 55 content areas in over 30 Blackboard sites. Adoption has occurred at both the program and individual course level led by exposure within the AEL L&T team or by academics seeing it and wanting it. Widespread use has generated different requirements leading to creative uses of the Card Interface (e.g. the use of animated GIFs as card images) and the addition of new functionality (e.g. the ability to embed a video, instead of an image). Requirements from another strategic project led to a customisation of the Card Interface to provide an overview of assessment items, rather than modules.

With its adoption in multiple courses and use for different purposes the Card Interface appears to have successfully encapsulated a collection of design knowledge into a form that can be readily adopted and adapted. Use of that knowledge has improved the resulting design. Contributing factors to this success include: building on existing practice; providing advantages above and beyond existing practice; and, the capability for both teaching and support staff to rapidly customise the Card Interface. Further work is required to gain greater and more objective insight into the impact of the Card Interface on the student experience and outcomes of learning and teaching.

Content Interface (artefact 2, ADR stages 1-3)

The Card Interface provides a visual overview of course modules. The next challenge for the BCI project was the design, implementation and support of the learning activities and resources that form the content of those course modules – a task that is inherently more creative and important, typically involves significantly more content, and must be completed using the same, problematic Blackboard interface. This requirement is known to encourage teaching staff to avoid the interface by using offline documents and slides (Bartuskova et al., 2015). This is despite evidence that failing to leverage affordances of the online environment can create a disengaging student experience (Stone & O’Shea, 2019) and that course content is a significant influence on students’ perceptions of course quality (Peltier, Schibrowsky, & Drago, 2007). Adding to the difficulty, the BCI teaching staff had limited, little recent, or no experience with Blackboard; contracted staff did not have access to Blackboard at all. This raised the question of how to support the design, implementation and re-design of effective modular, online learning resources and activities for the BCI.

Observation of, and experience with, the Blackboard interface identified three main issues. First, staff did not know how to use, or did not have access to, the Blackboard content interface. Second, the Blackboard authoring interface provides limited authoring functionality. For example, beyond issues identified in the literature (Bartuskova et al., 2015; Kunene & Petrides, 2017) there is no support for standard authoring functionality such as grammar checking, reference management, commenting, and version control. Lastly, once the content is placed within Blackboard, the user interface is limited and quite dated. On the plus side, the Blackboard interface does provide the ability to integrate a variety of different activities such as discussion forums, quizzes etc. The intent was to address the issues while retaining the ability to use the Blackboard activities.

For better or worse, the most common content creation tool for most University staff is Microsoft Word. Anecdotal observation suggests that many staff have adopted the practice of drafting content in Word before copying and pasting it into Blackboard. The Content Interface is designed to transform Word documents into good quality online learning activities and resources (see Figure 4). This is done by using an open source converter to semantically transform Word to HTML, which is then copied and pasted into Blackboard. A collection of design knowledge embedded into Javascript then transforms the HTML in several ways. Semantic elements such as activities and readings are visually transformed. All external web links are modified to open in a new tab to avoid a common Blackboard error. The document is transformed into an accordion interface with a vertical list of headings that can be clicked to display the associated content. This progressive reveal: allows readers to get an overall picture of the module before focusing on the details; provides greater control over how they engage with the content; and is particularly useful on mobile platforms (Budiu, 2015; Loranger, 2014).
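
The flavour of these transformations can be sketched as follows, assuming the BeautifulSoup library; the actual Content Interface performs the equivalent DOM manipulation in Javascript inside Blackboard.

```python
# Rough sketch of two Content Interface transformations, using BeautifulSoup.
from bs4 import BeautifulSoup

def transform(html):
    soup = BeautifulSoup(html, "html.parser")
    # Modify external links to open in a new tab (avoids a common Blackboard error)
    for link in soup.find_all("a", href=True):
        if link["href"].startswith("http"):
            link["target"] = "_blank"
    # Treat top-level headings as the accordion's section titles
    section_titles = [h.get_text() for h in soup.find_all("h2")]
    return str(soup), section_titles
```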

Figure 4 – Example Module as a Word document and in the Content Interface in Blackboard

To date, the Content Interface has been used to develop over 75 modules in 13 different Blackboard sites, most of these within the seven BCI courses. Experience using the still-incomplete Content Interface suggests that there are significant advantages. For example, Library staff have adopted it to create research skills modules that are used in multiple course sites. Experience in the BCI shows that sharing documents through OneDrive and using comments and track changes enables the Word documents to become boundary objects, helping the course development team co-create the module learning activities and resources. Where staff are comfortable with Word as an authoring environment, the authoring process is more efficient. The resulting accordion interface offers an improvement over the standard Blackboard interface. However, creating documents with Word is not without its challenges, especially the use of Word styles and templates. Also, the extra steps required can be perceived as problematic when minor edits need to be made and direct editing within Blackboard is perceived to be easier and quicker, especially for time-poor teaching staff. Better integration between Blackboard and OneDrive will help. More advantage is possible when the Content Interface is further contextually customised to offer forward-oriented functionality specific to the module learning design.

Initial Design Principles (ADR stage 4)

This section engages with the final stage of the ADR process – formalisation of learning – to produce design principles that help provide actionable insight for practitioners. The following six design principles help guide the development of Contextually-Appropriate Scaffolding Assemblages (CASA) that help to sustainably and at scale share and reuse the design knowledge necessary for effective design for digital learning. The design principles are grouped using the three components of the CASA acronym.

Contextually-Appropriate

1. A CASA should address a specific contextual need within a specific activity system.
The highest quality learning and teaching involves the development of appropriate context-specific approaches (Mishra & Koehler, 2006). A CASA should not be implemented at an institutional level. Such top-down projects are unable to pay enough attention to contextually specific needs as they aim for a solution that works in all contexts. Instead, a CASA should be designed in response to a specific need arising in a course or a small group of related courses. Following Ellis & Goodyear (2019) the focus in designing a CASA should not be the needs of individual students, but instead on the whole activity system. That is, consideration should be given to the complex assemblage of learners, teachers, content, pedagogy, technology, organisational structures and the physical environment with an emphasis on encouraging students to successfully engage in intended learning activities. For example, both the Card and Content Interfaces arose from working with a group of seven courses in the BCI program as the result of two separate, but related, needs. While the issues addressed by these CASA apply to many courses, the ability to develop and test solutions at a small scale was beneficial. Rather than a focus primarily on individual learners, the solutions were heavily influenced by an analysis of the available tools (e.g. Blackboard Tweaks, Office365), practices (e.g. modularisation and learning activities described in course profiles), and other components of the activity systems.

2. CASA should be built using and result in generative technologies. To maximise and maintain contextual appropriateness, a CASA must be able to be designed and redesigned as easily as possible. Zittrain (2008) labels technologies as generative or sterile. Generative technologies have a “capacity to produce unanticipated change through unfiltered contributions from broad and varied audiences” (Zittrain, 2008, p. 70). Sterile technologies prevent this. Generative technologies enable convivial systems where people can be “actively engaged in generating creative extensions to the artefacts given to them” (Fischer & Girgensohn, 1990, p. 183). It is the end-user modifiability of generative technology that is crucial to knowledge-based design environments and enables response to unanticipated, contextual requirements (Fischer & Girgensohn, 1990). Implementing CASA using generative technologies allows easy design for specific contexts. Ensuring that CASA are implemented as generative technologies enables easy redesign for other contexts. Generativity, like other technological affordances, arises from the relationship between the technology and the people using the technology. Not only is it necessary to use technology that is easier to modify, it is necessary to be able to draw upon appropriate technological skills. This could mean having people with those technological skills available to educational design teams. It could also mean having a network of intra- and inter-institutional CASA users and developers collaboratively sharing CASA and the knowledge required for use and development; like that available in the H5P community (Singh & Scholz, 2017).

For example, development of the Card and Content Interfaces was only possible due to Blackboard Learn supporting the embedding of Javascript. The value of this generative capability is evident through the numerous projects (Abhrahamson & Hillman, 2016; Plaisted & Tkachov, 2011) from the Blackboard community that leverage this capability; a capability that has been removed in Blackboard’s next version LMS, Ultra. The use of Office365 by the Content Interface illustrates the rise of digital platforms that are generative and raise questions that challenge how innovation through digital technologies is enabled and managed (Yoo, Boland, Lyytinen, & Majchrzak, 2012). Using the generative jQuery library to implement the Content Interface’s accordion enables modification of the accordion’s look and feel through jQuery’s ThemeRoller and library of existing themes. The separation of content from presentation in the Card Interface has enabled at least six redesigns for different purposes. This work was possible because the BCI development team had ready access to the necessary technological skills and was able to draw upon a wide collection of open source software and online support.

3. CASA development should be strategically aligned and supported. Services to support design for learning within Australian universities are limited and insufficient for the demand (Bennett et al., 2017). Services capable of supporting the development of CASA are likely to be more limited still. Hence appropriate decisions need to be made about how and what CASA are designed, re-designed and supported. Resources used to develop CASA are best allocated in line with institutional strategic projects. CASA development should proceed with consideration to the “manageably small set of particularly valued activity systems” (Ellis & Goodyear, 2019, p. 188) within the institution and be undertaken with institutionally approved and supported generative technologies. For example, the Card and Content Interfaces arose from an AEL strategic project. Both interfaces were focused on providing contextually-appropriate customisation and support for the institutionally important activity system of creating modular learning activities and resources. Where possible these example CASA have used institutionally approved digital technologies (e.g. OneDrive and Blackboard). The sterile nature of existing institutional infrastructure has made it necessary to use more generative technologies (e.g. Amazon Web Services) that are neither officially approved nor supported. However, the approach used does build upon an approach from an existing institutionally approved technology – Blackboard Tweaks (Plaisted & Tkachov, 2011).

Scaffolding

4. CASA should package appropriate design knowledge to enable (re-)use by teachers and students. Drawing on ideas from constructive templates (Nanard et al., 1998), CASA should package the diverse design knowledge required to respond to a contextually-appropriate need in a way that this design knowledge can be easily reused in different instances. CASA enable the sustainable reuse of contextually applied design knowledge in learning activity systems and subsequently reduce cost and improve quality and consistency. For example, the Card Interface combines the knowledge from web design and multimedia learning research (Leutner, 2014; Mayer, 2017) in a way that has allowed teaching staff to generate a visual overview of the modules in numerous course sites. The Content Interface combines existing knowledge of the Microsoft Word ecosystem with web design knowledge to improve the design, use and revision of modular content.

5. CASA should actively support a forward-oriented approach to design for learning.

To “thrive outside of the protective niches of project-based innovation” (Dimitriadis & Goodyear, 2013, p. 1) the design of a CASA must not focus only on initial implementation. Instead, CASA design must explicitly consider and include functionality to support the configuration, orchestration, and reflection and re-design of the CASA. For example, the Card Interface leverages contextual knowledge to enable dates to be specified independent of the calendar to automate re-design for subsequent course offerings. As CASA tend to embody a learning design, it should be possible to improve each CASA’s support for orchestration by implementing checkpoint and process analytics (Lockyer, Heathcote, & Dawson, 2013) specific to the CASA’s embedded learning design.

Assemblages

6. CASA are conceptualised and treated as contextual assemblages. Like all technologies, CASA are assemblies of other technologies (Arthur, 2009), where technologies are understood to include techniques such as organisational processes and pedagogies, as well as hardware and software. But a contextual assemblage is more than just technology. It includes consideration of, and connections with, the policies, practices, funding, literacies and discourse across levels, from the societal down through the sector, organisational, personal, individual, formal and informal. These are the elements that make up the mess and nuance of the context, where the practice of educational technology gets complex (Cottom, 2019). A CASA must be generative in order to be designed and re-designed to respond to this contextual complexity. A CASA needs to be inherently heterogeneous, ephemeral, local, and emergent – a need that is opposed and ill-suited to the dominant rational system view underpinning common digital learning practice, which sees technologies as planned, structured, consistent, deterministic, and systematic. Instead, connecting back to design principle one, CASA should be designed in recognition of the importance and complex intertwining of the human, social and organisational elements in any attempt to use digital technologies. It should play down the usefulness of distinctions between developer and user, or pedagogy and technology. For example, the Card Interface does not use the Lego approach to assembly that informs the Next Generation Digital Learning Environment (NGDLE) (Brown, Dehoney, & Millichap, 2015) and underpins technologies such as the Learning Tools Interoperability (LTI) standard. Instead of combining clearly distinct blocks with clearly defined connectors, the Card and Content Interfaces are intertwined with, and modify, the Blackboard user interface to connect with the specifics of context. This suggests that the Lego approach is useful, perhaps even necessary, but not sufficient.

Conclusions, Implications, and Further Work

Universities are faced with the strategically important question of how to sustainably and at scale leverage the knowledge required for effective design for digital learning. The early stages of an Action Design Research (ADR) process have been used to formulate one potential answer in the form of six design principles encapsulated in the idea of Context-Appropriate Scaffolding Assemblages (CASA). To date, the ADR process has resulted in the development and use of two prototype CASA within a suite of seven courses and, within six months, their subsequent adoption in another 24 courses. CASA draw on the idea of constructive templates to capture diverse design knowledge in a form that enables use of that knowledge by teachers and students to effectively address contextually specific needs. By adopting a forward-oriented view of design for learning, CASA offer functionality to support configuration, orchestration, and reflection and re-design in order to encourage on-going use beyond the protected project niche of initial implementation. The use of generative technologies and an assemblage perspective enables CASA development to be driven by, and re-designed to fit, the specific needs of different activity systems and contexts. Such work will be most effective when it is strategically aligned and supported with the aim of supporting and refining institutionally valued activity systems.

Use of the Card and Content Interfaces within and beyond the original project suggests that these CASA have successfully encapsulated the necessary design knowledge to address shortcomings with current practice and had a positive impact on the quality of the digital learning environment. But it’s early days. These CASA can be improved by more completely following the CASA design principles. For example, the Content Interface currently offers only generic support for module design. Significantly greater benefits would arise from customising the Content Interface to support specific learning designs and provide contextually appropriate forward-oriented functionality. More experience is needed to provide insight into how this can be done effectively. Further work is required to establish if, how, and what impact the use of CASA has on the quality of the learning environment and the experience and outcomes of both learning and teaching. Further work could also explore the questions raised by the CASA design principles about existing digital learning practice. The generative principle raises the question of whether moves away from leveraging the generativity of web technology – such as the design of Blackboard Ultra and the increasing focus on mobile apps – will make it more difficult to integrate contextually specific design knowledge. Do reported difficulties accessing student engagement data with H5P activities (Singh & Scholz, 2017) suggest that the H5P community could fruitfully pay more attention to supporting a forward-oriented design approach? Does the assemblage principle point to potential limitations with some conceptualisations and implementations of the next generation of digital learning environments?

References

Abhrahamson, A., & Hillman, D. (2016). Customize Learn with CSS and Javascript injection. Presented at BbWorld 16, Las Vegas, NV. Retrieved from https://community.blackboard.com/docs/DOC-2103

Alhadad, S. S. J., Thompson, K., Knight, S., Lewis, M., & Lodge, J. M. (2018). Analytics-enabled Teaching As Design: Reconceptualisation and Call for Research. Proceedings of the 8th International Conference on Learning Analytics and Knowledge, 427–435.

Arthur, W. B. (2009). The Nature of Technology: what it is and how it evolves. New York, USA: Free Press.

Bartuskova, A., Krejcar, O., & Soukal, I. (2015). Framework of Design Requirements for E-learning Applied on Blackboard Learning System. In M. Núñez, N. T. Nguyen, D. Camacho, & B. Trawiński (Eds.), Computational Collective Intelligence (pp. 471–480). Springer International Publishing.

Bennett, S., Agostinho, S., & Lockyer, L. (2017). The process of designing for learning: understanding university teachers’ design work. Educational Technology Research & Development, 65(1), 125–145.

Bennett, S., Lockyer, L., & Agostinho, S. (2018). Towards sustainable technology-enhanced innovation in higher education: Advancing learning design by understanding and supporting teacher design practice. British Journal of Educational Technology, 49(6), 1014–1026.

Bezuidenhout, A. (2018). Analysing the Importance-Competence Gap of Distance Educators with the Increased Utilisation of Online Learning Strategies in a Developing World Context. International Review of Research in Open and Distributed Learning, 19(3), 263–281.

Brown, M., Dehoney, J., & Millichap, N. (2015). The Next Generation Digital Learning Environment: A Report on Research (p. 11). Louisville, CO: EDUCAUSE.

Budiu, R. (2015). Accordions on Mobile. Retrieved July 18, 2019, from Nielsen Norman Group website: https://www.nngroup.com/articles/mobile-accordions/

Cottom, T. M. (2019). Rethinking the Context of Edtech. EDUCAUSE Review, 54(3). Retrieved from https://er.educause.edu/articles/2019/8/rethinking-the-context-of-edtech

Dahlstrom, E. (2015). Educational Technology and Faculty Development in Higher Education. Retrieved from ECAR website: https://library.educause.edu/resources/2015/6/educational-technology-and-faculty-development-in-higher-education

Dekkers, J., & Andrews, T. (2000). A meta-analysis of flexible delivery in selected Australian tertiary institutions: How flexible is flexible delivery? In L. Richardson & J. Lidstone (Eds.), Proceedings of ASET-HERDSA 2000 Conference (pp. 172–182).

Dimitriadis, Y., & Goodyear, P. (2013). Forward-oriented design for learning: illustrating the approach. Research in Learning Technology, 21, 1–13.

Ellis, R. A., & Goodyear, P. (2019). The Education Ecology of Universities: Integrating Learning, Strategy and the Academy. Routledge.

Fischer, G., & Girgensohn, A. (1990). End-user Modifiability in Design Environments. Proceedings of the SIGCHI Conference on Human Factors in Computing Systems, 183–192.

Goodyear, P. (2005). Educational design and networked learning: Patterns, pattern languages and design practice. Australasian Journal of Educational Technology, 21(1). https://doi.org/10.14742/ajet.1344

Goodyear, P. (2015). Teaching As Design. HERDSA Review of Higher Education, 2, 27–59.

Goodyear, P., & Dimitriadis, Y. (2013). In medias res: reframing design for learning. Research in Learning Technology, 21, 1–13.

Gregory, M. S. J., & Lodge, J. M. (2015). Academic workload: the silent barrier to the implementation of technology-enhanced learning strategies in higher education. Distance Education, 36(2), 210–230.

Jones, D. (2011). An Information Systems Design Theory for E-learning (PhD, Australian National University). Retrieved from https://openresearch-repository.anu.edu.au/handle/1885/8370

Kunene, K. N., & Petrides, L. (2017). Mind the LMS Content Producer: Blackboard usability for improved productivity and user satisfaction. Information Systems, 14.

Leutner, D. (2014). Motivation and emotion as mediators in multimedia learning. Learning and Instruction, 29, 174–175.

Lockyer, L., Heathcote, E., & Dawson, S. (2013). Informing Pedagogical Action: Aligning Learning Analytics With Learning Design. American Behavioral Scientist, 57(10), 1439–1459.

Loranger, H. (2014). Accordions for Complex Website Content on Desktops. Retrieved July 18, 2019, from Nielsen Norman Group website: https://www.nngroup.com/articles/accordions-complex-content/

Mathes, J. (2019). Global quality in online, open, flexible and technology enhanced education: An analysis of strengths, weaknesses, opportunities and threats. Retrieved from International Council for Open and Distance Education website: https://www.icde.org/knowledge-hub/report-global-quality-in-online-education

Mayer, R. E. (2017). Using multimedia for e-learning. Journal of Computer Assisted Learning, 33(5), 403–423.

Mishra, P., & Koehler, M. (2006). Technological pedagogical content knowledge: A framework for teacher knowledge. Teachers College Record, 108(6), 1017–1054.

Mor, Y., Craft, B., & Maina, M. (2015). Introduction – Learning Design: Definitions, Current Issues and Grand Challenges. In M. Maina, B. Craft, & Y. Mor (Eds.), The Art & Science of Learning Design (pp. ix–xxvi). Rotterdam: Sense Publishers.

Nanard, M., Nanard, J., & Kahn, P. (1998). Pushing Reuse in Hypermedia Design: Golden Rules, Design Patterns and Constructive Templates. 11–20. ACM.

Peltier, J. W., Schibrowsky, J. A., & Drago, W. (2007). The Interdependence of the Factors Influencing the Perceived Quality of the Online Learning Experience: A Causal Model. Journal of Marketing Education; Boulder, 29(2), 140–153.

Plaisted, T., & Tkachov, N. (2011). Blackboard Tweaks: Tools for Academics, Designers and Programmers. Retrieved July 2, 2019, from http://tweaks.github.io/Tweaks/index.html

Roberts, J. (2018). Future and changing roles of staff in distance education: A study to identify training and professional development needs. Distance Education, 39(1), 37–53.

Sein, M. K., Henfridsson, O., Purao, S., & Rossi, M. (2011). Action Design Research. MIS Quarterly, 35(1), 37–56.

Singh, S., & Scholz, K. (2017). Using an e-authoring tool (H5P) to support blended learning: Librarians’ experience. In H. Partridge, K. Davis, & J. Thomas (Eds.), Me, Us, IT! Proceedings ASCILITE2017: 34th International Conference on Innovation, Practice and Research in the Use of Educational Technologies in Tertiary Education (pp. 158–162).

Stone, C., & O’Shea, S. (2019). Older, online and first: Recommendations for retention and success. Australasian Journal of Educational Technology, 35(1). https://doi.org/10.14742/ajet.3913

Yoo, Y., Boland, R. J., Lyytinen, K., & Majchrzak, A. (2012). Organizing for Innovation in the Digitized World. Organization Science, 23(5), 1398–1408.

Zittrain, J. (2008). The Future of the Internet–And How to Stop It. Yale University Press.

Leadership as defining what's successful

After spending a few days visiting friends and family in Central Queensland – not to mention enjoying the beach – a long 7+ hour drive home provided an opportunity for some thinking. I’ve long had significant qualms about the notion of leadership, especially as it is increasingly being understood and defined by the current corporatisation of universities and schools. The rhetoric is increasingly strong amongst schools with the current fashion for assuming that Principals can be the saviour of schools that have broken free from the evils of bureaucracy. I even work within an institution where a leadership research group is quite active amongst the education faculty.

On the whole, my experience of leadership in organisations has been negative. At best, the institution bumbles along in spite of bad leadership. I’m wondering whether or not questioning this notion of leadership might form an interesting future research agenda. The following is an attempt to make concrete some thinking from the drive home, spark some comments, and set me up for some more (re-)reading. It’s an ill-informed mind dump, sparked somewhat by some early experiences on return from leave.

Fisherman’s beach by David T Jones, on Flickr

In the current complex organisational environment, I’m thinking that “leadership” is essentially the power to define what success is, both prior to and after the fact. I wonder whether any apparent success attributed to the “great leader” is solely down to how they have defined success? I’m also wondering how much of that success is due to less than ethical or logical definitions of success?

The definition of success prior to the fact is embodied in the model of process currently assumed by leaders, i.e. teleological processes. The great leader must define some ideal future state (e.g. adoption of Moodle, Peoplesoft, or some other system; an organisational restructure that creates “one university”; or, perhaps even worse, a new 5-year strategic plan etc.) behind which the weight of the institution will then be thrown. All roads and work must lead to the defined point of success.

This is the Dave Snowden idea of giving up the evolutionary potential of the present for the promise of some ideal future state. A point he’ll often illustrate with this quote from Seneca

The greatest loss of time is delay and expectation, which depend upon the future. We let go the present, which we have in our power, and look forward to that which depends upon chance, and so relinquish a certainty for an uncertainty.

Snowden’s use of this quote comes from the observation that some systems/situations are examples of Complex Adaptive Systems (CAS). These are systems where traditional expectations of cause and effect don’t hold. When you intervene in such systems you cannot predict what will happen, only observe it in retrospect. In such systems, the idea that you can specify up front where you want to go is little more than wishful thinking. So defining success – in these systems – prior to the fact is a little silly. It questions the assumptions underpinning such leadership, including the assumption that leaders can make a difference.

So when the Executive Dean of a Faculty – that includes programs in information technology and information systems – is awarded “ICT Educator of the Year” for the state because of the huge growth in student numbers, is it because of the changes he’s made? Or is it because he was lucky enough to be in power at (or just after) the peak of the IT boom? The assumption is that this leader (or perhaps his predecessor) made logical contributions and changes to the organisation to achieve this boom in student numbers. Or perhaps they made changes simply to enable the organisation to be better placed to handle and respond to the explosion in demand created by external changes.

But perhaps, rather than this single reason for success (great leadership), there were simply a large number of small factors – with no central driving intelligence or purpose – that enabled this particular institution to achieve what it achieved. Similarly, when a few years later the same group of IT-related programs had few if any students, it wasn’t because this “ICT Educator of the Year” had failed. Nor was it because of any other single factor, but instead hundreds and thousands of small factors, both internal and external (some larger than others).

The idea that there can be a single cause (or a single leader) for anything in a complex organisational environment seems to be faulty. But because it is demanded of them, leaders must spend more time attempting to define and convince people of their success. In essence then, successful leadership becomes more about your ability to define success and to promulgate wide acceptance of that definition.

KPIs and accountability galloping to help

This need to define and promulgate success is aided considerably by simple numeric measures. The number of student applications; DFW rates; numeric responses on student evaluation of courses – did you get 4.3?; journal impact factors and article citation metrics; and many, many more. These simple figures make it easy for leaders to define specific perspectives on success. This is problematic, and its many problems are well known. For example,

  • Goodhart’s law – “When a measure becomes a target, it ceases to be a good measure.”
  • Campbell’s law – “The more any quantitative social indicator (or even some qualitative indicator) is used for social decision-making, the more subject it will be to corruption pressures and the more apt it will be to distort and corrupt the social processes it is intended to monitor.”
  • the Lucas critique.

For example, you have the problem identified by Tutty et al. (2008) where, rather than improve teaching, institutional quality measures “actually encourage inferior teaching approaches” (p. 182). It’s why you have the LMS migration project receiving an institutional award for quality, even though for the first few weeks of the first semester it was largely unavailable to students due to dumb technical decisions by the project team and required a large additional investment in consultants to fix.

Would this project have received the award if a senior leader in the institution (and the institution itself) weren’t heavily reliant upon the project being seen as a success?

Would the people involved in giving the project the award have reasonable reasons for thinking it award winning? Is success of the project and of leadership all about who defines what perspective is important?

Some other quick questions

Some questions for me to consider.

  • Where does this perspective sit within the plethora of literature on leadership and organisational studies? Especially within the education literature? How much of this is influenced by my earlier reading of “Managing without Leadership: Towards a Theory of Organizational Functioning”?
  • Given the limited likelihood of changing how leadership is practiced within the current organisational and societal context, how do you act upon any insights this perspective might provide? i.e. how the hell do I live (and heaven forbid thrive) in such a context?

References

Tutty, J., Sheard, J., & Avram, C. (2008). Teaching in the current higher education environment: perceptions of IT academics. Computer Science Education, 18(3), 171–185.

Oh Academia

It’s been one of those weeks in academia.

Earlier in the week the “I quit academia” meme went through my Twitter stream. Perhaps the closest this meme came to me was @marksmithers’ “On leaving academia” post.

That was about the day when I had to pull the pin on a grant application. Great idea, something we could do and would probably make a difference, but I didn’t have the skills (or the time) to get it over the line.

As it happened, I was reading Asimov’s “Caves of Steel” this week and came across the following quote about the “Medievalists”, a disaffected part of society

people sometimes mistake their own shortcomings for those of society and want to fix the Cities because they don’t know how to fix themselves

On Tuesday night I wondered if you could replace “Cities” with “Universities” and capture some of the drivers behind the “I quit academia” meme.

And then I attended a presentation today titled “Playing the research game well”. All the standard pragmatic tropes – know your H-Index (mine’s only 16), know the impact factor of journals, only publish in journals with an impact factor greater than 3, meta-analyses get cited more, etc.

It is this sort of push for KPIs and objective measures that is being created by the corporatisation of the Australian University sector. The sort of push which makes me skeptical of Mark’s belief

that higher education institutions can and will find their way back to being genuinely positive friendly and enjoyable places to work and study.

If anything these moves are likely to increase the types of experiences Mark reports.

So, I certainly don’t think that the Asimov quote applies. That’s not to say that academics don’t have shortcomings. I have many – the grant application non-submission is indicative of some – but by far the larger looming problem (IMHO) is the changing nature of universities.

That said, it hasn’t been all that bad this week. I did get a phone call from a student in my course. A happy student. Telling stories about how he has been encouraged to experiment with the use of ICTs in his teaching and how he’s found a small group at his work who are collaborating.

Which raises the question, if you’re not going to quit academia (like Leigh commented on Mark’s post, I too am “trapped in wage slavery and servitude”) do you play the game or seek to change it?

Or should we all just take a spoonful?

IRAC – Four questions for learning analytics interventions

The following is an early description of work arising out of The Indicators Project, an ongoing attempt to think about learning analytics. With IRAC (Information, Representation, Affordances and Change), Colin Beer, Damien Clark and I are trying to develop a set of questions that can guide the use of learning analytics to improve learning and teaching. The following briefly describes:

  • why we’re doing this;
  • some of our assumptions;
  • the origins of IRAC;
  • the four questions; and,
  • a very early and rough attempt to use the four questions to think about existing approaches to learning analytics.

Why?

The spark for this work comes from observations made in a presentation from last year. In summary, the argument is that learning analytics has become a management fashion/fad in higher education, which generally means most implementations of learning analytics are not likely to be very mindful and, in turn, are very likely to be limited in their impact on learning and teaching. This has much in common with the raft of expenditure on data warehouses some years ago. Let alone examples such as graduate attributes, eportfolios, the LMS, open learning, learning objects etc. It would be nice to avoid this yet again.

There are characteristics of learning analytics that add to the difficulties associated with developing appropriate innovations that move beyond the faddish adoption of analytics. One of the major contributors is that the use of learning analytics encompasses many different bodies of literature, both within and outside learning and teaching. Many of these bodies of literature have developed important insights that can directly inform the use of learning analytics to improve learning and teaching. What’s worse, early indications are that – not surprisingly – most institutional learning analytics projects are apparently ignorant of the insights and lessons gained from this prior work.

In formulating IRAC – our four questions for learning analytics interventions – we’re attempting to help institutions consider the insights from this earlier work and thus enhance the quality of their learning analytics interventions. We’re also hoping that these four questions will inform our attempts to explore the effective use of learning analytics to improve learning and teaching. For me personally, I’m hoping this work can provide the tools and insights necessary to make my own teaching manageable, enjoyable and effective.

Assumptions

Perhaps the largest assumption underpinning the four questions is that the aim of a learning analytics intervention is to encourage and enable action by a range of stakeholders. If no action (use) results from a learning analytics project, then there can’t be any improvement to learning and teaching. This is similar to the argument by Clow (2012) that the key to learning analytics is action in the form of appropriate interventions. Also, Elias (2011) describes two steps that are necessary for the advancement of learning analytics

(1) the development of new processes and tools aimed at improving learning and teaching for individual students and instructors, and (2) the integration of these tools and processes into the practice of teaching and learning (p. 5)

Earlier work has found this integration into practice difficult. For example, Dawson & McWilliam (2008) identify a significant challenge for learning analytics as being able “to readily and accurately interpret the data and translate such findings into practice” (p. 12). Adding further complexity is the observation from Harmelen & Workman (2012) that learning analytics are part of a socio-technical system where success relies as much on “human decision-making and consequent action…as the technical components” (p. 4). The four questions proposed here aim to aid the design of learning analytics interventions that are integrated into the practice of learning and teaching.

Audrey Watters’ Friday night rant offers a slightly similar perspective more succinctly and effectively.

Foundations

In thinking about the importance of action and of learning analytics tools being designed to aid action we were led to the notion of Electronic Performance Support Systems (EPSS). EPSS embody a “perspective on designing systems that support learning and/or performing” (Hannafin et al., 2001, p. 658). EPSS are computer-based systems that “provide workers with the help they need to perform certain job tasks, at the time they need that help, and in a form that will be most helpful” (Reiser, 2001. p. 63).

All well and good. In reading about EPSS, we came across the notion of the performance zone. In framing the original definition of an EPSS, Gery (1991) identifies the need for people to enter the performance zone, defined as the metaphorical area where all of the necessary information, skills, dispositions, etc. come together to ensure successful completion of a task (Gery, 1991). For Villachica, Stone & Endicott (2006) the performance zone “emerges with the intersection of representations appropriate to the task, appropriate to the person, and containing critical features of the real world” (p. 550).

This definition of the performance zone is a restatement of Dickelman’s (1995) three design principles for cognitive artifacts drawn from Norman’s (1993) book “Things that make us smart”. In this book, Norman (1993) argues “that technology can make us smart” (p. 3) through our ability to create artifacts that expand our capabilities. At the same time, however, Norman (1993) argues that the “machine-centered view of the design of machines and, for that matter, the understanding of people” (p. 9) results in artifacts that “more often interferes and confuses than aids and clarifies” (p. 9).

Given our recent experience with institutional e-learning systems this view resonates quite strongly as a decent way of approaching the problem.

While the notions of EPSS, the performance zone and Norman’s (1993) insights into the design of cognitive artifacts form the scaffolding for the four questions, additional insight and support for each question arises from a range of other bodies of literature. The description of the four questions given below includes very brief descriptions of some of this literature. There are significantly more useful insights to be gained, and extending this will form part of our on-going work.

Our proposition is that effective consideration of these four questions with respect to a particular context, task and intervention will help focus attention on factors that will improve the implementation of a learning analytics intervention. In particular, it will increase the chances that the intervention will be integrated into practice and subsequently have a positive impact on the quality of the learning experience.

IRAC – the four questions

The following summarises the four questions, with a bit more of an expansion below.

  • Information: Is all the relevant information and only the relevant information available and being used appropriately?
  • Representation: Does the representation of this information aid the task being undertaken?
  • Affordances: Are there appropriate affordances for action?
  • Change: How will the information, representation and the affordances be changed?
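To make this concrete, here is a minimal sketch – in Python, purely illustrative; the function name, data structure and example answers are my own assumptions, not part of the IRAC work – of how the four questions might be used as a structured checklist when reviewing a proposed learning analytics intervention.

```python
# Illustrative sketch only: the IRAC questions as a review checklist.
# The structure and names below are assumptions for illustration.

IRAC_QUESTIONS = {
    "Information": "Is all the relevant information and only the relevant "
                   "information available and being used appropriately?",
    "Representation": "Does the representation of this information aid the "
                      "task being undertaken?",
    "Affordances": "Are there appropriate affordances for action?",
    "Change": "How will the information, representation and the affordances "
              "be changed?",
}

def review_intervention(name, answers):
    """Print an IRAC review of an intervention, flagging unanswered
    questions so that gaps in the design are visible."""
    print(f"IRAC review: {name}")
    for label, question in IRAC_QUESTIONS.items():
        note = answers.get(label, "NOT YET CONSIDERED")
        print(f"- {label}: {question}\n    -> {note}")

# Hypothetical usage for a data warehouse dashboard project.
review_intervention("Data warehouse dashboard", {
    "Information": "Clickstream only; no assessment context.",
    "Representation": "Dashboard sits outside the LMS, breaking task flow.",
})
```

The value isn’t in the code; it’s in the forced visibility of the questions that haven’t yet been considered.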

Information

While there is an “information explosion”, the information we collect is usually about “those things that are easiest to identify and count or measure” but which may have “little or no connection with those factors of greatest importance” (Norman, 1993, p. 13). This leads to Verhulst’s observation (cited in Bollier & Firestone, 2010) that “big data is driven more by storage capabilities than by superior ways to ascertain useful knowledge” (p. 14). Potential considerations include: whether the information required is technically and ethically available for use; how the information is cleaned, analysed and manipulated during use; and whether the information is sufficient to fulfill the needs of the task (and many, many more).

Representation

A bad representation will turn a problem into a reflective challenge, while an appropriate representation can transform the same problem into a simple, straightforward task (Norman, 1993). Representation has a profound impact on design work (Hevner et al., 2004), particularly on the way in which tasks and problems are conceived (Boland, 2002). How information is represented can make a dramatic difference to the ease of a task (Norman, 1993). In order to maintain performance, it is necessary for people to be “able to learn, use, and reference access necessary information within a single context and without breaks in the natural flow of performing their jobs” (Villachica, Stone, & Endicott, 2006, p. 540). Considerations here include how easy it is for people to understand and analyse the implications of the findings from learning analytics (and many, many more).

Affordances

A poorly designed or constructed artifact can greatly hinder its use (Norman, 1993). For an application of information technology to have a positive impact on individual performance, it must be utilised and be a good fit for the task it supports (Goodhue & Thompson, 1995). Human beings tend to use objects in “ways suggested by the most salient perceived affordances, not in ways that are difficult to discover” (Norman, 1993, p. 106). The nature of such affordances is not inherent to the artifact, but is instead co-determined by the properties of the artifact in relation to the properties of the individual, including the goals of that individual (Young et al., 2000). Glassey (1998) observes that through the provision of “the wrong end-user tools and failing to engage and enable end users” even the best implemented data warehouses “sit abandoned” (p. 62). The consideration here is whether or not the tool provides support for action that is appropriate to the context, the individuals and the task.

Change

The idea of evolutionary development has been central to the theory of decision support systems (DSS) since its inception in the early 1970s (Arnott & Pervan, 2005). Rather than proceeding in a linear or parallel fashion, development occurs through continuous action cycles involving significant user participation (Arnott & Pervan, 2005). Beyond the need for the systems or tools to undergo change, there is a need for the information being captured to change. Buckingham Shum (2012) identifies the risk that research and development based on data already being gathered will tend to perpetuate the existing dominant approaches through which the data was generated. Another factor is Bollier and Firestone’s (2010) observation that once “people know there is an automated system in place, they may deliberately try to game it” (p. 6). Finally, there is the observation that universities are complex systems (Beer et al., 2012). Complex systems require reflective and adaptive approaches that seek to identify and respond to emergent behaviour in order to stimulate increased interaction and communication (Boustaini, 2010). Potential considerations here include: who is able to implement change? Which of the three prior questions can be changed? How radical can those changes be? Is a diversity of change possible?

Using the four questions

It is not uncommon for Australian Universities to rely on a data warehouse system to support learning analytics interventions. This in part is due to the observation that data warehouses enable significant consideration of the information (question 1). This is not surprising given that the origins and purpose of data warehouses was to provide an integrated set of databases to provide information to decision makers (Arnott & Pervan, 2005). Data warehouses provide the foundation for learning analytics. However, the development of data warehouses can be dominated by IT departments with little experience with decision support (Arnott & Pervan, 2005) and a tendency to focus on technical implementation issues at the expense of user experience (Glassey, 1998).

In terms of consideration of the representation (question 2), the data warehouse generally provides reports and dashboards for ad hoc analysis and standard business measurements (van Dyk, 2008). In a learning analytics context, dashboards from a data warehouse will typically sit outside the context in which learning and teaching occurs (e.g. the LMS). For a learner or teacher to consult the data warehouse requires the individual to break away from the LMS, open up another application, and expend cognitive effort connecting the dashboard representation with activity from the LMS. Data warehouses also provide a range of query tools that offer a swathe of options and filters for the information they hold. While such power potentially offers good support for change (question 4), that power comes with an increase in difficulty. At least one institution mandates the completion of training sessions to assure competence with the technology and ensure the information is not misinterpreted. This necessity could be interpreted as evidence of limited consideration of representation (question 2) and affordances (question 3). At least some of these limitations arise from the origins of data warehouse tools in the management of businesses, rather than learning and teaching.

Harmelen and Workman (2012) use Purdue University’s Course Signals and Desire2Learn’s Student Success System (S3) as two examples of more advanced learning analytics applications. The advances offered by these systems arise from greater consideration being given to the four questions. In particular, both tools provide a range of affordances (question 3) for action on the part of teaching staff. S3 goes so far as to provide a “basic case management tool for managing interventions” (Harmelen & Workman, 2012, p. 12) and there are future intentions of using this feature to measure intervention effectiveness. Course Signals offers advances in terms of information (question 1) and representation (question 2) by moving beyond simple tabular reporting of statistics toward a traffic lights system based on an algorithm drawing on 44 different indicators from a range of sources to predict student risk status. While this algorithm has a history of development, Essa and Ayad (2012) argue that the reliance on a single algorithm contains “potential sources of bias” (n.p.) as it is based on the assumptions of a particular course model from a particular institution. Essa and Ayad (2012) go on to describe S3’s advances, such as an ensemble modelling strategy that supports model tuning (information and change); inclusion of social network analysis (information); and a range of different visualisations, including interactive visualisations allowing comparisons (representation, affordance and change).
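As a rough illustration of the general mechanics behind such traffic-light systems – emphatically not Course Signals’ actual algorithm; the indicators, weights and thresholds below are invented for the sketch – the core idea reduces to combining weighted indicators into a risk score and mapping that score onto a colour.

```python
# Invented indicators, weights and thresholds; real systems such as
# Course Signals draw on many more indicators and institution-specific models.

WEIGHTS = {
    "lms_logins_per_week": -0.4,  # more activity lowers risk
    "assignments_missed": 0.5,    # missed work raises risk
    "current_grade_pct": -0.6,    # higher grades lower risk
}

def risk_score(student):
    """Combine normalised (0..1) indicators into a single risk score."""
    return sum(WEIGHTS[key] * student[key] for key in WEIGHTS)

def traffic_light(score, amber=-0.1, red=0.2):
    """Map a risk score onto a colour; thresholds are assumptions."""
    if score >= red:
        return "red"
    return "amber" if score >= amber else "green"

student = {"lms_logins_per_week": 0.2, "assignments_missed": 0.9,
           "current_grade_pct": 0.2}
print(traffic_light(risk_score(student)))  # -> "red" (score = 0.25)
```

Essa and Ayad’s point about bias can be read directly off even this toy version: every weight and threshold encodes assumptions from whichever courses and institution the model was built on.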

References

Arnott, D., & Pervan, G. (2005). A critical analysis of decision support systems research. Journal of Information Technology, 20(2), 67–87. doi:10.1057/palgrave.jit.2000035

Beer, C., Jones, D., & Clark, D. (2012). Analytics and complexity : Learning and leading for the future. In M. Brown, M. Hartnett, & T. Stewart (Eds.), Future Challenges, Sustainable Futures. Proceedings of ascilite Wellington 2012 (pp. 78–87). Wellington, NZ.

Boland, R. J. (2002). Design in the punctuation of management action. In R. Boland (Ed.). Weatherhead School of Management.

Bollier, D., & Firestone, C. (2010). The promise and peril of big data. Washington DC: The Aspen Institute.

Buckingham Shum, S. (2012). Learning Analytics. Moscow: UNESCO. http://iite.unesco.org/pics/publications/en/files/3214711.pdf

Clow, D. (2012). The learning analytics cycle. Proceedings of the 2nd International Conference on Learning Analytics and Knowledge – LAK’12, 134–138. doi:10.1145/2330601.2330636

Dawson, S., & McWilliam, E. (2008). Investigating the application of IT generated data as an indicator of learning and teaching performance. Canberra: Australian Learning and Teaching Council.

Elias, T. (2011). Learning Analytics: Definitions, Processes and Potential. http://learninganalytics.net/LearningAnalyticsDefinitionsProcessesPotential.pdf.

Essa, A., & Ayad, H. (2012). Student success system: risk analytics and data visualization using ensembles of predictive models. In Proceedings of the 2nd International Conference on Learning Analytics and Knowledge – LAK’12 (pp. 2–5). Vancouver: ACM Press.

Glassey, K. (1998). Seducing the End User. Communications of the ACM, 41(9), 62–69.

Goodhue, D., & Thompson, R. (1995). Task-technology fit and individual performance. MIS Quarterly, 19(2), 213. doi:10.2307/249689

Hevner, A., March, S., Park, J., & Ram, S. (2004). Design science in information systems research. MIS Quarterly, 28(1), 75–105.

Norman, D. A. (1993). Things that make us smart: Defending human attributes in the age of the machine. Reading, MA: Addison-Wesley.

Harmelen, M. Van, & Workman, D. (2012). Analytics for Learning and Teaching. http://publications.cetis.ac.uk/2012/516

Van Dyk, L. (2008). A data warehouse model for micro-level decision making in higher education. The Electronic Journal of e-Learning, 6(3), 235–244.

Villachica, S., Stone, D., & Endicott, J. (2006). Performance Support Systems. In J. Pershing (Ed.), Handbook of Human Performance Technology (3rd ed., pp. 539–566). San Francisco, CA: John Wiley & Sons.

Schools and computers: Tales of a digital romance

It’s the last week of semester. EDC3100, ICTs and Pedagogy, is drawing to a close and I’m putting together the last bits of activities/resources for the students in the course. Most are focused on the last assignment, in particular a final essay that asks them to evaluate their use of ICTs on their three-week Professional Experience, during which they were in schools and other locations teaching. Perhaps the most challenging activity I’d like them to engage in is questioning their assumptions around learning, teaching and the application of ICTs. A particularly challenging activity given that much of what passes for the use of ICTs in formal education – including much of my own work – hasn’t been very effective at questioning assumptions.

As one of the scaffolds for this activity I am planning to point the students toward Bigum (2012) as one strategy to illustrate questioning of assumptions. The following is a summary of my attempt to extract some messages from Bigum (2012) that I think are particularly interesting in the context of EDC3100. It also tracks some meanderings around related areas of knowledge.

Background

The rapid pace of change in computing is illustrated with some stats from Google’s CEO: every two days the world produces more information “than had been produced in total from the origin of the species to 2003” (p. 16).

Yet, if you go back 30 years, schools had more computers than the general community. A situation that is now reversed. Later in the paper, Finger and Lee (2010) are cited as finding

For the class of 30 children the total home expenditure for computing and related technologies was $438,200. The expenditure for the classroom was $24,680. Even allowing for the sharing in families, the difference between the two locations is clearly significant.

Rather than transform or revolutionise the processes and outcomes of schooling, “it is hard to suggest that anything even remotely revolutionary has actually taken place”.

But once schools adjusted to these initial perturbations, schooling continued on much as it always had. More than this, schools learnt how to domesticate new technologies (Bigum 2002), or as Tyack and Cuban (1995, p. 126) put it, “computers meet classrooms, classrooms win.”

This observation fits with the expressed view that

schools have consistently attempted to make sense of “new” technologies by locating them within the logics and ways of doing things with which schools were familiar. (p. 17)

and with the broader notion of the “grammar of school”, including some of Papert’s observations. In particular, the interpretation of the computer/ICTs as a “teaching machine” rather than other interpretations (in Papert’s case, constructionist ones).

(Side note: in revisiting Papert’s “Why School Reform is Impossible” I’ve become more aware of this distinction Papert made

“Reform” and “change” are not synonymous. Tyack and Cuban clinched my belief that the prospects really are indeed bleak for deep change coming from deliberate attempts to impose a specific new form on education. However, some changes, arguably the most important ones in social cultural spheres, come about by evolution rather than by deliberate design — by what I am inspired by Dan Dennett (1994) to call “Darwinian design.”

This has some significant implications for my own thinking that I need to revisit.)

Budding romance

The entry of micro-computers into schools around the 80s was in part enabled by their similarity to calculators that had been used since the mid 1970s.

The similarities allowed teachers to imagine how to use the new technologies in ways consistent with the old… for a technology to find acceptance it has to generate uses.

which led to the development of applications for teaching and administrative work.

This led to the rise of vendors selling applications and the marketing of computers as “an unavoidable part of the educational landscape of the future”. At this stage, computers may have become like television, radio and video players – other devices already in classrooms (connecting somewhat here with Papert’s “computers as teaching machine” comment above). But a point of difference arose from the increasing spread of computers into other parts of society as solutions to a range of problems. ICTs were increasingly linked “with such seemingly desirable characteristics as ‘improvement’, ‘efficiency’ and, by extension, educational status” (p. 19).

Perhaps the strongest current indicator of this linkage (at least for EDC3100 students) is the presence of the ICT Capability in the Australian Curriculum. Not something that has happened with the other “teaching machines”.

Hence it became increasingly rational/obvious that schools had to have computers. What was happening with computers outside schools became an “evidence surrogate” for schools, i.e.

if ICTs are doing so much for banking, newspapers, or the military, it stands to reason that they are or can do good things in schooling. (p. 20)

This leads to comparison studies; each new wave of ICTs (e.g. iPads) comes hand in hand with a new raft of comparison studies. Studies that are “like comparing oranges with orangutans”.

However, despite the oft-cited “schools + computers = improvement” claim, what computers are used for in schools is always constrained by dominant beliefs about how schools should work. (p. 20)

Domestic harmony

This is where the “grammar of school” or the schema perspective comes in.

Seeing new things in terms of what we know is how humans initially make sense of the new. When cars first appeared they were talked about as horseless carriages. The first motion pictures were made by filming actors on a stage and so on.

School leaders and teachers make decisions about which technologies fit within schools’ current routines and structures. If there is no fit, then the technology is banned. Not to mention that “the more popular a particular technology is with students the greater the chance it will be banned”.

While the adoption of ICTs into schools begins with an aim of improvement, it often ends up with “integrating them into existing routines, deploying them to meet existing goals and, generally, failing to engage with technologies in ways consistent with the world beyond the classroom” (p. 22).

Summarising the pattern

Schools enter a cycle of identifying, buying and domesticating the “next best thing” on the assumption that there will be improvements to learning. With the increasing time/cost of staying in this game, there are more attempts to measure the improvement. Factors that are not measurable get swept under the carpet.

The folly of looking for improvement

The focus on improvement “reduces much debate about computers in schools to the level of right/wrong; good/bad; improved/not improved”.

Beyond this is the idea that “ICTs change things”. Sproull and Kiesler’s (1991) research

clearly demonstrates that when you introduce a technology, a new way of doing things into a setting, things change and that seeking to “assess” the change or compare the new way of doing things with the old makes little sense

An approach that is holistic, that does not separate the social and the technological, allows a shift from looking at what has improved to looking to see what has changed. Changes that “may have very little to do with what was hoped for or imagined”.

Three different mindsets

This type of approach enables the two mindsets informing current debates/practice to be questioned. Those mindsets are

  1. Embrace ICTs to improve schools

    This mindset sees schools as doing well in preparing students for the future. The curriculum is focused on getting the right answer and teaching is focused on how to achieve this. Research here performs comparison studies, looking for improvement, and the complexities of teaching with ICTs are embodied in concepts such as TPACK.

    This is the mindset that underpins much of what is in EDC3100.

  2. Schools cannot be improved, by ICTs or any other means.

    The idea that ICTs herald a change as significant as movable type. This connects with the de-schooling movement: schools, being based on a broadcast logic, will face the same difficulties facing newspapers, record companies etc. A mindset in which improving schools is a waste of time.

Bigum proposes a different mindset, summarised as:

  • Schools face real challenges and need to change.
  • Rather than replace the current single solution with another, there is a need to “encourage a proliferation of thinking about and doing school differently”.
  • There is a need to focus on change and not measurement, on the social and not just the technical.
  • That this can help disrupt traditional relationships including those between: schools and knowledge, knowledge and children, children and teachers, and learners and communities.

References

Bigum, C. (2012). Schools and computers: Tales of a digital romance. In L. Rowan & C. Bigum (Eds.), Transformative Approaches to New Technologies and student diversity in futures oriented classrooms: Future Proofing Education (pp. 15–28). London: Springer.

One example of industrial e-learning as "on the web" not "of the web"

The following arises from some recent experiences with the idea of “minimum course sites” and this observation from @cogdog in this blog post

I have no idea if this is off base, but frankly it is a major (to me) difference of doing things ON the web (e.g. putting stuff inside LMSes) and doing things OF the web.

It’s also a query to see if anyone knows of an institution that has implemented a search engine across its institutional e-learning systems in a way that effectively allows users to search for resources in a course-centric way.

The symptom

There’s a push by my current institution’s central L&T folk to develop a minimum course site standard: some minimum set of services, buttons etc. that will achieve the nirvana of consistency. Everything will be the same.

The main espoused reason as to why this is a good thing is that the students have been asking for it. There has been consistent feedback from students that none of the course sites are the same.

The problem

Of course, the real problem isn’t that students want everything to be the same. The real problem is that they can’t find what they are looking for. Sure, if everything was the same then they might have some ideas about where to find things, but that has problems including:

  • The idea that every course site at a university can be structured the same is a misunderstanding of the diversity inherent in courses. Especially as people try to move away from traditional models such as lecture/tutorial etc.
  • The idea that one particular structure will be understandable/appropriate to all people is also questionable.
  • Even if all the sites are consistent and this works, it won’t solve the problem of when the student is working on a question about “universal design” and wants to find where that was mentioned amongst the many artefacts in the course site.

The solution

The idea that the solution to this problem is to waste huge amounts of resources in the forlorn attempt to achieve some vaguely acceptable minimum standards that are broadly applicable seems to be a perfect example of “doing things ON the web, rather than doing things OF the web”.

I can’t remember the last time I visited a large website and attempted to find some important information by navigating through the site structure. Generally, I – like I expect most people – come to a large site almost directly to the content I am interested in either through a link provided by someone or via a search engine.

Broader implications

To me the idea of solving this problem through minimum standards is a rather large indication of the shortcomings of industrial e-learning. Industrial e-learning is the label I’ve applied to the current common paradigm of e-learning adopted by most universities. It’s techno-rational in its foundations and involves the planned management of large enterprise systems (be they open source or not). I propose that “industrial e-learning” is capable of, and concerned primarily with, “doing things ON the web, rather than doing things OF the web”.

Some potential contributing factors might include:

  1. Existing mindsets.
    At this institution, many of the central L&T folk come from a tradition of print-based distance education where consistency of appearance was a huge consideration. Many of these folk are perhaps not “of the web”.
  2. Limitations of the tools.
    It doesn’t appear that Moodle has a decent search engine, which is not surprising given the inspiration for its design and its stated intent of not being an information repository.
  3. The nature of industrial e-learning, its product and process.
    A key characteristic of industrial e-learning is a process that goes something like this
    1. Spend a long time objectively selecting an appropriate tool.
    2. Use that tool for a long time to recoup the cost of moving to the new tool.
    3. Aim to keep the tool as vanilla as possible to reduce problems with upgrades from the vendor.
      This applies to open source systems as much as proprietary systems.
    4. Employ people to help others learn how to best use the system to achieve their ends.
      Importantly, the staff employed are generally not there to help others learn how to “best achieve their ends”; the focus definitely tends to be on how to “best use the system to achieve their ends”.
    5. Any changes to the system have to be requested through a lengthy process that involves consensus amongst most people and the approval of the people employed in point 4.

    This means that industrial e-learning is set up to do things the way the chosen systems work. If you have to do something that isn’t directly supported by the system, it’s very, very hard. e.g. add a search engine to Moodle.
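To give a feel for how small the technical core of such a search could be, here is a hypothetical sketch that indexes a folder of exported course pages rather than talking to Moodle itself (I’m not assuming any particular Moodle API):

```python
# Hypothetical sketch: a crude inverted index over course pages exported
# as HTML/text files. A real course-wide search would need to crawl the
# LMS itself; this only shows the underlying idea.
import os
import re
from collections import defaultdict

def build_index(directory):
    """Map each lower-cased word to the set of files containing it."""
    index = defaultdict(set)
    for name in os.listdir(directory):
        path = os.path.join(directory, name)
        if not os.path.isfile(path):
            continue
        with open(path, encoding="utf-8", errors="ignore") as f:
            text = re.sub(r"<[^>]+>", " ", f.read())  # strip HTML tags
        for word in re.findall(r"[a-z']+", text.lower()):
            index[word].add(name)
    return index

def search(index, query):
    """Return the files containing every word of the query."""
    words = query.lower().split()
    results = index.get(words[0], set()).copy() if words else set()
    for word in words[1:]:
        results &= index.get(word, set())
    return results

# Hypothetical usage: index a folder of exported course pages.
# index = build_index("course_site_export/")
# print(search(index, "universal design"))
```

Even something this crude would let a student type “universal design” and find every artefact in the course site that mentions it, which is the actual problem the minimum standards push is trying to solve.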

All of these make it very hard for industrial e-learning to be “doing things OF the web”.

Lessons for the meta-level of networked learning?

This semester I’m teaching EDU8117, Networked and Global Learning, one of the Masters-level courses here at USQ. It’s been an interesting experience because I’m essentially supporting a design – a very detailed “constructive alignment” design – prepared by someone else. The following is a belated start on my plan to engage in the course, at some level, like a student. The requirement was to use one of a few provided quotes attempting to define either networked learning or global learning and link it to personal experience. A first step in developing a research article on the topic.

Networked learning

As a nascent/closet connectivist, networked learning is the term of this pair that most interests me – though both are increasingly relevant to my current practice. All three of the quotes around networked learning spoke to various aspects of my experience; however, the Bonzo and Parchoma (2010, p. 912) quote really resonated, especially this part (my emphasis added)

that social media is a collection of ideas about community, openness, flexibility, collaboration, transformation and it is all user-centred. If education and educational institutions can understand and adopt these principles, perhaps there is a chance for significant change in how we teach and learn in formal and informal settings. The challenge is to discover how to facilitate this change.

At the moment I have yet to read the rest of the article – it is somewhat ironic that I am focusing on networked learning, whilst struggling with limited network access due to the limitations of a local telecommunications company – so I will have to assume that Bonzo and Parchoma are using this collection of ideas from social media as important ideas for networked learning.

What strikes me about this quote is that I think the majority of what passes for institutional support for networked learning – in my context, Australian universities (though I believe there are significant similarities across universities worldwide) – is failing, or at least struggling mightily, “to discover how to facilitate this change”.

This perspective comes from two main sources:

  1. my PhD thesis; and,
    The thesis argued that how universities tend to implement e-learning is completely wrong for the nature of e-learning, and formulated an alternate design theory. Interestingly, a primary difference between the “wrong” way (how they are doing it now) and the “right” way (my design theory) is how well they match (or don’t) Bonzo and Parchoma’s (2010) collection of ideas from social media.
  2. my recent experience starting work as a teaching academic at a new university.
    In my prior roles – through most of the noughties – I was in an environment where I had significant technical knowledge and access. This meant that when I taught I was able to engage in an awful lot of bricolage 1. In the main because the “LMS” I was using was one that I had designed to be user-centered, flexible and open, and I still had the access to make changes.

    On arriving at my new institution, I am now just a normal academic user of the institutional LMS, which means I’m stuck with what I’m given. What I’ve been given – the “LMS” and other systems – are missing great swathes of functionality and there is no way I can engage in bricolage to transform an aspect of the system into something more useful or interesting.

Meta-networked learning

Which brings me to a way in which I’m interested in extending this “definition” of networked learning to a community. Typically networked learning – at least within an institutional setting – is focused on how the students and the teachers are engaging in networked learning. More specifically, how they are using the LMS and associated institutional systems (because you can get in trouble for using something different). While this level of networked learning is important and something I need to engage in as a teaching academic within an institution, I feel what I can do at this level is being significantly constrained because the meta-level of networked learning is broken.

I’m defining the meta-level of networked learning as how the network of people (teaching staff, support staff, management, students), communities, technologies, policies, and processes within an institution learn about how to implement networked learning. How the network of all these elements work (or not) together to enable the other level of networked learning.

Perhaps the major problem I see with the meta-level of networked learning is that it isn’t thought of as a learning process. Instead, it is widely seen as the roll-out of an institutional, enterprise software system under the auspices of some senior member of staff. A conception that does not allow much space for being about “community, openness, flexibility, collaboration, transformation and it is all user-centred” (Bonzo & Parchoma, 2010, p. 912). Subsequently, I wonder: “If education and educational institutions can understand and adopt these principles” (p. 912) and apply them to the meta-level of networked learning, then “perhaps there is a chance for significant change in how we teach and learn in formal and informal settings” (p. 912). As always, “The challenge is to discover how to facilitate this change” (p. 912). Beyond that, I wonder what impact such a change might have on the experience of the institution’s learners, teachers and other staff. Indeed, what impact it might have on the institutions themselves.

References

Bonzo, J., & Parchoma, G. (2010). The Paradox of Social Media and Higher Education Institutions. Networked Learning: Seventh International Conference (pp. 912–918). Retrieved from http://lancaster.academia.edu/GaleParchoma/Papers/301035/The_Paradox_of_Social_Media_and_Higher_Education_Institutions

Hovorka, D., & Germonprez, M. (2009). Tinkering, tailoring and bricolage: Implications for theories of design. AMCIS’2009. Retrieved from http://aisel.aisnet.org/amcis2009/488

1 Hovorka and Germonprez (2009) cite Gabriel (2002) and Ciborra (2002) as describing bricolage as “as a way of describing modes of use characterized by tinkering, improvisation, and the resulting serendipitous, unexpected outcomes”.

Schemata and the source of dissonance?

The following is intended to be an illustration of one of the potential origins of the gap between learning technologists and educators. It picks up on the idea of schemata from this week’s study in one course and connects to my point about the dissonance between how educational technology is implemented in universities and what we know about how people learn.

I’m sure folk who have been around the education discipline longer than I will have seen this already. But it is a nice little activity and not one that I’d seen previously.

An experiment

Read the following paragraph and fill in the blanks. If you’re really keen add a comment below with what you got. Actually, gathering a collection of responses from a range of people would be really interesting.

The questions that p________ face as they raise ch________ from in_________ to adult are not easy to an _________. Both f______ and m________ can become concerned when health problems such as co_________ arise anytime after the e____ stage to later life. Experts recommend that young ch____ should have plenty of s________ and nutritious food for healthy growth. B___ and g____ should not share the same b______ or even be in the same r______. They may be afraid of the d_____.

Now, take a look at the original version of this paragraph.

Is there any difference between it and what you got? Certainly was for me.

Schemata

This problem was introduced in a week that was looking at Piaget and other theories about how folk learn. In particular, this example was used as an example of the role schemata play in how people perceive and process the world and what is happening within it.

I am a father of three wonderful kids. So, over the last 10+ years I’ve developed some significant schemata around raising kids. When I read the above paragraph, the words that filled the blanks for me were: parents, children, infant, answer, fathers, mothers… and it was here that I first paused. None of my children really suffered from colic, so that didn’t spring to mind, but I started actively searching for ways I could make this paragraph fit the schemata that I had activated, i.e. I was thinking “parent”, so I was trying to make these things fit.

Schemata are mental representations of an associated set of perceptions etc. They influence how you see what is going on.

I’m somewhat interested in seeing what words others have gotten from the above exercise, especially those without (recent) experience of parental responsibilities.

A difference of schemata

Learning technologists (or just plain innovative teachers) have significantly different schemata than your plain, everyday academic. Especially those that haven’t had much experience of online learning, constructivist learning, *insert “good” teaching practice of your choice*. Even within the population of learning technologists there is a vast difference in schemata.

Different schemata mean that these folk see the world in very different ways.

A triumph of assimilation over accommodation

The on-going tendency of folk to say things like "Online no substitute for face to face teaching" (as in an article from the Australian newspaper's higher education section) says something about their schemata and (to extend the (naive/simplistic) application of Piaget) the triumph of assimilation over accommodation.

For Piaget, people are driven to maintain an equilibrium between what they know and what they observe in the outside world. When they perceive something new in the world, they enter a state of disequilibrium and are driven to return to balance. There are two ways this is done.

  1. Assimilation – where the new insight is fitted into existing schemata.
  2. Accommodation – where schemata are changed (either old are modified or new are created) to account for the new insights.

I’d suggest that for a majority of academic staff (and senior management) when it comes to new approaches to learning and teaching their primary coping mechanism has been assimilation. Forcing those new approaches into the schemata they already have. i.e. the Moodle course site is a great place to upload all my handouts and have online lectures.

As I’ve argued before I believe this is because the approaches used to introduce new learning approaches in universities have had more in common with behaviourism than constructivism. Consequently the approaches have not been all that successful in changing schemata.

Some stories from teaching awards

This particular post tells some personal stories about teaching awards within Australian higher education. It's inspired by a tweet or two from @jonpowles.

Some personal success

For my sins, I was the “recipient of the Vice-Chancellor’s Award for Quality Teaching for the Year 2000”. The citation includes

in recognition of demonstrated outstanding practices in teaching and learning at…., and in recognition of his contribution to the development of online learning and web-based teaching within the University and beyond

I remain a little annoyed that this was pre-ALTC. The potential extra funds from a national citation would have helped the professional development fund. But the real problem with this award was the message I received from this experience about the value of teaching to the institution. Here's a quick summary. (BTW, the institutional teaching awards had been going for at least 2 or 3 years before 2000; this was not the first time they'd done this.)

Jumping through hoops

As part of the application process, I had to create evidence to justify that my teaching was good quality. That’s a fairly standard process.

What troubled me then, and troubles me to this day, is that the institution had no way of knowing. Its core business is learning and teaching, and it had no mechanisms in place that could identify the good and the bad teachers.

In fact, at that stage the institution didn't have a teaching evaluation system. One of my "contributions to the development of online learning" was developing a web-based survey mechanism that I used in my own publications. This publication reports response rates of between 29% and 41% in one of my courses.

It is my understanding that the 2010 institutional evaluation system still dreams about reaching percentages that high.

Copy editing as a notification mechanism

Want to know how I found out I’d won the award? It was when an admin assistant from the L&T division rang me up and asked me to approve the wording of the citation.

Apparently, the Vice-Chancellor had been busy and/or away and hadn't yet officially signed off on the result, so I couldn't be officially notified. However, the date for the graduation ceremony at which the award was to be given was fast approaching. In order to get the citation printed, framed, and physically available at the ceremony, the folk responsible for implementation had to go ahead and ask me to check the copy.

Seeing the other applications

I actually don’t remember exactly how this happened. I believe it was part of checking the copy of the citation, however it happened I ended up with a package that contained the submissions from all of the other applicants.

Double dipping

The award brought with it some financial reward, both at the faculty level (winning the faculty award was the first step) and the university level. The trouble was that even this part of the process was flawed. Though it was flawed in my favour. I got paid twice!

The money went into a professional development fund that was used for conference travel, equipment etc. Imagine my surprise and delight when my professional development fund received the reward, twice.

You didn’t make a difference

A significant part of the reason for the reward was my work in online learning and, in particular, the development of the Webfuse e-learning system. Parts of it are still in use at the institution, and the story is told in more detail in my thesis.

About four years after I received this award recognising the contribution, a new Dean told me not to worry about working on Webfuse anymore; it had made no significant difference to learning and teaching within the faculty.

Mixed messages and other errors

Can you see how the above experience might make someone a touch cynical about the value of teaching awards? It certainly didn't appear to me that the recognition of quality teaching was so essential to the institution's operations that they had efficient and effective processes. Instead, it felt that the teaching award was just some add-on. Not to mention a very subjective add-on at that.

But the mixed messages didn’t stop there. They continued on with the rise of the ALTC. Some additional observed “errors”.

Invest at the tail end

With the rise of the ALTC it became increasingly important that an institution and its staff be seen to receive teaching citations. The number of ALTC teaching citations received became a KPI on management plans. Resources started to be assigned to ensuring the awarding of ALTC citations.

Obviously those resources were invested at the input stage of the process: into the teaching environment, to encourage and enable university staff to engage in quality learning and teaching. No.

Instead it was invested in hiring part-time staff to assist in the writing of the ALTC citation applications. It was invested in performing additional teaching evaluations for the institutional teaching award winners to cover up the shortcomings (read: absence) of an effective broad-scale teaching evaluation system. It was invested in bringing ALTC winners onto campus to give "rah-rah" speeches about the value of teaching quality and "how I did it" pointers.

Reward the individual, not the team

Later in my career I briefly – in between organisational restructures – was responsible for the curriculum design and development unit at the institution. During that time, a very talented curriculum designer worked very hard and very well with a keen and talented accounting academic to entirely re-design an Accounting course. The re-design incorporated all the right educational buzz words – "cognitive apprenticeship" – and the current ed tech fads – Second Life – and was a great success. Within a year or two the accounting academic received an institutional award and then an ALTC citation.

The problem was that the work the citation was for could never have been completed by the academic alone. Without the curriculum designer involved – and the sheer amount of effort she invested in the project – the work would never have happened. Not surprisingly, the curriculum designer was somewhat miffed.

But it goes deeper than that. The work would also not have been possible without the efforts of a range of staff within the curriculum design unit, not to mention a whole range of other teaching staff (this course often has tens of teaching staff at multiple campuses).

I know there are some ALTC citations that have been awarded to teams, but most ALTC citations are to individuals and this is certainly one example where a team missed out.

Attempt to repeat the success and fail to recognise diversity

And it goes deeper still. The work for this course was not planned. It did not result from senior management developing a strategic plan that was translated into a management plan that informed the decision making of some group that decided to invest X resources in Y projects to achieve Z goals.

It was all happenstance. The right people were in the right place at the right time, and they were encouraged and enabled to run with their ideas. Some of the ideas were a bit silly and had to be worked around, manipulated, and cut back, but it was through a messy process of context-sensitive collaboration between talented people that this good work arose.

Ignoring this, some folk then mistakenly tried to transplant the approach taken in this course into other courses. They failed to recognise that "lightning doesn't strike twice". You couldn't transplant a successful approach from one course context into another. What you really had to do was start another messy process of context-sensitive collaboration between talented people.

Quality teaching has to be embedded

This brings me back to some of the points that I made about the demise of the ALTC. Quality teaching doesn't arise from external bodies and their actions; it arises from conditions within a university that enable and encourage messy processes of context-sensitive collaboration between talented people.

Situated shared practice, curriculum design and academic development

Am currently reading Faegri et al (2010) as part of developing the justificatory knowledge for the final ISDT for e-learning that is meant to be the contribution of the thesis. The principle from the ISDT that this paper connects with is the idea of a “Multi-skilled, integrated development and support team” (the name is a work in progress). The following is simply a placeholder for a quote from the paper and a brief connection with the ISDT and what I think it means for curriculum design and academic development.

The quote

The paper itself is talking about an action research project where job rotation was introduced into a software development firm with the aim of increasing the quality of the knowledge held by software developers. The basic finding was that, in this case, there were some benefits; however, the problems outweighed them. I haven't read all the way through; I'm currently working through the literature review. The following quote is from the review.

Key enabling factors for knowledge creation is knowledge sharing and integration [36,54]. Research in organizational learning has emphasized the value of practice; people acquire and share knowledge in socially situated work. Learning in the organization occurs in the interplay between tacit and explicit knowledge while it crosses boundaries of groups, departments, and organizations as people participate in work [17,54]. The process should be situated in shared practice with a joint, collective purpose [12,14,15].

Another related quote

The following is from a bit more related reading, in particular Seely Brown & Duguid (1991) – emphasis added

The source of the oppositions perceived between working, learning, and innovating lies primarily in the gulf between precepts and practice. Formal descriptions of work (e.g., “office procedures”) and of learning (e.g., “subject matter”) are abstracted from actual practice. They inevitably and intentionally omit the details. In a society that attaches particular value to “abstract knowledge,” the details of practice have come to be seen as nonessential, unimportant, and easily developed once the relevant abstractions have been grasped. Thus education, training, and technology design generally focus on abstract representations to the detriment, if not exclusion of actual practice. We, by contrast, suggest that practice is central to understanding work. Abstractions detached from practice distort or obscure intricacies of that practice. Without a clear understanding of those intricacies and the role they play, the practice itself cannot be well understood, engendered (through training), or enhanced (through innovation).

Relevance?

I see this as highly relevant to the question of how to improve learning and teaching in universities, especially in terms of the practice of e-learning, curriculum design and academic development. It’s my suggestion that the common approaches to these tasks in most universities ignore the key enabling factors mentioned in the above quote.

For example, the e-learning designers/developers, curriculum designers and academic developers are generally not directly involved with the everyday practice of learning and teaching within the institution. As a result the teaching academics and these other support staff don’t get the benefit of shared practice.

A further impediment to shared practice is the divisions between e-learning support staff, curriculum designers and academic developers that are introduced by organisational hierarchies. At one stage, I worked at a university where the e-learning support people reported to the IT division, the academic staff developers reported to the HR division, the curriculum designers reported to the library, and teaching academics were organised into faculties. There wasn’t a common shared practice amongst these folk.

Instead, any sharing that did occur was either at high-level project or management boards and committees, or in design projects prior to implementation. The separation reduced the ability to combine, share, and create new knowledge about what was possible.

The resulting problem

The following quote is from Seely Brown and Duguid (1991)

Because this corporation’s training programs follow a similar downskilling approach, the reps regard them as generally unhelpful. As a result, a wedge is driven between the corporation and its reps: the corporation assumes the reps are untrainable, uncooperative, and unskilled; whereas the reps view the overly simplistic training programs as a reflection of the corporation’s low estimation of their worth and skills. In fact, their valuation is a testament to the depth of the rep’s insight. They recognize the superficiality of the training because they are conscious of the full complexity of the technology and what it takes to keep it running. The corporation, on the other hand, blinkered by its implicit faith in formal training and canonical practice and its misinterpretation of the rep’s behavior, is unable to appreciate either aspect of their insight.

It resonates strongly with some recent experience of mine at an institution rolling out a new LMS. The training programs around the new LMS, the view of management, and the subsequent response from the academics showed some very strong resemblances to the situation described above.

An alternative

One alternative, is what I’m proposing in the ISDT for e-learning. The following is an initial description of the roles/purpose of the “Multi-skilled, integrated development and support team”. Without too much effort you could probably translate this into broader learning and teaching, not just e-learning. Heaven forbid, you could even use it for “blended learning”.

An emergent university e-learning information system should have a team of people that:

  • is responsible for performing the necessary training, development, helpdesk, and other support tasks required by system use within the institution;
  • contains an appropriate combination of technical, training, media design and production, institutional, and learning and teaching skills and knowledge;
  • through the performance of its allocated tasks the team is integrated into the everyday practice of learning and teaching within the institution and cultivates relationships with system users, especially teaching staff;
  • is integrated into the one organisational unit, and as much as possible, co-located;
  • can perform small scale changes to the system in response to problems, observations, and lessons learned during system support and training tasks rapidly without needing formal governance approval;
  • actively examines and reflects on system use and non-use – with a particular emphasis on identifying and examining what early innovators are doing – to identify areas for system improvement and extension;
  • is able to identify and to raise the need for large scale changes to the system with an appropriate governance process; and
  • is trusted by organisational leadership to translate organisational goals into changes within the system, its support and use.

References

Faegri, T. E., Dyba, T., & Dingsoyr, T. (2010). Introducing knowledge redundancy practice in software development: Experiences with job rotation in support work. Information and Software Technology, 52(10), 1118-1132.

Seely Brown, J., & Duguid, P. (1991). Organizational learning and communities-of-practice: Toward a unified view of working, learning, and innovation. Organization Science, 2(1), 40-57.

The rider, elephant, and shaping the path

Listened to this interview with Chip Heath, a Stanford Professor in Organizational Behaviour, about his co-authored book Switch: How to change things when change is hard. My particular interest in this arises from figuring out how to improve learning and teaching in universities. From the interview and the podcast, this seems to be another in a line of "popular science" books aimed at making clear what science/research knows about the topic.

The basic summary of the findings seems to be: if you wish to make change more likely, then your approach has to (metaphorically):

  • direct the rider;
    The rider represents the rational/analytical decision making capability of an individual. This capability needs to be appropriately directed.
  • engage the elephant; and
    The elephant represents the individual’s emotional/instinctive decision making approach. From the interview, the elephant/rider metaphor has the express purpose of showing that the elephant is far stronger than the rider. In typical situations, the elephant is going to win, unless there’s some engagement.
  • shape the path.
    This represents the physical and related environment in which the change is going to take place. My recollection is that the shaping has to support the first two components, but also be designed to make it easier to traverse the path and get to the goal.

There are two parts of the discussion that stuck with me as I think they connect with the task of improving learning and teaching within universities.

  1. The over-rationalisation of experts.
  2. Small scale wins.

Over-rationalisation of experts

The connection between organisational change and losing weight seems increasingly common; it's one I used, and it's mentioned in the interview. One example used in the interview shows how a major problem with change is that it is driven by experts. Experts who have significantly larger "riders" (i.e. rational/analytical knowledge) of the problem area/target of change than the people they are trying to change. This overly large rider leads to change mechanisms that overcomplicate things.

The example they use is the recently modified food pyramid from the United States that makes suggestions something like, "For a balanced diet you should consume X tablespoons of Y a day". While this makes sense to the experts, a normal person has no idea of how many tablespoons of Y are in their daily diet. In order to achieve the desired change, the individual needs to develop all sorts of additional knowledge and expertise. Which is just not likely.

They compare this with some US-based populariser of weight loss who proposes much simpler suggestions e.g. “Don’t eat anything that comes through your car window”. It’s a simpler, more evocative suggestion that appears to be easier for the rider to understand and helps engage the elephant somewhat.

I can see the equivalent of this within learning and teaching in higher education. Change processes are typically conceived and managed by experts. Experts who over rationalise.

Small scale wins

Related to the above is the idea that change always involves barriers or steps that have to be stepped over. Change is difficult. The suggestion is that when shaping the path you want to design it in such a way that the elephant can almost just walk over the barrier. The interviewer gives the example of never being able to get her teenage sons to stop taking towels out of the bathroom and into their bedroom. Eventually what worked was "shaping the path" by storing the sons' underwear in the bathroom, not their bedroom.

When it comes to improving learning and teaching in universities, I don’t think enough attention is paid to “shaping the path” like this. I think this is in part due to the process being driven by the experts, so they simply don’t see the need. But it is also, increasingly, due to the fact that the people involved can’t shape the path. Some of the reasons the path can’t be shaped include:

  • Changing the “research is what gets me promoted” culture in higher education is very, very difficult and not likely to happen effectively if just one institution does it.
  • When it comes to the L&T path (e.g. the LMS product model or the physical infrastructure of a campus), it is not exactly set up to enable "shaping".
  • The people involved at a university, especially in e-learning, don’t have the skills or the organisational structure to enable “shaping”.

Nobody likes a do-gooder – another reason for e-learning not mainstreaming?

Came across the article "Nobody likes a do-gooder: Study confirms selfless behaviour is alienating" from the Daily Mail via Morgaine's amplify. I'm wondering if there's a connection between this and the chasm in the adoption of instructional technology identified by Geoghegan (1994).

The chasm

Back in 1994, Geoghegan drew on Moore's Crossing the Chasm to explain why instructional technology wasn't being adopted by the majority of university academics. The suggestion is that there is a significant difference between the early adopters of instructional technology and the early majority. What works for one group doesn't work for the other. There is a chasm. Geoghegan (1994) also suggested that the "technologists alliance" – vendors of instructional technology and the university folk charged with supporting instructional technology – adopts approaches that work for the early adopters, not the early majority.

Nobody likes do-gooders

The Daily Mail article reports on some psychological research that draws some conclusions about how "do-gooders" are seen by the majority:

Researchers say do-gooders come to be resented because they ‘raise the bar’ for what is expected of everyone.

This resonates with my experience as an early adopter and, more broadly, with observations of higher education. The early adopters, those really keen on learning and teaching, are seen a bit differently by those who aren't keen. I wonder if the "raise the bar" issue applies? I'd imagine this could be quite common in a higher education environment where research retains its primacy, but universities are under increasing pressure to improve their learning and teaching. And, more importantly, to show everyone that they have improved.

The complete study is outlined in a journal article.

References

Geoghegan, W. (1994). Whatever happened to instructional technology? Paper presented at the 22nd Annual Conferences of the International Business Schools Computing Association, Baltimore, MD.

University e-learning systems: the need for new product and process models and some examples

I’m in the midst of the horrible task of trying to abstract what I think I know about implementing e-learning information systems within universities into the formal “language” required of an information systems design theory and a PhD thesis. This post is a welcome break from that, but is still connected in that it builds on what is perhaps fundamentally different between what most universities are currently doing, and what I think is a more effective approach. In particular, it highlights some more recent developments which are arguably a step towards what I’m thinking.

As it turns out, this post is also an attempt to crystallise some early thinking about what goes into the ISDT, so some of the following is a bit rough. Actually, writing this has identified one perspective that I hadn't thought of, which is potentially important.

Edu 2.0

The post arises from having listened to this interview with Graham Glass, the guy behind Edu 2.0, which is essentially a cloud-based LMS. It's probably one of a growing number out there. What I found interesting was his description of the product and the process behind Edu 2.0.

In terms of product (i.e. the technology used to provide the e-learning services), the suggestion was that because Edu 2.0 is based in the cloud – in this case Amazon's S3 service – it could be updated much more quickly than more traditional institutionally hosted LMSs. There's some connection here with Google's approach to on-going modifications to live software.

Coupled with this product flexibility was a process (i.e. the process through which users were supported and the system evolved) that very much focused on the Edu 2.0 developers interacting with the users of the product. For example, releasing proposals and screenshots of new features within discussion forums populated with users and getting feedback, and also responding quickly to requests for fixes or extensions from users. To such an extent that Glass reports users of Edu 2.0 feeling like it is "their Edu 2.0" because it responds so quickly to them and their needs.

The traditional Uni/LMS approach is broken

In the thesis I argue that when you look at how universities are currently implementing e-learning information systems (i.e. selecting and implementing an LMS), the product (the enterprise LMS, the one ring to rule them all) and the process they use are not a very good match at all for the requirements of effectively supporting learning and teaching. In a nutshell, the product and the process are aimed at reducing diversity and the ability to learn, while diversity is a key characteristic of learning and teaching at a university. Not to mention that when it comes to e-learning within universities, it's still very early days and it is essential that any systemic approach to e-learning have the ability to learn from its implementation and make changes.

I attempted to expand on this argument in the presentation I gave at the EDUCAUSE’2009 conference in Denver last year.

What is needed

The alternative I’m trying to propose within the formal language of the ISDT is that e-learning within universities should seek to use a product (i.e. a specific collection of technologies) that is incredible flexible. The product must, as much as possible, enable rapid, on-going, and sometimes quite significant changes.

To harness this flexibility, the support and development process for e-learning should be focused less on top-down, quality-assurance-type processes and more on closely observing what is being done with the system and using those lessons to modify the product to better suit the diversity of local needs. In particular, the process needs to be adopter focused, which Surry and Farquhar (1997) describe as seeing the individual choosing to adopt the innovation as the primary force for change.

To some extent, this ability to respond to the local social context can be hard to achieve with a software product that has to be used in multiple different contexts, e.g. an LMS used in different institutions.

Slow evolution but not there yet

All university e-learning implementation is not the same. There has been a gentle evolution away from less flexible products towards more flexible ones, e.g.

  1. Commercial LMS, hosted on institutional servers.
    Incredibly inflexible. You have to wait for the commercial vendor to see the cost/benefit argument to implement a change in the code base, and then you have to wait until your local IT department can schedule the upgrade to the product.
  2. Open source LMS, hosted on institutional servers.
    Less inflexible. You still have to wait for a developer to see your change as an interesting itch to scratch. This can be quite quick, but it can also be slow. It can be especially quick if your institution has good developers, but good developers cost big money. Even if the developer scratches your itch, the change has to be accepted into the open source code base, which can take some time if it's a major change. Then, finally, after the code base is changed, you have to wait for your local IT shop to schedule the upgrade.
  3. Open source LMS, with hosting outsourced.
    This can be a bit quicker than the institutionally hosted version. Mainly because the hosting company may well have some decent developers and significant knowledge of upgrading the LMS. However, it's still going to cost a bit, and it's not going to be really quick.

The cloud-based approach used by Edu 2.0 does offer a product that is potentially more flexible than existing LMS models. However, general slowness in updating aside, if a change is very specific to an individual institution it is going to cause some significant problems, regardless of the product model.

Some alternative product models

The EDU 2.0 model doesn’t help the customisation problem. In fact, it probably makes it a bit worse as the same code base is being used by hundreds of institutions from across the globe. The model being adopted by Moodle (and probably others), having plugins you can add, is a step in the right direction in that institutions can choose to have different plugins installed.
However, this model typically assumes that all the plugins have to use the same API, language, or framework. If they don’t, they can’t be installed on the local server and integrated into the LMS.

This requirement exists because there is an assumption, for many (but not all) plugins, that they provide the entire functionality and must run on the local server. So there is a need for tighter coupling between the plugin and the LMS, and consequently less local flexibility.

A plugin like BIM is a little different. There is a wrapper that is tightly integrated into Moodle to provide some features. However, the majority of the functionality is provided by software (in this case blogging engines) chosen by the individual students. Here the flexibility is provided by the loose coupling between the blog engine and Moodle.
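
To make that loose coupling concrete, here's a minimal sketch of the gathering side of a BIM-style approach. It assumes the third-party Python feedparser library; the student identifiers, feed URLs, and function name are hypothetical illustrations, not BIM's actual code.

```python
# Minimal sketch of loose coupling: the LMS side only stores each
# student's chosen feed URL and periodically pulls posts, so any blog
# engine that emits a valid RSS/Atom feed will do. Hypothetical
# illustration only -- not BIM's actual code.
import feedparser  # third-party: pip install feedparser

# Hypothetical registrations: student identifier -> feed URL they chose.
student_feeds = {
    "s1234567": "https://example.wordpress.com/feed/",
    "s7654321": "https://example.blogspot.com/feeds/posts/default",
}

def gather_posts(feeds):
    """Pull the latest posts from each student's chosen blog engine."""
    posts = {}
    for student, url in feeds.items():
        parsed = feedparser.parse(url)
        posts[student] = [
            {"title": entry.get("title", ""), "link": entry.get("link", "")}
            for entry in parsed.entries
        ]
    return posts

if __name__ == "__main__":
    for student, entries in gather_posts(student_feeds).items():
        print(student, "has", len(entries), "posts to mirror into the LMS")
```

Because the only contract between the two sides is "emit a valid feed", students can pick whatever blog engine they like and the Moodle-side wrapper never changes; that's where the flexibility comes from.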

Mm, still need some more work on this.

References

Surry, D., & Farquhar, J. (1997). Diffusion Theory and Instructional Technology. e-Journal of Instructional Science and Technology, 2(1), 269-278.

How people learn and implications for academic development

While I’m traveling this week I am reading How people learn. This is a fairly well known book that arose out of a US National Academy of Science project to look at recent insights from research about how people learn and then generate insights for teaching. I’ll be reading it through the lens of my thesis and some broader thinking about “academic development” (one of the terms applied to trying to help improve the teaching and learning of university).

Increasingly, I’ve been thinking that the “academic development” is essentially “teaching the teacher”, though it would be better phrased as creating an environment in which the academics can learn how to be better at enabling student learning. Hand in hand with this thought is the observation and increasing worry that much of what passes for academic development and management action around improving learning and teaching is not conducive to creating this learning environment. The aim of reading this book is to think about ways which this situation might be improved.

The last part of this summary of the first chapter connects with the point I’m trying to make about academic development within universities.

(As it turns out, I only read the first chapter while traveling; the remaining chapters come now.)

Key findings for learning

The first chapter of the book provides three key (but not exhaustive) findings about learning:

  1. Learners arrive with their own preconceptions about how the world exists.
    As part of this, if the early stages of learning do not engage with the learner's understanding of the world, then the learner will either not get it, or will get it enough to pass the test but then revert to their existing understanding.
  2. Competence in a field of inquiry arises from three building blocks
    1. a deep foundation of factual knowledge;
    2. understand these facts and ideas within a conceptual framework;
    3. organise knowledge in ways that enable retrieval and application.

    A primary idea here is that experts aren't simply "smart" people. But they do have conceptual frameworks that help them apply/understand much more quickly than others.

  3. An approach to teaching that enables students to implement meta-cognitive strategies can help them take control of their learning and monitor their progress.
    Meta-cognitive strategies aren’t context or subject independent.

Implications for teaching

The suggestion is that the above findings around learning have significant implications for teaching, these are:

  1. Teachers have to draw out and work with pre-existing student understandings.
    This implies lots more formative assessment that focuses on demonstrating understanding.
  2. In teaching a subject area, important concepts must be taught in-depth.
    The superficial coverage of concepts (to fit it all in) needs to be avoided, with more of a focus on those important subject concepts.
  3. The teaching of meta-cognitive skills needs to be integrated into the curriculum of a variety of subjects.

Four attributes of learning environments

A later chapter expands on a framework to design and evaluate learning environments. It includes four interrelated attributes of these environments:

  1. They must be learner centered;
    i.e. a focus on the understandings and progress of individual students.
  2. The environment should be knowledge centered with attention given to what is taught, why it is taught and what competence or mastery looks like
    Suggests too many curricula fail to support learning because the knowledge is disconnected and assessment encourages memorisation rather than learning. A knowledge-centered environment "provides the necessary depth of study, assessing student understanding rather than factual memory and incorporates the teaching of meta-cognitive strategies".

    There’s an interesting point here about engagement, that I’ll save for another time.

  3. Formative assessments
    The aim is for assessments that help both students and teachers monitor progress.
  4. Develop norms within the course, and connection with the outside world, that support core learning values.
    i.e. pay attention to activities, assessments etc within the course that promote collaboration and camaraderie.

Application to professional learning

In the final section of the chapter, the authors state that these principles apply equally well to adults as they do to children. They explain that

This point is particularly important because incorporating the principles in this volume into educational practice will require a good deal of adult learning.

i.e. if you want to improve learning and teaching within a university based on these principles, then the teaching staff will have to undergo a fair bit of learning. This is very troubling because the authors argue that “approaches to teaching adults consistently violate principles for optimizing learning”. In particular, they suggest that professional development programs for teachers frequently:

  • Are not learner centered.
    Rather than ask what help is required, teachers are expected to attend pre-arranged workshops.
  • Are not knowledge centered.
    i.e. these workshops introduce the principles of a new technique with little time spent on the more complex integration of the new technique with the other "knowledge" (e.g. the TPACK framework) associated with the course.
  • Are not assessment centered.
    i.e. when learning these new techniques, the "learners" (teaching staff) aren't given opportunities to try them out, get feedback, or develop the skills to know whether or not they've implemented the new technique effectively.
  • Are not community centered.
    Professional development consists more of ad hoc, separate events with little opportunity for a community of teachers to develop connections for on-going support.

Here’s a challenge. Is there any university out there were academic development doesn’t suffer from these flaws? How has that been judged?

The McNamara Fallacy and pass rates, academic analytics, and engagement

In some reading for the thesis today I came across the concept of McNamara's fallacy. I hadn't heard of it before. This is somewhat surprising, as it points out another common problem with some of the more simplistic approaches to improving learning and teaching that are going around at the moment. It's also likely to be a problem with any simplistic implementation of academic analytics.

What is it?

The quote I saw describes McNamara’s fallacy as

The first step is to measure whatever can be easily measured. This is ok as far as it goes. The second step is to disregard that which can’t be easily measured or to give it an arbitrary quantitative value. This is artificial and misleading. The third step is to presume that what can’t be measured easily really isn’t important. This is blindness. The fourth step is to say that what can’t be easily measured really doesn’t exist. This is suicide.

The Wikipedia page on the McNamara fallacy describes it as referring to Robert McNamara – the US Secretary of Defense from 1961 through 1968 – putting the USA's failure in Vietnam down to a focus on quantifying success through simple indicators, such as enemy body count, while at the same time ignoring other, more important factors. Factors that were more difficult to measure.

The PhD thesis in which I saw the above quote ascribes it to Yankelovich (1972), a sociologist. Wikipedia ascribes it to Charles Handy's "The Empty Raincoat". Perhaps this indicates that the quote is from McNamara himself, just presented in different places.

Pass rates

Within higher education it is easy to see "pass rates" as an example of McNamara's fallacy. Much of the quality assurance within higher education institutions is focused on checking the number of students who do (or don't) pass a course. If the pass rate for a course isn't too low, everything is okay. It is much easier to measure this than the quality of the student learning experience, the learning theory which informs the course design, or the impact the experience has on the student, now and into the future. This sort of unquestioning application of McNamara's fallacy sometimes makes me think we're losing the learning and teaching "war" within universities.

What are the more important, more difficult to measure indicators that provide a better and deeper insight into the quality of learning and teaching?

Analytics and engagement

Student engagement is one of the buzz words on the rise in recent years; it's been presented as one of the ways/measures to improve student learning. After all, if students are more engaged, obviously they must have a better learning experience. Engagement has become an indication of institutional teaching quality. Col did a project last year in which he looked more closely at engagement; the write-up of that project gives a good introduction to student engagement. It includes the following quote:

Most of the research into measuring student engagement prior to the widespread adoption of online, or web based classes, has concentrated on the simple measure of attendance (Douglas & Alemanne, 2007). While class attendance is a crude measure, in that it is only ever indicative of participation and does not necessarily consider the quality of the participation, it has nevertheless been found to be an important variable in determining student success (Douglas, 2008)

Sounds a bit like a case of McNamara’s fallacy to me. A point Col makes when he says “it could be said that class attendance is used as a metric for engagement, simply because it is one of the few indicators of engagement that are visible”.

With the move to the LMS, it was always going to happen that academic analytics would be used to develop measures of student engagement (and other indicators). Indeed, that's the aim of Col's project. However, I do think that academic analytics runs the danger of McNamara's fallacy: so busy focusing on what we can measure easily, we miss the more important stuff that we can't.
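
As a crude, hypothetical illustration (not Col's project, and with made-up data and column names), here's how easily an "engagement" measure falls out of an LMS activity log, and how little it says:

```python
# A deliberately simplistic sketch of the kind of "engagement" measure
# academic analytics makes easy: counting LMS clicks per student. The
# data and column names are hypothetical. The point is the caveat --
# click counts get measured because they are easy to measure, not
# because they capture the quality of engagement (McNamara's fallacy).
import pandas as pd

# Hypothetical clickstream extract from an LMS activity log.
log = pd.DataFrame({
    "student": ["s1", "s1", "s2", "s3", "s3", "s3"],
    "action":  ["view", "post", "view", "view", "view", "view"],
})

# Easy to measure: raw activity counts per student.
clicks = log.groupby("student").size().rename("clicks")
print(clicks)

# Harder to measure, and so often ignored: whether the clicks were
# passive views or active contributions, let alone the quality of
# those contributions. Even this crude split changes the picture.
active = (log["action"] != "view").groupby(log["student"]).sum().rename("active_actions")
print(pd.concat([clicks, active], axis=1))
```

The easy number crowns the student with the most clicks, even though every one of those clicks was a passive view; the harder questions – what the activity was, and what was actually learned – are exactly the ones the log doesn't answer.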
